Cerebrium Free Serverless GPU Credits - $10 Free + Up to $1,000 for Startups
Source: https://www.cerebrium.ai/
Description

Cerebrium is a serverless AI infrastructure platform (YC-backed) that gives new accounts $10 in free GPU credits, with access to GPU types ranging from the T4 to the H200. The platform features pay-per-second billing with no idle costs, cold starts as low as 2-4 seconds, and automatic scaling from zero to thousands of requests. Qualified startups can apply for up to $1,000 in credits with dedicated engineering support. No credit card is required for the free Hobby tier.
How to Redeem
1. Go to cerebrium.ai and click Get Started.
2. Create an account with your email.
3. You'll immediately receive $10 in free GPU credits on the Hobby plan.
4. Install the CLI: pip install cerebrium
5. Authenticate: cerebrium login and enter your API key.
6. Initialize a project: cerebrium init my-project
7. Configure your GPU in cerebrium.toml under [cerebrium.hardware] and deploy with cerebrium deploy.

For startup credits ($1,000):
• Apply through Cerebrium's startup program on their website
• Includes face-time with Cerebrium engineers for onboarding
• Must be a qualified startup (criteria not publicly detailed)

GPU Pricing
| GPU Model | Identifier | VRAM | Price/Hour | Plan Required |
|---|---|---|---|---|
| NVIDIA T4 | TURING_T4 | 16 GB | $0.59 | Hobby+ |
| NVIDIA L4 | ADA_L4 | 24 GB | $0.80 | Hobby+ |
| NVIDIA A10 | AMPERE_A10 | 24 GB | $1.10 | Hobby+ |
| NVIDIA A100 (40GB) | AMPERE_A100_40GB | 40 GB | $1.45 | Enterprise |
| NVIDIA L40S | ADA_L40 | 48 GB | $1.95 | Hobby+ |
| NVIDIA A100 (80GB) | AMPERE_A100_80GB | 80 GB | $2.06 | Enterprise |
| NVIDIA H100 | HOPPER_H100 | 80 GB | $2.21 | Enterprise |
| NVIDIA H200 | • | 141 GB | $3.30 | Enterprise |
| AWS Inferentia 2 | INF2 | 32 GB | • | Hobby+ |
| AWS Trainium | TRN1 | 32 GB | • | Hobby+ |

All GPUs support up to 8x multi-GPU configurations for distributed workloads.
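As a sketch of what per-second billing means in practice, the hourly prices in the table convert to per-call costs as below. The prices and the 8x multi-GPU cap come from the table; the helper function itself is illustrative, not part of Cerebrium's SDK.

```python
# Illustrative per-second billing math using hourly prices from the table above.
# cost = (hourly price / 3600) x seconds of compute x GPU count; idle time is free.
PRICE_PER_HOUR = {
    "TURING_T4": 0.59,
    "AMPERE_A10": 1.10,
    "HOPPER_H100": 2.21,
}

def cost_usd(gpu: str, seconds: float, gpu_count: int = 1) -> float:
    """Cost of a single invocation under pay-per-second billing."""
    assert 1 <= gpu_count <= 8, "deployments support up to 8x multi-GPU"
    return PRICE_PER_HOUR[gpu] / 3600 * seconds * gpu_count

print(f"5 s on one T4:   ${cost_usd('TURING_T4', 5):.5f}")
print(f"60 s on 8x H100: ${cost_usd('HOPPER_H100', 60, 8):.4f}")
```

A single short inference call on a T4 costs well under a tenth of a cent, which is why the $10 credit stretches to thousands of calls.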

Free Tier Limits
| Resource | Limit |
|---|---|
| Free credits | $10 (new accounts) |
| Startup credits | Up to $1,000 (qualified startups) |
| Hobby plan cost | $0/month |
| GPUs on Hobby plan | T4, L4, A10, L40S, Inferentia, Trainium |
| GPUs requiring Enterprise | A100, H100, H200 |
| Block storage (free) | First 100 GB included |
| Additional storage | $5/month per additional block |
| Cold start time | 2-4 seconds typical |
| Billing model | Pay-per-second, no idle costs |
| Credit card required | No (Hobby plan) |

Plan Comparison
| Feature | Hobby ($0/mo) | Standard ($100/mo) | Enterprise (Custom) |
|---|---|---|---|
| Free credits | $10 | • | Up to $1,000 |
| GPU access | T4, L4, A10, L40S | All standard GPUs | All GPUs including H100, A100, H200 |
| Multi-GPU | Up to 8x | Up to 8x | Up to 8x |
| Storage | 100 GB free | Included | Custom |
| Support | Community | Priority | Dedicated engineering |

Configuring Hardware

In cerebrium.toml:
```toml
[cerebrium.hardware]
compute = "AMPERE_A10"
gpu_count = 1
cpu = 4
memory = 16
```

Or via CLI:

```shell
cerebrium deploy --compute AMPERE_A10 --gpu-count 1
```

Tips
• $10 goes further than you think: at T4 pricing ($0.59/hour, billed per-second), $10 buys about 17 hours of GPU compute. If your inference calls take 5 seconds each, that's roughly 12,000 API calls.
• Hobby plan is genuinely useful: unlike many "free tiers" that are crippled, the Hobby plan gives you access to solid GPUs (T4, L4, A10, L40S) and 100 GB of storage.
• Cold starts are fast: 2-4 second cold starts are competitive with Modal and faster than most alternatives. Code at the top level of your module runs only on first container spin-up.
• VRAM calculation: use the formula modelVRAM = parameters x bytes_per_dtype x 1.5 to estimate GPU requirements. The 1.5x buffer accounts for runtime overhead.
• Multi-GPU for large models: you can run up to 8 GPUs in a single deployment, making it possible to serve 70B+ parameter models with tensor parallelism.
• YC-backed: Cerebrium is a Y Combinator company, which may give you confidence in platform longevity and may help if you're also a YC company.
• Mistral integration: Cerebrium has official Mistral AI deployment guides, making it easy to self-host Mistral models.
• Compare with Modal: Modal offers $30/month in free credits vs Cerebrium's one-time $10, but Cerebrium's per-second GPU pricing is often cheaper for sustained workloads.
• AWS chip support: Cerebrium uniquely offers AWS Inferentia 2 and Trainium chips on the Hobby plan, which can be significantly cheaper for inference and training respectively.
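The arithmetic behind the credit-runway and VRAM tips above can be checked directly; the function names here are just for illustration.

```python
# Worked numbers for the credit-runway and VRAM-estimate tips above.

def hours_of_compute(credits_usd: float, price_per_hour: float) -> float:
    """How long free credits last at a given hourly rate (per-second billing)."""
    return credits_usd / price_per_hour

def inference_calls(credits_usd: float, price_per_hour: float, secs_per_call: float) -> int:
    """Rough count of fixed-length inference calls the credits cover."""
    return int(hours_of_compute(credits_usd, price_per_hour) * 3600 / secs_per_call)

def est_vram_gb(parameters: float, bytes_per_dtype: float) -> float:
    """modelVRAM = parameters x bytes_per_dtype x 1.5 (1.5x runtime-overhead buffer)."""
    return parameters * bytes_per_dtype * 1.5 / 1e9

print(f"$10 on a T4: {hours_of_compute(10, 0.59):.1f} h")   # ~17 h
print(f"5 s calls:   {inference_calls(10, 0.59, 5):,}")     # ~12,200
print(f"7B fp16:     {est_vram_gb(7e9, 2):.0f} GB VRAM")    # 21 GB
```

By this estimate a 7B-parameter fp16 model needs about 21 GB, so it fits on an L40S (48 GB) but not on a T4 (16 GB).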
Sources:
• Cerebrium Official Site
• Cerebrium Pricing
• Cerebrium GPU Documentation
• Cerebrium on Y Combinator
• Cerebrium Pricing Deep Dive - Skywork AI
• Cerebrium Review - GetDeploying
• Cerebrium on Product Hunt
• Free GPU Cloud Trials 2026 - GMI Cloud