Nebius Research Credits Program
Source: https://nebius.com/nebius-research-credits-program
Description
Nebius (European AI-native cloud, NASDAQ: NBIS) awards GPU cloud credits and Token Factory inference tokens to academic researchers. The headline allocation is up to 8 GPUs for one full year for training/HPC workloads, plus up to 10M tokens for inference via Nebius Token Factory (60+ open-source models, OpenAI-compatible API). Applications open in monthly two-week windows, with up to 6 winners selected each cycle for the 2026-2027 academic year.
This is the Nebius research credits / research grants program (the two terms are used interchangeably on Nebius' own site). It is a research-track grant — not a startup or hobbyist offer — so eligibility is gated to people affiliated with accredited institutions.

Eligibility
Yes, if:
• You are a postgraduate (master's), PhD student, postdoc, or faculty member at an accredited university or non-profit research institution
• You have a concrete AI / ML / data analytics / HPC research proposal that needs GPU compute
• You can describe expected resource usage and outputs (papers, open models, datasets)
• You are OK with Nebius co-marketing the fact that you used their compute

No, if:
• You are an undergraduate without a research role
• You are a startup, hobbyist, or independent researcher unaffiliated with an accredited institution → use Nebius Startups or pay-as-you-go instead
• You have already received a Nebius Research grant (one grant per person, ever)
• You need credits now — review and notification cycles take weeks
What you get

Resource | Allocation
GPU compute | Up to 8 GPUs for 1 year (training/HPC)
Token Factory inference | Up to 10,000,000 tokens across 60+ OSS models
Duration | 12 months from grant activation
Award size | "Determined based on the proposal" — 8 GPUs / 10M tokens are the published ceilings

Nebius does not publicly commit to a specific GPU SKU for grant allocations. Their public GPU lineup includes HGX H100, H200, B200, B300, GB200 NVL72, and GB300 NVL72; H100/H200 are the most common Hopper-class workhorses for academic training. Expect Hopper-class hardware unless your proposal specifically justifies Blackwell access.
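A quick back-of-envelope on what the published ceilings amount to. The per-request token figure below is an illustrative assumption, not a Nebius number:

```python
# Back-of-envelope math on the published grant ceilings.
# The 1,500-tokens-per-request figure is an illustrative assumption.

GPUS = 8
HOURS_PER_YEAR = 365 * 24            # 8,760 hours

gpu_hours = GPUS * HOURS_PER_YEAR    # GPU-hours if fully utilised all year
print(gpu_hours)                     # 70080

TOKEN_CEILING = 10_000_000
avg_tokens_per_request = 1_500       # prompt + completion, assumed average
requests = TOKEN_CEILING // avg_tokens_per_request
print(requests)
```

In practice real utilisation will be well below the 70,080 GPU-hour ceiling, which is one reason smaller, well-justified asks are credible.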
How to apply
1. Go to nebius.com/nebius-research-credits-program
2. Wait for the next monthly application window to open (windows are 2 weeks long; check the page for the active window)
3. Click Apply and fill in the proposal form. You will be asked for:
   • Your name, institutional email, role (postgrad / PhD / postdoc / faculty), and institution
   • Project title and abstract
   • Research objectives, methodology, and expected impact
   • Resource estimate — how many GPUs, for how long, and roughly how many inference tokens
   • Planned outputs (papers, open-source code, models, datasets)
   • Acknowledgement that Nebius support can be referenced in publications / blog posts
4. Submit before the 2-week window closes
5. Nebius Academy reviews submissions over the next ~3 weeks
6. Up to 6 winners per cycle are notified by email
7. On approval, you onboard onto Nebius AI Cloud and Token Factory, and credits are activated

Important caveats:
• One application per person, ever — you cannot reapply if rejected or after your grant ends
• Only one person per team can apply for the same project — coordinate with collaborators first
• Awards are at Nebius' sole discretion, and program terms can change
• Proposals are reviewed throughout summer 2026; some applicants will be notified between late Q3 and early Q4 2026
Token Factory inference
Token Factory is Nebius' inference platform. The 10M tokens can be spent across 60+ open-source models, including:

• Large LLMs: DeepSeek R1 / V3, Llama-3.3-70B, Qwen2.5-72B, GPT OSS, Mistral-Nemo
• Reasoning: QwQ-32B
• Multimodal: DeepSeek V3 multimodal
• Embeddings & safety: BAAI embeddings, Llama-Guard-3-8B
• Smaller open models: Google Gemma-2-27B and others

The API is OpenAI-compatible — drop in https://api.studio.nebius.ai/v1 as the base URL with your Nebius API key, and standard openai clients work unchanged. For up-to-date model availability and per-token pricing (used to calculate how far 10M tokens stretch), see Nebius Token Factory.
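With the standard `openai` client this is just `OpenAI(base_url="https://api.studio.nebius.ai/v1", api_key=...)`. The stdlib-only sketch below shows the same OpenAI-style request shape without extra dependencies; the model ID is an illustrative assumption — check Token Factory for current IDs:

```python
import json
import os
import urllib.request

BASE_URL = "https://api.studio.nebius.ai/v1"

def chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for Token Factory."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Model ID is illustrative. Sending requires a real key:
#   resp = urllib.request.urlopen(req)
#   print(json.load(resp)["choices"][0]["message"]["content"])
req = chat_request("meta-llama/Llama-3.3-70B-Instruct",
                   "Summarise attention in one sentence.",
                   os.environ.get("NEBIUS_API_KEY", "PLACEHOLDER"))
print(req.full_url)
```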
Regions
Nebius is a European AI-native cloud with public regions in:

Region code | Location
eu-north1 | Finland (primary, largest capacity)
eu-west1 | Paris, France
us-central1 | Kansas City, Missouri (US)
me-west1 | Israel
eu-north2 | Iceland (private/select-access)
uk-south1 | UK (opened Q4 2025, private/select-access)

For academic projects with EU GDPR or data-sovereignty constraints, this is one of the few independent EU-based GPU clouds operating at scale.
Comparable programs
Program | Best for | Cadence
Nebius Research Credits (this) | EU/global academics needing 1-yr training GPU access | Monthly, 6 winners/cycle
NAIRR Pilot | US-based researchers | Rolling, 3-week turnaround
EuroHPC AI Factories | EU public-funded research, larger LUMI-class allocations | Calls throughout the year
DOE INCITE Early Career | Early-career researchers (≤10 yrs post-PhD), exascale | Annual (Apr-Jun call)
Lambda Research Grants | Academic AI papers needing on-demand GPUs | Rolling

If you fit Nebius' eligibility, you can apply to several of these in parallel — they don't conflict.
Application tips
• Specificity wins. A vague "we want to train LLMs" proposal underperforms a concrete "we will fine-tune Llama-3.3-70B on dataset X for task Y, requiring N GPU-days" pitch. Reviewers want to see realistic resource math.
• Lead with impact. Mention target venues (NeurIPS, ICML, etc.), open-sourcing plans, and reproducibility — these all map to "transparent resource utilization" criteria.
• The 8 GPUs / 10M tokens are ceilings, not defaults. Smaller, well-justified asks have a better hit rate than maximum requests with thin justification.
• Co-marketing is part of the deal — be ready to acknowledge Nebius in papers, and possibly contribute a blog post or testimonial. If your university blocks corporate acknowledgements, check first.
• Once-per-lifetime rule. Don't burn your application on a half-baked proposal — wait until you have a polished plan.
• Alternative if rejected: Nebius runs a separate Startups program (different eligibility), and pay-as-you-go GPU pricing starts around $2.95/hr for HGX H100.
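The "realistic resource math" the first tip asks for can be sketched as a simple GPU-days estimate. Every figure below is a hypothetical placeholder; substitute measured throughput from a pilot run for a real application:

```python
# Hypothetical resource estimate for a fine-tuning proposal.
# All figures are illustrative placeholders, not benchmarks.

dataset_tokens = 2_000_000_000       # 2B training tokens, assumed
tokens_per_gpu_per_sec = 3_000       # assumed measured throughput
gpus = 8                             # the grant ceiling
epochs = 1

wall_seconds = epochs * dataset_tokens / (tokens_per_gpu_per_sec * gpus)
gpu_days = gpus * wall_seconds / 86_400
print(round(gpu_days, 1))            # total GPU-days to request
```

Stating the arithmetic this explicitly (dataset size × epochs ÷ measured throughput, plus a buffer for failed runs and ablations) is exactly the kind of justification the review criteria reward.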
Sources:
• Nebius Research Credits Program
• Nebius Research Grants Program
• Nebius Token Factory
• Nebius GPU Pricing
• Nebius Hardware & Data Centres
• Nebius Startups Program