TensorWave Open Source Developer Program (up to $250K AMD GPU Credits)
Source: https://go.tensorwave.com/open-source-developer-program
Description
TensorWave gives up to $250,000 in free compute credits on AMD Instinct MI300X GPUs to developers who add or improve AMD GPU support in open-source AI/ML projects (think kernels, frameworks, training/inference stacks, devtools). It is a proposal-based grant, not a self-serve free tier — you pitch a concrete OSS contribution and TensorWave decides how many credits to award based on scope, impact, and feasibility.

TensorWave is a US neocloud whose entire bet is AMD-only, ROCm-native infrastructure (they raised $100M to build the world's largest liquid-cooled AMD GPU deployment and also run the Beyond CUDA podcast/summit). The OSS Developer Program is essentially their way of paying outside engineers to widen the moat: more PyTorch / vLLM / SGLang / Triton / inference-engine code that runs well on MI300X means more customers for TensorWave.

Who it's for
Great fit:
• Maintainers / regular contributors to OSS AI/ML projects (PyTorch, JAX, vLLM, SGLang, llama.cpp, ggml, Triton, Unsloth, TRL, axolotl, ScalarLM, ComfyUI, diffusers, etc.) who want to ship MI300X support, perf fixes, ROCm-native kernels, or bug fixes.
• Researchers porting CUDA-only code paths to ROCm / HIP.
• Independent kernel authors writing AMD-targeted ops (HIP, Composable Kernel, AITER, Triton-on-ROCm).

Bad fit:
• Closed-source commercial products (this is OSS-only).
• "I want to fine-tune a model for my SaaS" — that's a normal paid workload, not an OSS contribution. Use TensorWave's commercial pricing or another grant.
• Pure inference of someone else's model with no upstream contribution. Without the contribution angle there is nothing to evaluate.

What you get
• Compute credits redeemable on TensorWave's AMD Instinct MI300X infrastructure (cloud GPU instances; bare-metal nodes and managed Kubernetes are also possible depending on the proposal).
• "Up to $250,000" is the maximum — most awards will be a fraction of that, sized to the project. For context, TensorWave's public MI300X price is roughly $1.50–$1.71 / GPU-hour, so $250K works out to roughly 146,000–167,000 MI300X GPU-hours, and a more typical $5K–$25K grant still buys thousands of GPU-hours.
• A single MI300X has 192 GB HBM3, so a standard 8-GPU node carries ~1.5 TB of VRAM — enough to load a 405B-parameter model on one node. The credits therefore go a long way for both training and inference work.
• Implicit access to the TensorWave / AMD ecosystem (engineers, ROCm builds, sometimes cross-promotion of your contribution).
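
The arithmetic behind the figures above can be checked in a few lines (a sketch; the $/hour rates and model size are the approximate public numbers quoted in this article, not official program terms):

```python
# Back-of-envelope math for the credit and VRAM figures quoted above.
RATE_LOW, RATE_HIGH = 1.50, 1.71   # USD per MI300X GPU-hour (public pricing range)
MAX_GRANT = 250_000                # USD, the program's stated maximum

hours_most = MAX_GRANT / RATE_LOW    # cheapest rate -> most GPU-hours
hours_least = MAX_GRANT / RATE_HIGH
print(f"$250K buys ~{hours_least:,.0f}-{hours_most:,.0f} MI300X GPU-hours")

# VRAM check: do 405B parameters fit on one 8x MI300X node?
params = 405e9
bytes_per_param = 2                  # bf16/fp16 weights
weights_gb = params * bytes_per_param / 1e9   # ~810 GB of weights
node_vram_gb = 8 * 192                        # 1536 GB per 8-GPU node
print(f"405B weights in bf16: ~{weights_gb:.0f} GB vs {node_vram_gb} GB per node")
```

The weights alone leave ~700 GB of headroom on a node for KV cache and activations, which is why single-node 405B inference is feasible on this hardware.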

Not included:
• No cash, no equity, no salary — only compute credits.
• No hardware shipped to you — everything runs in TensorWave's cloud.
• Credits are tied to the proposed project; expect them to be scoped to that work, not freely usable for unrelated experiments.

How to apply
1. Have an open-source AI/ML project in mind that needs AMD/ROCm work — either one you already maintain or one where you have credibility (existing PRs, issues filed, etc.).
2. Go to the application page: go.tensorwave.com/open-source-developer-program.
3. Submit a proposal that covers, at minimum: the project, the specific AMD GPU work (kernels? backend port? perf fixes? new feature?), why it matters, an estimate of compute needed (GPU-hours × GPU type), a rough timeline, and proof of your track record (GitHub profile, links to merged PRs).
4. You can submit one or several proposals — TensorWave explicitly allows batched submissions, so it's fine to pitch "vLLM MoE kernel + SGLang FP8 attention + Triton-ROCm fused softmax" as three separate asks.
5. Wait for TensorWave's team to evaluate. Reviewers care most about: (a) does this actually improve the AMD/ROCm OSS ecosystem, (b) can the contributor ship it, (c) is the requested compute reasonable.
6. If approved, you get a TensorWave account + credit grant scoped to the project. Spin up MI300X instances, do the work, ship the upstream PR.
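
The compute estimate in step 3 can be roughed out with simple arithmetic. A sketch — all run counts and hours below are hypothetical placeholders, not program guidance:

```python
# Sketch of the GPU-hour estimate a proposal should include (step 3 above).
# The run counts, durations, and GPU counts are invented for illustration.
def gpu_hours(runs: int, hours_per_run: float, gpus_per_run: int) -> float:
    """Total GPU-hours = runs x wall-clock hours per run x GPUs per run."""
    return runs * hours_per_run * gpus_per_run

# Example budget: a kernel-tuning sweep plus a few end-to-end validation runs.
ask = (
    gpu_hours(runs=200, hours_per_run=0.5, gpus_per_run=1)   # microbenchmarks
    + gpu_hours(runs=10, hours_per_run=6, gpus_per_run=8)    # e2e validation
)
rate = 1.71  # USD per MI300X GPU-hour, upper end of the public pricing range
print(f"~{ask:,.0f} GPU-hours, roughly ${ask * rate:,.0f} in credits")
```

Showing the breakdown (what runs, how long, on how many GPUs) makes the "is the requested compute reasonable" review question easy to answer in your favor.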

Important:
• The page is sometimes flaky or returns 404 to non-browser fetches — open it in a real browser. If it stays down, the program is also routinely promoted by TensorWave's co-founder Jeff Tatarchuk on LinkedIn / X (@TensorWaveCloud); reach out there if the form is broken.
• No credit card is required to apply, but TensorWave will likely want to verify your identity / GitHub before granting compute.
• Expect the team to negotiate scope down — if you ask for the full $250K with a one-line proposal, the realistic outcome is either a much smaller grant or a request for more detail.

Hardware and software
• AMD Instinct MI300X — 192 GB HBM3. Primary platform for the program. 8-GPU nodes are standard; a single node fits a 405B-parameter model.
• AMD Instinct MI325X / MI355X — 256 GB / 288 GB HBM3E. TensorWave's newer SKUs; access depends on availability and proposal fit.
ROCm is the supported software stack (PyTorch and Hugging Face run natively on ROCm 7+; rocm-smi and drivers are pre-loaded on TensorWave images). Bring HIP / Triton / Composable Kernel / AITER if you're writing low-level code.
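
Once an instance is up, a quick sanity check confirms which backend your PyTorch build targets. A minimal sketch (assumes a stock PyTorch install; on ROCm builds the familiar torch.cuda API transparently targets AMD GPUs through HIP, and torch.version.hip is set):

```python
# Sketch: detect whether the installed PyTorch is a ROCm/HIP build.
def rocm_backend_info() -> str:
    """Return a short description of the PyTorch GPU backend, if any."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    # ROCm wheels set torch.version.hip; CUDA wheels set torch.version.cuda.
    if getattr(torch.version, "hip", None):
        return f"ROCm/HIP build: {torch.version.hip}"
    if torch.version.cuda:
        return f"CUDA build: {torch.version.cuda}"
    return "CPU-only build"

print(rocm_backend_info())
```

On a ROCm build, the usual torch.cuda.is_available() and device="cuda" calls work unchanged against MI300X, which is why most upstream PyTorch code needs no source edits to run.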

Example project ideas
• Add or fix MI300X support in a popular OSS inference engine (vLLM, SGLang, llama.cpp, ExLlamaV2, MLC-LLM, etc.).
• Write performant ROCm kernels for ops that are currently CUDA-only or slow on AMD (FP8 attention, MoE routing, fused linear+activation, custom samplers).
• Port a CUDA-specific training trick (FlashAttention variant, sequence parallelism, FP4) to ROCm/HIP.
• Add MI300X CI to an OSS project so regressions get caught upstream instead of in user reports.
• Benchmark and tune an OSS framework on MI300X, and contribute the perf fixes back.
• Build or improve OSS evaluation harnesses on AMD hardware.

Fine print
• Ongoing program, no fixed deadline — but credit awards are at TensorWave's discretion and almost certainly capped by an annual budget, so applying earlier in the year is probably better than later.
• An OSS license is required. Closed-source, source-available, or non-commercial-only licenses likely don't qualify; if your project is OSI-approved (Apache-2.0, MIT, BSD, GPL, MPL), you're fine.
• You actually have to ship. If you take credits and never land the upstream PR, don't expect a second grant.
• US-based provider. Workloads run in TensorWave's US data centers — fine for almost everyone, but worth knowing if you have data-residency constraints.
• Not a substitute for a normal free tier. If you just want a few hours to try MI300X without writing OSS code, this is the wrong program — try the AMD Developer Cloud instead (link below).
• The landing page has been observed to return 404 to bare HTTP requests (anti-bot / marketing-page behaviour). Use a real browser; if the form is genuinely down, ping TensorWave on LinkedIn / X.

Alternatives
• AMD Developer Cloud — AMD's own free MI300X access for developers; lower bar but smaller allocations: amd.com/en/developer/resources/cloud-access/amd-developer-cloud.html
• AMD Instinct GPU Eval via TensorWave — official AMD-sponsored evaluation track for testing models/workflows on MI300X: amd.com/en/products/accelerators/instinct/eval-request/tensorwave.html
• Prime Intellect Fast Compute Grants — $500–$100K credit grants for OSS / distributed AI work (hardware-agnostic, includes AMD via aggregated providers).
• Lambda / Nebius / HOSTKEY research grants — broader academic / OSS compute grant programs on NVIDIA hardware.
Sources:
• TensorWave Open Source Developer Program
• TensorWave (homepage / company)
• AMD MI300X for AI & ML Workloads on TensorWave
• AMD: Test AI models with Instinct GPUs on TensorWave
• AMD Developer Cloud
• TensorWave raises $100M for liquid-cooled AMD deployment (HPCwire)
• TensorWave MI300X pricing context (getdeploying.com)
• TensorWave Beyond CUDA Summit launch (Morningstar / PR Newswire)
• TensorWave on GitHub