Anthropic Fellows Program 2026 - Funded AI Safety Research
Source: https://alignment.anthropic.com/2025/anthropic-fellows-program-2026
Description
Four-month funded research fellowship for engineers and researchers working on Anthropic's highest-priority AI safety and security questions. Fellows receive a $3,850/week stipend (or equivalent in GBP/CAD), approximately $15,000/month in compute funding, direct 1:1 mentorship from Anthropic researchers, and access to shared workspaces in Berkeley or London. No PhD, prior ML experience, or published papers required. Over 80% of previous fellows produced publishable research, and over 40% joined Anthropic full-time afterward.
Note: the original seed entry listed the stipend as $2,100/week and the compute budget as $10,000/month. The actual figures from Anthropic's official announcement are $3,850 USD/week and ~$15,000/month in compute funding.
How to apply:
1. Go to the Anthropic Fellows Program announcement and read the program details
2. Decide which track fits you best: AI Safety Fellow or AI Security Fellow (apply to only one)
3. Click "Apply here" on the announcement page — this redirects to Constellation's application portal (Constellation is Anthropic's official recruiting partner)
4. Fill out the application form with your background, research interests, and motivation
5. Applications go through initial screening, followed by technical assessments and interviews, then a research discussion round
6. If accepted, Anthropic mentors pitch project ideas to you, and you choose and shape your project in collaboration with your mentor
7. The fellowship begins at the cohort start date (May or July 2026)

Important:
• Applications for the May 2026 cohort are closed
• Applications are accepted on a rolling basis for July 2026 and beyond
• The initial application deadline was January 12, 2026 for the first round, but rolling applications continue
• No visa sponsorship — you must already have (or independently obtain) full-time work authorization in the US, UK, or Canada
• You can participate remotely from anywhere within the US, UK, or Canada
| Track | Focus | Application Link |
|---|---|---|
| AI Safety Fellow | Scalable oversight, adversarial robustness, model organisms, mechanistic interpretability, model welfare | Apply here |
| AI Security Fellow | Defensive AI for cybersecurity, vulnerability research, securing code and infrastructure with AI | Apply here |

Both tracks share the same compensation, duration, and structure. Apply only to the one that best matches your interests and background.
AI Safety track:
• Scalable oversight — methods to supervise AI systems as they become more capable
• Adversarial robustness and AI control — preventing misuse and maintaining alignment under adversarial conditions
• Model organisms — studying misalignment in controlled settings
• Mechanistic interpretability — understanding internal model representations and circuits
• Model welfare — investigating the moral status of AI systems

AI Security track:
• Defensive AI — using AI models to discover vulnerabilities and secure code
• Blockchain smart contract exploits — AI-assisted security research
• Control evaluations — testing how well safety measures hold up
• Offensive security research — pentesting and vulnerability research applied to AI systems
| Benefit | Details |
|---|---|
| Weekly stipend | $3,850 USD / £2,310 GBP / $4,300 CAD |
| Compute funding | ~$15,000/month for research expenses |
| Mentorship | 1:1 guidance from senior Anthropic researchers |
| Workspace | Shared offices in Berkeley, CA or London, UK |
| Remote option | Available for US, UK, or Canada residents |
| Duration | 4 months full-time |
| Research output | Aim to produce a publishable paper |
| Career pipeline | Over 40% of fellows received full-time Anthropic offers |
Required:
• Fluent Python programming ability
• Available to work full-time for 4 months
• Work authorization in the US, UK, or Canada (no visa sponsorship)
• Strong technical background in computer science, mathematics, physics, cybersecurity, or a related field
• Genuine interest in reducing catastrophic risks from advanced AI

Valued but not mandatory:
• Prior ML research experience
• Published papers or a PhD
• Experience with deep learning frameworks
• Open-source contributions (especially in LLM- or security-adjacent repos)
• For the Security track: pentesting, CVE reports, bug bounty experience

Key message from Anthropic: they care much more about your ability to execute on research than about credentials. Successful fellows have come from physics, mathematics, computer science, cybersecurity, and other quantitative backgrounds.
Mentors are senior Anthropic researchers working on alignment and interpretability (specific names are listed on the announcement page). Potential mentors include Nicholas Carlini, Keri Warr, Evyatar Ben Asher, Keane Lucas, and Newton Cheng.

Mentors pitch project ideas to fellows, who then choose and shape their projects collaboratively.
| Cohort | Start Date | Application Status |
|---|---|---|
| May 2026 | May 2026 | Closed |
| July 2026 | July 2026 | Open (rolling basis) |
| Future cohorts | TBD | Open (rolling basis) |
• Over 80% of fellows in the first cohort produced published papers
• Over 40% joined Anthropic full-time after the fellowship
• Many others were supported to work full-time on AI safety at other organizations
• Published work includes research on agentic misalignment, subliminal learning, rapid response to ASL-3 jailbreaks, and open-source circuits
• One fellow developed a method for rapid response to new ASL-3 jailbreaks that became a key component of Anthropic's deployment safeguards
• This is not a typical internship — it is a structured research program with the expectation of producing publishable work. Come with genuine research motivation, not just a line on your resume
• Constellation manages logistics — they handle applications, interviews, and run the Berkeley workspace. Expect communication from Constellation, not Anthropic, during the application process
• No credit card or payment required — this is a fully funded fellowship; you receive money, not spend it
• Compare with similar programs — the Astra Fellowship by Constellation is a related program worth exploring if this one doesn't fit
• Rolling applications — don't wait until the last minute, but if you missed the January deadline, you can still apply for July 2026 and future cohorts
• Remote is fine — you don't have to relocate to Berkeley or London, but workspace access is available if you want it
Sources:
• Anthropic Fellows Program 2026 Announcement
• AI Safety Fellow Application (Greenhouse)
• AI Security Fellow Application (Greenhouse)
• Introducing the Anthropic Fellows Program (2024)
• Anthropic Fellows Program Coverage (AImpactful)
• Anthropic Announcement on X