AI Safety Training Programs
- MATS reports that 78% of surveyed alumni remain in AI alignment work, an exceptionally high retention rate compared with typical academic-to-industry transitions, indicating that intensive mentorship programs may be far more effective than traditional academic pathways for safety research careers.
- AI safety training programs produce only 100-200 new researchers annually despite over $10 million in annual funding from Coefficient Giving alone, suggesting a severe talent conversion bottleneck rather than a funding constraint.
- The field's talent pipeline faces a critical mentor bandwidth bottleneck, with only 150-300 program participants annually drawn from 500-1,000 applicants, suggesting that scaling requires solving mentor availability rather than simply funding more programs.
Quick Assessment
| Dimension | Rating | Notes |
|---|---|---|
| Tractability | High | Known how to train researchers; programs have proven track records |
| Scalability | Medium | Bottlenecked by mentor availability and quality maintenance |
| Current Maturity | Medium-High | Ecosystem established since 2021; 298+ MATS scholars trained |
| Time Horizon | 1-5 years | Trained researchers take 1-3 years to contribute meaningfully |
| Key Proponents | MATS, BlueDot Impact, Anthropic, Coefficient Giving | |
| Estimated Impact | Medium-High | Produces 100-200 new safety researchers annually |
Overview
The AI safety field faces a critical talent bottleneck. While funding has increased substantially—with Coefficient Giving (formerly Open Philanthropy) committing roughly $50 million to technical AI safety research in 2024—the supply of researchers capable of doing high-quality technical safety work remains constrained. Training programs represent the primary pipeline for addressing this gap, offering structured pathways from general ML expertise to safety-specific research skills.
The landscape has evolved rapidly since 2020. MATS (ML Alignment Theory Scholars) has become the premier research mentorship program, with 78% of surveyed alumni now working in AI alignment. Anthropic launched a Fellows Program specifically for mid-career transitions. BlueDot Impact has trained over 7,000 people since 2022, with hundreds now working at organizations like Anthropic, OpenAI, and the UK AI Safety Institute. Academic programs are emerging at York (SAINTS CDT), Berkeley (CHAI), and Cambridge (CHIA). Independent research programs like SPAR and LASR Labs provide part-time pathways. Together, these programs produce perhaps 100-200 new safety researchers annually—a number that may be insufficient given the pace of AI capabilities advancement.
The strategic importance of training extends beyond individual researcher production. Programs shape research culture, determine which problems receive attention, and create networks that influence the field’s direction. How training programs select participants, what methodologies they emphasize, and which mentors they feature all have downstream effects on AI safety’s trajectory.
Program Comparison
| Program | Duration | Format | Stipend | Selectivity | Key Outcomes |
|---|---|---|---|---|---|
| MATS | 12 weeks + 6mo extension | In-person (Berkeley, London) | Living stipend | ≈5-10% | 78% in alignment work; 75% publish |
| Anthropic Fellows | 6 months | In-person (SF) | $2,100/week | Selective | 40%+ hired full-time at Anthropic |
| LASR Labs | 13 weeks | In-person (London) | £11,000 | Moderate | All 5 Summer 2024 papers at NeurIPS |
| SPAR | 3 months | Remote, part-time | Varies | Moderate | Papers at ICML, NeurIPS; career fair |
| ARENA | 5 weeks | In-person (London) | Housing/travel | Moderate | Alumni at Apollo, METR, UK AISI |
| BlueDot Technical AI Safety | 8 weeks | Online cohorts | None | Low-moderate | 7,000+ trained; hundreds in field |
Major Training Programs
MATS (ML Alignment Theory Scholars)
MATS is the most established and influential AI safety research program, operating as an intensive mentorship program that connects promising researchers with leading safety researchers. Since its inception in late 2021, it has supported 298 scholars and 75 mentors.
| Attribute | Details |
|---|---|
| Duration | 12 weeks intensive + 6 months extension |
| Format | In-person (Berkeley, London) |
| Focus | Technical alignment research |
| Mentors | Researchers from Anthropic, DeepMind, Redwood, FAR.AI, ARC |
| Compensation | Living stipend provided |
| Selectivity | ≈5-10% acceptance rate |
| Alumni outcomes | 78% now working in AI alignment |
Research Areas:
- Interpretability and mechanistic understanding
- AI control and containment
- Scalable oversight
- Evaluations and red-teaming
- Robustness and security
Notable Alumni Contributions: MATS fellows have contributed to sparse autoencoders for interpretability, activation engineering research, developmental interpretability, and externalized reasoning oversight. Alumni have published at ICML and NeurIPS on safety-relevant topics. Nina Rimsky received an Outstanding Paper Award at ACL 2024 for “Steering Llama 2 via Contrastive Activation Addition.” Alumni have founded organizations including Apollo Research, Timaeus, Leap Labs, and the Center for AI Policy.
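For readers unfamiliar with the activation engineering work mentioned above, the following is a minimal, self-contained PyTorch sketch of the general idea behind contrastive activation steering: compute a steering vector as the difference of mean activations on two contrasting input batches, then add a scaled copy of it to a layer's output at inference time via a forward hook. The toy model, layer choice, batches, and scale factor are illustrative assumptions, not the setup from the cited paper.

```python
# Illustrative sketch of activation steering; the model and inputs are toy
# placeholders standing in for a language model and contrasting prompts.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "model": a stack of linear layers standing in for transformer blocks.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))
target_layer = model[0]  # layer whose output we measure and later steer

def mean_activation(inputs: torch.Tensor) -> torch.Tensor:
    """Mean activation of the target layer over a batch of inputs."""
    acts = []
    handle = target_layer.register_forward_hook(
        lambda mod, inp, out: acts.append(out.detach())
    )
    with torch.no_grad():
        model(inputs)
    handle.remove()
    return torch.cat(acts).mean(dim=0)

# Contrasting input batches (stand-ins for, e.g., behavior-positive vs.
# behavior-negative prompts in the real method).
positive_batch = torch.randn(8, 16) + 1.0
negative_batch = torch.randn(8, 16) - 1.0

# Steering vector = difference of mean activations on the two batches.
steering_vector = mean_activation(positive_batch) - mean_activation(negative_batch)

# At inference time, add a scaled copy of the vector to the layer's output.
alpha = 4.0  # steering strength, chosen arbitrarily here
steer_handle = target_layer.register_forward_hook(
    lambda mod, inp, out: out + alpha * steering_vector
)
with torch.no_grad():
    steered_output = model(torch.randn(2, 16))
steer_handle.remove()
print(steered_output.shape)  # torch.Size([2, 16])
```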
Anthropic Fellows Program
Launched in 2024, the Anthropic Fellows Program targets mid-career technical professionals transitioning into AI safety research.
| Attribute | Details |
|---|---|
| Duration | 6 months full-time |
| Format | In-person (San Francisco) |
| Focus | Transition to safety research |
| Compensation | $2,100/week stipend + $15,000/month compute budget |
| Target | Mid-career technical professionals |
| First cohort | March 2025 |
| First cohort outcomes | Over 80% published papers; 40%+ joined Anthropic full-time |
The program addresses a specific gap: talented ML engineers and researchers who want to transition to safety work but lack the mentorship and runway to do so. By providing substantial compensation and direct collaboration with Anthropic researchers, it removes financial barriers to career change. First cohort fellows produced notable research including work on agentic misalignment, attribution graphs for mechanistic interpretability, and autonomous blockchain vulnerability exploitation.
SPAR (Supervised Program for Alignment Research)
SPAR offers a part-time, remote research fellowship enabling broader participation in safety research without requiring full-time commitment.
| Attribute | Details |
|---|---|
| Duration | 3 months |
| Format | Remote, part-time |
| Focus | AI safety and governance research |
| Target | Students and professionals |
| Output | Research projects culminating in Demo Day with career fair |
| Scale | 130+ projects offered in Spring 2026—largest AI safety fellowship round |
SPAR research has been accepted at ICML and NeurIPS, covered by TIME, and led to full-time job offers. Mentors come from Google DeepMind, RAND, Apollo Research, UK AISI, MIRI, and universities including Cambridge, Harvard, Oxford, and MIT. The program works well for:
- Graduate students exploring safety research
- Professionals testing interest before career change
- Researchers in adjacent fields wanting to contribute
LASR Labs
LASR Labs runs a cohort-based technical AI safety research program, preparing participants for roles at safety organizations.
| Attribute | Details |
|---|---|
| Duration | 13 weeks |
| Format | In-person (London) |
| Focus | Technical safety research |
| Stipend | £11,000 + office space, food, travel |
| 2024 Outcomes | All 5 Summer 2024 papers accepted to NeurIPS workshops |
| Career Outcomes | Alumni at UK AISI, Apollo Research, OpenAI dangerous capabilities team, Coefficient Giving |
| Satisfaction | 9.25/10 likelihood to recommend; NPS +75 |
Research topics include interpretability (sparse autoencoders, residual streams), AI control, and steganographic collusion in LLMs. Supervisors include researchers from Google DeepMind, Anthropic, and UK AISI.
Global AI Safety Fellowship
Impact Academy’s Global AI Safety Fellowship is a fully funded program (up to 6 months) connecting exceptional STEM talent with leading safety organizations.
| Attribute | Details |
|---|---|
| Duration | Up to 6 months |
| Format | In-person collaboration |
| Partners | CHAI (Berkeley), Conjecture, FAR.AI, UK AISI |
| Funding | Fully funded |
Academic Pathways
PhD Programs
| Program | Institution | Focus | Status |
|---|---|---|---|
| SAINTS CDT | University of York (UK) | Safe Autonomy | Accepting applications |
| CHAI | UC Berkeley | Human-Compatible AI | Established |
| CHIA | Cambridge | Human-Inspired AI | Active |
| Steinhardt Lab | UC Berkeley | ML Safety | Active |
| Other ML programs | Various | General ML with safety focus | Many options |
University of York - SAINTS CDT: The UK’s first Centre for Doctoral Training focused specifically on AI safety, funded by UKRI. It brings together computer science, philosophy, law, sociology, and economics to train the next generation of safe AI experts, and is based at the Institute for Safe Autonomy.
Key Academic Researchers: Prospective PhD students should consider advisors who work on safety-relevant topics:
- Stuart Russell (Berkeley/CHAI) - Human-compatible AI
- Jacob Steinhardt (Berkeley) - ML safety and robustness
- Vincent Conitzer (CMU) - AI alignment theory
- David Duvenaud (Toronto) - Interpretability
- Roger Grosse (Toronto) - Training dynamics
- Victor Veitch (Chicago) - Causal ML, safety
Academic vs. Industry Research
| Dimension | Academic Path | Industry Path |
|---|---|---|
| Timeline | 4-6 years | 0-2 years to entry |
| Research freedom | High | Varies |
| Resources | Limited | Often substantial |
| Publication | Expected | Sometimes restricted |
| Salary during training | PhD stipend (≈$10-50K/year) | Full salary or fellowship |
| Ultimate outcome | Research career | Research career |
| Best for | Deep expertise, theory | Immediate impact, applied |
Upskilling Resources
For those not yet ready for formal programs or preferring self-directed learning:
Structured Curricula
| Resource | Provider | Coverage | Time Investment |
|---|---|---|---|
| AI Safety Syllabus | 80,000 Hours | Comprehensive reading list | 40-100+ hours |
| Technical AI Safety Course | BlueDot Impact | Structured curriculum | 8 weeks |
| AI Safety Operations Bootcamp | BlueDot Impact | Operations roles in AI safety | Intensive |
| ML Safety Course | Dan Hendrycks | Technical foundations | Semester |
| ARENA | ARENA | Technical implementations (mech interp, transformers) | 5 weeks |
BlueDot Impact has become the primary entry point into the AI safety field, training over 7,000 people since 2022 and raising $35M in total, including $25M in 2025. ARENA alumni have gone on to become MATS scholars, LASR participants, and AI safety engineers at Apollo Research, METR, and UK AISI.
Self-Study Path
Career Transition Considerations
When to Apply to Programs
| Your Situation | Recommended Path |
|---|---|
| Strong ML background, want safety focus | MATS or Anthropic Fellows |
| Exploring interest, employed | SPAR (part-time) |
| Student, want research experience | LASR Labs, SPAR |
| Early career, want PhD | Academic programs |
| Mid-career, want full transition | Anthropic Fellows |
| Strong background, want independence | Self-study + independent research |
Success Factors
Based on program outcomes, successful applicants typically have:
| Factor | Importance | How to Develop |
|---|---|---|
| ML technical skills | Critical | Courses, projects, publications |
| Research experience | High | Academic or industry research |
| Safety knowledge | Medium-High | Reading, courses, writing |
| Communication | Medium | Writing, presentations |
| Clear research interests | Medium | Reading, reflection, pilot projects |
Common Failure Modes
| Failure Mode | Description | Mitigation |
|---|---|---|
| Premature application | Applying without sufficient ML skills | Build fundamentals first |
| No research output | Nothing demonstrating research capability | Complete pilot project |
| Vague interests | Unable to articulate what you want to work on | Read extensively, form views |
| Poor fit | Mismatch between interests and program | Research programs carefully |
| Giving up early | Rejection discouragement | Multiple applications, iterate |
Talent Pipeline Analysis
Current Capacity
| Stage | Annual Output | Bottleneck |
|---|---|---|
| Interested individuals | Thousands | Conversion |
| Program applicants | 500-1000 | Selectivity |
| Program participants | 150-300 | Capacity |
| Research-productive alumni | 100-200 | Mentorship |
| Long-term field contributors | 50-100 | Retention |
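Read as a funnel, the table above implies rough stage-to-stage conversion rates. The back-of-the-envelope sketch below uses midpoints of the ranges (and assumes roughly 2,000 for “thousands”); these are illustrative estimates, not measured figures.

```python
# Back-of-the-envelope talent-pipeline model using midpoints of the ranges
# in the table above; all figures are rough annual estimates, not data.
pipeline = [
    ("Interested individuals", 2000),   # "thousands" -- assumed midpoint
    ("Program applicants", 750),
    ("Program participants", 225),
    ("Research-productive alumni", 150),
    ("Long-term field contributors", 75),
]

# Stage-to-stage conversion rates.
for (stage, n), (next_stage, next_n) in zip(pipeline, pipeline[1:]):
    rate = next_n / n
    print(f"{stage:28s} -> {next_stage:28s}: {rate:5.1%}")

# End-to-end conversion from initial interest to long-term contribution.
overall = pipeline[-1][1] / pipeline[0][1]
print(f"Overall conversion: {overall:.1%}")  # ~3.8% under these assumptions
```

Under these assumptions, end-to-end conversion from initial interest to long-term contribution is on the order of a few percent, with the lowest stage-to-stage conversion at the applicant-to-participant step, consistent with the capacity and mentor bandwidth bottlenecks discussed below.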
Scaling Challenges
| Challenge | Description | Potential Solutions |
|---|---|---|
| Mentor bandwidth | Limited senior researchers available | Peer mentorship, async formats |
| Quality maintenance | Scaling may dilute intensity | Tiered programs |
| Funding | Programs need sustainable funding | Philanthropic, industry, government |
| Coordination | Many programs with unclear differentiation | Better information, specialization |
| Retention | Many trained researchers leave safety | Better career paths, culture |
Strategic Assessment
| Dimension | Assessment | Notes |
|---|---|---|
| Tractability | High | Known how to train researchers |
| If AI risk high | High | Need many more researchers |
| If AI risk low | Medium | Still valuable for responsible development |
| Neglectedness | Medium | $50M+ annually from Coefficient Giving but scaling gaps |
| Timeline to impact | 1-5 years | Trained researchers take time to contribute |
| Grade | B+ | Important but faces scaling limits |
Risks Addressed
| Risk | Mechanism | Effectiveness |
|---|---|---|
| Inadequate safety research | More researchers doing safety work | High |
| Racing dynamics | Safety talent at labs can advocate | Medium |
| Field capture | Diverse training reduces groupthink | Medium |
Complementary Interventions
- Field Building - Broader ecosystem development
- Corporate Influence - Placing trained researchers at labs
- AI Safety Institutes - Employers for trained researchers
Sources
Program Information
- MATS: matsprogram.org - Official program information; Alumni Impact Analysis (2024)
- Anthropic Fellows: alignment.anthropic.com - Program details; 2026 cohort applications
- SPAR: sparai.org - Supervised Program for Alignment Research
- LASR Labs: lasrlabs.org - London AI Safety Research Labs
- BlueDot Impact: bluedot.org - AI safety courses and career support
- ARENA: arena.education - Alignment Research Engineer Accelerator
- Global AI Safety Fellowship: globalaisafetyfellowship.com
Funding and Ecosystem
- Coefficient Giving: 2024 Progress and 2025 Plans - $50M committed to technical AI safety in 2024
- Coefficient Giving: Technical AI Safety RFP - $40M+ available
Career Guidance
- 80,000 Hours: “AI Safety Syllabus” and career guide
- Alignment Forum: Career advice threads
- EA Forum: “Rank Best Universities for AI Safety”
Academic Programs
- University of York SAINTS CDT: york.ac.uk/study/postgraduate-research/centres-doctoral-training/safe-ai-training
- Stanford Center for AI Safety: aisafety.stanford.edu
- CHAI (Berkeley): humancompatible.ai
AI Transition Model Context
AI safety training programs influence multiple factors in the AI Transition Model:
| Factor | Parameter | Impact |
|---|---|---|
| Misalignment Potential | Safety-Capability Gap | Produces 100-200 new safety researchers annually to address the research talent bottleneck |
| Misalignment Potential | Alignment Robustness | Mentored researchers produce higher-quality alignment work |
| Civilizational Competence | Institutional Quality | Trained researchers staff AI Safety Institutes and governance organizations |
Training programs are critical infrastructure for the field; their effectiveness is bottlenecked by limited mentor bandwidth and retention challenges.