MATS is a well-documented 12-week fellowship program that has trained 213 AI safety researchers, with strong career outcomes (80% of alumni working in alignment) and substantial research impact (160+ publications, 8,000+ citations). The program provides comprehensive support ($27k per scholar), and its alumni have made notable contributions to alignment research.
Other Data
| Dimension | Rating | Evidence | Assessor |
|---|---|---|---|
| career-impact | Very High | 80% of alumni work in AI alignment; placements at Anthropic, OpenAI, DeepMind | editorial |
| funding-per-scholar | $27k | $15k stipend + $12k compute resources, plus housing and meals | editorial |
| program-scale | High | 98 scholars and 57 mentors in most recent cohort (MATS 8.0, Summer 2025) | editorial |
| research-output | Strong | 160+ publications, 8,000+ citations, h-index of 40 over 4 years | editorial |
| selectivity | Very Competitive | ~15% acceptance rate; 40+ mentors with independent selection | editorial |
Related Wiki Pages
AI Alignment
Technical approaches to ensuring AI systems pursue intended goals and remain aligned with human values throughout training and deployment. Current ...
FAR AI
AI safety research nonprofit founded in 2022 by Adam Gleave and Karl Berzins, focusing on adversarial robustness, model evaluation, and alignment r...
AI Control
A defensive safety approach maintaining control over potentially misaligned AI systems through monitoring, containment, and redundancy, offering 40...
Redwood Research
A nonprofit AI safety and security research organization founded in 2021, known for pioneering AI Control research, developing causal scrubbing int...
Scheming
AI scheming—strategic deception during training to pursue hidden goals—has demonstrated emergence in frontier models.