MATS Research Program
matsprogram.org
MATS is one of the primary talent pipelines into the AI safety field; wiki users interested in career transitions or field-building efforts should consider this a key institutional reference.
Metadata
Importance: 62/100 · homepage
Summary
MATS is an intensive fellowship program designed to help researchers transition into AI safety careers, offering structured mentorship from leading researchers, stipends, and community integration. Since 2021, it has trained over 446 researchers who have collectively produced 150+ research papers and gone on to work at top AI safety organizations.
Key Points
- Structured fellowship program pairing emerging researchers with senior AI safety mentors from organizations like Anthropic, ARC, and Redwood Research.
- Over 446 participants since 2021, producing 150+ research papers across alignment, interpretability, governance, and related areas.
- Provides funding (stipends), co-working space, and community support to reduce barriers to entering AI safety research.
- Key pipeline for field-building: alumni have joined Anthropic, DeepMind, ARC, MIRI, and other leading AI safety organizations.
- Covers both technical AI safety tracks (interpretability, alignment) and governance/policy tracks.
Review
The MATS (ML Alignment Theory Scholars) program represents a strategic approach to addressing the talent gap in AI safety research. By providing a structured 12-week program with in-person cohorts in Berkeley and London, MATS creates a comprehensive ecosystem in which emerging researchers develop technical skills, build networks, and contribute to critical alignment challenges.
The program's distinctive strengths include its holistic support model: mentorship from leading researchers, $15k stipends, $12k compute budgets, and workspace infrastructure. Its track record is strong, with 80% of alumni now working in AI alignment and 10% founding new organizations, demonstrating its effectiveness at rapidly upskilling and integrating talent into the AI safety landscape. Its multifaceted approach spans empirical research, policy strategy, theoretical foundations, and technical governance, positioning it as a crucial catalyst for developing the human capital needed to address potential risks from advanced AI systems.
Cited by 8 pages
| Page | Type | Quality |
|---|---|---|
| AI Accident Risk Cruxes | Crux | 67.0 |
| Capabilities-to-Safety Pipeline Model | Analysis | 73.0 |
| AI Safety Researcher Gap Model | Analysis | 67.0 |
| Worldview-Intervention Mapping | Analysis | 62.0 |
| Long-Term Future Fund (LTFF) | Organization | 56.0 |
| MATS ML Alignment Theory Scholars program | Organization | 60.0 |
| AI Safety Field Building and Community | Crux | 0.0 |
| AI Safety Training Programs | Approach | 70.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 21, 2026 · 21 KB
MATS Research
Launch your career in AI alignment & security
The MATS Program is an independent research and educational seminar program that connects talented researchers with top mentors in the fields of AI alignment, transparency, and security. The program runs for 12 weeks with in-person cohorts in Berkeley and London, where MATS fellows conduct research while attending talks, workshops, and networking events with other members of the AI research community. Top-performing fellows can extend their impactful research for an additional 6 months with continued funding, mentorship, and community support.
Learn more: Summer 2026
Robert Krzyzanowski, Poseidon Research
Before MATS, I had a strong interest in alignment generally but few skillsets relevant to the frontier of research and little idea of how to get started. Directly thanks to MATS, I achieved: (1) a relatively complete understanding of the structure of the most important questions and associated communities in the AI safety space, (2) legible and significant research outputs that gave me the confidence to continue switching into a full-time career in the space, and (3) access to a broad base of present and future collaborators with a very wide range of perspectives. On this third point, the talent exhibited at MATS is fearsome and highly motivated to solve the problems. It would not be at all surprising to me if, when the dust settles and the grand project of alignment reaches eventual fruition, it becomes apparent that a double-digit percentage of the credit for the key problems and solutions belongs to MATS alumni.
I am an independent AI safety researcher currently focused on mechanistic interpretability and training process transparency.
Thomas Larsen, AI Futures Project
MATS helped me upskill in alignment at a >3x rate relative to the counterfactual, which was independently learning infra-Bayesianism because I liked math and didn't have an inside view on which parts of alignment were important. MATS caused me to develop a much deeper view of the alignment problem, and afterwards I felt able to focus on the most important parts of the problem and the biggest sources of confusion within myself.
Thomas took part in the Summer 2022 Cohort with John Wentworth and the Winter 2023 Cohort with Nate Soares. During this time, he wrote a detailed overview of AI safety approaches. He continued his SERI MATS work at MIRI before leaving to found the Center for AI Policy, an AI safety advocacy organization. He is currently a researcher at the AI Futures Project and a guest fund manager at the LTFF.
Nina Panickssery, Anthropic
Participating in MATS was a great way to rapidly upskill in AI safety research, learn about the field, and meet other researchers/collaborators.
... (truncated, 21 KB total)
Resource ID: ba3a8bd9c8404d7b | Stable ID: sid_Q23PdF97HE