Longterm Wiki

MATS Summer 2026 Program


MATS is a major AI safety fellowship program that has become a significant talent pipeline for the field, connecting emerging researchers with senior mentors at top safety organizations.

Metadata

Importance: 55/100 · homepage

Summary

MATS (Machine Learning Alignment Theory Scholars) Summer 2026 is a fellowship program running June-August 2026, connecting 120 fellows with 100 mentors from leading AI safety organizations including Anthropic, UK AISI, Redwood Research, and ARC. Fellows collaborate on AI safety research across streams including empirical alignment, interpretability, policy & strategy, technical governance, and compute infrastructure, with potential 6+ month extensions.

Key Points

  • Largest MATS program to date with 120 fellows and 100 mentors across multiple research tracks
  • Research streams cover empirical alignment, interpretability, policy & strategy, technical governance, and compute governance
  • Partner organizations include Anthropic Alignment Science, UK AISI, Redwood Research, ARC, and LawZero
  • Multi-phase structure: general application, evaluations (coding/work tests, interviews), admissions, main program, and optional 6+ month extension
  • Applications for Summer 2026 are closed; Autumn 2026 applications open late April

Cited by 1 page

  • MATS ML Alignment Theory Scholars program (Type: Organization, Quality: 60.0)

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 25 KB
MATS Summer 2026 
 


The Summer 2026 program will run from June through August. It will be the largest MATS program to date, with 120 fellows and 100 mentors. Fellows will be connected with mentors or organizational research groups, such as Anthropic's Alignment Science team, UK AISI, Redwood Research, ARC, and LawZero, to collaborate on a research project over the summer. Some fellows will be offered a 6+ month extension to continue this collaboration.

Applications are now closed. Applications for the Autumn 2026 program will open in late April. Sign up here to be notified when the next round opens.

Program phases

 Key dates for the application and admissions timeline

1. Applications

General Application (December 16th to January 18th)

Applicants fill out a general application, which should take 1-2 hours. Applications are due by January 18th.

 Additional Evaluations (Late January through March) 

Applicants who advance in the application process go through additional evaluations, including reference checks, coding tests, work tests, and interviews. Which evaluations you undergo depends on the mentors and streams you apply to.

 Admissions Decisions (Early April) 
Selected applicants are notified of their acceptance and anticipated mentor later in the application cycle.

[Summer 2026 timeline graphic]

 2. Main Program 
 
 3. Extension Phase 
 
 4. Post-program 
 
 
Summer 2026 Streams

Tracks: Empirical, Policy and Strategy, Theory, Technical Governance, Compute Infrastructure

MATS supports researchers in a variety of research tracks, which include technical governance, empirical, policy & strategy, theory, and compute governance. MATS fellows participate in a research stream consisting of their mentor(s) and other mentees. You can specify which tracks and streams to apply to in the general application. Each stream provides its own research agenda, methodology, and mentorship focus. You can also view this list as a grid here.

 Neel Nanda

Empirical

Neel takes a pragmatic approach to interpretability: identify what stands between where we are now and where we want to be by AGI, then focus on the subset of resulting research problems that can be tractably studied on today's models. This can look like diving deep into the internals of the model, or using simpler black-box methods like reading and carefully intervening on the chain of thought - whatever is the right tool for the job. This could look like studying how to detect deception, understanding why a model took a seemingly concerning action, or fixing weak points in other areas of safety, e.g. using interpretability to stop models realising they are being tested. You can learn more about Neel's approach in this

... (truncated, 25 KB total)
Resource ID: 29a77b87ee480244 | Stable ID: sid_dWshE3icFH