Longterm Wiki

MATS (ML Alignment Theory Scholars) Program

Safety Organization

MATS is a well-documented 12-week fellowship that has trained 213 AI safety researchers, with strong career outcomes (80% of alumni work in alignment) and substantial research impact (160+ publications, 8,000+ citations). The program provides comprehensive support (roughly $27k per scholar), and its alumni have made notable contributions to alignment research.

Other Data

Entity Assessments (5 entries)

| Dimension | Rating | Evidence | Assessor |
| --- | --- | --- | --- |
| career-impact | Very High | 80% of alumni work in AI alignment; placements at Anthropic, OpenAI, DeepMind | editorial |
| funding-per-scholar | $27k | $15k stipend + $12k compute resources, plus housing and meals | editorial |
| program-scale | High | 98 scholars and 57 mentors in the most recent cohort (MATS 8.0, Summer 2025) | editorial |
| research-output | Strong | 160+ publications, 8,000+ citations, h-index of 40 over 4 years | editorial |
| selectivity | Very Competitive | ~15% acceptance rate; 40+ mentors with independent selection | editorial |

Related Wiki Pages


Approaches

Representation Engineering

Analysis

Short AI Timeline Policy Implications

Organizations

Anthropic, OpenAI, Apollo Research, Survival and Flourishing Fund (SFF), Alignment Research Center (ARC), Coefficient Giving

Concepts

Situational Awareness, Safety Orgs Overview

Other

Scalable Oversight, Interpretability, Ajeya Cotra, Evan Hubinger