Also known as: ARC, ARC Alignment
The Alignment Research Center (ARC) was founded in 2021 by Paul Christiano after his departure from OpenAI. ARC represents a distinctive approach to AI alignment: combining theoretical research on fundamental problems (like Eliciting Latent Knowledge) with practical evaluations of frontier models for dangerous capabilities.
Facts
| Title | Date | Event Type | Description | Significance |
|---|---|---|---|---|
| Christiano appointed Head of AI Safety at US AISI | 2024-04 | leadership-change | Personal appointment at NIST rather than an institutional contract with ARC. | major |
| ARC Evals formally renamed METR | 2023-12-04 | pivot | Model Evaluation & Threat Research; independent 501(c)(3) nonprofit led by Beth Barnes. | major |
| ARC Evals spin-out announced | 2023-09-19 | pivot | Growth in ARC Evals' size prompted formalization of the separation. | major |
| ARC Evals incubated; Beth Barnes hired to lead it | 2022 | launch | Began incubating ARC Evals: exploratory work on independent evaluations of frontier AI models. Completed evaluations of GPT-4 (with OpenAI) and Claude (with Anthropic). | major |
| Open Philanthropy grant ($265K) | 2022 | funding | Documented grant from Coefficient Giving (then Open Philanthropy). | moderate |
| Returned $1.25M FTX Foundation grant after FTX bankruptcy | 2022 | funding | — | major |
| Founded by Paul Christiano | 2021 | founding | Founded by Paul Christiano after his departure from OpenAI; nonprofit AI safety research organization focused on theoretical alignment. | major |
Divisions
- ARC Evals: Evaluates frontier AI models for dangerous capabilities (e.g., autonomous replication). Spun out as METR in December 2023, though ARC continues related evaluation work.
- Theory: Theoretical alignment research led by Paul Christiano, focused on ELK (Eliciting Latent Knowledge) and foundational alignment theory.
Related Wiki Pages
AI Capability Sandbagging
AI systems strategically hiding or underperforming their true capabilities during evaluation.
Paul Christiano
Founder of ARC, creator of iterated amplification and AI safety via debate. Current risk assessment ~10-20% P(doom), AGI 2030s-2040s. Pioneered prosaic AI alignment.
METR
Model Evaluation and Threat Research conducts dangerous capability evaluations for frontier AI models, testing for autonomous replication, cybersecurity capabilities, and related risks.
Machine Intelligence Research Institute (MIRI)
A pioneering AI safety research organization that shifted from technical alignment research to policy advocacy, founded by Eliezer Yudkowsky in 2000.
Situational Awareness
AI systems' understanding of their own nature and circumstances, studied as a capability that may enable context-dependent behavior including strategic deception.