Longterm Wiki

Alignment Research Center (ARC)

Safety Organization
Founded: Oct 2021 (4 years old) · HQ: Berkeley, CA · alignment.org

Also known as: ARC, ARC Alignment


The Alignment Research Center (ARC) was founded in 2021 by Paul Christiano after his departure from OpenAI. ARC represents a distinctive approach to AI alignment: combining theoretical research on fundamental problems (like Eliciting Latent Knowledge) with practical evaluations of frontier models for dangerous capabilities.

Revenue: $10M (as of 2023)
Annual Expenses: $6.3M (as of 2023)
Net Assets: $7.1M (as of 2023)

Key Metrics

Revenue (ARR): $10M (2023)

[Chart: annual run rate grew from $476K in 2021 to $10M in 2023]

Facts

Financial
Net Assets: $7.1M
Revenue: $10M
Annual Expenses: $6.3M
People
Founded By: Paul Christiano
Organization
Legal Structure: 501(c)(3) nonprofit
Headquarters: Berkeley, CA
Founded Date: Oct 2021
Biographical
Wikipedia: https://en.wikipedia.org/wiki/Alignment_Research_Center
General
Website: http://alignment.org/

Other Data

Entity Events
7 entries
| Title | Date | Event Type | Description | Significance |
|---|---|---|---|---|
| Christiano appointed Head of AI Safety at US AISI | 2024-04 | leadership-change | Personal appointment at NIST rather than an institutional contract with ARC. | major |
| ARC Evals formally renamed METR | 2023-12-04 | pivot | Model Evaluation & Threat Research; independent 501(c)(3) nonprofit led by Beth Barnes. | major |
| ARC Evals spin-out announced | 2023-09-19 | pivot | Growth in ARC Evals' size prompted formalization of the separation. | major |
| ARC Evals incubated; Beth Barnes hired to lead it | 2022 | launch | Began incubating ARC Evals for exploratory work on independent evaluations of frontier AI models; completed evaluations of GPT-4 (with OpenAI) and Claude (with Anthropic). | major |
| Open Philanthropy grant ($265K) | 2022 | funding | Documented grant from Open Philanthropy (now Coefficient Giving). | moderate |
| Returned $1.25M FTX Foundation grant after FTX bankruptcy | 2022 | funding | | major |
| Founded by Paul Christiano | 2021 | founding | Founded by Paul Christiano after his departure from OpenAI as a nonprofit AI safety research organization focused on theoretical alignment. | major |

Divisions

2 divisions

ARC Evals

Evaluates frontier AI models for dangerous capabilities (e.g., autonomous replication). Spun out as METR in late 2023, though ARC continues related evaluation work.

ARC Theory

Theoretical alignment research led by Paul Christiano, focused on ELK (Eliciting Latent Knowledge) and foundational alignment theory.

Prediction Markets

12 active

Related Wiki Pages

Top Related Pages

Safety Research

Anthropic Core Views

Approaches

Capability Elicitation · AI Alignment

Analysis

Deceptive Alignment Decomposition Model · Model Organisms of Misalignment

Policy

EU AI Act

Organizations

Anthropic

Risks

Deceptive Alignment

Other

Scalable Oversight · AI Control · Sam Bankman-Fried · ARC-AGI · ARC-AGI-2

Concepts

EA Epistemic Failures in the FTX Era · FTX Collapse: Lessons for EA Funding Resilience · Large Language Models

Key Debates

AI Accident Risk Cruxes · Why Alignment Might Be Hard · Is AI Existential Risk Real?

Historical

Mainstream Era