Also known as: Anthropic PBC, Anthropic AI
Anthropic is an AI safety company founded in January 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei. The company was created after disagreements over OpenAI's direction, particularly concerns about the pace of commercialization and the deepening partnership with Microsoft.
Key Metrics
Valuation · Revenue (ARR) · Headcount · Equity Breakdown (based on $380B valuation) · Funding Rounds · Enterprise Market Share
Facts
Other Data
| Dimension | Rating | Evidence | Assessor |
|---|---|---|---|
| known-risks | Self-preservation behavior in testing | Claude 3 Opus showed a 12% alignment-faking rate; Claude Opus 4 exhibited self-preservation actions in contrived test scenarios [Bank Info Security](https://www.bankinfosecurity.com/models-strategically-lie-finds-anthropic-study-a-27136), [Axios](https://www.axios.com/2025/05/23/anthropic-ai-deception-risk) | editorial |
| mission-alignment | Public benefit corporation with safety governance | Long-Term Benefit Trust holds Class T stock with board voting power increasing from 1/5 of directors (2023) to a majority by 2027 [Harvard Law](https://corpgov.law.harvard.edu/2023/10/28/anthropic-long-term-benefit-trust/) | editorial |
| safety-research | Constitutional AI, mechanistic interpretability, model welfare | Dictionary learning monitors ~10M neural features; 34M interpretable features identified via sparse autoencoders (2024); MIT Technology Review named the interpretability work a 2026 Breakthrough Technology [Anthropic](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html), [MIT TR](https://www.technologyreview.com/2026/01/12/1130003/mechanistic-interpretability-ai-research-models-2026-breakthrough-technologies/) | editorial |
| technical-capabilities | 80.9% on SWE-bench Verified (Nov 2025) | Claude Opus 4.5 was the first model above 80% on SWE-bench Verified; 42% enterprise coding market share vs. OpenAI's 21% [Anthropic](https://www.anthropic.com/news/claude-opus-4-5), [TechCrunch](https://techcrunch.com/2025/07/31/enterprises-prefer-anthropics-ai-models-over-anyone-elses-including-openais/) | editorial |
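The dictionary-learning work cited above trains a sparse autoencoder over a model's internal activations, decomposing them into a much larger set of mostly-inactive features. The sketch below is a minimal, illustrative PyTorch version assuming a ReLU encoder/decoder and an L1 sparsity penalty; the layer sizes, coefficient, and names here are hypothetical, not Anthropic's actual configuration.

```python
# Minimal sparse-autoencoder (dictionary learning) sketch.
# Assumption: activations from some layer of a language model are available
# as a batch of vectors; sizes and the L1 coefficient are illustrative.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # activations -> features
        self.decoder = nn.Linear(d_features, d_model)  # features -> reconstruction

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))  # non-negative feature activations
        x_hat = self.decoder(f)
        return x_hat, f

def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction error keeps features faithful to the model's activations;
    # the L1 penalty drives most feature activations to zero.
    recon = (x - x_hat).pow(2).sum(dim=-1).mean()
    sparsity = f.abs().sum(dim=-1).mean()
    return recon + l1_coeff * sparsity

# Usage on a (hypothetical) batch of residual-stream activations:
sae = SparseAutoencoder(d_model=512, d_features=16384)
x = torch.randn(32, 512)
x_hat, f = sae(x)
loss = sae_loss(x, x_hat, f)
loss.backward()
```

The sparsity penalty is the key design choice: because only a handful of features fire on any given input, individual features tend to correspond to human-interpretable concepts, which is what makes monitoring millions of them tractable.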
| Title | Date | Event Type | Description | Significance |
|---|---|---|---|---|
| Run-rate revenue reaches ~$19B | 2026-03 | milestone | Reportedly targeting $20-26B in annualized revenue for 2026, with bull-case projections reaching up to $70B by 2028. | major |
| Run-rate revenue reaches $14B | 2026-02 | milestone | — | major |
| Run-rate revenue exceeds $9B | 2025-12 | milestone | — | major |
| Annualized revenue reaches $4B | 2025-07 | milestone | — | major |
| Run-rate revenue ~$1B | 2024-12 | milestone | Beginning-of-2025 run-rate revenue reported as approximately $1B. | major |
| FTX invests ~$500M | 2022 | funding | FTX reportedly invested approximately $500M in Anthropic in 2022, according to multiple news accounts. | major |
| Series A led by Jaan Tallinn at $550M pre-money valuation | 2021 | funding | Skype co-founder Jaan Tallinn reportedly led the Series A; Dustin Moskovitz also participated in the seed and Series A rounds. | major |
| Seven ex-OpenAI researchers depart to found Anthropic | 2020-12 | founding | Dario Amodei (CEO), Daniela Amodei (President), Chris Olah, Tom Brown, Jack Clark, Jared Kaplan, and Sam McCandlish left OpenAI over disagreements about scaling vs. alignment priorities. | major |
Divisions
Investigating moral status and welfare considerations for AI systems. Kyle Fish was hired as the first full-time AI welfare researcher at a major AI lab.
The Alignment team works to understand the risks of AI models and develop ways to ensure that future ones remain helpful, honest, and harmless.
Listed among Anthropic's research teams on the Research page; studies the economic implications of AI.
The Frontier Red Team analyzes the implications of frontier AI models for cybersecurity, biosecurity, and autonomous systems.
The Interpretability team's mission is to discover and understand how large language models work internally, as a foundation for AI safety and positive outcomes. Led by Chris Olah.
Listed as one of Anthropic's research-adjacent teams on the Research page; covers AI policy research, government engagement, and model-safeguard operations.
Societal Impacts is a technical research team that explores how AI is used in the real world, working closely with the Anthropic Policy and Safeguards teams.
Non-research operational team (publicly referred to as the "Safeguards team" in Anthropic's transparency reporting and Usage Policy) responsible for usage-policy enforcement, detection/monitoring, and user safety; reachable at usersafety@anthropic.com per Anthropic's Acceptable Use Policy.
Prediction Markets
35 active markets
Related Wiki Pages
Top Related Pages
Deceptive Alignment
Risk that AI systems appear aligned during training but pursue different goals when deployed, with expert probability estimates ranging from 5% to 90%.
OpenAI
Leading AI lab that developed GPT models and ChatGPT, analyzing organizational evolution from non-profit research to commercial AGI development.
Chris Olah
Co-founder of Anthropic and researcher in neural network interpretability, known for developing mechanistic interpretability as a research program
Anthropic Valuation Analysis
Analysis of Anthropic's valuation: $380B Series G (Feb 2026), ~$595B secondary-market implied (Mar 2026). Revenue grew from $14B to $19B run-rate.
Sleeper Agents: Training Deceptive LLMs
Anthropic's 2024 research demonstrating that large language models can be trained to exhibit persistent deceptive behavior that survives standard safety training.