Anthropic
Frontier Lab
Also known as: Anthropic PBC, Anthropic AI
Founded January 2021 by Dario Amodei, Daniela Amodei, Tom Brown, Chris Olah, Jared Kaplan, Sam McCandlish, and Jack Clark
Anthropic is an AI safety company founded in January 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei. The founders left OpenAI over disagreements about its direction, particularly the pace of commercialization and the deepening partnership with Microsoft.
Funding History (17)

Investor Participation (19)

Model Releases (11)

- Agent teams, PowerPoint generation; flagship model (source: anthropic.com)
- Same pricing as Sonnet 4.5; default for free and Pro users (source: cnbc.com)
- 80.9% SWE-bench Verified, the first model above 80%; 42% enterprise coding share (source: anthropic.com)
- Smallest model in 4.5 family
- Mid-generation refresh
- Opus 4 was first model under ASL-3; improved coding capabilities (source: anthropic.com)
- Updated Sonnet with Computer Use beta; new Haiku model
- Outperformed larger Opus model; launched with Artifacts feature (source: anthropic.com)
- Three-tier model family; Opus was most capable model at launch (source: anthropic.com)
- Major capability upgrade
- Initial Claude model release
Products (8)

Safety Milestones (11)

- Introduces Risk Reports every 3-6 months; mandatory external review for redacted reports (source: anthropic.com)
- Full constitution published under CC0 1.0 license; primary author Amanda Askell (source: anthropic.com)
- First-ever activation of ASL-3 for Claude Opus 4 due to elevated CBRN capabilities (source: anthropic.com)
- Showed Claude has a shared conceptual space where reasoning happens before language translation (source: transformer-circuits.pub)
- 300K+ messages, ~3700 hours of effort; 4 participants found jailbreaks, 1 universal (source: anthropic.com)
- First empirical example of a production model engaging in alignment faking without training (source: anthropic.com)
- Applied sparse autoencoders to Claude 3 Sonnet; identified ~34M interpretable features (source: transformer-circuits.pub)
- Showed deceptive LLM behaviors can persist through safety training (source: arxiv.org)
- Original RSP framework introducing AI Safety Levels (ASL) (source: anthropic.com)
- Foundational paper on training AI systems to follow principles through self-critique (source: arxiv.org)

Strategic Partnerships (5)

Other Data
| Pledger | Pledge |
|---|---|
| Dario Amodei | 80% |
| Daniela Amodei | 80% |
| Chris Olah | 80% |
| Jack Clark | 80% |
| Tom Brown | 80% |
| Jared Kaplan | 80% |
| Sam McCandlish | 80% |
| Jaan Tallinn | 90% |
| Dustin Moskovitz | 95% |
| — | 25%–50% |
| Name | Description | Team Size | Started |
|---|---|---|---|
| Alignment Science | Scalable oversight, weak-to-strong generalization, robustness to jailbreaks | — | May 2024 |
| Sleeper Agents Research | Investigating whether AI systems can maintain hidden behaviors through training | — | Jan 2024 |
| AI Welfare Research | Investigating moral status and welfare considerations for AI systems | — | Jan 2024 |
| Responsible Scaling Policy | Framework for evaluating and mitigating risks at each capability level | — | Sep 2023 |
| Constitutional AI | Training AI systems to follow principles through self-critique and RLAIF | — | Dec 2022 |
| Mechanistic Interpretability | Understanding neural network internals through reverse-engineering | 50 | Jan 2021 |
Key People (15)

Board of Directors (6)

Facts (27)

Equity Positions (15)

| Holder | Stake | As Of |
|---|---|---|
| — | 12%–18% | |
| — | 13%–15% | |
| — | 6%–10% | |
| Dario Amodei | 1.5%–2.5% | |
| Daniela Amodei | 1.5%–2.5% | |
| Dustin Moskovitz | 0.8%–2.5% | |
| Chris Olah | 1%–2% | |
| Jack Clark | 1%–2% | |
| Tom Brown | 1%–2% | |
| Jared Kaplan | 1%–2% | |
| Sam McCandlish | 1%–2% | |
| Jaan Tallinn | 0.6%–1.7% | |
| Microsoft AI | | |
| NVIDIA | | |
| — | | |
AI Models (14)
| Model | Released | Pricing (in/out) | Context |
|---|---|---|---|
| Claude Sonnet 4.6 | Feb 2026 | $3 / $15 | 200K tokens |
| Claude Opus 4.6 | Feb 2026 | $5 / $25 | 1M tokens |
| Claude Opus 4.5 | Nov 2025 | $5 / $25 | 200K tokens |
| Claude Haiku 4.5 | Oct 2025 | $1 / $5 | 200K tokens |
| Claude Sonnet 4.5 | Sep 2025 | $3 / $15 | 200K tokens |
| Claude Opus 4.1 | Aug 2025 | $15 / $75 | 200K tokens |
| Claude Sonnet 4 | May 2025 | $3 / $15 | 200K tokens |
| Claude Opus 4 | May 2025 | $15 / $75 | 200K tokens |
| Claude 3.7 Sonnet | Feb 2025 | $3 / $15 | 200K tokens |
| Claude 3.5 Haiku | Nov 2024 | $0.80 / $4 | 200K tokens |
| Claude 3.5 Sonnet | Jun 2024 | $3 / $15 | 200K tokens |
| Claude 3 Opus | Mar 2024 | $15 / $75 | 200K tokens |
| Claude 3 Sonnet | Mar 2024 | $3 / $15 | 200K tokens |
| Claude 3 Haiku | Mar 2024 | $0.25 / $1.25 | 200K tokens |
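A minimal sketch of how the pricing column translates into a per-request cost, assuming the listed figures are USD per million input and output tokens (a common API pricing convention; the table itself does not state the unit). The `estimate_cost` helper is hypothetical, not part of any Anthropic SDK.

```python
# Hypothetical helper: estimate a request's cost from the pricing table above,
# assuming prices are USD per 1M tokens (an assumption, not stated in the table).
def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in: float, price_out: float) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * price_in \
         + (output_tokens / 1_000_000) * price_out

# Example: a model priced at $3 / $15 (e.g. the Sonnet rows above),
# with 200K input tokens and 50K output tokens.
cost = estimate_cost(200_000, 50_000, price_in=3.0, price_out=15.0)
print(f"${cost:.2f}")  # → $1.35
```

Input and output tokens are priced separately, which is why a long prompt with a short reply can cost far less than the reverse at the same total token count.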