Longterm Wiki

Anthropic

Frontier AI Lab
Founded Jan 2021 (5 years old) · HQ: San Francisco, CA · anthropic.com

Also known as: Anthropic PBC, Anthropic AI


Anthropic is an AI safety company founded in January 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei. The company was created after disagreements with OpenAI's direction, particularly concerns about the pace of commercialization and the deepening partnership with Microsoft.

Revenue
$19B
as of Mar 2026
Valuation
$380B
as of Feb 2026
Headcount
4,074
as of Jan 2026
Total Funding Raised
$67B
as of Feb 2026
AI Models
15

Key Metrics

Valuation

$380B (Feb 2026)
Valuation chart: post-money valuation grew from $550M (Series A, 2021) to $380B (Series G, 2026) across Series A through G.

Revenue (ARR)

$19B (Mar 2026)
Revenue (ARR) chart: annual run rate grew from $100M in 2023 to $19B in 2026.
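The growth rate implied by these two endpoints can be checked with a quick calculation. This is a sketch under the assumption of three full compounding years between the $100M and $19B figures, not a figure reported on this page:

```python
# Implied compound annual growth rate (CAGR) for ARR, assuming
# $100M at the start of 2023 and $19B three compounding years
# later (2026). The three-year window is an assumption.
start_b = 0.1   # $0.1B = $100M ARR in 2023
end_b = 19.0    # $19B ARR in 2026
years = 3

cagr = (end_b / start_b) ** (1 / years) - 1
print(f"Implied CAGR ≈ {cagr:.0%}")  # roughly 475% per year
```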

Headcount

4.1K (Jan 2026)
Headcount chart: employees grew from 192 in 2022 to 4.1K in 2026.

Equity Breakdown

$380B valuation
Employee Equity Pool: 12–18% ($57B)
Google / Alphabet: 13–15% ($53B)
Sam McCandlish: 2–3% ($9.5B)
Daniela Amodei: 2–3% ($9.5B)
Jared Kaplan: 2–3% ($9.5B)
Dario Amodei: 2–3% ($9.5B)
Jack Clark: 2–3% ($9.5B)
Chris Olah: 2–3% ($9.5B)
Tom Brown: 2–3% ($9.5B)
Dustin Moskovitz: 0.8–2.5% ($6.3B)
Jaan Tallinn: 0.6–1.7% ($4.4B)

Based on $380B valuation
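The dollar values in the breakdown appear to be the midpoint of each stake range multiplied by the $380B valuation; that is an inference from the numbers, not a documented formula. A minimal sketch under that assumption:

```python
# Check whether each equity dollar value equals the midpoint of
# the stake range times the $380B valuation (an assumption
# inferred from the table, not a stated methodology).
VALUATION_B = 380  # $380B post-money valuation

stakes = {  # name: (low %, high %)
    "Employee Equity Pool": (12.0, 18.0),
    "Google / Alphabet": (13.0, 15.0),
    "Dario Amodei": (2.0, 3.0),
    "Dustin Moskovitz": (0.8, 2.5),
    "Jaan Tallinn": (0.6, 1.7),
}

for name, (lo, hi) in stakes.items():
    midpoint_pct = (lo + hi) / 2            # e.g. 2.5% for a 2-3% stake
    value_b = VALUATION_B * midpoint_pct / 100
    print(f"{name}: ~${value_b:.1f}B")
```

The midpoints reproduce the table: 15% of $380B is $57B for the employee pool, 2.5% is $9.5B for each founder, and 1.65% is roughly $6.3B for Dustin Moskovitz.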

Funding Rounds

$48B total raised
Series A (May 2021): $124M
Series B (Apr 2022): $580M
Series C (May 2023): $450M
Series D (Feb 2024): $750M
Series E (Mar 2025): $3.5B
Series F (Sep 2025): $13B
Series G (Feb 2026): $30B
Total: $48B
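The $48B chart total can be reproduced by summing the listed rounds (the page's separate $67B "Total Funding Raised" figure is larger, so it evidently includes rounds not shown in this chart):

```python
# Sum the listed round sizes (in $B) to reproduce the chart's
# $48B "total raised" figure. Round sizes are from this page.
rounds = {
    "Series A (May 2021)": 0.124,
    "Series B (Apr 2022)": 0.580,
    "Series C (May 2023)": 0.450,
    "Series D (Feb 2024)": 0.750,
    "Series E (Mar 2025)": 3.5,
    "Series F (Sep 2025)": 13.0,
    "Series G (Feb 2026)": 30.0,
}

total_b = sum(rounds.values())
print(f"Total: ${total_b:.1f}B")  # prints "Total: $48.4B"
```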

Enterprise Market Share

32% (Anthropic)

Facts

31 entries
Financial
Gross Margin: 63%
Revenue: $19B
Secondary Market Valuation: $595B
Equity Value: $57B
Equity Stake: 2.5%
Product Revenue: $2.5B
Valuation: $380B
Employee Tender Offer: $5.5B
Total Funding Raised: $67B
Headcount: 4,074
Revenue Guidance: $20B
Infrastructure Investment: $50B
Annual Cash Burn: $3B
Enterprise Market Share: 32%
Customer Concentration: 25%
Retention Rate: 88%
Biographical
Wikipedia: https://en.wikipedia.org/wiki/Anthropic
Products & Usage
Monthly API Calls: 25 billion
Business Customers: 300,000
Monthly Active Users: 18.9 million
Organization
Founded Date: Jan 2021
Country: United States
Legal Structure: Public benefit corporation
Headquarters: San Francisco, CA
Safety & Research
AI Safety Level: ASL-3 (Opus 4, Opus 4.5, Opus 4.6), ASL-2 (Sonnet, Haiku)
Safety Researchers: 265
Interpretability Team Size: 50
Safety Staffing Ratio: 25%
General
Website: https://www.anthropic.com/

Other Data

Entity Assessments
4 entries
known-risks: Self-preservation behavior in testing. Claude 3 Opus showed a 12% alignment-faking rate; Claude Opus 4 exhibited self-preservation actions in contrived test scenarios ([Bank Info Security](https://www.bankinfosecurity.com/models-strategically-lie-finds-anthropic-study-a-27136), [Axios](https://www.axios.com/2025/05/23/anthropic-ai-deception-risk)). Assessor: editorial.
mission-alignment: Public benefit corporation with safety governance. The Long-Term Benefit Trust holds Class T stock, with board voting power increasing from 1/5 of directors (2023) to a majority by 2027 ([Harvard Law](https://corpgov.law.harvard.edu/2023/10/28/anthropic-long-term-benefit-trust/)). Assessor: editorial.
safety-research: Constitutional AI, mechanistic interpretability, model welfare. Dictionary learning monitors ~10M neural features; 34M interpretable features were identified via sparse autoencoders (2024); MIT Technology Review named the interpretability work a 2026 Breakthrough Technology ([Anthropic](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html), [MIT TR](https://www.technologyreview.com/2026/01/12/1130003/mechanistic-interpretability-ai-research-models-2026-breakthrough-technologies/)). Assessor: editorial.
technical-capabilities: 80.9% on SWE-bench Verified (Nov 2025). Claude Opus 4.5 was the first model above 80% on SWE-bench Verified; 42% enterprise coding market share vs. OpenAI's 21% ([Anthropic](https://www.anthropic.com/news/claude-opus-4-5), [TechCrunch](https://techcrunch.com/2025/07/31/enterprises-prefer-anthropics-ai-models-over-anyone-elses-including-openais/)). Assessor: editorial.
Entity Events
8 entries
2026-03 (milestone, major): Run-rate revenue reaches ~$19B. Reportedly targeting $20–26B annualized revenue for 2026, with bull-case projections reaching up to $70B by 2028.
2026-02 (milestone, major): Run-rate revenue reaches $14B.
2025-12 (milestone, major): Run-rate revenue exceeds $9B.
2025-07 (milestone, major): Annualized revenue $4B.
2024-12 (milestone, major): Run-rate revenue ~$1B. Beginning-of-2025 run-rate revenue reported as approximately $1B.
2022 (funding, major): FTX invests ~$500M. FTX reportedly invested approximately $500M in Anthropic in 2022, according to multiple news accounts.
2021 (funding, major): Series A led by Jaan Tallinn at $550M pre-money valuation. Skype co-founder Jaan Tallinn reportedly led the Series A; Dustin Moskovitz also participated in the seed and Series A rounds.
2020-12 (founding, major): Founded by 7 ex-OpenAI researchers. Dario Amodei (CEO), Daniela Amodei (President), Chris Olah, Tom Brown, Jack Clark, Jared Kaplan, and Sam McCandlish departed OpenAI over disagreements about scaling vs. alignment priorities.

Divisions

8
Team: Model Welfare

Investigating moral status and welfare considerations for AI systems. Kyle Fish hired as first full-time AI welfare researcher at a major AI lab.

Team: Alignment

The Alignment team works to understand the risks of AI models and develop ways to ensure that future ones remain helpful, honest, and harmless.

Team

Listed among Anthropic's research teams on the Research page; studies the economic implications of AI.

Team: Frontier Red Team

The Frontier Red Team analyzes the implications of frontier AI models for cybersecurity, biosecurity, and autonomous systems.

Team: Interpretability

The Interpretability team's mission is to discover and understand how large language models work internally, as a foundation for AI safety and positive outcomes. Led by Chris Olah.

Team

Listed as one of Anthropic's research-adjacent teams on the Research page; AI policy research, government engagement, and model-safeguard operations.

Team: Societal Impacts

Societal Impacts is a technical research team that explores how AI is used in the real world, working closely with the Anthropic Policy and Safeguards teams.

Team: Safeguards

Non-research operational team (publicly referred to as the "Safeguards team" in Anthropic's transparency reporting and Usage Policy) responsible for usage-policy enforcement, detection/monitoring, and user safety; reachable at usersafety@anthropic.com per Anthropic's Acceptable Use Policy.

Prediction Markets

35 active

Related Wiki Pages

Top Related Pages

Safety Research

Anthropic Core Views

Approaches

Weak-to-Strong Generalization
Constitutional AI

Analysis

Anthropic (Funder)

Policy

Voluntary AI Safety Commitments
California SB 53

Other

Anthropic Stakeholders
Dario Amodei
Scalable Oversight
Interpretability
Claude

Organizations

US AI Safety Institute (now CAISI)

Historical

Anthropic-Pentagon Standoff (2026)
Mainstream Era

Concepts

AI Welfare and Digital Minds
Agentic AI
Situational Awareness
Large Language Models

Key Debates

AI Alignment Research Agendas
Technical AI Safety Research