Anthropic Stakeholders
Data as of March 2026
All dollar values below are computed at Anthropic's $380 billion valuation (Series G post-money, Feb 2026; the second-largest venture deal ever, behind OpenAI's $40B round). Secondary/derivatives markets imply ≈$595B (Ventuals, Mar 2026); at that pricing, every dollar value below would scale by approximately 1.57x. A $5–6B employee tender offer launched Feb 2026 at $350B pre-money. For detailed analysis of EA-aligned capital flows, see Anthropic (Funder). For valuation scenarios and a secondary-market breakdown, see Anthropic Valuation Analysis.
Stakeholder Ownership & Philanthropy
The table below shows each stakeholder's estimated equity stake, the fraction pledged to charitable giving ("Pledge %"), the estimated probability those donations go to EA-aligned causes ("EA Align %"), and two derived columns: expected donated dollars and expected EA-effective giving. All values scale automatically with the current valuation.
Anthropic Stakeholder Ownership & Philanthropy (as of 2026-02). Dollar values at the $380B valuation. Pledge % = fraction of equity pledged to charity. EA Align % = estimated probability donations go to EA-aligned causes.

| Stakeholder | Type | Stake | Value | Pledge % | EA Align % | Exp. Donated | Exp. EA-Effective | Notes |
|---|---|---|---|---|---|---|---|---|
| Sam McCandlish | Investor | 2–3% | $7.6B–$11.4B | — | — | — | — | Alignment researcher; no publicly documented EA connections |
| Dustin Moskovitz | Investor | 0.8–2.5% | $3.0B–$9.5B | — | — | — | — | $500M already in Good Ventures nonprofit vehicle |
| Employee Equity Pool | Employees | 12–18% | $45.6B–$68.4B | 25–50% | 40–70% (Medium) | $11.4B–$34.2B | $4.6B–$23.9B | Pledge rates vary: pre-2025 hires up to 50% with 3:1 match; post-2024 hires 25% with 1:1 match |
| Daniela Amodei | Investor | 2–3% | $7.6B–$11.4B | — | — | — | — | Married to Holden Karnofsky (GiveWell co-founder) |
| Jared Kaplan | Investor | 2–3% | $7.6B–$11.4B | — | — | — | — | Scaling laws pioneer; safety-motivated co-founder; no documented EA pledge |
| Dario Amodei | Investor | 2–3% | $7.6B–$11.4B | — | — | — | — | GWWC signatory; early GiveWell supporter |
| Jack Clark | Investor | 2–3% | $7.6B–$11.4B | — | — | — | — | Former OpenAI Policy Director; responsible AI advocate; EA-adjacent framing |
| Chris Olah | Investor | 2–3% | $7.6B–$11.4B | — | — | — | — | Interpretability pioneer; participated in EA events; safety-focused |
| Series G Institutional | Institutional | Undisclosed | — | — | — | — | — | GIC, Coatue, D.E. Shaw, Dragoneer, Founders Fund, ICONIQ, MGX |
| Google / Alphabet | Investor | 13–15% | $49.4B–$57.0B | — | — | — | — | $3.3B invested across 3 rounds; no philanthropic pledge |
| Tom Brown | Investor | 2–3% | $7.6B–$11.4B | — | — | — | — | GPT-3 lead author; chose Anthropic's safety mission over other options |
| Jaan Tallinn | Investor | 0.6–1.7% | $2.3B–$6.5B | — | — | — | — | Led Series A; Skype co-founder; major AI safety funder |
| Totals (pledged stakeholders) | | | | | | $11.4B–$34.2B | $4.6B–$23.9B | |

Ranges reflect uncertainty. Pledges are not legally binding.
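The derived columns above are simple products of the preceding columns: value = stake × valuation, expected donated = value × pledge %, expected EA-effective = expected donated × EA align %. A minimal sketch of that arithmetic, using the employee pool row (a hypothetical helper, not the wiki's actual `AnthropicStakeholdersTable` component):

```python
# Sketch of the table's derived-column arithmetic (hypothetical helper,
# not the wiki's actual AnthropicStakeholdersTable component).

VALUATION_B = 380  # Series G post-money valuation, in $B

def derived_columns(stake, pledge, ea_align, valuation_b=VALUATION_B):
    """Return (value, expected donated, expected EA-effective) ranges in $B.

    Each input is a (low, high) pair of fractions, e.g. (0.12, 0.18).
    """
    value = tuple(s * valuation_b for s in stake)
    donated = tuple(v * p for v, p in zip(value, pledge))
    ea_effective = tuple(d * a for d, a in zip(donated, ea_align))
    return value, donated, ea_effective

# Employee equity pool row: 12-18% stake, 25-50% pledged, 40-70% EA-aligned
value, donated, ea = derived_columns((0.12, 0.18), (0.25, 0.50), (0.40, 0.70))
print([round(v, 1) for v in value])    # [45.6, 68.4]
print([round(d, 1) for d in donated])  # [11.4, 34.2]
print([round(e, 1) for e in ea])       # [4.6, 23.9]
```

Because every dollar figure is linear in the valuation, rerunning with `valuation_b=595` reproduces the ≈1.57x scaling noted at the top of the page.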
Total funding raised: $67 billion across 17 rounds (Series A–G equity rounds plus Amazon's $10.75B and Google's $3.3B strategic investments; excludes the not-fully-deployed Microsoft/Nvidia "up to $15B" commitment; the FTX ~$500M investment from 2022 was sold to creditors after FTX's collapse). Current valuation: $380 billion (Series G, Feb 2026).
Founder Donation Pledges
All seven co-founders have pledged to donate 80% of their equity (Fortune). At current valuations that comes to $43–64B if fully honored, derived from $53–80B in combined founder equity.
| Founder | EA Connection | Pledge Fulfillment Likelihood |
|---|---|---|
| Dario Amodei | Strong — GWWC signatory, early GiveWell supporter, former roommate of Holden Karnofsky | High |
| Daniela Amodei | Strong — married to Holden Karnofsky (GiveWell co-founder, now at Anthropic) | High |
| Chris Olah | Moderate — interpretability pioneer; has participated in EA events; work closely aligned with EA safety priorities | Medium |
| Jack Clark | Moderate — responsible AI advocate; founded Import AI newsletter; EA-adjacent framing in policy work | Medium |
| Tom Brown | Weak/unknown — GPT-3 lead author; chose Anthropic's safety mission; no documented EA pledge | Low-Medium |
| Jared Kaplan | Weak/unknown — scaling laws pioneer; safety-motivated co-founder; no documented EA pledge | Low-Medium |
| Sam McCandlish | Weak/unknown — alignment researcher; no publicly documented EA connections | Low-Medium |
Only 2 of 7 founders have documented strong EA connections. The remaining 5 all chose to leave OpenAI specifically to found a safety-focused lab, suggesting at minimum EA-adjacent motivations, but there are no documented pledges or EA Forum activity for Brown, Kaplan, and McCandlish. Founder pledges are not legally binding; enforcement relies on reputational cost. Historical Giving Pledge data shows only 36% of deceased pledgers met their commitment (IPS).
EA-Aligned Capital Summary
| Source | Gross Value | Risk-Adjusted Value | Reliability |
|---|---|---|---|
| Strongly EA-aligned founders (Dario, Daniela) | $11–17B | $6–12B | Pledge-dependent |
| Safety-focused founders (Olah, Clark) | $11–17B | $3–8B | Uncertain cause direction |
| Non-EA founders (Brown, Kaplan, McCandlish) | $17–25B | $2–7B | Unlikely EA |
| Jaan Tallinn | $2–6B | $1.4–5.4B | Very high (>90%) |
| Dustin Moskovitz | $3–9B | $2.7–9B | Certain (already committed) |
| Employee pledges + matching | $20–40B | $16–38B | Legally bound (in DAFs) |
| Non-pledged EA employees | ≈$2B | $0.4–0.8B | Moderate |
| Total | $66–116B | $27–76B | — |
Employee capital in donor-advised funds ($16–38B risk-adjusted) is the most reliable source: legally bound, though donors retain discretion over which charities receive grants. The historical 3:1 matching program (employees pledge up to 50%, Anthropic matches 3x) has been reduced to 1:1 at 25% for employees hired after 2024 (EA Forum; Anthropic Careers).
Funding Timeline
Funding History (7 rounds, chronological)

| Date | Raised | Valuation | Lead Investor | Notes |
|---|---|---|---|---|
| 2021-05 | $124 million | $550 million (pre-money) | Dustin Moskovitz / Jaan Tallinn | Dustin Moskovitz (Facebook co-founder) and Jaan Tallinn (Skype co-founder) were among the seed investors and led the Series A. |
| 2022-04 | $580 million | $4 billion | FTX / Sam Bankman-Fried | Series B led by Sam Bankman-Fried and Caroline Ellison of FTX. Post-money valuation of $4B. FTX subsequently collapsed in 2022. |
| 2023-05 | $450 million | — | Spark Capital | Series C led by Spark Capital; participants included Google, Menlo Ventures, Salesforce Ventures, Microsoft, and others. Google separately agreed in 2023 to invest up to $2B to acquire a ~10% stake. |
| 2024-02 | $750 million | $18.4 billion | Menlo Ventures | Series D led by Menlo Ventures via an SPV structure. Pre-money valuation of ~$15B, post-money approximately $18.4B. Menlo had first invested in the Series C. |
| 2025-03-03 | $3.5 billion | $61.5 billion | Lightspeed Venture Partners | Series E led by Lightspeed ($1B contributed); participants included Bessemer Venture Partners, Cisco Investments, D1 Capital Partners, Fidelity Management & Research, General Catalyst, Jane Street, Menlo Ventures, and Salesforce Ventures. Post-money valuation of $61.5B. |
| 2025-09-02 | $13 billion | $183 billion | ICONIQ | Series F led by ICONIQ, co-led by Fidelity Management & Research and Lightspeed. Significant investors included Altimeter, Baillie Gifford, BlackRock, Blackstone, Coatue, GIC, Goldman Sachs Alternatives, General Catalyst, Insight Partners, Jane Street, Ontario Teachers' Pension Plan, Qatar Investment Authority, TPG, T. Rowe Price, and XN. Post-money valuation of $183B. |
| 2026-02-12 | $30 billion | $380 billion | GIC / Coatue | Series G led by GIC and Coatue, co-led by D.E. Shaw Ventures, Dragoneer, Founders Fund, ICONIQ, and MGX; includes a portion of previously announced Microsoft and NVIDIA investments. Significant investors included Accel, Sequoia Capital, Bessemer, General Catalyst, Goldman Sachs, JPMorganChase, Lightspeed, Menlo, Morgan Stanley, Temasek, TPG, and others. Post-money valuation of $380B, the second-largest private venture funding deal of all time. |
Total raised: $67 billion across 17 rounds (Reuters; Anthropic).
Google has invested $3.3B across 3 rounds for approximately 14% of equity. Amazon has invested $10.75B and serves as Anthropic's primary cloud partner; its exact stake is undisclosed.
Capital Deployment Timeline
| Source | Earliest | Peak Flow | Status |
|---|---|---|---|
| Employee DAFs | 2025–2026 | 2027–2030 | Already transferring |
| Moskovitz | 2026–2027 | 2027–2030 | $500M in nonprofit vehicle |
| Tallinn | 2027–2028 | 2028–2032 | Likely post-IPO |
| Founders | 2028–2030 | 2030–2040 | Depends on IPO + pledge timing |
IPO expected 2026–2027 (Kalshi: 72% chance Anthropic IPOs before OpenAI). Lock-up periods typically delay capital 6–12 months post-IPO. See Anthropic IPO.
Methodology & Assumptions
All figures in this table are estimates with significant uncertainty. Key assumptions:
Equity stakes:

- Founders (2–3% each): estimated from reported total founder equity (~14–21% for 7 co-founders). No individual founder stakes have been publicly disclosed; the range reflects typical dilution across 7 people with differential contributions.
- Google (~14%): derived from Google's $3.3B total investment and Anthropic's $18B Series D valuation; cross-checked against Bloomberg reporting.
- Tallinn (0.6–1.7%), Moskovitz (0.8–2.5%): estimated from the Series A lead position and subsequent dilution; no public disclosures.
- Employee pool (12–18%): typical early-stage AI lab employee equity range, diluted from founding allocations through Series G.

Pledge rates:

- Founders (80%): documented in Fortune reporting on the public pledge announcement.
- Employee pool (25–50%): lower bound from the current 1:1 matching at 25% for post-2024 hires; upper bound from the historical 3:1 matching program at 50% for pre-2025 hires. See EA Forum.
- Tallinn (90%), Moskovitz (95%): based on documented giving track records and explicit commitments.

EA alignment estimates (most uncertain column), based on public statements, organizational affiliations, EA Forum activity, documented giving patterns, and whether donations have gone to EA-identified causes:

- Dario/Daniela: GWWC signatories with documented EA community ties → 80–90%.
- Chris Olah: interpretability work is core EA-aligned; EA event participation → 40–60%.
- Jack Clark: EA-adjacent responsible-AI framing, less documented EA cause alignment → 30–50%.
- Brown/Kaplan/McCandlish: safety motivation for founding Anthropic but no documented EA cause preferences → 15–30%.
- Employee pool: blended estimate across a population with high EA representation at senior levels → 40–70%.
These are Fermi estimates for planning purposes, not verified figures. The "EA-Effective" column represents expected value and should be treated with wide error bars.
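One way to make those wide error bars concrete is to propagate the ranges by simulation rather than by multiplying endpoints. A sketch, assuming purely for illustration that each range is an independent uniform distribution (an assumption this page does not itself make):

```python
# Monte Carlo propagation of the employee-pool Fermi estimate.
# Treating each range as an independent uniform distribution is an
# illustrative assumption, not the page's stated methodology.
import random

def ea_effective_sample(valuation_b=380):
    """One draw of expected EA-effective giving ($B) for the employee pool."""
    stake  = random.uniform(0.12, 0.18)   # equity stake range from the table
    pledge = random.uniform(0.25, 0.50)   # pledge-rate range
    align  = random.uniform(0.40, 0.70)   # EA-alignment probability range
    return valuation_b * stake * pledge * align

random.seed(0)
samples = sorted(ea_effective_sample() for _ in range(100_000))
p5, p50, p95 = samples[5_000], samples[50_000], samples[95_000]
print(f"90% interval: ${p5:.1f}B-${p95:.1f}B (median ${p50:.1f}B)")
```

Because a product of several independent ranges concentrates probability away from the extremes, the simulated 90% interval comes out noticeably narrower than the endpoint product of $4.6B–$23.9B shown in the table; the endpoints correspond to all three inputs landing at their bounds simultaneously.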
Related Pages
Anthropic (Funder)AnalysisAnthropic (Funder)Comprehensive model of EA-aligned philanthropic capital at Anthropic. At $380B valuation (Series G, Feb 2026, $30B raised): $27-76B risk-adjusted EA capital expected. Total funding raised exceeds $...Quality: 65/100 — Full analysis of EA-aligned capital, Squiggle models, and scenario analysis
AnthropicOrganizationAnthropicComprehensive reference page on Anthropic covering financials ($380B valuation, $14B ARR at Series G growing to $19B by March 2026), safety research (Constitutional AI, mechanistic interpretability...Quality: 74/100 — Company overview, products, safety research
Anthropic Valuation AnalysisAnalysisAnthropic Valuation AnalysisValuation analysis updated March 2026. Series G closed at $380B (Feb 2026) with $14B run-rate; by March 2026, secondary/derivatives markets price Anthropic at ~$595B implied (Ventuals), a 57% premi...Quality: 72/100 — Valuation scenarios and competitive positioning
Anthropic IPOAnalysisAnthropic IPOAnthropic is actively preparing for a potential 2026 IPO with concrete steps like hiring Wilson Sonsini and conducting bank discussions, though timeline uncertainty remains with prediction markets ...Quality: 65/100 — IPO timeline and liquidity analysis
Pre-IPO DAF TransfersAnalysisAnthropic Pre-IPO DAF TransfersAnalyzes Anthropic charitable giving mechanisms from both the equity holder's and philanthropic community's perspective. The employee matching program (3:1 at 50% historically, 1:1 at 25% currently...Quality: 58/100 — Tax optimization and DAF mechanics
Founder Pledge InterventionsAnalysisAnthropic Founder Pledges: Interventions to Increase Follow-ThroughEvaluates interventions to make Anthropic founders' 80% donation pledges more likely to be fulfilled. Distinguishes collaborative interventions founders would welcome (DAF tax planning, foundation ...Quality: 45/100 — Interventions to increase pledge fulfillment
References

- Fortune article reporting on Google's approximately 10% investment stake in Anthropic, reflecting the trend of major tech companies making significant investments in leading AI safety-focused labs. The page is no longer accessible, returning a 404 error.
- Announcement of Amazon's $4 billion investment in Anthropic, expanding their cloud partnership and making Amazon Web Services the primary cloud provider for Anthropic's AI workloads, and reinforcing the role of major cloud providers in shaping the competitive AI landscape.
- Fortune reporting on the co-founders' pledge: Anthropic's co-founders, including CEO Dario Amodei, have committed to donating 80% of their wealth to philanthropic causes, focused on addressing inequality and navigating the societal impacts of the AI revolution. The pledge reflects the effective-altruism-influenced values that have shaped Anthropic's culture and mission.
- Institute for Policy Studies analysis of the Giving Pledge's 15-year track record, finding that most original signatories have grown wealthier rather than giving away their fortunes, with contributions largely warehoused in private foundations and donor-advised funds; the report argues the Pledge is structurally unfulfillable.
- Anthropic's careers page, outlining the company's mission to build safe and beneficial AI and emphasizing a "race to the top" safety culture.
- Anthropic's Series G funding announcement, highlighting the growing capital flowing into frontier AI labs and the commercial viability of safety-oriented AI research organizations.