
Long-Term Benefit Trust (Anthropic)

| Aspect | Assessment |
|---|---|
| What it is | Independent body of five trustees with authority to elect a growing portion (ultimately a majority) of Anthropic’s board |
| Key innovation | Creates a “different kind of stockholder” insulated from financial incentives to balance public benefit with profit |
| Current power | Can appoint up to 3 of 5 board members, though had appointed only 1 as of late 2025 |
| Timeline | Designed to reach majority board control within 4 years of establishment (≈2027) |
| Main limitation | Can be amended by stockholder supermajority, creating a potential override mechanism |
| Status | Experimental governance structure in operation since 2023 |

| Source | Link |
|---|---|
| Official Website | anthropic.com |
| Wikipedia | en.wikipedia.org |

The Long-Term Benefit Trust (LTBT) is an independent governance mechanism established by Anthropic to align corporate decision-making with the long-term benefit of humanity alongside traditional stockholder interests. The Trust comprises five financially disinterested trustees who hold special Class T Common Stock, granting them authority to elect and remove an increasing portion of Anthropic’s board of directors—ultimately a majority within four years of establishment.[1][2]

Paired with Anthropic’s Delaware Public Benefit Corporation status, the LTBT represents an experimental approach to addressing what the company characterizes as “unprecedentedly large externalities” from AI development, including national security risks, economic disruption, and fundamental threats to humanity.[1] The structure is designed to insulate key governance decisions from short-term profit pressures at “key junctures where we expect the consequences of our decisions to reach far beyond Anthropic.”[1]

The Trust operates as what Anthropic calls “a different kind of stockholder,” creating accountability mechanisms independent of financial returns while maintaining a working relationship with company leadership through consultation requirements and information-sharing agreements.[1][2] However, the structure has faced significant criticism within the AI safety community regarding its actual enforcement power and the company’s decision not to publish the full Trust Agreement.[3][4]

The Long-Term Benefit Trust emerged from concerns among Anthropic’s founders, including siblings Daniela Amodei (President) and Dario Amodei (CEO), about the lack of external constraints on AI development comparable to those governing other powerful technologies.[1][5] The founders believed that while AI safety aligned with long-term profitability, the potential for extreme events and catastrophic risks required governance mechanisms that could appropriately weigh public interests against commercial pressures.[1]

An earlier version called the “Long-Term Benefit Committee” was outlined in Anthropic’s Series A investment documents in 2021, but its activation was delayed to allow refinement into the current LTBT structure.[1][2] This delay enabled what Anthropic describes as a year-long search process and legal “red-teaming” to improve the governance framework.[1]

The LTBT is organized as a Delaware “purpose trust”—a trust managed for achieving a purpose rather than benefiting specific beneficiaries.[2] This legal form allows the Trust to pursue the mission of “responsibly develop[ing] and maintain[ing] advanced AI for the long-term benefit of humanity” without being constrained by traditional beneficiary-focused trust law.[2]

At the close of Anthropic’s Series C funding round, the company amended its corporate charter to create Class T Common Stock held exclusively by the Trust.[1][2] This special class of shares grants trustees the power to elect directors according to a phased timeline: initially one of five board members, increasing to two, and eventually three (a majority) based on time and fundraising milestones.[1][2]

  • 2021: Anthropic founded as Delaware Public Benefit Corporation; Long-Term Benefit Committee outlined in Series A documents[1][5]
  • 2021–2022: Year-long trustee search and legal structure refinement[1]
  • ~2023: LTBT formally launched with initial five trustees[1][6]
  • December 2023: Jason Matheny stepped down to avoid conflicts with RAND Corporation policy work[3]
  • April 2024: Paul Christiano stepped down to become Head of AI Safety at the U.S. AI Safety Institute[3][7]
  • July 2024: Trust representation scheduled to increase to two of five board members[3]
  • November 2024: Trust representation scheduled to increase to three of five board members[3]
  • 2025: Richard Fontaine appointed as trustee[7]
  • January 2026: Mariano-Florentino Cuéllar appointed as trustee[7][8]

The LTBT comprises five voting trustees selected for expertise in AI safety, national security, public policy, and social enterprise.[1][2] Trustees are explicitly insulated from financial interests in Anthropic—they hold no equity and receive no compensation tied to company performance.[1][5] This financial disinterest is central to the Trust’s design, intended to ensure decisions appropriately balance public benefit against profit maximization.[1]

Initial trustees were appointed by Anthropic’s board, but subsequent trustees are selected by vote of existing trustees, with consultation requirements ensuring company input.[1][2] Trustees serve only one-year terms, a design choice intended to enable frequent reevaluation while maintaining continuity of oversight.[2]

| Name | Role/Expertise | Status | Notes |
|---|---|---|---|
| Neil Buddy Shah | CEO, Clinton Health Access Initiative | Current (Chair) | Initial trustee, still serving as of early 2026[1][5] |
| Kanika Bahl | CEO & President, Evidence Action | Current | Initial trustee; leads a GiveWell top charity[1][7] |
| Zach Robinson | CEO, Centre for Effective Altruism | Current | Initial trustee[1][7] |
| Richard Fontaine | CEO, Center for a New American Security | Appointed 2025 | National security expert[7] |
| Mariano-Florentino Cuéllar | Former California Supreme Court Justice | Appointed Jan 2026 | Global AI governance expert[7][8] |
| Paul Christiano | Founder, Alignment Research Center | Departed April 2024 | Left to join the U.S. AI Safety Institute[1][3][7] |
| Jason Matheny | CEO, RAND Corporation | Departed December 2023 | Left to avoid conflicts with RAND policy work[1][3] |

The Class T shares grant trustees authority to elect an increasing number of Anthropic’s board members according to a phased schedule. The Trust was designed to elect one director initially, increasing to two and eventually three (a majority of five) within four years or upon certain fundraising milestones.[1][2]

Critically, despite having authority to appoint up to three directors by late 2024, the Trust had only appointed one board member as of the analysis conducted in late 2024.[3] This gap between potential and exercised power has contributed to skepticism about the Trust’s effectiveness.[3][4]

The certificate of incorporation also grants trustees advance notice of “certain key actions by the board that may materially affect the business of the company or its organization,” though the specific threshold for such notice is not publicly disclosed.[2]

Under a carefully structured agreement, trustees hold broad power to request “any information or resources that are reasonably appropriate to the accomplishment of the Trust’s purpose.”[2] However, Anthropic may withhold information or resources for specified reasons, including preserving confidential customer information or avoiding “clearly unreasonable expense or effort that manifestly exceeds the benefit to be gained by the Trust.”[2]

This balance reflects the tension between trustee independence and operational practicality, giving trustees substantial but not unlimited access to company information and decision-making processes.[2]

The Trust Agreement, certificate of incorporation, and key agreements between Trust and company use harmonized amendment processes that balance durability with flexibility.[1][2] Amendments can occur through:

  1. Consent of voting trustees and stockholders[2]
  2. Consent of voting trustees and company directors (prior to trustees gaining majority board control)[2]
  3. Supermajority of stockholders (without trustee consent)[2]

The third mechanism—stockholder supermajority amendment without trustee consent—operates as what Anthropic describes as a “failsafe against the actions of the Voting Trustees” that “safeguards the interests of stockholders.”[2] The required supermajority percentage increases over time to reflect accumulating experience and the growing need for commitment as AI technology becomes more powerful.[2]

As permitted by Delaware’s purpose trust statute, the Trust Agreement authorizes enforcement by both the company and “groups of the company’s stockholders who have held a sufficient percentage of the company’s equity for a sufficient period of time.”[2] Notably, this enforcement structure does not grant trustees themselves the power to enforce the Trust Agreement—a design choice that has drawn criticism for potentially undermining trustee independence.[3][4]

Integration with Responsible Scaling Policy


The LTBT is designed to work alongside Anthropic’s Responsible Scaling Policy (RSP), which establishes AI Safety Levels (ASL) modeled on biosafety standards.[9][10] The RSP framework pauses training or deployment of powerful models if safety measures lag behind capabilities, with evaluation intervals and safety margins designed to incentivize alignment research progress.[9]

According to Anthropic, the Trust can “ensure that the organizational leadership is incentivized to carefully evaluate future models for catastrophic risks or ensure they have nation-state level security, rather than prioritizing being the first to market above all other objectives.”[1] The extent to which trustees actually receive substantive input on RSP decisions versus pro forma consultation remains unclear from public documentation.[9]

The most substantial criticism of the LTBT centers on whether it provides meaningful oversight or represents what one analysis characterizes as a “powerless” governance mechanism.[3][4] Key concerns include:

Enforcement structure: The Trust can be enforced by stockholders holding “a sufficient percentage of the company’s equity for a sufficient period of time” rather than by trustees themselves, suggesting trustees lack independent enforcement authority.[2][4] If trustees make decisions stockholders oppose, stockholders—not trustees—hold the legal power to enforce or challenge those decisions.[4]

Supermajority amendment: Stockholders can amend the Trust and its powers by supermajority vote without trustee consent.[2][4] Critics note this could be easily achieved if a small number of major investors (such as Amazon and Google, who have made substantial investments in Anthropic) control large share percentages.[4]

Exercised versus potential power: Despite having authority to appoint three of five board members by November 2024, the Trust had only appointed one director as of late 2024.[3] This suggests either that trustees are choosing not to exercise their full authority or that they face constraints not apparent in public documentation.[3]

Anthropic has declined to publish the full Trust Agreement, limiting independent assessment of the Trust’s actual authority.[3][4] Critics within the AI safety community view this opacity as evidence that the governance mechanism is weaker than Anthropic’s public positioning suggests.[3][4] The company’s characterization of the LTBT as “an experiment” and “an early iteration that we will build on” may reflect genuine uncertainty about effectiveness rather than confidence in the current design.[1]

The LTBT introduces potential friction between trustees and company leadership in balancing mission integrity against operational agility in a competitive AI development landscape.[7] Some analyses suggest this tension could cause delays in partnerships, funding decisions, or deployment timelines, creating trade-offs between safety oversight and commercial viability.[7]

Not all community analysis accepts the “powerless” framing. Some observers argue the evidence suggests the Trust has significant powers to appoint board members, with the key question being the magnitude of constraints rather than their complete absence.[11] One commenter estimated the probability of the Trust being trivially overridable by simple majority shareholders at less than 5%.[11]

Anthropic’s own framing emphasizes that the Trust is not intended to intervene in “day-to-day decisions” or “ordinary commercial strategy,” but rather to address “extreme events and the need to handle them with humanity’s interests in mind.”[1] By this standard, the Trust’s effectiveness should be judged by its influence at critical decision points rather than ongoing operations.[1]

Comparison with Other AI Governance Structures


For detailed analysis of OpenAI’s governance structure, see OpenAI Foundation.

The LTBT shares conceptual similarities with OpenAI’s earlier nonprofit-controlled structure, where a nonprofit foundation held control over a for-profit subsidiary to balance mission and profit motives.[12] However, OpenAI’s governance crisis in November 2023—when the nonprofit board briefly removed CEO Sam Altman before reversing course under investor pressure—raised questions about whether mission-focused governance can withstand commercial pressures in practice.[12]

The LTBT attempts to address this challenge through phased power accumulation, financial disinterest of trustees, and supermajority stockholder failsafe provisions. Whether this design proves more durable than OpenAI’s structure remains an open empirical question.[1]

The LTBT builds on Anthropic’s Delaware Public Benefit Corporation status, which already grants directors legal authority to balance public benefit with stockholder returns.[1][2] Some critics question whether the Trust adds meaningful constraint beyond what PBC status already provides, particularly given that PBC directors can consider but are not strictly bound by public benefit considerations.[6]

Anthropic’s position is that while PBC status provides “legal latitude,” it does not create direct accountability mechanisms or align director incentives with public interests—gaps the LTBT is designed to fill.[1]

Several initial trustees had connections to the effective altruism movement, reflecting Anthropic’s origins within EA-adjacent AI safety communities.[3] Paul Christiano, founder of the Alignment Research Center, was an initial trustee before departing to join the U.S. AI Safety Institute.[1][3] Zach Robinson’s role as Interim CEO of Effective Ventures US (before he became CEO of the Centre for Effective Altruism) represented another direct EA connection.[1][3]

The transition in 2024–2026 from trustees with explicit EA ties (Christiano, Robinson) to figures like Richard Fontaine (national security expert) and Mariano-Florentino Cuéllar (global AI governance) has been characterized as a shift from “ideologically driven” to “operationally focused” trustees amid geopolitical and regulatory challenges.[7] Whether this represents intentional diversification or coincidental turnover remains unclear from public information.

Several fundamental questions about the LTBT remain unresolved:

Actual enforcement power: Can trustees meaningfully override stockholder preferences on critical decisions, or does the stockholder supermajority amendment provision render the Trust ultimately subordinate to investor interests?[3][4]

Exercise of authority: Why has the Trust appointed only one board member despite having authority for three by late 2024?[3] Does this reflect strategic choice, informal constraints, or evidence of limited practical power?

Critical decision-making: What constitutes the “key junctures” and “extreme events” where trustees are expected to intervene?[1] Without public examples of trustee influence on major decisions, effectiveness remains speculative.

Amendment thresholds: What specific supermajority percentages are required to amend the Trust at different time points?[2] These details could determine whether small numbers of large investors effectively control amendment power.

Information access: What information has the company withheld from trustees under the “clearly unreasonable expense” provision, and have trustees challenged such withholding?[2]

Long-term durability: Will the Trust maintain independence and effectiveness as Anthropic grows, faces competitive pressures, or pursues additional funding that dilutes existing stockholders?

Anthropic explicitly acknowledges the experimental nature of the LTBT, stating it is “an early iteration that we will build on” and emphasizing the company’s empiricist approach to observing how the structure functions in practice.[1] The ultimate test will be whether the Trust demonstrates meaningful influence on consequential AI development and deployment decisions in the years ahead.

  • Anthropic — Company overview and safety research
  • Anthropic (Funder) — Funding history, founder pledges, and philanthropic implications
  • Anthropic IPO — IPO timeline and preparation status
  • Responsible Scaling Policy — Anthropic’s safety framework that integrates with LTBT governance
  • OpenAI — Competitor with contrasting governance structure
  1. Anthropic: The Long-Term Benefit Trust
  2. Harvard Law School Forum on Corporate Governance: Anthropic Long-Term Benefit Trust
  3. LessWrong: Maybe Anthropic’s Long-Term Benefit Trust is Powerless
  4. EA Forum: Maybe Anthropic’s Long-Term Benefit Trust is Powerless
  5. Wikipedia: Anthropic
  6. The Stakehold: The Anthropic Long-Term Benefit Trust
  7. AI Invest: Anthropic Long-Term Benefit Trust Structural Shift in AI Governance
  8. Anthropic: Mariano-Florentino Cuéllar, Long-Term Benefit Trust
  9. LessWrong: Anthropic’s Responsible Scaling Policy and Long-Term Benefit Trust
  10. Alignment Forum: Anthropic’s Responsible Scaling Policy and Long-Term Benefit Trust
  11. EA Forum comment thread
  12. Harvard Law Review: Amoral Drift in AI Corporate Governance