Long-Term Benefit Trust (Anthropic)
Anthropic's Long-Term Benefit Trust is an innovative but potentially limited governance mechanism: financially disinterested trustees can appoint board members in order to balance public benefit with profit. Critics, however, question whether stockholder override provisions and unclear enforcement mechanisms render it effectively powerless.
Quick Assessment
| Dimension | Assessment |
|---|---|
| What it is | Independent body of five trustees with authority to elect a growing portion (ultimately majority) of Anthropic's board |
| Key innovation | Creates "different kind of stockholder" insulated from financial incentives to balance public benefit with profit |
| Current power | Can appoint up to 3 of 5 board members, though had appointed only 1 as of late 2024 |
| Timeline | Designed to reach majority board control within 4 years of establishment (≈2027) |
| Main limitation | Can be amended by stockholder supermajority, creating potential override mechanism |
| Status | Experimental governance structure in operation since 2023 |
Key Links
| Source | Link |
|---|---|
| Official Website | anthropic.com |
| Wikipedia | en.wikipedia.org |
Overview
The Long-Term Benefit Trust (LTBT) is an independent governance mechanism established by Anthropic to align corporate decision-making with the long-term benefit of humanity alongside traditional stockholder interests. The Trust comprises five financially disinterested trustees and holds special Class T Common Stock, which grants it authority to elect and remove an increasing portion of Anthropic's board of directors—ultimately a majority within four years of establishment.12
Paired with Anthropic's Delaware Public Benefit Corporation status, the LTBT represents an experimental approach to addressing what the company characterizes as "unprecedentedly large externalities" from AI development, including national security risks, economic disruption, and fundamental threats to humanity.1 The structure is designed to insulate key governance decisions from short-term profit pressures at "key junctures where we expect the consequences of our decisions to reach far beyond Anthropic."1
The Trust functions as what Anthropic characterizes as, in effect, a different kind of stockholder: an accountability mechanism independent of financial returns that maintains a working relationship with company leadership through consultation requirements and information-sharing agreements.12 However, the structure has drawn criticism, particularly within the AI safety community, regarding its actual enforcement power and the company's decision not to publish the full Trust Agreement.34
History and Development
Origins and Motivation
The Long-Term Benefit Trust emerged from concerns among Anthropic's founders, including siblings Daniela Amodei (President) and Dario Amodei (CEO), about the lack of external constraints on AI development comparable to those governing other powerful technologies.15 The founders believed that while AI safety aligned with long-term profitability, the potential for extreme events and catastrophic risks required governance mechanisms that could appropriately weigh public interests against commercial pressures.1
An earlier version called the "Long-Term Benefit Committee" was outlined in Anthropic's Series A investment documents in 2021, but its activation was delayed to allow refinement into the current LTBT structure.12 This delay enabled what Anthropic describes as a year-long search process and legal "red-teaming" to improve the governance framework.1
Legal Structure
The LTBT is organized as a Delaware "purpose trust"—a trust managed for achieving a purpose rather than benefiting specific beneficiaries.2 This legal form allows the Trust to pursue the mission of "responsibly develop[ing] and maintain[ing] advanced AI for the long-term benefit of humanity" without being constrained by traditional beneficiary-focused trust law.2
At the close of Anthropic's Series C funding round, the company amended its corporate charter to create Class T Common Stock held exclusively by the Trust.12 This special class of shares grants trustees the power to elect directors according to a phased timeline: initially one of five board members, increasing to two, and eventually three (a majority) based on time and fundraising milestones.12
Timeline of Key Events
- 2021: Anthropic founded as Delaware Public Benefit Corporation; Long-Term Benefit Committee outlined in Series A documents15
- 2021-2022: Year-long trustee search and legal structure refinement1
- ~2023: LTBT formally launched with initial five trustees16
- December 2023: Jason Matheny stepped down to avoid conflicts with RAND Corporation policy work3
- April 2024: Paul Christiano stepped down to become Head of AI Safety at U.S. AI Safety Institute37
- July 2024: Trust representation scheduled to increase to two of five board members3
- November 2024: Trust representation scheduled to increase to three of five board members3
- 2025: Richard Fontaine appointed as trustee7
- January 2026: Mariano-Florentino Cuéllar appointed as trustee78
Governance Structure and Powers
Trustee Composition and Independence
The LTBT comprises five voting trustees selected for expertise in AI safety, national security, public policy, and social enterprise.12 Trustees are explicitly insulated from financial interests in Anthropic—they hold no equity and receive no compensation tied to company performance.15 This financial disinterest is central to the Trust's design, intended to ensure decisions appropriately balance public benefit against profit maximization.1
Initial trustees were appointed by Anthropic's board, but subsequent trustees are selected by vote of existing trustees, with consultation requirements ensuring company input.12 Trustees serve only one-year terms, a design choice intended to enable frequent reevaluation while maintaining continuity of oversight.2
Current and Former Trustees
| Name | Role/Expertise | Status | Notes |
|---|---|---|---|
| Neil Buddy Shah | CEO, Clinton Health Access Initiative | Current (Chair) | Initial trustee, still serving as of early 202615 |
| Kanika Bahl | CEO & President, Evidence Action | Current | Initial trustee; Evidence Action is a GiveWell top charity17 |
| Zach Robinson | CEO, Centre for Effective Altruism | Current | Initial trustee17 |
| Richard Fontaine | CEO, Center for a New American Security | Appointed 2025 | National security expert7 |
| Mariano-Florentino Cuéllar | Former California Supreme Court Justice | Appointed Jan 2026 | Global AI governance expert78 |
| Paul Christiano | Founder, Alignment Research Center | Departed April 2024 | Left to join U.S. AI Safety Institute137 |
| Jason Matheny | CEO, RAND Corporation | Departed December 2023 | Left to avoid conflicts with RAND policy work13 |
Board Appointment Powers
The Class T shares grant trustees authority to elect an increasing number of Anthropic's board members according to a phased schedule. The Trust was designed to elect one director initially, increasing to two and eventually three (a majority of five) within four years or upon certain fundraising milestones.12
Critically, despite having authority to appoint up to three directors by November 2024, the Trust had appointed only one board member as of analyses published in late 2024.3 This gap between potential and exercised power has contributed to skepticism about the Trust's effectiveness.34
The certificate of incorporation also grants trustees advance notice of "certain key actions by the board that may materially affect the business of the company or its organization," though the specific threshold for such notice is not publicly disclosed.2
Information Access and Resources
Under a carefully structured agreement, trustees hold broad power to request "any information or resources that are reasonably appropriate to the accomplishment of the Trust's purpose."2 However, Anthropic may withhold information or resources for specified reasons, including preserving confidential customer information or avoiding "clearly unreasonable expense or effort that manifestly exceeds the benefit to be gained by the Trust."2
This balance reflects the tension between trustee independence and operational practicality, giving trustees substantial but not unlimited access to company information and decision-making processes.2
Amendment and Enforcement Mechanisms
Amendment Processes
The Trust Agreement, certificate of incorporation, and key agreements between Trust and company use harmonized amendment processes that balance durability with flexibility.12 Amendments can occur through:
- Consent of voting trustees and stockholders2
- Consent of voting trustees and company directors (prior to trustees gaining majority board control)2
- Supermajority of stockholders (without trustee consent)2
The third mechanism—stockholder supermajority amendment without trustee consent—operates as what Anthropic describes as a "failsafe against the actions of the Voting Trustees" that "safeguards the interests of stockholders."2 The required supermajority percentage increases over time to reflect accumulating experience and the growing need for commitment as AI technology becomes more powerful.2
Enforcement Authority
As permitted by Delaware's purpose trust statute, the Trust Agreement authorizes enforcement by both the company and "groups of the company's stockholders who have held a sufficient percentage of the company's equity for a sufficient period of time."2 Notably, this enforcement structure does not grant trustees themselves the power to enforce the Trust Agreement—a design choice that has drawn criticism for potentially undermining trustee independence.34
Integration with Responsible Scaling Policy
The LTBT is designed to work alongside Anthropic's Responsible Scaling Policy (RSP), which establishes AI Safety Levels (ASL) modeled on biosafety standards.910 The RSP framework pauses training or deployment of powerful models if safety measures lag behind capabilities, with evaluation intervals and safety margins designed to incentivize alignment research progress.9
According to Anthropic, the Trust can "ensure that the organizational leadership is incentivized to carefully evaluate future models for catastrophic risks or ensure they have nation-state level security, rather than prioritizing being the first to market above all other objectives."1 The extent to which trustees actually receive substantive input on RSP decisions versus pro forma consultation remains unclear from public documentation.9
Criticisms and Concerns
Questions About Actual Power
The most substantial criticism of the LTBT centers on whether it provides meaningful oversight or represents what one analysis characterizes as a "powerless" governance mechanism.34 Key concerns include:
Enforcement structure: The Trust can be enforced by stockholders holding "a sufficient percentage of the company's equity for a sufficient period of time" rather than by trustees themselves, suggesting trustees lack independent enforcement authority.24 If trustees make decisions stockholders oppose, stockholders—not trustees—hold the legal power to enforce or challenge those decisions.4
Supermajority amendment: Stockholders can amend the Trust and its powers by supermajority vote without trustee consent.24 Critics note this could be easily achieved if a small number of major investors (such as Amazon and Google, who have made substantial investments in Anthropic) control large share percentages.4
Exercised versus potential power: Despite having authority to appoint three of five board members by November 2024, the Trust had only appointed one director as of late 2024.3 This suggests either trustees are choosing not to exercise their full authority or face constraints not apparent in public documentation.3
Transparency and Documentation
Anthropic has declined to publish the full Trust Agreement, limiting independent assessment of the Trust's actual authority.34 Critics within the AI safety community view this opacity as evidence that the governance mechanism is weaker than Anthropic's public positioning suggests.34 The company's characterization of the LTBT as "an experiment" and "an early iteration that we will build on" may reflect genuine uncertainty about effectiveness rather than confidence in the current design.1
Governance Friction and Trade-offs
The LTBT introduces potential friction between trustees and company leadership in balancing mission integrity against operational agility in a competitive AI development landscape.7 Some analyses suggest this tension could cause delays in partnerships, funding decisions, or deployment timelines, creating trade-offs between safety oversight and commercial viability.7
Counterarguments and Defenses
Not all community analysis accepts the "powerless" framing. Some observers argue the evidence suggests the Trust has significant powers to appoint board members, with the key question being the magnitude of constraints rather than their complete absence.11 One commenter estimated the probability of the Trust being trivially overridable by simple majority shareholders at less than 5%.11
Anthropic's own framing emphasizes that the Trust is not intended to intervene in "day-to-day decisions" or "ordinary commercial strategy," but rather to address "extreme events and the need to handle them with humanity's interests in mind."1 By this standard, the Trust's effectiveness should be judged by its influence at critical decision points rather than ongoing operations.1
Comparison with Other AI Governance Structures
OpenAI Foundation
For detailed analysis of OpenAI's governance structure, see OpenAI Foundation.
The LTBT shares conceptual similarities with OpenAI's earlier nonprofit-controlled structure, where a nonprofit foundation held control over a for-profit subsidiary to balance mission and profit motives.12 However, OpenAI's governance crisis in November 2023—when the nonprofit board briefly removed CEO Sam Altman before reversing course under investor pressure—raised questions about whether mission-focused governance can withstand commercial pressures in practice.12
The LTBT attempts to address this challenge through phased power accumulation, financial disinterest of trustees, and supermajority stockholder failsafe provisions. Whether this design proves more durable than OpenAI's structure remains an open empirical question.1
Public Benefit Corporation Baseline
The LTBT builds on Anthropic's Delaware Public Benefit Corporation status, which already grants directors legal authority to balance public benefit with stockholder returns.12 Some critics question whether the Trust adds meaningful constraint beyond what PBC status already provides, particularly given that PBC directors can consider but are not strictly bound by public benefit considerations.6
Anthropic's position is that while PBC status provides "legal latitude," it does not create direct accountability mechanisms or align director incentives with public interests—gaps the LTBT is designed to fill.1
Effective Altruism Connections
Several initial trustees had connections to the effective altruism movement, reflecting Anthropic's origins within EA-adjacent AI safety communities.3 Paul Christiano, founder of the Alignment Research Center, was an initial trustee before departing to join the U.S. AI Safety Institute.13 Zach Robinson's role as Interim CEO of Effective Ventures US represented another direct EA connection.13
The transition in 2024-2026 from trustees with explicit EA ties (Christiano, Robinson) to figures like Richard Fontaine (national security expert) and Mariano-Florentino Cuéllar (global AI governance) has been characterized as a shift from "ideologically driven" to "operationally focused" trustees amid geopolitical and regulatory challenges.7 Whether this represents intentional diversification or coincidental turnover remains unclear from public information.
Key Uncertainties
Several fundamental questions about the LTBT remain unresolved:
Actual enforcement power: Can trustees meaningfully override stockholder preferences on critical decisions, or does the stockholder supermajority amendment provision render the Trust ultimately subordinate to investor interests?34
Exercise of authority: Why has the Trust appointed only one board member despite having authority for three by late 2024?3 Does this reflect strategic choice, informal constraints, or evidence of limited practical power?
Critical decision-making: What constitutes the "key junctures" and "extreme events" where trustees are expected to intervene?1 Without public examples of trustee influence on major decisions, effectiveness remains speculative.
Amendment thresholds: What specific supermajority percentages are required to amend the Trust at different time points?2 These details could determine whether small numbers of large investors effectively control amendment power.
Information access: What information has the company withheld from trustees under the "clearly unreasonable expense" provision, and have trustees challenged such withholding?2
Long-term durability: Will the Trust maintain independence and effectiveness as Anthropic grows, faces competitive pressures, or pursues additional funding that dilutes existing stockholders?
Anthropic explicitly acknowledges the experimental nature of the LTBT, stating it is "an early iteration that we will build on" and emphasizing the company's empiricist approach to observing how the structure functions in practice.1 The ultimate test will be whether the Trust demonstrates meaningful influence on consequential AI development and deployment decisions in the years ahead.
Sources
Footnotes
1. Anthropic: The Long-Term Benefit Trust
2. Harvard Law School Forum on Corporate Governance: Anthropic Long-Term Benefit Trust
3. LessWrong: Maybe Anthropic's Long-Term Benefit Trust is Powerless
4. EA Forum: Maybe Anthropic's Long-Term Benefit Trust is Powerless
5. The Stakehold: The Anthropic Long-Term Benefit Trust
6. Citation rc-3786 (data unavailable)
7. Anthropic: Mariano-Florentino Cuéllar appointed to Anthropic's Long-Term Benefit Trust
8. LessWrong: Anthropic's Responsible Scaling Policy and Long-Term Benefit Trust
9. Alignment Forum: Anthropic's Responsible Scaling Policy and Long-Term Benefit Trust
10. Harvard Law Review: Amoral Drift in AI Corporate Governance