OpenAI Foundation

Last edited: 2026-02-04
| Aspect | Description |
| --- | --- |
| Structure | Nonprofit foundation holding 26% equity in for-profit OpenAI Group PBC |
| Valuation | $130 billion stake at $500B+ OpenAI valuation (October 2025) |
| Governance Rights | Appoints all board directors of OpenAI Group; can replace them at any time |
| Initial Commitment | $25 billion focused on health/curing diseases and AI resilience |
| Established | October 28, 2025 (restructured from original 2015 nonprofit OpenAI, Inc.) |
| Key People | Nine-member board including Bret Taylor (Board Chair) and Sam Altman (CEO) |
| Comparison | Similar to Anthropic’s Long-Term Benefit Trust, but with direct equity ownership vs. pledge-based control |
| Source | Link |
| --- | --- |
| Official Website | openai.com/foundation |
| Structure Details | openai.com/our-structure |
| Wikipedia | en.wikipedia.org/wiki/OpenAI |

This table provides a comprehensive overview of the OpenAI Foundation’s structure, governance, spending trajectory, and accountability mechanisms.

| Category | Key Facts | Assessment |
| --- | --- | --- |
| Assets | $130B equity stake (26% of OpenAI at $500B valuation) | Paper wealth only; shares illiquid until IPO (2026-27) |
| Announced Commitment | $25 billion for health + AI resilience | No timeline; vague language creates no binding obligation |
| Actual Spending (2025) | $50 million (People-First AI Fund) | 0.04% of assets, far below the 5% private foundation minimum |
| Legal Classification | Unclear (filed Form 990, not 990-PF) | If public charity, no mandatory payout requirement |
| Board Composition | 9 members; 8 also serve on for-profit board | Only 2 have philanthropic experience; 4 have high financial conflicts |
| Governance Control | Appoints all OpenAI Group board members | Can remove directors at any time; retains ultimate control |
| Competitor Stake | Microsoft holds 27% (larger than foundation) | Microsoft has no philanthropic obligation; commercial interests dominate |
| IPO Timeline | Filing H2 2026; listing 2027 | Foundation faces lockup periods, insider restrictions post-IPO |
| Profitability | OpenAI projects profitability by 2030 | $9B losses in 2025; $47B peak cash burn in 2028 |
| Valuation Risk | $207B funding shortfall (HSBC estimate) | If AI bubble deflates, foundation wealth could evaporate |
| Scenario | 10-Year Spending | % of $130B | Likelihood |
| --- | --- | --- | --- |
| Status Quo (current trajectory) | $500M - $1B | 0.4-0.8% | High (60%) |
| Corporate Foundation (typical rate) | $2B - $5B | 1.5-4% | Medium (25%) |
| 5% Compliance (if private foundation) | $50B - $65B | 38-50% | Low (10%) |
| Gates-Level (aggressive philanthropy) | $70B - $90B | 54-69% | Very Low (5%) |
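As a sanity check, the percentage column above follows from dividing each scenario's projected 10-year spending by the ~$130B stake. A minimal sketch using the document's own figures:

```python
# Ten-year spending scenarios from the table above (billions of USD).
STAKE_B = 130.0  # foundation's equity stake, ~$130B

scenarios = {
    "Status Quo": (0.5, 1.0),
    "Corporate Foundation": (2.0, 5.0),
    "5% Compliance": (50.0, 65.0),
    "Gates-Level": (70.0, 90.0),
}

for name, (low, high) in scenarios.items():
    # Each scenario's spending expressed as a share of the $130B stake.
    print(f"{name}: {100 * low / STAKE_B:.1f}%-{100 * high / STAKE_B:.1f}%")
```

The output reproduces the table's percentage ranges (e.g., 5% Compliance works out to 38.5%-50.0% of the stake).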
| Risk | Description | Mitigation Potential |
| --- | --- | --- |
| Share-Selling Disincentive | Selling depresses stock; foundation incentivized to hold indefinitely | Regulatory pressure for diversification |
| Board Conflicts | 4 of 9 members have direct financial interest in OpenAI stock appreciation | Add independent directors |
| No Payout Requirement | May avoid 5% rule if classified as public charity | IRS classification challenge |
| Marketing Over Mission | Grants timed for PR value; avoid funding critical safety research | Independent grant review |
| Valuation Dependency | $130B exists only if OpenAI maintains value | Diversification; early liquidation |
| Mechanism | Est. Cost | Success Rate | Potential Impact | Cost per $1 Deployed |
| --- | --- | --- | --- | --- |
| CA Attorney General | $500K | 65% | $2-5B additional spending | $0.0001-0.00025 |
| EyesOnOpenAI Coalition | $250K | 50% | $1-3B additional spending | $0.00008-0.00025 |
| Investigative Journalism | $150K | 55% | $500M-2B additional spending | $0.00008-0.0003 |
| Musk Lawsuit (ongoing) | N/A | 45% | Precedent-setting | N/A |
| IRS Whistleblower | $50K | 30% | $200M-500M additional spending | $0.0001-0.00025 |

Bottom Line: Advocacy for OpenAI Foundation accountability may be among the most cost-effective philanthropic opportunities available, with potential returns of $1,000-10,000 per dollar invested in pressure campaigns.
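The cost-per-dollar column is simply campaign cost divided by the range of additional spending it might unlock. A minimal reproduction of three rows (the cost and impact estimates are the document's own, not verified figures):

```python
# Cost per $1 of charitable spending potentially unlocked:
# campaign cost / range of additional spending (all values in USD).
mechanisms = {
    # name: (campaign cost, low impact estimate, high impact estimate)
    "CA Attorney General": (500_000, 2e9, 5e9),
    "EyesOnOpenAI Coalition": (250_000, 1e9, 3e9),
    "IRS Whistleblower": (50_000, 200e6, 500e6),
}

for name, (cost, low, high) in mechanisms.items():
    best_case, worst_case = cost / high, cost / low
    print(f"{name}: ${best_case:.5f}-${worst_case:.5f} per $1 deployed")
```

Inverting these ratios gives the "$1,000-10,000 per dollar invested" framing: $1 of campaign spending at $0.0001 per deployed dollar corresponds to roughly $10,000 unlocked, in the estimates' best case.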

The OpenAI Foundation is a nonprofit organization that serves as the controlling entity over OpenAI Group PBC, the for-profit public benefit corporation developing advanced AI systems including ChatGPT and GPT-series models. Established through a major restructuring completed on October 28, 2025, the foundation holds a 26% equity stake in the for-profit entity valued at approximately $130 billion, making it one of the world’s most well-resourced philanthropic organizations.12

The foundation’s structure represents an attempt to balance OpenAI’s original nonprofit mission—“to ensure that artificial general intelligence benefits all of humanity”—with the capital requirements of developing cutting-edge AI systems. Through special voting and governance rights, the foundation appoints all members of OpenAI Group’s board of directors and can remove them at any time, maintaining ultimate control despite holding a minority equity position compared to Microsoft’s 27% stake.34

The foundation has committed $25 billion initially across two priority areas: health and curing diseases (accelerating breakthroughs in diagnostics, treatments, and cures) and technical solutions for AI resilience (maximizing AI benefits and minimizing risks). In 2025, it launched the People-First AI Fund with an initial $50 million commitment to support nonprofits and mission-focused organizations.15

OpenAI was founded on December 11, 2015, as a nonprofit research organization in Delaware by Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever, and others. The founding team pledged over $1 billion in total funding, with the explicit mission of advancing artificial general intelligence “unconstrained by a need to generate financial return.”67 The nonprofit structure was designed to avoid private gain and prioritize humanity-wide AGI benefits.

The idea emerged from private “founding dinners” in 2015 attended by key figures from Silicon Valley, amid concerns about AI risks and the accelerating pace of AI development following breakthroughs like AlexNet. Early donors included Peter Thiel, Reid Hoffman, Amazon Web Services, and Infosys.8

By March 2019, OpenAI recognized that achieving its goals would require substantially more capital than initially anticipated—“on the order of $10 billion” according to the organization’s own projections.9 This led to the creation of OpenAI LP, a “capped-profit” for-profit subsidiary under nonprofit control, where investor returns were capped at 100 times their investment.

This structure allowed OpenAI to secure major investments, including an initial $1 billion from Microsoft in 2019, while maintaining nominal nonprofit oversight. Microsoft secured broad rights to license and use OpenAI’s intellectual property, except for technology related to artificial general intelligence (AGI).1011

However, the hybrid structure proved increasingly difficult to maintain as OpenAI’s valuation soared and the company required even more capital for infrastructure and computational resources. By late 2025, OpenAI’s reported valuation had reached approximately $500 billion, making it effectively impossible for a for-profit entity to purchase the nonprofit arm’s assets at fair market value, as would typically be required for a full conversion.12

Public Benefit Corporation Restructuring (2025)


On October 28, 2025, OpenAI completed a comprehensive restructuring that transformed the organization into two distinct entities:

  1. OpenAI Foundation (nonprofit) - Holds 26% equity in the for-profit entity with special governance rights
  2. OpenAI Group PBC (for-profit public benefit corporation) - Operates the business while maintaining mission alignment

The restructuring was approved by the California and Delaware Attorneys General after extensive negotiations. California AG Rob Bonta imposed 20 requirements for safety and security to remain under nonprofit control, while Delaware AG Kathy Jennings issued a non-objection statement contingent on fair valuation.1314

Under the new structure, equity distribution is as follows:

  • OpenAI Foundation: 26% ($130 billion at October 2025 valuation)
  • Microsoft: 27% ($135 billion)
  • Employees and early investors: 26%
  • Recent investors (SoftBank, others): 15%
  • Other investors: 6%15
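The split can be sanity-checked: the stakes sum to 100%, and the dollar values follow from the ~$500B valuation. A sketch using the figures in the list above:

```python
VALUATION_B = 500  # OpenAI valuation in billions, ~$500B (October 2025)

stakes_pct = {
    "OpenAI Foundation": 26,
    "Microsoft": 27,
    "Employees and early investors": 26,
    "Recent investors (SoftBank, others)": 15,
    "Other investors": 6,
}

assert sum(stakes_pct.values()) == 100  # the split is exhaustive

for holder, pct in stakes_pct.items():
    # e.g. 26% of $500B = $130B, matching the Foundation figure above.
    print(f"{holder}: {pct}% = ${VALUATION_B * pct // 100}B")
```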

The foundation also secured a warrant allowing it to receive additional shares if OpenAI Group’s valuation increases more than tenfold (to approximately $5 trillion) after 15 years, positioning the foundation as “the single largest long-term beneficiary of OpenAI’s success.”16

The OpenAI Foundation’s governance power derives not from majority ownership but from special voting and governance rights that allow it to appoint all members of the OpenAI Group PBC board of directors and remove them at any time.17 This structure is intended to ensure that the for-profit entity remains aligned with the foundation’s mission even as it pursues commercial objectives.

Class N Shares vs. Equity: Understanding the Governance Mechanics


A common misconception is that the Foundation’s 26% equity stake is what gives it control. In reality, governance and financial ownership are legally separate:

| Asset Type | What It Is | What It Does |
| --- | --- | --- |
| Class N Common Stock | Special governance shares | Exclusive power to appoint/remove all PBC board members, block mission changes |
| 26% Equity Stake | Ordinary shares worth ≈$130B | Financial interest only (dividends, sale proceeds, no voting power) |

Why this matters for philanthropy: The Foundation can theoretically sell its entire 26% equity stake to fund charitable work without losing governance control. Random buyers of those shares would get no say in board appointments—governance power stays with whoever holds the Class N shares.
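To make the separation concrete, here is an illustrative toy model (not OpenAI's actual legal structure or any real cap-table code) showing governance following Class N shares rather than economic ownership:

```python
# Toy model of the governance/economics split described above.
# All names and numbers are illustrative, drawn from the document.
from dataclasses import dataclass, field


@dataclass
class CapTable:
    class_n_holder: str                         # sole holder of governance shares
    equity: dict = field(default_factory=dict)  # holder -> % economic stake

    def can_appoint_board(self, holder: str) -> bool:
        # Board appointment/removal follows Class N, not economic ownership.
        return holder == self.class_n_holder


table = CapTable(
    class_n_holder="OpenAI Foundation",
    equity={"OpenAI Foundation": 26, "Microsoft": 27, "Others": 47},
)

# Even if the Foundation sold its entire 26% economic stake...
table.equity["New Buyers"] = table.equity.pop("OpenAI Foundation")

# ...governance control does not move with the shares.
assert table.can_appoint_board("OpenAI Foundation")
assert not table.can_appoint_board("New Buyers")
```

The model captures why a buyer of the equity gets no say in board appointments: the two rights live in different instruments.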

The problem: The Class N shares are controlled by the same 8 people who run the company:

Class N Shares (governance) ──owned by──► Foundation Board (8 people)
26% Equity (financial) ──────owned by──► Foundation Board (same 8 people)
OpenAI Group PBC ────────────run by────► 7 of those same 8 people

External capture vs. internal capture:

| Risk | Structure’s Protection | Effectiveness |
| --- | --- | --- |
| External capture (hostile shareholders take control) | Class N shares can’t be bought; governance stays with Foundation | Strong |
| Internal capture (board prioritizes profit over mission) | None—same people control both entities | None |

The structure is clever legal engineering that protects against outsiders gaining control. But it does nothing to address the conflict of interest when the same tech/finance executives govern both the nonprofit mission and the for-profit company they personally benefit from.

A better design would separate the Class N shareholders from the for-profit beneficiaries—for example, having Class N shares held by an independent board with no financial stake in OpenAI (similar to what Anthropic’s Long-Term Benefit Trust attempts). OpenAI chose not to build this separation.

The foundation is governed by a board of directors chaired by Bret Taylor, former co-CEO of Salesforce and co-creator of Google Maps. The board consists of nine members.18

All current foundation directors also serve on OpenAI Group’s board, except Dr. Kolter, who serves as a non-voting observer.19 This dual-board structure has raised concerns about conflicts of interest, with critics arguing that board members may prioritize the for-profit entity’s business interests over the foundation’s public-serving mission.20

CA Attorney General Board Independence Requirement


The October 2025 agreement with California AG Rob Bonta includes a provision requiring board composition to change over time. The actual requirement is even weaker than typically reported:

What the requirement actually says: Within one year (by October 2026), one additional existing Foundation director must transition to serving exclusively on the Foundation board (as a non-voting observer to the Group board), joining Dr. Kolter who already serves in this role.21

This does NOT require:

  • Adding new independent members
  • Members with philanthropic experience
  • Members without financial conflicts

It simply requires one existing board member to stop voting on for-profit matters—they remain on the foundation board.

| Metric | Current | After Requirement |
| --- | --- | --- |
| Board size | 8 members (after Summers resignation) | 8 members |
| Voting on for-profit | 7 of 8 | 6 of 8 |
| Foundation-only members | 1 (Kolter, non-voting) | 2 |
| New independent voices | 0 | 0 |

Why this changes almost nothing:

  • No new perspectives are added—existing members just change their voting role
  • 6 of 8 members still vote on both foundation and for-profit matters
  • The 2 “independent” members (Kolter + 1) are still people originally selected by OpenAI
  • Nothing requires adding philanthropic expertise

Comparison: The Gates Foundation board has 4 members—all with philanthropic backgrounds. The Ford Foundation board has 16 members with term limits and significant independent representation. Moving one existing director to non-voting status is cosmetic.

Understanding whether the foundation will deploy its $130 billion requires examining each board member’s background, financial interests, and track record on philanthropy versus capital preservation.

Bret Taylor (Board Chair)

| Factor | Assessment |
| --- | --- |
| Primary Role | CEO of Sierra AI (enterprise AI chatbot startup, $4.5B valuation) |
| Direct Conflict | Sierra is an OpenAI customer and competitor in enterprise AI |
| Philanthropic Experience | None identified |
| Financial Incentive | Higher OpenAI valuation legitimizes entire enterprise AI market, boosting Sierra’s value |

Taylor became Chair during the November 2023 Altman crisis. While he has promised to “recuse himself whenever there is potential for overlap” with Sierra, this arrangement is more permissive than what forced Reid Hoffman off the board for similar conflicts.22 His entire career has been in commercial technology (Google Maps, Facebook CTO, Salesforce Co-CEO), with no identified foundation board experience or significant philanthropic giving.

Notable January 2026 statement: At Davos, Taylor publicly stated that AI is “probably” a bubble and expects a market correction in coming years. “When everyone knows that AI is going to have a huge impact… money is plentiful. However, that rush won’t last forever.”23 This is significant: the chair of a foundation whose $130B in assets depends on OpenAI’s valuation is publicly acknowledging that valuation may be inflated.

Spending Inclination: Low. Primary financial interest is in AI ecosystem growth, not charitable deployment.

Sue Desmond-Hellmann

| Factor | Assessment |
| --- | --- |
| Primary Role | Former CEO, Bill & Melinda Gates Foundation (2014-2020) |
| Philanthropic Track Record | Led foundation spending 10-18% of endowment annually |
| Management Style | “Occasionally needing to say ‘Not yet’ or ‘Not so fast’” |
| Current Conflicts | Advisory board member at GV (Google Ventures) |

Desmond-Hellmann is the only board member with substantial foundation leadership experience. Under her tenure, the Gates Foundation spent well above the 5% minimum, though this pace was largely set by Bill and Melinda Gates themselves rather than the CEO. She described her role as balancing ambitious vision with pragmatism. Her GV advisory role creates some tech industry alignment but less direct than other board members.24

Spending Inclination: Moderate. Most likely to push for meaningful disbursement, but within conservative bounds. Health focus aligns with her background.

Larry Summers (resigned November 2025)

| Factor | Assessment |
| --- | --- |
| Status | Resigned November 2025 |
| Prior Role | Economist, former U.S. Treasury Secretary, former Harvard President |
| Endowment Philosophy | At Harvard, pushed aggressive investment over spending; lost ≈$2B in 2008 crisis |
| Reason for Departure | Epstein email revelations; said he was “deeply ashamed” |

Impact on foundation governance: Summers’ departure removes one of the board members with the strongest capital-preservation bias and most direct conflicts (advisor to a16z, D.E. Shaw). His replacement could shift dynamics—or OpenAI could appoint another finance-oriented director. As of February 2026, no replacement has been announced.

Adebayo Ogunlesi

| Factor | Assessment |
| --- | --- |
| Primary Role | Founder/Chairman of Global Infrastructure Partners; Senior Managing Director at BlackRock |
| Investment Philosophy | Career built on maximizing long-term infrastructure asset value |
| Personal Philanthropy | Family foundation (≈$22M assets) gave only $1M in 2023 (4.5% payout) |
| Direct Conflict | BlackRock invested in OpenAI; acquired GIP for $12.5B in January 2024 |

Ogunlesi’s entire career is in private equity and infrastructure investment—maximizing asset appreciation, not deploying capital for social good. His personal foundation gives at barely above the legal minimum rate. His BlackRock executive role creates direct financial interest in OpenAI’s success.26

Spending Inclination: Very Low. Private equity mindset prioritizes asset growth. Personal philanthropy minimal.

Nicole Seligman

| Factor | Assessment |
| --- | --- |
| Primary Role | Former President, Sony Corporation of America; former EVP/General Counsel |
| Nonprofit Leadership | Chair of The Doe Fund (NYC homeless services); Co-Chair of Schwarzman Animal Medical Center |
| Financial Conflicts | No obvious direct stake in OpenAI valuation |
| Experience Type | Legal/governance expertise; direct-service nonprofit leadership |

Seligman has the most relevant hands-on nonprofit experience on the board, though at much smaller scale than $130 billion. Her Doe Fund work demonstrates actual engagement with direct-service charitable organizations. She appears less financially conflicted than most board members.27

Spending Inclination: Moderate. Genuine nonprofit experience, fewer conflicts. May support responsible disbursement.

Paul Nakasone

| Factor | Assessment |
| --- | --- |
| Primary Role | Former NSA Director (2018-2024); former Commander, U.S. Cyber Command |
| Appointment Rationale | Safety and Security Committee member; national security expertise |
| Philanthropic Experience | None identified |
| Spending Relevance | Appointed for AI safety/security, not philanthropic expertise |

Nakasone joined the board in June 2024 explicitly for safety and security oversight, not philanthropic guidance. His stated rationale: “I believe that [AI] truly is the capability—the technology—that must continue to be led by the United States.” If anything, his perspective would favor funding security R&D within OpenAI rather than external grants.28

Spending Inclination: Not applicable. Role is AI security, not philanthropy.

| Member | Current Role | Career Background | Philanthropic Experience | Financial Conflicts | Spending Bias |
| --- | --- | --- | --- | --- | --- |
| Bret Taylor (Chair) | CEO, Sierra AI | Google Maps co-creator → Facebook CTO → Salesforce Co-CEO → Twitter Chair | None identified | High: Sierra AI is OpenAI customer/competitor; benefits from AI market growth | Low |
| Sue Desmond-Hellmann | Board member; GV advisor | Genentech President → UCSF Chancellor → Gates Foundation CEO (2014-2020) | Strong: Led Gates Foundation spending 10-18%/year; managed $50B+ corpus | Low-Medium: GV advisory role | Moderate |
| Larry Summers | RESIGNED Nov 2025 | Treasury Secretary → Harvard President → Epstein scandal | N/A | N/A | N/A |
| Adebayo Ogunlesi | Sr. Managing Director, BlackRock | Credit Suisse EVP → Founded GIP → BlackRock acquisition ($12.5B) | Minimal: Family foundation gives ≈4.5%/year (barely above minimum) | High: BlackRock invested in OpenAI; career in asset appreciation | Very Low |
| Nicole Seligman | Nonprofit board chair | Williams & Connolly partner → Sony Corp. EVP/General Counsel → Sony America President | Moderate: Chair of Doe Fund (homeless services); AMC Co-Chair | Low: No obvious OpenAI financial stake | Moderate |
| Paul Nakasone | National security advisor | NSA Director (2018-2024) → U.S. Cyber Command Commander | None identified | Low: Appointed for security oversight, not investment | N/A |
| Adam D’Angelo | CEO, Quora | Facebook CTO → Quora co-founder/CEO | Unknown | Medium: Tech entrepreneur; benefits from AI ecosystem growth | Unknown |
| Sam Altman | CEO, OpenAI | Y Combinator President → OpenAI CEO | None identified | Very High: Direct beneficiary of OpenAI commercial success; $10B+ net worth tied to OpenAI | Low |
| Zico Kolter | CMU Professor (non-voting) | AI/ML researcher; Safety Committee chair | Academic | Low: No commercial stake | Unknown |

Board Composition Analysis (as of February 2026):

  • Current members: 8 (after Summers resignation; no replacement announced)
  • Members with meaningful philanthropic experience: 2 of 8 (Desmond-Hellmann, Seligman)
  • Members with high financial conflicts: 3 of 8 (Taylor, Ogunlesi, Altman)
  • Members likely to support aggressive spending: 0-2 of 8
  • Members likely to prioritize stock preservation: 4-5 of 8

Overall Assessment: The board is heavily weighted toward individuals with financial interests in OpenAI’s stock appreciation and careers in asset growth rather than philanthropic deployment. Only Desmond-Hellmann and Seligman have meaningful nonprofit experience, and neither has experience at this scale. The structural composition suggests low probability of aggressive charitable spending.

The foundation established a Safety and Security Committee (SSC) chaired by Dr. Kolter. However, critics note that this committee consists of only four part-time volunteers with no dedicated staff, tasked with overseeing the development of potentially transformative AGI systems amid intense commercial pressures.29

In 2025, the OpenAI Foundation launched its first major philanthropic initiative: the People-First AI Fund with an initial $50 million commitment. The fund issued an open call for applications from September 8 through October 8, 2025, receiving nearly 3,000 applications from U.S.-based 501(c)(3) nonprofits with annual operating budgets between $500,000 and $10 million.3031

By December 2025, the foundation announced it would distribute:

  • $40.5 million in unrestricted grants to 208 nonprofits across the United States
  • $9.5 million in board-directed grants for organizations advancing transformative AI work in health and other areas32

The $40.5 million initial disbursement represents a 440% increase over the foundation’s previous year’s grantmaking of $7.5 million (when it was still operating as the original nonprofit OpenAI, Inc.).33
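A one-line check of that growth figure:

```python
# Growth from $7.5M (prior year's grantmaking) to $40.5M (initial disbursement).
prior, current = 7.5e6, 40.5e6
increase_pct = 100 * (current - prior) / prior
print(f"{increase_pct:.0f}% increase")  # 440% increase
```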

Grant recipients span diverse sectors including AI literacy programs, rural healthcare initiatives, journalism organizations, youth development, veteran support, faith-based community groups, and Native-led media and STEM programs. Examples include:34

  • Camai Community Health Center (Alaska) - AI applications in primary care for remote communities
  • Tribal Education Departments National Assembly - AI literacy programs emphasizing tribal sovereignty
  • FabNewport and Social Enterprise Greenhouse (Rhode Island) - Community innovation
  • Martin Luther King Jr. Center for Nonviolent Social Change (Georgia) - Civic engagement

The foundation announced it will “initially focus on a $25 billion commitment” across two primary areas:35

  1. Health and Curing Diseases - Accelerating breakthroughs in diagnostics, treatments, and cures; funding scientists; creating open-source datasets
  2. Technical Solutions to AI Resilience - Maximizing AI benefits and minimizing risks through cybersecurity-like protections for AI systems

However, the foundation has not provided a timeline for when it will disburse this $25 billion or whether it will structure itself as a private foundation subject to IRS rules requiring 5% annual distribution of assets. As of fiscal year 2023 (before the restructuring), the entity disbursed only $2.6 million in grants.36

If the OpenAI Foundation were required to follow the 5% rule on its $130 billion corpus, it would need to disburse approximately $6.5 billion annually—placing it in the same tier as the Gates Foundation, which averaged $7 billion per year from 2019-2023.37
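That figure is simple arithmetic on the stake's value:

```python
# IRS 5% minimum-distribution rule applied to the foundation's corpus.
corpus = 130e9            # ~$130B equity stake
required = 0.05 * corpus  # minimum annual payout if treated as a private foundation
print(f"Required annual distribution: ${required / 1e9:.1f}B")  # $6.5B
```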

In addition to the People-First AI Fund, OpenAI’s safety systems organization launched an AI and Mental Health Grant Program in 2025, awarding up to $2 million in grants for interdisciplinary research on AI-mental health intersections. The program focuses on risks, benefits, cultural and language variations in distress expressions, lived experiences, and healthcare provider usage. Applications were accepted through December 19, 2025, with notifications by January 15, 2026.38

Comparison with Anthropic’s Long-Term Benefit Trust


The OpenAI Foundation’s structure invites comparison with Anthropic’s Long-Term Benefit Trust, both of which aim to preserve mission alignment as for-profit AI companies scale. However, the mechanisms differ significantly:

| Aspect | OpenAI Foundation | Anthropic Long-Term Benefit Trust |
| --- | --- | --- |
| Control Mechanism | Direct equity ownership (26%) + special voting rights | Pledge-based governance without equity ownership |
| Board Appointment | Foundation appoints all OpenAI Group directors | Trust holds majority of board seats via shareholder pledge |
| Financial Interest | $130B equity stake creates financial incentive | No financial stake; pure governance role |
| Legal Basis | Corporate governance rights in PBC structure | Contractual shareholder agreement |
| Permanence | Equity ownership is legally binding | Dependent on continued shareholder commitment to pledge |
| Conflicts of Interest | Board members serve on both nonprofit and for-profit boards | Separate governance structures |

Anthropic’s structure, established in 2023, relies on a shareholder pledge where investors agree to vote their shares according to the Trust’s recommendations. The Trust is designed to hold a majority of board seats and maintain control over decisions related to AI safety and the company’s public benefit mission. This approach avoids the potential conflicts of interest created by financial ownership but depends on the continued voluntary participation of shareholders.

OpenAI’s approach, by contrast, gives the foundation direct financial returns from the for-profit entity’s success, which critics argue creates incentives to prioritize growth and profitability over safety and public benefit.39 However, the foundation’s special voting rights provide legally enforceable control that does not depend on voluntary commitments.

The OpenAI Foundation has faced substantial criticism from AI safety advocates, nonprofit watchdogs, and legal experts who question whether the restructuring genuinely preserves the original nonprofit mission or represents an abandonment of public commitments.

Critics argue the restructuring fundamentally betrays OpenAI’s core purpose. As one former employee noted, “the entire philanthropic theory of change here was: we’re going to put guardrails on profit motives so we can develop this tech safely.”40 The shift to a for-profit-controlled structure, they argue, removes these guardrails.

Public Citizen, a consumer advocacy organization, released a strongly worded statement arguing that “the OpenAI Foundation will function as a corporate foundation, doing some good work but for the underlying purpose of advancing the interests of OpenAI For-Profit. The problem is, that’s not how OpenAI Nonprofit was formed or what it is required to do—the for-profit was (dubiously) created to advance the mission of the nonprofit, not the reverse.”41

Tyler Johnston, executive director of AI watchdog The Midas Project, characterized the foundation’s 26% stake as representing “humanity surrendering hundreds of billions of dollars it was already entitled to,” noting that the public will have far lower stake and control compared to the original structure where investors’ returns were capped.42

Legal scholar Luís Calderón Gómez from Cardozo School of Law explained that OpenAI could not simply abandon its nonprofit status because doing so would have required the for-profit entity to purchase the nonprofit’s assets “for fair market value, something that it was unlikely to be able to do” given OpenAI’s $500 billion valuation. The restructuring, he suggested, was a creative workaround to this legal constraint.43

Conflicts of Interest and Governance Concerns


The dual-board structure, where eight of nine foundation board members also serve on OpenAI Group’s board, has raised significant concerns about conflicts of interest. Fred Blackwell, CEO of the San Francisco Foundation, questioned whether board members would genuinely prioritize the foundation’s public-serving mission over the for-profit entity’s commercial interests: “Will the foundation be subject to the 5% rule, and will it spend beyond that requirement?”44

Judith Bell, Chief Impact Officer at the San Francisco Foundation, emphasized that proper oversight requires “independent valuation and governance of nonprofit assets” and urged the foundation to “anchor AI development in human values” rather than market imperatives.45

The foundation’s initial philanthropic focus areas—health and AI resilience—have been criticized as potentially serving OpenAI’s business interests. As Inside Philanthropy noted, OpenAI has a “financial interest in selling its technology to medical research entities,” raising questions about whether health-focused grants constitute genuine public benefit or research and development for the commercial entity.46

Similarly, the AI resilience program faces challenges managing conflicts of interest, as noted by Legal Advocates for Safe Science and Technology: OpenAI might be “biased against valuable technical AI safety work that could identify risks in OpenAI’s models.”47

The foundation’s philanthropic initiatives have been dismissed by some critics as “calculated PR stunts” designed to placate regulators and the nonprofit sector. The $50 million People-First AI Fund, while substantial, has been characterized as “obvious pandering” to California regulators, particularly given the timing of its announcement during the restructuring approval process.48

Nick Moës, Executive Director of The Future Society, called October 28, 2025, a “dark day for philanthropy,” arguing that “the public will actually benefit less from OpenAI’s immense profits” under the new structure compared to the original nonprofit model.49

Beyond structural concerns, critics have accused OpenAI of prioritizing market speed over safety. The organization disbanded its Superalignment and AGI Readiness teams—critical to its founding commitment to safe AI development—and “dramatically reduced safety testing times,” releasing powerful models like DeepResearch and GPT-4.1 without promised safety reports.50

These concerns gained public attention following incidents such as a teenager’s death allegedly involving ChatGPT functioning as a “suicide coach,” prompting calls for child safety regulations.51 The Safety and Security Committee’s structure—four part-time volunteers with no staff overseeing the development of potentially transformative AGI systems—has been criticized as grossly inadequate.52

The restructuring faces ongoing legal challenges, most notably from Elon Musk, who filed a lawsuit arguing that his $44 million contribution was contingent on OpenAI remaining a nonprofit organization in perpetuity. Musk’s lawsuit alleges that the restructuring violates founding agreements and represents a breach of the terms under which he and others provided initial funding.53 A trial is scheduled for March 2026.

The EyesOnOpenAI coalition, led by the San Francisco Foundation, has continued to raise concerns about asset valuation and independence even after the California and Delaware Attorneys General approved the restructuring with conditions. Earlier, in January 2025, the coalition had sent an open letter to California AG Rob Bonta calling for action to protect OpenAI’s charitable assets and ensure proper safeguards.54

Projecting the OpenAI Foundation’s actual charitable deployment requires analyzing structural constraints, incentive dynamics, and historical precedents. The $130 billion stake exists only on paper—translating it into charitable impact faces multiple barriers.

The foundation’s wealth cannot be deployed until shares become liquid:

| Milestone | Projected Timeline | Implication |
|---|---|---|
| OpenAI IPO Filing | H2 2026 (target) | Enables public market for shares |
| IPO Lockup Period | 6-12 months post-IPO | Foundation cannot sell during lockup |
| OpenAI Profitability | 2030 (projected) | Until profitable, valuation is speculative |
| Peak Cash Burn | 2028 ($47B projected) | Company may need capital infusions, not distributions |

OpenAI projects $9 billion in operating losses for 2025, rising to $14 billion in 2026. Cash burn is expected to peak at $47 billion in 2028 before reaching positive free cash flow around 2030.55 The company has committed $1.4 trillion in infrastructure spending over 8 years and faces a $207 billion funding shortfall according to HSBC analysis.56

Bottom line: The foundation cannot meaningfully liquidate its stake until at least 2027-2028, and doing so before OpenAI achieves profitability (2030) would signal lack of confidence in the company’s prospects.

| Scenario | Annual Spending | % of $130B | Cumulative 10-Year | Assessment |
|---|---|---|---|---|
| Status Quo | $50-100 million | 0.04-0.08% | $500M-1B | Current trajectory |
| Corporate Foundation | $200-500 million | 0.15-0.38% | $2-5B | Typical corporate foundation rate |
| 5% Minimum Compliance | $6.5 billion | 5% | $65B | If classified as private foundation |
| Gates-Level | $9 billion | 6.9% | $90B | Matching Gates Foundation 2025 record |
| Aggressive Spend-Down | $13 billion | 10% | $130B | Atlantic Philanthropies model |

Current trajectory ($50 million in 2025) represents approximately 0.04% of stated assets—orders of magnitude below both the 5% legal minimum for private foundations and the $25 billion pledge spread over any reasonable timeframe.
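The scenario arithmetic is simple enough to verify directly. A minimal sketch using the table’s own figures (midpoints are used where the table gives ranges; the $130B asset base is the stated paper valuation, not a market price):

```python
# Annual payout rate and 10-year cumulative spend for each scenario.
ASSETS = 130e9  # foundation's stated equity stake, USD

scenarios = {
    "Status Quo": 75e6,              # midpoint of $50-100M
    "Corporate Foundation": 350e6,   # midpoint of $200-500M
    "5% Minimum Compliance": 0.05 * ASSETS,
    "Gates-Level": 9e9,
    "Aggressive Spend-Down": 0.10 * ASSETS,
}

for name, annual in scenarios.items():
    print(f"{name}: ${annual / 1e9:.2f}B/yr = {annual / ASSETS:.2%} of assets, "
          f"${annual * 10 / 1e9:.0f}B over 10 years")
```

The 5% compliance row works out to $6.5B per year, roughly 130 times the status-quo rate.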

Historical Precedents: Foundations with Concentrated Stock

The most relevant historical parallel is the Ford Foundation, which held 88% of Ford Motor Company stock until 1956.

| Aspect | Ford Foundation | OpenAI Foundation |
|---|---|---|
| Initial Ownership | 88% of company | 26% of company |
| Divestiture Timeline | 19 years (1955-1974) | Unknown |
| Catalyst | Recognized “too exposed to cyclical auto business” | No stated diversification plan |
| Outcome | Became independent, broadly diversified endowment | TBD |

The Ford Foundation’s divestiture enabled both the foundation’s independence and Ford Motor’s 1956 IPO. The process took nearly two decades. The OpenAI Foundation has announced no comparable diversification strategy.57

For the Chan Zuckerberg Initiative, Zuckerberg deliberately chose an LLC structure rather than a foundation to avoid:

  • The 5% mandatory payout rule
  • Excess business holdings restrictions
  • Public disclosure requirements
  • Loss of voting control over Facebook shares

This structure allows indefinite wealth accumulation with “philanthropic” framing but no legal obligation to deploy capital. OpenAI’s foundation structure is nominally more restrictive, but the classification question (public charity vs. private foundation) remains unresolved.58

The Giving Pledge provides sobering data on philanthropic commitments by tech billionaires. Original pledgers (2010) became 166% wealthier (inflation-adjusted) by 2024 while their philanthropic activity remained modest relative to wealth growth. The pledge creates positive PR without binding legal commitment.59

A central concern is whether the foundation will function primarily as brand management rather than genuine philanthropy. Corporate foundations typically:

  1. Fund safe, non-controversial causes that generate positive press
  2. Avoid work critical of the parent company or its industry
  3. Time grants for strategic PR value (e.g., during regulatory scrutiny)
  4. Serve strategic business objectives (policy relationships, academic partnerships)

Evidence of this pattern:

  • Grant timing: The $50 million People-First AI Fund was announced September 2025—during the restructuring approval process with California AG
  • Grant profile: Recipients are broadly popular causes (rural healthcare, veterans, faith groups, STEM education) rather than technical AI safety research that might identify risks in OpenAI’s models
  • Focus area vagueness: “AI resilience” can encompass work promoting AI adoption (serving OpenAI’s commercial interests) as easily as work identifying AI risks
  • Health grants: OpenAI has direct commercial interest in selling AI tools to medical research entities, raising questions about whether health grants constitute R&D subsidy or genuine philanthropy60

Inside Philanthropy noted: “Managing conflicts of interest will be particularly challenging for the nonprofit’s AI resilience program. The organization might be biased against valuable technical AI safety work that could identify risks in OpenAI’s models.”61

The foundation’s legal classification determines whether it faces mandatory spending requirements:

| Classification | 5% Payout Required? | Excess Holdings Limit | Public Disclosure |
|---|---|---|---|
| Public Charity (501(c)(3)) | No | None | Annual Form 990 |
| Private Foundation | Yes | 20-35% of company | Form 990-PF + grants |

The original OpenAI nonprofit filed Form 990 (public charity form), not Form 990-PF (private foundation form).62 If the new foundation maintains public charity status—which requires receiving broad public support or operating programs—it has no legal obligation to disburse any funds.

San Francisco Foundation CEO Fred Blackwell directly asked: “Will the foundation be subject to the 5% rule, and will it spend beyond that requirement?”63 This question remains unanswered.

Given structural constraints, board composition, and historical precedents, the most likely spending trajectory:

| Period | Annual Spending | Rationale |
|---|---|---|
| 2025-2027 | $50-150 million | Pre-IPO; shares illiquid; “proof of concept” grants |
| 2028-2030 | $200-500 million | Post-IPO but pre-profitability; cautious selling |
| 2031-2035 | $500M-2 billion | If profitable, modest diversification begins |
| 2036+ | Unknown | Depends on IPO success, leadership changes, pressure |

10-year cumulative projection (base case): $3-8 billion deployed, representing 2-6% of initial $130B stake—far below the announced $25 billion commitment and orders of magnitude below what the 5% rule would require.

Upside scenario (private foundation classification + public pressure): $40-60 billion over 10 years.

Downside scenario (AI bubble deflation, no 5% rule): $1-3 billion over 10 years, primarily in brand-aligned grants.
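The base-case cumulative figure follows directly from the period table. A quick check using the midpoint of each period’s range (an illustrative sketch, not a forecast):

```python
# Base-case deployment path: (period, number of years, midpoint annual spend).
periods = [
    ("2025-2027", 3, 100e6),   # midpoint of $50-150M/yr
    ("2028-2030", 3, 350e6),   # midpoint of $200-500M/yr
    ("2031-2035", 5, 1.25e9),  # midpoint of $500M-2B/yr
]

total = sum(years * annual for _, years, annual in periods)
print(f"~${total / 1e9:.1f}B cumulative through 2035, "
      f"{total / 130e9:.1%} of the $130B stake")
```

Midpoints give roughly $7.6B, about 5.8% of the stake, at the upper end of the $3-8B base-case range.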

Will the Foundation Deliver? A Critical Analysis

The OpenAI Foundation represents an unprecedented experiment in AI governance: a $130 billion nonprofit entity controlling one of the world’s most valuable technology companies. Whether this structure produces genuine philanthropic impact or serves primarily as reputation management depends on several structural factors.

Scale of Resources: At $130 billion, the foundation’s equity stake exceeds the Gates Foundation’s $86 billion endowment, potentially making it the world’s wealthiest charity.64 If structured as a private foundation subject to the 5% distribution rule, it would need to disburse approximately $6.5 billion annually—transformative scale for any cause area.

Governance Authority: Unlike most corporate foundations, the OpenAI Foundation holds genuine control through its power to appoint and remove all OpenAI Group board members at any time. This represents more than advisory influence—it is legal authority to redirect the company’s strategy.

Long-Term Warrant: The foundation secured a warrant for additional shares if OpenAI’s valuation exceeds $5 trillion after 15 years, aligning its interests with long-term company success rather than short-term extraction.65

Public Accountability: The restructuring required approval from both California and Delaware Attorneys General, who imposed 20 safety and security requirements. Ongoing regulatory scrutiny may constrain the foundation’s ability to simply serve OpenAI’s commercial interests.66

Early Grantmaking: The 440% increase in grantmaking (from $7.5 million to $40.5 million in 2025) and the breadth of People-First AI Fund recipients suggest some genuine philanthropic intent beyond pure PR.67

No Disbursement Timeline: The foundation announced a “$25 billion commitment” but provided no timeline for deployment. The vague language—“will initially focus on”—creates no binding obligation. As of fiscal year 2023, the entity disbursed only $2.6 million in grants.68

Corporate Foundation Dynamics: San Francisco Foundation CEO Fred Blackwell articulated the core concern: “Will it have the independence to be bold, take risks, and challenge the industry—or will it look and operate more like a corporate foundation that makes safe grants that merely advance the OpenAI brand?”69 Corporate foundations typically serve their parent companies’ interests rather than pursuing independent charitable missions.

Structural Conflicts of Interest: Eight of nine foundation board members also serve on OpenAI Group’s board. This dual-board structure means the same individuals who benefit from OpenAI’s commercial success are responsible for ensuring the foundation serves public interests. As one critic noted, “A profit incentive is a conflict of interest.”70

Biased Against Critical Safety Work: The Midas Project’s executive director warned the foundation might be “biased against valuable technical AI safety work that could identify risks in OpenAI’s models.”71 A foundation funded by OpenAI equity has structural incentives to avoid research that could harm the company’s reputation or products.

Equity Stake Valuation Dependency: The foundation’s $130 billion valuation exists only on paper—it depends entirely on OpenAI maintaining or increasing its market value. This creates perverse incentives:

  • Reluctance to sell: Selling significant equity would signal lack of confidence and potentially depress the stock price, reducing the foundation’s remaining holdings’ value
  • Pro-growth bias: The foundation benefits financially from OpenAI’s commercial success, creating pressure to prioritize growth over safety constraints that might slow development
  • Valuation fragility: OpenAI is currently valued at $500 billion despite projections it won’t turn a profit until 2030 and faces a $207 billion funding shortfall by then.72 If the AI bubble deflates, the foundation’s “wealth” could evaporate before being deployed

IRS Classification Uncertainty: If classified as a private foundation, IRS excess business holdings rules (Section 4943) would limit the foundation to owning 20% of a business enterprise (or 35% with certain conditions).73 The foundation currently holds 26%. While a 5-year grace period exists for gifts and bequests, eventual compliance would require selling shares—potentially at unfavorable prices if done under regulatory pressure.
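Under those Section 4943 numbers, the scale of the required divestiture is straightforward to estimate. A sketch assuming the 20% limit applies and the stated $500 billion valuation holds:

```python
# Excess-business-holdings arithmetic: value of the stake above the 20% limit.
company_value = 500e9  # implied by a $130B stake equal to 26%
stake, limit = 0.26, 0.20

excess_value = (stake - limit) * company_value
print(f"Stock to divest: ~${excess_value / 1e9:.0f}B "
      f"({(stake - limit) * 100:.0f} percentage points of OpenAI)")
```

Six percentage points of a $500B company is roughly $30B of forced sales, before considering the 35% threshold or the grace period.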

“Dark Day for Philanthropy”: The Future Society’s Nick Moës characterized the restructuring as “a dark day for philanthropy,” arguing “the public will actually benefit less from OpenAI’s immense profits” under the new structure compared to the original nonprofit model where all surplus would have gone to charitable purposes.74

A fundamental tension exists in the foundation’s structure: its wealth is tied to assets it has strong incentives never to sell.

Selling Depresses Value: Large institutional shareholders selling significant stakes typically signal bearish sentiment to markets. If the foundation began liquidating its 26% stake to fund charitable work, the sale itself would likely reduce OpenAI’s valuation—and thus the value of remaining foundation holdings.

Holding Preserves Paper Wealth: By holding shares indefinitely, the foundation can claim $130+ billion in “assets” without ever deploying capital to charitable purposes. This allows it to claim the prestige of being one of the world’s largest philanthropies while functioning primarily as a governance mechanism for OpenAI.

IPO Complications: If OpenAI pursues an IPO (potentially in 2027-2028), the foundation would face insider trading restrictions, lockup periods, and disclosure requirements that could further complicate share sales. Large sales by a controlling nonprofit shareholder could spook public markets.

Comparison to Giving Pledge Dynamics: The Giving Pledge demonstrated that wealthy individuals often grow their fortunes faster than they give them away—original pledgers became 166% wealthier (inflation-adjusted) since 2010.75 A similar dynamic may apply here: the foundation’s paper wealth may compound while actual charitable disbursements remain minimal relative to assets.

Historical precedent suggests caution. Corporate foundations typically:

  1. Fund safe, brand-aligned causes that generate positive PR without challenging the parent company’s interests
  2. Avoid controversial or critical work that might create regulatory, legal, or reputational complications
  3. Maintain modest disbursement rates while accumulating endowment assets
  4. Serve strategic business objectives such as cultivating relationships with policymakers, academics, and potential partners

The foundation’s announced focus areas—health and “AI resilience”—fit this pattern. Health is universally appealing and generates positive coverage. “AI resilience” is sufficiently vague to encompass work that promotes AI adoption (serving OpenAI’s commercial interests) rather than work that identifies risks in OpenAI’s specific systems.

Several observable factors will indicate whether the foundation becomes a genuine philanthropic force or primarily serves OpenAI’s interests:

| Indicator | Positive Signal | Negative Signal |
|---|---|---|
| Disbursement rate | ≥5% of assets annually | <1% or declining relative to asset growth |
| Grant independence | Funding critical AI safety research, including work on OpenAI risks | Only funding “AI adoption” or brand-aligned causes |
| Board independence | Adding independent directors not affiliated with OpenAI | Maintaining complete board overlap with for-profit |
| Equity diversification | Selling shares to fund operations, diversifying into other assets | Holding concentrated OpenAI position indefinitely |
| Transparency | Publishing detailed grant databases, safety assessments, governance decisions | Minimal disclosure, vague annual reports |
| Willingness to constrain | Foundation blocking product releases or redirecting resources for safety | Foundation rubber-stamping all commercial decisions |

The OpenAI Foundation’s emergence as a potentially $130+ billion philanthropic entity has significant implications for AI safety funding and governance.

If the foundation operates as a traditional private foundation distributing 5% annually, it would represent by far the largest source of dedicated AI-related philanthropic funding, dwarfing current major funders like Open Philanthropy, the Survival and Flourishing Fund, and the Future of Life Institute. Even the announced $25 billion commitment, if deployed over a decade, would represent $2.5 billion annually—an unprecedented scale for AI safety and related work.

However, the foundation’s focus areas—health and AI resilience—may not directly translate to support for independent AI safety research, especially work that might identify risks in OpenAI’s own systems. The potential for conflicts of interest raises questions about whether the foundation will fund critical, independent safety research or primarily support work that advances OpenAI’s commercial objectives.

Other major AI companies have taken different approaches to safety funding and governance:

  • Anthropic established its Long-Term Benefit Trust with governance control but no financial stake, avoiding direct conflicts of interest
  • DeepMind (now Google DeepMind) operates within Google’s corporate structure with safety teams but no independent oversight entity
  • Various companies contribute to external AI safety organizations but retain full corporate control over their own development

The OpenAI Foundation represents a middle path: substantial financial resources coupled with governance control, but embedded within a structure where the same individuals oversee both the nonprofit mission and for-profit operations.

The OpenAI Foundation’s structure may set a precedent for how other AI companies balance mission preservation with capital requirements. If successful in maintaining genuine mission alignment while enabling necessary investment, it could provide a model for other organizations developing transformative AI systems. If, however, the foundation proves to prioritize commercial interests over public benefit, it may serve as a cautionary tale about the limits of nonprofit oversight in for-profit AI development.

The ultimate test will be whether the foundation demonstrates willingness to constrain OpenAI’s commercial activities when they conflict with safety or public benefit considerations—for example, by blocking product releases deemed insufficiently safe or redirecting resources toward safety research that might slow commercialization.

Several fundamental questions about the OpenAI Foundation remain unresolved:

  1. Distribution Requirements: Will the foundation be structured as a private foundation subject to the 5% annual distribution rule, or will it adopt a different structure with more flexible requirements?

  2. Timeline for Grantmaking: When will the foundation begin deploying the announced $25 billion commitment, and at what pace?

  3. Independence of Grantmaking: Will the foundation support independent AI safety research that might identify risks in OpenAI’s models, or will it primarily fund work aligned with OpenAI’s commercial objectives?

  4. Governance in Practice: Will the foundation’s board exercise its authority to constrain OpenAI Group’s activities when they conflict with safety or public benefit considerations, or will the dual-board structure lead to prioritization of commercial interests?

  5. Long-Term Control: As OpenAI Group pursues a potential IPO and future funding rounds, will the foundation maintain effective control, or will dilution and investor pressure erode its governance authority?

  6. Legal Resolution: How will the Elon Musk lawsuit and potential other legal challenges be resolved, and might they force changes to the foundation’s structure or operations?

  7. Comparison with Alternatives: Will the OpenAI Foundation’s equity-based control model prove more or less effective at preserving mission alignment than Anthropic’s pledge-based trust structure or other governance approaches?

These uncertainties reflect the unprecedented nature of the foundation’s structure and the inherent tensions in attempting to maintain nonprofit mission alignment while operating as one of the world’s most valuable technology companies.

Pressuring the Foundation: Strategies and Cost-Effectiveness

Given the structural incentives for the foundation to sit on its $130 billion rather than deploy it, external pressure may be necessary to ensure meaningful philanthropic impact. This section analyzes available pressure mechanisms, their costs, likelihood of success, and potential impact.

| Mechanism | Cost Range | Success Likelihood | Timeline | Cost per $1B Deployed |
|---|---|---|---|---|
| California AG engagement | $50K-$2M | 60-70% | 6mo-3yr | $0.1-2M |
| Elon Musk lawsuit (ongoing) | N/A (already funded) | 40-50% | Trial Mar 2026 | N/A |
| EyesOnOpenAI coalition | $100K-$500K/yr | 40-55% | 1-3yr | $0.2-1M |
| Investigative journalism | $50K-$500K | 50-65% | 1-6mo | $0.5-5M |
| IRS whistleblower program | Free-$50K | 25-40% | 1-5yr | <$0.1M |
| Academic research | $50K-$200K | 25-55% | 1-3yr | Indirect |
| Post-IPO ESG activism | $500K-$5M | 35-50% | 2027+ | $1-10M |

High-Priority: California Attorney General

How it works: The California AG’s Charitable Trusts Section has oversight authority over all charitable corporations operating in California. Under the Nonprofit Integrity Act of 2004, the AG can:

  • Audit organizations and review annual filings
  • Investigate fraud, asset diversion, or self-dealing
  • Require court approval for significant asset transfers
  • Bring enforcement actions to protect charitable assets

Current status: AG Rob Bonta approved the restructuring in October 2025 with 20 requirements. The EyesOnOpenAI coalition continues advocating for stronger enforcement.

Historical precedent:

  • Trump Foundation (NY): AG secured $2 million in damages and restrictions
  • Pelletier Foundation (CA): AG recovered $600,000 in misappropriated assets
  • Noble Foundation (WA): $1.4 million judgment against directors

Cost-effectiveness estimate: If $500K in advocacy spending increases foundation disbursement by 1 percentage point ($1.3B), the cost per dollar deployed is approximately $0.0004 — among the most cost-effective philanthropic interventions possible.76
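The footnoted estimate is just a ratio; spelled out, with the 1-percentage-point disbursement shift as the footnote’s stated assumption:

```python
# Cost per charitable dollar unlocked, per the footnoted calculation.
advocacy_cost = 500_000           # advocacy spending, USD
corpus = 130e9                    # foundation's stated assets
extra_deployed = 0.01 * corpus    # +1 percentage point of disbursement = $1.3B

cost_per_dollar = advocacy_cost / extra_deployed
print(f"${cost_per_dollar:.4f} per dollar deployed")  # ~ $0.0004
```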

Recommended actions:

  1. Support the San Francisco Foundation’s coalition efforts ($50K-$100K contribution)
  2. Fund legal analysis of IRS classification question ($25K-$75K)
  3. Commission independent valuation analysis ($50K-$150K)

High-Priority: Monitor and Support Musk Lawsuit

How it works: Elon Musk’s lawsuit (filed August 2024) alleges OpenAI violated its nonprofit founding agreement. The lawsuit seeks:

  • Declaration that OpenAI breached fiduciary duties
  • Disgorgement of profits to the nonprofit
  • Injunction against for-profit conversion

Status: Trial scheduled for March 2026. In February 2025, a Musk-led consortium offered $97.4 billion to acquire the nonprofit.77

Why this matters: A favorable ruling could establish precedent that:

  • Original donor intent constrains restructuring
  • Nonprofits cannot simply convert to for-profit entities
  • Charitable assets must be protected in corporate transitions

Recommended actions:

  1. Consider filing amicus brief supporting nonprofit mission preservation ($25K-$75K)
  2. Fund research documenting original nonprofit commitments ($10K-$30K)
  3. Publicize trial proceedings and implications ($20K-$50K)

Investigative Journalism

How it works: Organizations like ProPublica, CalMatters, and Inside Philanthropy can sustain public attention on foundation governance.

Why effective:

  • ProPublica won the 2024 Pulitzer Prize for Public Service for exposing undisclosed gifts to Supreme Court justices
  • Media coverage creates pressure on board members and regulators
  • Investigative pieces provide evidence for legal and regulatory action

Cost-effectiveness: Funding a $100K investigative series that generates sustained media coverage could shift foundation behavior by exposing governance failures—potentially one of the highest-leverage interventions.

Recommended actions:

  1. Provide documentation to investigative journalists (staff time)
  2. Fund dedicated AI philanthropy reporting ($50K-$150K grant to journalism nonprofit)
  3. Create searchable database of foundation governance issues ($25K-$50K)

Medium-Priority: IRS Whistleblower Program

How it works: The IRS Whistleblower Office awards 15-30% of collected proceeds to individuals who report tax violations. Protections under the Taxpayer First Act of 2019 include:

  • Anti-retaliation provisions
  • Confidentiality protections
  • Right to reinstatement and compensatory damages

Potential violations to investigate:

  • Private benefit/inurement if foundation primarily serves OpenAI commercial interests
  • Excess business holdings if classified as private foundation (26% exceeds 20% limit)
  • Self-dealing if board members benefit from foundation decisions

Cost: Filing a whistleblower complaint is free. Legal representation typically works on contingency (25-40% of award).

Timeline: IRS investigations typically take 2-5+ years, but the investigation itself creates pressure.

EyesOnOpenAI Coalition

How it works: The EyesOnOpenAI coalition, co-chaired by San Francisco Foundation CEO Fred Blackwell and LatinoProsperity, includes 50+ organizations advocating for foundation accountability.

Effectiveness:

  • Already achieved AG investigation and restructuring conditions
  • Represents significant philanthropic sector voice
  • Can mobilize media attention and public pressure

Recommended actions:

  1. Join or support the coalition ($10K-$50K membership/contribution)
  2. Recruit additional foundation members
  3. Fund coalition coordination and legal support ($100K-$250K)

Post-IPO Shareholder Activism

How it works: Once OpenAI becomes publicly traded (projected 2026-2027), shareholder activism tools become available, including shareholder proposals and proxy campaigns.

Current limitation: OpenAI is not yet publicly traded, so direct shareholder activism is not available. However, Microsoft (27% stake) is publicly traded and could be targeted indirectly.

Cost: Shareholder proposals cost $5K-$25K; full proxy campaigns cost $500K-$5M.

Timeline: Likely not available until 2027-2028.

Assuming the goal is increasing foundation spending toward the $6.5B/year that would be required under private foundation rules:

| Intervention | Estimated Cost | Probability of Success | Expected $ Deployed | Cost per $ Deployed |
|---|---|---|---|---|
| CA AG advocacy | $500K | 65% | $2-5B over 10yr | $0.0001-0.00025 |
| Journalism funding | $150K | 55% | $500M-2B | $0.00008-0.0003 |
| Coalition support | $250K | 50% | $1-3B | $0.00008-0.00025 |
| Legal research | $100K | 40% | $500M-1B | $0.0001-0.0002 |
| IRS whistleblower | $50K | 30% | $200M-500M | $0.0001-0.00025 |

These cost-effectiveness estimates suggest that advocacy for OpenAI Foundation accountability may be among the most leveraged philanthropic opportunities available. Even modest investments in legal advocacy, journalism, and coalition building could potentially unlock billions in charitable spending that would otherwise remain as paper wealth.
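One way to sanity-check the table is to probability-weight it: expected dollars unlocked per advocacy dollar, using the midpoint of each deployment range. A sketch built only from the table’s own figures:

```python
# Expected cost-effectiveness: cost / (P(success) * midpoint deployment).
interventions = [
    # (name, cost USD, P(success), low deployment, high deployment)
    ("CA AG advocacy",     500e3, 0.65, 2e9,   5e9),
    ("Journalism funding", 150e3, 0.55, 500e6, 2e9),
    ("Coalition support",  250e3, 0.50, 1e9,   3e9),
    ("Legal research",     100e3, 0.40, 500e6, 1e9),
    ("IRS whistleblower",   50e3, 0.30, 200e6, 500e6),
]

for name, cost, p, lo, hi in interventions:
    expected = p * (lo + hi) / 2
    print(f"{name}: ${cost / expected:.5f} per expected dollar deployed")
```

Even after discounting each intervention by its failure probability, every row stays well under a tenth of a cent per expected dollar deployed.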

A comprehensive $1-2 million campaign over 2-3 years could include:

  1. Legal Track ($300K-$500K)

    • Commission IRS classification analysis
    • Support Musk lawsuit as amicus
    • Fund enforcement petition to CA AG
  2. Media Track ($200K-$400K)

    • Support investigative journalism
    • Create public-facing accountability dashboard
    • Document governance failures
  3. Coalition Track ($200K-$400K)

  4. Research Track ($100K-$200K)

    • Commission academic research on foundation governance
    • Document historical precedents
    • Analyze spending patterns

Total estimated cost: $800K-$1.5M over 2-3 years

Potential impact: If this campaign increases foundation spending from the projected 2-6% to even 10-15% of assets, the additional $10-15 billion deployed would represent a cost-effectiveness ratio of approximately $0.0001 per dollar deployed—among the most cost-effective interventions in philanthropy.

  • OpenAI Foundation Governance Paradox — Why same-board structure creates governance theater, not real accountability
  • Musk v. OpenAI Lawsuit — $79-134B legal challenge; Foundation assets at risk
  • Long-Term Benefit Trust — Anthropic’s contrasting pledge-based governance model
  • Anthropic (Funder) — Analysis of Anthropic’s funding and investor ecosystem
  • Giving Pledge — Historical track record of billionaire philanthropic commitments
  • OpenAI — The for-profit entity governed by the foundation
  1. OpenAI Foundation - Official Website

  2. Built to Benefit Everyone - OpenAI Blog

  3. OpenAI’s restructuring: how the company’s structure works - NBC News

  4. Our Structure - OpenAI

  5. People-First AI Fund Grantees - OpenAI Blog

  6. OpenAI - Britannica

  7. OpenAI: When and Why It Was Founded - Data Studios

  8. OpenAI - Wikipedia

  9. OpenAI Company Research - Contrary Research

  10. OpenAI Timeline - Time Magazine

  11. A nonprofit on top, billions below: Inside OpenAI’s new corporate balancing act - NBC News

  12. OpenAI Now Has a Foundation. We Have Some Questions - Inside Philanthropy

  13. OpenAI Restructure - San Francisco Foundation

  14. 80,000 Hours Podcast - OpenAI Restructuring Discussion

  15. OpenAI - Wikipedia

  16. OpenAI Structure - NBC News

  17. Our Structure - OpenAI

  18. OpenAI Leadership - Wikipedia

  19. Our Structure - OpenAI

  20. OpenAI Foundation Questions - Inside Philanthropy

  21. CalMatters - OpenAI restructuring deal

  22. Fortune - OpenAI Chair Bret Taylor Promises to Recuse on Sierra Overlap

  23. CNBC - OpenAI chair Bret Taylor says AI is ‘probably’ a bubble

  24. Marketplace - Gates Foundation CEO on Big Philanthropy

  25. NBC News - Larry Summers resigns from OpenAI board

  26. Inside Philanthropy - Meet the Ogunlesis

  27. The Doe Fund - Board of Directors

  28. Axios - OpenAI Board Adds Former NSA Director Nakasone

  29. OpenAI Restructuring Discussion - 80,000 Hours

  30. People-First AI Fund Grantees - OpenAI

  31. How to Apply for OpenAI’s $50M People-First AI Fund - The Class Consulting Group

  32. People-First AI Fund Grantees - OpenAI

  33. A Look Under the Hood of the OpenAI Foundation’s People-First AI Fund - Inside Philanthropy

  34. People-First AI Fund Grantees - OpenAI

  35. OpenAI Foundation - Official Website

  36. OpenAI Foundation Questions - Inside Philanthropy

  37. OpenAI Foundation Questions - Inside Philanthropy

  38. AI and Mental Health Research Grants - OpenAI

  39. OpenAI Foundation Questions - Inside Philanthropy

  40. Inside OpenAI’s Controversial Plan to Abandon Its Nonprofit - EA Forum

  41. NBC News on OpenAI Restructuring

  42. OpenAI Foundation Questions - Inside Philanthropy

  43. NBC News on OpenAI Restructuring

  44. OpenAI Foundation Questions - Inside Philanthropy

  45. OpenAI Restructure - San Francisco Foundation

  46. OpenAI Foundation Questions - Inside Philanthropy

  47. OpenAI Foundation Questions - Inside Philanthropy

  48. Inside OpenAI’s Controversial Plan - EA Forum

  49. OpenAI Foundation Questions - Inside Philanthropy

  50. Inside OpenAI’s Controversial Plan - EA Forum/LessWrong

  51. 5 Policy Questions Prompted by OpenAI’s Restructuring - Tech Policy Press

  52. OpenAI Restructuring Discussion - 80,000 Hours

  53. Inside OpenAI’s Controversial Plan - EA Forum

  54. Coalition Requests Attorney General Action - San Francisco Foundation

  55. Carnegie Investments - Risks Facing OpenAI

  56. Fortune - HSBC Analysis: OpenAI $207B Funding Shortfall

  57. Tontine Coffee-House - Ford’s 1956 IPO and Foundation Divestiture

  58. Fortune - Mark Zuckerberg Defends Foundation LLC Structure

  59. Institute for Policy Studies - The Giving Pledge at 15

  60. Inside Philanthropy - OpenAI Foundation Health Focus Questions

  61. Inside Philanthropy - AI Resilience Conflict of Interest

  62. ProPublica Nonprofit Explorer - OpenAI Inc.

  63. Inside Philanthropy - Will Foundation Be Subject to 5% Rule?

  64. OpenAI Foundation Questions - Inside Philanthropy

  65. OpenAI Structure - NBC News

  66. OpenAI Restructure - San Francisco Foundation

  67. A Look Under the Hood of the OpenAI Foundation’s People-First AI Fund - Inside Philanthropy

  68. OpenAI Foundation Questions - Inside Philanthropy

  69. OpenAI Foundation Questions - Inside Philanthropy

  70. U.S. News - Changing OpenAI’s Nonprofit Structure

  71. OpenAI Foundation Questions - Inside Philanthropy

  72. HSBC Analysis - OpenAI Funding Shortfall - Fortune

  73. IRS - Excess Business Holdings of Private Foundation

  74. OpenAI Foundation Questions - Inside Philanthropy

  75. Institute for Policy Studies - The Giving Pledge at 15

  76. Cost-effectiveness calculation: If $500K in advocacy increases disbursement rate by 1 percentage point on $130B corpus, that yields $1.3B additional deployment. $500K/$1.3B = $0.0004 per dollar deployed.

  77. Reuters - Musk-led group offers $97.4 billion for OpenAI nonprofit