# Longtermist Funders (Overview)

## Overview
Longtermist funders provide critical financial support for organizations working on AI safety, existential risk reduction, and related cause areas. The funding landscape is characterized by a relatively small number of major philanthropists and foundations that provide the majority of resources, with additional support from regranting programs and smaller donors.
Funding has grown substantially over the past decade, though it remains small relative to overall AI development spending. The landscape shifted sharply in 2022-2023, when the FTX collapse eliminated a major planned funding source; other funders have since partially filled the gap.
## Comprehensive Funder Comparison

### By Annual Giving and Focus Area
| Funder | Annual Giving | AI Safety | Global Health | Science | Education | Other |
|---|---|---|---|---|---|---|
| Gates Foundation | ≈$7B | Minimal | $4B | $1B | $500M | $1B |
| Wellcome Trust | ≈$1.5B | Minimal | $500M | $800M | — | $200M |
| Chan Zuckerberg Initiative | ≈$1B | $0 | $200M | $800M | $30M | — |
| Howard Hughes Medical Institute | ≈$1B | $0 | Minimal | $1B | — | — |
| Coefficient Giving | ≈$700M | $65M | $300M | $50M | — | $285M |
| MacArthur Foundation | ≈$260M | Minimal | — | $50M | — | $200M |
| Hewlett Foundation | ≈$473M | $8M | — | — | $100M | $365M |
| Survival and Flourishing Fund | ≈$35M | $30M | — | — | — | $5M |
| Schmidt Futures | ≈$200M | $5M | — | $100M | $50M | $45M |
| Long-Term Future Fund | ≈$5-10M | $5-10M | — | — | — | — |
| Manifund | ≈$2-5M | $1-3M | — | — | — | $1-2M |
### Key Individual Philanthropists
| Person | Net Worth | Annual Giving | AI Safety | Lifetime Total | Primary Vehicle |
|---|---|---|---|---|---|
| Bill Gates | ≈$130B | ≈$5B | Minimal | $50B+ | Gates Foundation |
| Elon Musk | ≈$400B | ≈$250M | Minimal | ≈$8B | Musk Foundation |
| Mark Zuckerberg | ≈$200B | ≈$1B | $0 | ≈$8B | Chan Zuckerberg Initiative |
| Dustin Moskovitz | ≈$17B | ≈$700M | $65M | $4B+ | Coefficient Giving |
| MacKenzie Scott | ≈$35B | ≈$3-4B | Unknown | $17B+ | Direct giving |
| Jaan Tallinn | ≈$500M | ≈$50M | $40M+ | $100M+ | SFF, direct |
| Vitalik Buterin | ≈$500M | ≈$50M | $15M+ | $800M+ | FLI ($665M), MIRI, Balvi |
| Eric Schmidt | ≈$25B | ≈$200M | $5M | $1B+ | Schmidt Futures |
## AI Safety Funding Concentration
The AI safety funding landscape is highly concentrated among a few donors:
| Funder | AI Safety (Annual) | % of Total AI Safety Funding |
|---|---|---|
| Coefficient Giving | $65M | ≈55% |
| Survival and Flourishing Fund | $30M | ≈25% |
| Jaan Tallinn (direct) | $10M | ≈8% |
| Vitalik Buterin | $5-15M | ≈5-10% |
| Long-Term Future Fund | $5-10M | ≈5% |
| Other sources | $5-10M | ≈5% |
| **Total estimated** | **≈$120-150M/year** | **100%** |
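One way to make "highly concentrated" precise is a Herfindahl-Hirschman index (HHI) over these shares. A minimal sketch, using assumed midpoints for the ranged estimates above (the exact point values are illustrative, not sourced):

```python
# Herfindahl-Hirschman index over the funding shares in the table above.
# Midpoints for the ranged estimates are illustrative assumptions.
shares = {
    "Coefficient Giving": 55.0,
    "Survival and Flourishing Fund": 25.0,
    "Jaan Tallinn (direct)": 8.0,
    "Vitalik Buterin": 7.5,  # assumed midpoint of 5-10%
    "Long-Term Future Fund": 5.0,
    "Other sources": 5.0,
}

# Normalize so the shares sum to exactly 100%.
total = sum(shares.values())
normalized = [100 * s / total for s in shares.values()]

# HHI = sum of squared percentage shares; antitrust practice treats
# anything above 2500 as a highly concentrated market.
hhi = sum(s**2 for s in normalized)
print(f"HHI ~= {hhi:.0f}")  # ~3400, well above the 2500 threshold
```

On these assumptions the index lands around 3,400, consistent with describing the field as highly concentrated.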
## Untapped Philanthropic Potential
Several major philanthropists have significant resources but minimal AI safety engagement:
| Person | Net Worth | Current AI Safety | Potential (1% of Net Worth) |
|---|---|---|---|
| Elon Musk | $400B | ≈$0 | $4B/year |
| Mark Zuckerberg | $200B | $0 | $2B/year |
| Bill Gates | $130B | Minimal | $1.3B/year |
| Larry Ellison | $230B | $0 | $2.3B/year |
| Jeff Bezos | $200B | $0 | $2B/year |
If these five individuals allocated just 1% of their net worth annually to AI safety, it would represent $11.6B/year — roughly 80x current total funding.
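The arithmetic behind that estimate, as a minimal check (net worths in $B from the table; the $145M baseline is an assumed midpoint of the ≈$120-150M/year range above):

```python
# Combined net worth of the five philanthropists above, in $B.
net_worth_b = {
    "Elon Musk": 400,
    "Mark Zuckerberg": 200,
    "Bill Gates": 130,
    "Larry Ellison": 230,
    "Jeff Bezos": 200,
}

annual_at_1pct = 0.01 * sum(net_worth_b.values())  # 1% of combined net worth
current_total_b = 0.145  # assumed midpoint of the ~$120-150M/year estimate

print(f"1% of combined net worth: ${annual_at_1pct:.1f}B/year")  # $11.6B/year
print(f"Multiple of current funding: {annual_at_1pct / current_total_b:.0f}x")  # ~80x
```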
## AI Safety Funders (Detailed)
| Organization | Type | Annual Giving (Est.) | Primary Focus | Key Grantees |
|---|---|---|---|---|
| Coefficient Giving | Foundation | $65M (AI safety) | Technical alignment, governance, evals | MIRI, Redwood Research, METR, GovAI |
| Survival and Flourishing Fund | Pooled fund (S-process) | $30M | AI safety, x-risk | MIRI, ARC Evals, SERI, CAIS |
| Long-Term Future Fund | Regranting | $5-10M | AI safety, x-risk research | Individual researchers, small orgs |
| The Foundation Layer Fund | Donor-advised | $70M+ (cumulative, 100+ grants) | Alignment, nonproliferation, defensive tech, power distribution, talent | Broad AI safety ecosystem |
| AI Safety Tactical Opportunities Fund (AISTOF) | Pooled fund | $30M+ (cumulative, 150+ grants) | Emerging opportunities across governance, alignment, evals | Rapid-response grantmaking |
| Manifund | Regranting platform | $2-5M | EA causes broadly | Community projects |
## Non-AI-Safety Major Funders
| Organization | Type | Annual Giving | Focus Areas | AI Safety |
|---|---|---|---|---|
| Gates Foundation | Foundation | $7B | Global health, poverty, education | Minimal |
| Wellcome Trust | Foundation | $1.5B | Health research, science | Minimal |
| Chan Zuckerberg Initiative | LLC | $1B | AI-biology, disease cures | $0 |
| Hewlett Foundation | Foundation | $473M | Environment, democracy, education | $8M (cybersecurity) |
| MacArthur Foundation | Foundation | $260M | Climate, justice, nuclear risk | Minimal |
| Schmidt Futures | LLC | $200M | Science, AI applications, talent | $5M |
## AI Safety Funding Landscape

[Diagram: AI safety funding landscape]

## Broader Philanthropy Landscape (For Context)

[Diagram: broader philanthropy landscape]
## The Scale Gap
| Category | Annual Funding | Notes |
|---|---|---|
| AI safety (total) | ≈$120-150M | Highly concentrated |
| Gates Foundation alone | ≈$7,000M | ~50x AI safety total |
| AI capabilities (industry) | ≈$50,000M+ | ~400x AI safety total |
| Global philanthropy | ≈$500,000M | ~4,000x AI safety total |
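The multiples in the Notes column follow directly from dividing each category by the AI safety total; a minimal check, using an assumed $135M midpoint for AI safety funding:

```python
# Ratios from the table above; the $135M midpoint is an assumption
# taken from the ~$120-150M range.
ai_safety_m = 135
comparisons_m = {
    "Gates Foundation alone": 7_000,
    "AI capabilities (industry)": 50_000,
    "Global philanthropy": 500_000,
}

for name, amount in comparisons_m.items():
    print(f"{name}: ~{amount / ai_safety_m:.0f}x AI safety total")
# -> ~52x, ~370x, ~3704x (the table rounds to 50x, 400x, 4,000x)
```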
## Pending Major Funding Sources

### Anthropic-Derived Capital
Anthropic represents potentially the largest future source of longtermist philanthropic capital. At Anthropic's current $350B valuation:
| Source | Estimated Value | EA Likelihood | Notes |
|---|---|---|---|
| Founder pledges (7 founders, 80%) | $39-59B | 2/7 strongly EA-aligned | Only Dario & Daniela have documented EA connections |
| Jaan Tallinn stake | $2-6B (conservative) | Very high | Series A lead investor |
| Dustin Moskovitz stake | $3-9B | Certain | $500M+ already in nonprofit |
| Employee pledges + matching | $20-40B | High (in DAFs) | Historical 3:1 matching reduced to 1:1 for new hires |
| **Total risk-adjusted** | **$25-70B** | — | Wide range reflects cause allocation uncertainty |
Key uncertainties:

- Only 2/7 founders have documented strong EA connections; 71% of founder equity may go to non-EA causes
- Matching program reduced from 3:1 at 50% to 1:1 at 25% for new employees
- IPO timeline: 2026-2027 expected; capital deployment likely 2027-2035
For comparison, this $25-70B range represents 170-470x current annual AI safety funding of ≈$150M. Even if only 10% ultimately reaches EA causes, it would still be transformative.
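A minimal check of those multiples, including the pessimistic 10% pass-through scenario (inputs in $B; the $150M baseline comes from the estimate earlier on this page):

```python
current_annual_b = 0.15  # ~$150M/year current AI safety funding, in $B

for capital_b in (25, 70):  # endpoints of the risk-adjusted range above
    full = capital_b / current_annual_b
    partial = 0.10 * capital_b / current_annual_b  # only 10% reaches EA causes
    print(f"${capital_b}B: {full:.0f}x current funding; "
          f"{partial:.0f}x if only 10% is deployed")
# -> 167x / 17x and 467x / 47x, matching the 170-470x range above
```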
See Anthropic (Funder) for comprehensive analysis.
### OpenAI Foundation
The OpenAI Foundation holds 26% of OpenAI, worth approximately $130B at current valuations. Unlike Anthropic's pledge-based model, the Foundation has direct legal control over these assets. Cause allocation is uncertain; the Foundation's stated mission focuses on "safe AGI" but specific philanthropic priorities are undisclosed.
## Recent Trends

**2024-2026 developments:**
- Coefficient Giving launched a $40M AI Safety Request for Proposals (January 2025)
- SFF allocated $34.33M, with 86% going to AI-related projects
- Coefficient Giving (formerly Open Philanthropy) rebranded in November 2025
- LTFF continued steady grantmaking at ≈$5M annually
- Anthropic founders announced 80% donation pledges (January 2026)
- The Foundation Layer launched (early 2026): a comprehensive philanthropic guide by Tyler John (Effective Institutions Project) synthesizing five years of AI safety advisory into a donor guidebook, covering alignment, nonproliferation, defensive tech, power distribution, and talent
**Post-FTX landscape:**

- The Future Fund's collapse eliminated ≈$160M in committed grants
- Some organizations faced funding crises; others found alternative support
## Related Pages

**Analyses:** Anthropic (Funder), Elon Musk (Funder)

**Organizations:** MacArthur Foundation, Coefficient Giving, Schmidt Futures, William and Flora Hewlett Foundation, Survival and Flourishing Fund, Vitalik Buterin (Funder)

**Other:** Jaan Tallinn, Dustin Moskovitz (AI Safety Funder)

**Concepts:** EA Funding Absorption Capacity