Structured Facts
Database Records

| Fact | Value | As Of |
|---|---|---|
| Revenue | $21M | 2024 |
| Headcount | 29 | 2025 |
| Founded Date | Mar 2014 | — |
Key People

| Person | Role | Notes |
|---|---|---|
| Jack Clark | Co-Founder & Policy Advisor | Co-founded Anthropic. Also co-founded AI Index at Stanford. Advisor to FLI on AI policy. Per FLI and Wikipedia. |
Campaign

| As Of | Value |
|---|---|
| Mar 2026 | Pro-Human AI Declaration: 5 pillars, 150+ signatory organizations (AFL-CIO to Congress of Christian Leaders); individual signatories include Yoshua Bengio, Daron Acemoglu, Steve Bannon, Ralph Nader |
| Oct 2025 | Superintelligence Prohibition Statement: calls for a prohibition on superintelligence development until "broad scientific consensus on safety"; 69,000+ signatories including Geoffrey Hinton, Yoshua Bengio, Steve Wozniak |
| Mar 2023 | Pause Giant AI Experiments Open Letter: called for a 6-month moratorium on training AI systems more powerful than GPT-4; 33,000+ signatories including Yoshua Bengio, Stuart Russell, Steve Wozniak, Elon Musk |
| Jan 2017 | Asilomar AI Principles: 23 principles for beneficial AI development adopted at the Asilomar Conference; 5,700+ signatories including Stuart Russell, Elon Musk, and Demis Hassabis |
Grant Given

| As Of | Value |
|---|---|
| 2024 | $18,100,000 |
Publication

| As Of | Value |
|---|---|
| Dec 2025 | AI Safety Index published biannually (Summer 2025, Winter 2025). Evaluates 7 leading AI companies on 33 indicators across 6 domains. Winter 2025 finding: no company has adequate guardrails for catastrophic misuse. |
Board Seats
| Member | Role | Source | Notes | Source check |
|---|---|---|---|---|
| Victoria Krakovna | Board Member | futureoflife.org | Google DeepMind research scientist. | Confirmed on futureoflife.org/team as of 2026-03-16. |
Divisions
| Name | DivisionType | Lead | Status | Source | Notes | StartDate | Slug | Website | Source check |
|---|---|---|---|---|---|---|---|---|---|
| Policy & Advocacy | program-area | Mark Brakel | active | futureoflife.org | Campaigns include Asilomar Principles (5,700+ signatories), 2023 Pause Letter (33,000+ signatories), AI Act advocacy. EU and UN engagement. Led by Mark Brakel (Global Director of Policy). | — | — | — | |
| FLI Grants Program | program-area | Andrea Berman | active | futureoflife.org | FLI's grantmaking arm. $25M+ distributed since 2015 across AI safety, nuclear risk, governance, and existential risk reduction. Andrea Berman is Grants Manager. | — | — | — | |
| Fellowship Programs | program-area | Andrea Berman | active | futureoflife.org | Vitalik Buterin PhD and Postdoctoral Fellowships in AI Existential Safety. Run with BAIF. 14+ PhD fellows and 4+ postdocs at top universities. Falls under Operations & Grants team. | 2022 | — | — | |
| Futures Program | program-area | Emilia Javorsky | active | futureoflife.org | Storytelling, worldbuilding, scenario planning for beneficial tech futures. | 2024 | fli-futures | — | |
| Autonomous Weapons Campaign | program-area | — | active | futureoflife.org | Slaughterbots films (100M+ views), Lethal Autonomous Weapons Pledge (5,218 signatories), Autonomous Weapons Watch database. | — | fli-autonomous-weapons | futureoflife.org | |
| FLI Grantmaking Program | fund | — | active | futureoflife.org | 2015: $7M (Musk-funded), 2021: $25M (Buterin), 2022-2024: ~$16.5M total. AI safety, nuclear risk, autonomous weapons. | — | fli-grants | — | |
| AI Safety Index Program | program-area | — | active | futureoflife.org | Biannual. 33 indicators, 6 domains, 7 companies evaluated. Expert panel of 6 AI scientists. | 2024 | fli-safety-index | futureoflife.org |
Funding Programs
| Name | ProgramType | Description | DivisionId | TotalBudget | Currency | Status | Source | Notes | Source check |
|---|---|---|---|---|---|---|---|---|---|
| 2018 AGI Safety Grant Program | grant-round | 10 projects focused on AGI safety; recipients at Stanford, MIT, Oxford, Yale, ANU | ov901J11Xp | $1.8M | USD | awarded | futureoflife.org | $1.78M total. Funded what became GovAI at Oxford (Allan Dafoe). | |
| 2024 Grants | grant-round | 6 grants including AI-nuclear nexus and journalism | ov901J11Xp | $4.2M | USD | awarded | futureoflife.org | Largest $1.85M to IASEAI and $1.5M to FAS. | |
| 2023 Grants | grant-round | 16 grants for AI safety research, policy, and governance | ov901J11Xp | $8.4M | USD | awarded | futureoflife.org | Largest to FAR AI ($1.86M) and ARC ($1.4M). | |
| Nuclear War Research Grant Program | grant-round | 10 grants studying nuclear war environmental impacts: climate, agriculture, ozone, fire modeling | ov901J11Xp | $4.1M | USD | open | futureoflife.org | Recipients at MIT, Rutgers, Exeter, Colorado, IIASA, PIK. 2023-2025. | |
| Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety | fellowship | $80K/year stipend + $10K research fund. Fellows at Berkeley/CHAI, MIT, Oxford. | 6HSiapvN_m | — | USD | open | futureoflife.org | Run with BAIF. Fellows include Nisan Stiennon (Berkeley), Peter S. Park (MIT). | |
| US-China AI Governance PhD Fellowship | fellowship | Same structure as technical PhD fellowship. Focused on US-China AI governance. | 6HSiapvN_m | — | USD | open | futureoflife.org | 2025 class: Ruofei Wang, John Ferguson, Kayla Blomquist. | |
| Request for Proposals on Religious Projects | rfp | Up to $1.5M total; individual grants $30K-$300K. Faith community engagement with AI risks. | ov901J11Xp | $1.5M | USD | open | futureoflife.org | Launched 2026. | |
| How to Mitigate AI-Driven Power Concentration | rfp | 13 projects addressing AI-driven power concentration. Largest $1.66M to OpenMined Foundation. | ov901J11Xp | $5.6M | USD | open | futureoflife.org | Two review rounds (July and October 2024). | |
| Impact of AI on SDGs | rfp | 10 research grants at $15K each on AI impact on poverty, health, energy and climate. Primarily Global South recipients. | ov901J11Xp | $150K | USD | awarded | futureoflife.org | — | |
| 2015 AI Safety Research Grant Program | grant-round | First peer-reviewed AI safety grant program; 37 grants funded from Elon Musk's $10M donation | ov901J11Xp | $6.5M | USD | awarded | futureoflife.org | $6.5M distributed. Largest grant $1.5M to FHI (Nick Bostrom). Recipients included MIRI, UC Berkeley (Stuart Russell). | |
| Multistakeholder Engagement for Safe and Prosperous AI | rfp | Up to $5M for multi-stakeholder engagement projects. Individual grants $100K-$500K, multi-year up to 3 years. | ov901J11Xp | $5M | USD | open | futureoflife.org | — | |
| Global Institutions Governing AI | rfp | 6 research papers at $15K each designing governance institutions for AGI | ov901J11Xp | $90K | USD | awarded | futureoflife.org | — |
Publications
| Title | PublicationType | Authors | Url | PublishedDate | IsFlagship | Source | Notes | Venue | Source check |
|---|---|---|---|---|---|---|---|---|---|
| Pro-Human AI Declaration | policy-brief | Future of Life Institute | humanstatement.org | 2026-01 | ✓ | humanstatement.org | 200+ individual signatories, 100+ organizations; cross-partisan from AFL-CIO to Congress of Christian Leaders | — | |
| AI Safety Index Winter 2025 | report | Future of Life Institute | futureoflife.org | 2025-12 | ✓ | futureoflife.org | — | FLI | |
| Statement on Superintelligence | policy-brief | Future of Life Institute | superintelligence-statement.org | 2025-10 | ✓ | superintelligence-statement.org | 134,015 signatories | — | |
| AI Safety Index: Summer 2025 | report | Future of Life Institute | futureoflife.org | 2025-06 | ✓ | futureoflife.org | — | — | |
| Pause Giant AI Experiments: An Open Letter | policy-brief | Future of Life Institute | futureoflife.org | 2023-03 | ✓ | futureoflife.org | 31,810 signatories including Hinton, Bengio, Musk, Wozniak | — | |
| Lethal Autonomous Weapons Pledge | policy-brief | Future of Life Institute | futureoflife.org | 2018-06 | ✓ | futureoflife.org | 5,218 signatories pledging not to develop lethal autonomous weapons | — | |
| Asilomar AI Principles | policy-brief | Future of Life Institute | futureoflife.org | 2017-01 | ✓ | futureoflife.org | 5,720 signatories; 23 principles for beneficial AI; from Asilomar conference | — | |
| Autonomous Weapons: AI and Robotics Researchers Open Letter | policy-brief | Future of Life Institute | futureoflife.org | 2016-02 | ✓ | futureoflife.org | 34,378 signatories | — | |
| Research Priorities for Robust and Beneficial AI: An Open Letter | policy-brief | Future of Life Institute | futureoflife.org | 2015-10 | ✓ | futureoflife.org | 11,251 signatories; first major AI safety open letter | — |
Internal Metadata

| Field | Value |
|---|---|
| ID | sid_d9sWZtyVwg |
| Stable ID | sid_d9sWZtyVwg |
| Wiki ID | E528 |
| Type | organization |
| YAML Source | packages/factbase/data/fb-entities/fli.yaml |
| Facts | 25 structured (26 total) |
| Records | 32 in 5 collections |