Longterm Wiki

Future of Life Institute (FLI)

Safety Organization
Founded Mar 2014 (12 years old) · futureoflife.org
Structured Facts
Database Records
Revenue: $21M (as of 2024)
Headcount: 29 (as of 2025)
Founded Date: Mar 2014

Key People

1
Jack Clark · Policy Advisor
Co-founded Anthropic and the AI Index at Stanford. Advises FLI on AI policy. Per FLI and Wikipedia.

All Facts

Financial

Annual Expenses: $17M (as of 2024; 3 data points)
  2024: $17M (view →)
  2023: $16M (view →)
  2021: $16M (view →)
Grant Received: $666M (as of 2021; 2 data points)
  2021: $666M (view →)
  2015: $10M (view →)
Headcount: 29 (as of 2025; 2 data points)
  2025: 29 (view →)
  2023: 11 (view →)
Revenue: $21M (as of 2024; 3 data points)
  2024: $21M (view →)
  2023: $4.7M (view →)
  2021: $550M (view →)

Organization

Founded Date: Mar 2014 (view →)

Political

Lobbying Spend: $360K (as of 2025; 3 data points)
  2025: $360K (view →)
  2024: €447K (view →)
  2024: $310K (view →)

General

Website: https://futureoflife.org/ (view →)

Other

Campaign: Pro-Human AI Declaration (as of Mar 2026; 4 data points)
  Mar 2026: Pro-Human AI Declaration. 5 pillars; 150+ signatory organizations (AFL-CIO to Congress of Christian Leaders); individual signatories include Yoshua Bengio, Daron Acemoglu, Steve Bannon, Ralph Nader. (view →)
  Oct 2025: Superintelligence Prohibition Statement. Calls for a prohibition on superintelligence development until "broad scientific consensus on safety"; 69,000+ signatories including Geoffrey Hinton, Yoshua Bengio, Steve Wozniak. (view →)
  Mar 2023: Pause Giant AI Experiments Open Letter. Called for a 6-month moratorium on training AI systems more powerful than GPT-4; 33,000+ signatories including Yoshua Bengio, Stuart Russell, Steve Wozniak, Elon Musk. (view →)
  Jan 2017: Asilomar AI Principles. 23 principles for beneficial AI development, adopted at the Asilomar Conference; 5,700+ signatories including Stuart Russell, Elon Musk, and Demis Hassabis. (view →)
Grant Given: $18.1M (as of 2024; 3 data points)
  2024: $18.1M (view →)
  2023: $15.7M (view →)
  Dec 2022: $25.03M (view →)
Publication: AI Safety Index, published biannually (Summer 2025, Winter 2025). Evaluates 7 leading AI companies on 33 indicators across 6 domains. Winter 2025 finding: no company has adequate guardrails for catastrophic misuse. (Dec 2025; view →)
Subsidiary: Future of Life Action and Research, Inc. (FLARE), a 501(c)(4) advocacy arm (2025; view →)

Board Seats

1
Victoria Krakovna · Board Member · Source: futureoflife.org
  Google DeepMind research scientist. Confirmed on futureoflife.org/team as of 2026-03-16.

Divisions

7

Policy & Advocacy (program area) · Lead: Mark Brakel · Status: active · Source: futureoflife.org
  Campaigns include the Asilomar Principles (5,700+ signatories), the 2023 Pause Letter (33,000+ signatories), and AI Act advocacy. EU and UN engagement. Led by Mark Brakel (Global Director of Policy).
FLI Grants Program (program area) · Lead: Andrea Berman · Status: active · Source: futureoflife.org
  FLI's grantmaking arm. $25M+ distributed since 2015 across AI safety, nuclear risk, governance, and existential-risk reduction. Andrea Berman is Grants Manager.
Fellowship Programs (program area) · Lead: Andrea Berman · Status: active · Started 2022 · Source: futureoflife.org
  Vitalik Buterin PhD and Postdoctoral Fellowships in AI Existential Safety. Run with BAIF. 14+ PhD fellows and 4+ postdocs at top universities. Falls under the Operations & Grants team.
Futures Program (program area) · Lead: Emilia Javorsky · Status: active · Started 2024 · Slug: fli-futures · Source: futureoflife.org
  Storytelling, worldbuilding, and scenario planning for beneficial tech futures.
Autonomous Weapons Campaign (program area) · Status: active · Slug: fli-autonomous-weapons · Source: futureoflife.org
  Slaughterbots films (100M+ views), Lethal Autonomous Weapons Pledge (5,218 signatories), Autonomous Weapons Watch database.
FLI Grantmaking Program (fund) · Status: active · Slug: fli-grants · Source: futureoflife.org
  2015: $7M (Musk-funded); 2021: $25M (Buterin); 2022-2024: ~$16.5M total. AI safety, nuclear risk, autonomous weapons.
AI Safety Index Program (program area) · Status: active · Started 2024 · Slug: fli-safety-index · Source: futureoflife.org
  Biannual. 33 indicators, 6 domains, 7 companies evaluated. Expert panel of 6 AI scientists.

Funding Programs

12
2018 AGI Safety Grant Program (grant round) · $1.8M USD · awarded · Division ov901J11Xp · futureoflife.org
  10 projects focused on AGI safety; recipients at Stanford, MIT, Oxford, Yale, ANU. $1.78M total. Funded what became GovAI at Oxford (Allan Dafoe).
2024 Grants (grant round) · $4.2M USD · awarded · Division ov901J11Xp · futureoflife.org
  6 grants, including the AI-nuclear nexus and journalism. Largest: $1.85M to IASEAI and $1.5M to FAS.
2023 Grants (grant round) · $8.4M USD · awarded · Division ov901J11Xp · futureoflife.org
  16 grants for AI safety research, policy, and governance. Largest to FAR AI ($1.86M) and ARC ($1.4M).
Nuclear War Research Grant Program (grant round) · $4.1M USD · open · Division ov901J11Xp · futureoflife.org
  10 grants studying the environmental impacts of nuclear war: climate, agriculture, ozone, fire modeling. Recipients at MIT, Rutgers, Exeter, Colorado, IIASA, PIK. 2023-2025.
Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety (fellowship) · USD · open · Division 6HSiapvN_m · futureoflife.org
  $80K/year stipend + $10K research fund. Fellows at Berkeley/CHAI, MIT, Oxford. Run with BAIF. Fellows include Nisan Stiennon (Berkeley), Peter S. Park (MIT).
US-China AI Governance PhD Fellowship (fellowship) · USD · open · Division 6HSiapvN_m · futureoflife.org
  Same structure as the technical PhD fellowship, focused on US-China AI governance. 2025 class: Ruofei Wang, John Ferguson, Kayla Blomquist.
Request for Proposals on Religious Projects (RFP) · $1.5M USD · open · Division ov901J11Xp · futureoflife.org
  Up to $1.5M total; individual grants $30K-$300K. Faith-community engagement with AI risks. Launched 2026.
How to Mitigate AI-Driven Power Concentration (RFP) · $5.6M USD · open · Division ov901J11Xp · futureoflife.org
  13 projects addressing AI-driven power concentration. Largest: $1.66M to OpenMined Foundation. Two review rounds (July and October 2024).
Impact of AI on SDGs (RFP) · $150K USD · awarded · Division ov901J11Xp · futureoflife.org
  10 research grants at $15K each on AI's impact on poverty, health, energy, and climate. Primarily Global South recipients.
2015 AI Safety Research Grant Program (grant round) · $6.5M USD · awarded · Division ov901J11Xp · futureoflife.org
  First peer-reviewed AI safety grant program; 37 grants funded from Elon Musk's $10M donation. $6.5M distributed; largest grant $1.5M to FHI (Nick Bostrom). Recipients included MIRI and UC Berkeley (Stuart Russell).
Multistakeholder Engagement for Safe and Prosperous AI (RFP) · $5M USD · open · Division ov901J11Xp · futureoflife.org
  Up to $5M for multi-stakeholder engagement projects. Individual grants $100K-$500K, multi-year up to 3 years.
Global Institutions Governing AI (RFP) · $90K USD · awarded · Division ov901J11Xp · futureoflife.org
  6 research papers at $15K each, designing governance institutions for AGI.

Publications

9

Pro-Human AI Declaration (policy brief) · Future of Life Institute · humanstatement.org · 2026-01
  200+ individual signatories, 100+ organizations; cross-partisan, from the AFL-CIO to the Congress of Christian Leaders.
AI Safety Index: Winter 2025 (report) · Future of Life Institute · futureoflife.org · 2025-12 · Venue: FLI
Statement on Superintelligence (policy brief) · Future of Life Institute · superintelligence-statement.org · 2025-10
  134,015 signatories.
AI Safety Index: Summer 2025 (report) · Future of Life Institute · futureoflife.org · 2025-06
Pause Giant AI Experiments: An Open Letter (policy brief) · Future of Life Institute · futureoflife.org · 2023-03
  31,810 signatories, including Hinton, Bengio, Musk, Wozniak.
Lethal Autonomous Weapons Pledge (policy brief) · Future of Life Institute · futureoflife.org · 2018-06
  5,218 signatories pledging not to develop lethal autonomous weapons.
Asilomar AI Principles (policy brief) · Future of Life Institute · futureoflife.org · 2017-01
  5,720 signatories; 23 principles for beneficial AI; from the Asilomar conference.
Autonomous Weapons: AI and Robotics Researchers Open Letter (policy brief) · Future of Life Institute · futureoflife.org · 2016-02
  34,378 signatories.
Research Priorities for Robust and Beneficial AI: An Open Letter (policy brief) · Future of Life Institute · futureoflife.org · 2015-10
  11,251 signatories; the first major AI safety open letter.
Internal Metadata
ID: sid_d9sWZtyVwg
Stable ID: sid_d9sWZtyVwg
Wiki ID: E528
Type: organization
YAML Source: packages/factbase/data/fb-entities/fli.yaml
Facts: 25 structured (26 total)
Records: 32 in 5 collections
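
The metadata above points at a YAML source file (packages/factbase/data/fb-entities/fli.yaml) backing this page. A minimal sketch of what such a factbase entry could look like, using values from this page; the field names and nesting are illustrative assumptions, not the actual schema of that file:

```yaml
# Hypothetical factbase entity entry; schema is an assumption,
# values are taken from the page above.
id: sid_d9sWZtyVwg
wikiId: E528
type: organization
name: Future of Life Institute
founded: 2014-03
website: https://futureoflife.org/
facts:
  - key: revenue
    points:
      - { asOf: 2024, value: "$21M" }
      - { asOf: 2023, value: "$4.7M" }
  - key: headcount
    points:
      - { asOf: 2025, value: 29 }
      - { asOf: 2023, value: 11 }
```

Each fact holds multiple dated points, which would explain the "as of"/"data points" rendering in the fact tables above.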