Future of Life Institute (FLI)
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Focus | AI Safety Advocacy + Grantmaking | Dual approach: public campaigns and research funding |
| Grant Scale | $25M+ distributed | 2015: $7M to 37 projects; 2021: $25M program from Buterin donation |
| Public Profile | Very High | Asilomar Principles (5,700+ signatories), Pause Letter (33,000+ signatories) |
| Approach | Policy + Research + Advocacy | EU AI Act engagement, UN autonomous weapons, Slaughterbots films |
| Location | Boston, MA (global staff of 20+) | Policy teams in US and EU |
| Major Funding | $665.8M (2021 Buterin), $10M (2015 Musk) | Endowment from cryptocurrency donation |
| Key Conferences | Puerto Rico 2015, Asilomar 2017 | Considered birthplace of AI alignment field |
Organization Details
| Attribute | Details |
|---|---|
| Full Name | Future of Life Institute |
| Type | 501(c)(3) Nonprofit |
| EIN | 47-1052538 |
| Founded | March 2014 |
| Launch Event | May 24, 2014 at MIT (auditorium 10-250) |
| Founders | Max Tegmark (President), Jaan Tallinn, Anthony Aguirre (Executive Director), Viktoriya Krakovna, Meia Chita-Tegmark |
| Location | Boston, Massachusetts (headquarters); global remote staff |
| Staff Size | 20+ full-time team members |
| Teams | Policy, Outreach, Grantmaking |
| Website | futureoflife.org |
| Related Sites | autonomousweapons.org, autonomousweaponswatch.org |
| Research Grants | $25M+ distributed across multiple rounds |
| EU Advocacy Budget | €446,619 annually |
Overview
The Future of Life Institute (FLI) is a nonprofit organization dedicated to reducing existential risks from advanced technologies, with a particular focus on artificial intelligence. Founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, UC Santa Cruz physicist Anthony Aguirre, DeepMind research scientist Viktoriya Krakovna, and Tufts researcher Meia Chita-Tegmark, FLI has become one of the most publicly visible organizations in the AI safety space. The organization officially launched on May 24, 2014, at MIT’s auditorium 10-250 with a panel discussion on “The Future of Technology: Benefits and Risks,” moderated by Alan Alda and featuring panelists including Nobel laureate Frank Wilczek, synthetic biologist George Church, and Jaan Tallinn.
Unlike research-focused organizations like MIRI or Redwood Research, FLI emphasizes public advocacy, policy engagement, and awareness-raising alongside its grantmaking. This tripartite approach—combining direct research funding, high-profile public campaigns, and government engagement—has made FLI particularly effective at shaping public discourse around AI risk. The organization’s 2015 Puerto Rico conference is sometimes described as the “birthplace of the field of AI alignment,” bringing together leading AI researchers to discuss safety concerns that had previously been marginalized in academic circles. The subsequent 2017 Asilomar conference produced the 23 Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles.
FLI’s major initiatives have helped establish AI safety as a mainstream concern rather than a fringe topic. The 2023 “Pause Giant AI Experiments” open letter garnered over 33,000 signatures and generated massive media coverage, even though the requested pause was not implemented by AI labs. The organization has also been influential in autonomous weapons policy, producing the viral Slaughterbots video series and advocating for international regulation at the United Nations. FLI received a transformative $665.8 million cryptocurrency donation from Ethereum co-founder Vitalik Buterin in 2021, which has been partially converted to an endowment ensuring long-term organizational independence.
Founding and Early History
The Future of Life Institute emerged from concerns about existential risks that had been growing among a network of physicists, AI researchers, and technology entrepreneurs. Max Tegmark, an MIT cosmologist who had become increasingly concerned about AI safety after reading Nick Bostrom’s work, connected with Jaan Tallinn, who had been funding existential risk research through organizations like MIRI and the Cambridge Centre for the Study of Existential Risk (CSER). Together with Anthony Aguirre (co-founder of the Foundational Questions Institute and later Metaculus), Viktoriya Krakovna (then a PhD student, now at DeepMind), and Meia Chita-Tegmark, they formally established FLI in March 2014.
The founding team recognized a gap in the existential risk ecosystem: while organizations like MIRI focused on technical AI safety research and CSER on academic study, there was no organization specifically dedicated to public engagement, policy advocacy, and convening stakeholders across academia, industry, and government. FLI was designed to fill this gap, with a mission to “steer transformative technology towards benefiting life and away from large-scale risks.”
| Milestone | Date | Significance |
|---|---|---|
| FLI Founded | March 2014 | Organization formally established |
| MIT Launch Event | May 24, 2014 | Public launch with Alan Alda moderating; panelists included George Church, Frank Wilczek, Jaan Tallinn |
| Research Priorities Open Letter | January 2015 | First major public initiative; signed by Stephen Hawking, Elon Musk, and leading AI researchers |
| Puerto Rico Conference | January 2-5, 2015 | “The Future of AI: Opportunities and Challenges”; considered birthplace of AI alignment field |
| Musk Donation Announced | January 2015 | $10M commitment to fund AI safety research |
| First Grants Announced | July 1, 2015 | $7M awarded to 37 research projects |
| Asilomar Conference | January 5-8, 2017 | Produced 23 Asilomar Principles; 100+ attendees |
| Slaughterbots Video | November 13, 2017 | 2M+ views within weeks; screened at UN |
| Buterin Donation | 2021 | $665.8M cryptocurrency donation |
| Pause Letter | March 2023 | 33,000+ signatures; massive media coverage |
Key Initiatives
Research Grants Program
FLI established the world’s first peer-reviewed grant program specifically aimed at AI safety research. The program began following the January 2015 Puerto Rico conference, when Elon Musk announced a $10 million donation to support “a global research program aimed at keeping AI beneficial to humanity.”
2015 Grant Program: FLI issued a Request for Proposals (RFP) in early 2015, receiving nearly 300 applications from research teams worldwide. The RFP sought proposals in two categories: “project grants” (typically $100,000-$500,000 over 2-3 years) for research by small teams or individuals, and “center grants” ($500,000-$1,500,000) for establishing new research centers. On July 1, 2015, FLI announced $7 million in awards to 37 research projects. Coefficient Giving (then Open Philanthropy) supplemented this with $1.186 million after determining that the quality of proposals exceeded available funding.
| Grant Round | Amount | Projects | Source | Focus Areas |
|---|---|---|---|---|
| 2015 Round | $7M | 37 | Elon Musk ($10M donation) | Technical AI safety, value alignment, economics, policy, autonomous weapons |
| Coefficient Giving Supplement | $1.186M | Additional projects | Coefficient Giving (then Open Philanthropy) | High-quality proposals exceeding initial funding |
| 2021 Program | $25M | Multiple | Vitalik Buterin donation | Expanded AI safety and governance research |
| 2023 Grants | Various | Multiple | Ongoing | PhD fellowships, technical research |
2015 Grant Recipients (selected examples):
| Recipient | Institution | Amount | Project Focus |
|---|---|---|---|
| Nick Bostrom | FHI Oxford | $1.5M | Strategic Research Center for AI (geopolitical challenges) |
| Stuart Russell | UC Berkeley | ≈$500K | Value alignment and inverse reinforcement learning |
| MIRI | Machine Intelligence Research Institute | $299,310 | Long-term AI safety research ($250K over 3 years) |
| Owain Evans | FHI (collaboration with MIRI) | $227,212 | Algorithms learning human preferences despite irrationalities |
| Manuela Veloso | Carnegie Mellon | ≈$200K | Explainable AI systems |
| Paul Christiano | UC Berkeley | ≈$150K | Value learning approaches |
| Ramana Kumar | Cambridge (collaboration with MIRI) | $36,750 | Self-reference in HOL theorem prover |
| Michael Webb | Stanford | ≈$100K | Economic impacts of AI |
| Heather Roff | Various | ≈$100K | Meaningful human control of autonomous weapons |
The funded projects spanned technical AI safety (ensuring advanced AI systems align with human values), economic analysis (managing AI’s labor market impacts), policy research (autonomous weapons governance), and philosophical foundations (clarifying concepts of agency and liability for autonomous systems).
Puerto Rico Conference (2015)
The Puerto Rico AI Safety Conference (officially “The Future of AI: Opportunities and Challenges”) was held January 2-5, 2015, in San Juan. This conference is sometimes described as the “birthplace of the field of AI alignment,” as it brought together the world’s leading AI builders from academia and industry to engage with experts in economics, law, and ethics on AI safety for the first time at scale.
| Aspect | Details |
|---|---|
| Dates | January 2-5, 2015 |
| Location | San Juan, Puerto Rico |
| Attendees | ≈40 leading AI researchers and thought leaders |
| Outcome | Research Priorities Open Letter; Elon Musk $10M donation announcement |
| Significance | First major convening of AI safety concerns with mainstream AI researchers |
Notable Attendees:
- AI Researchers: Stuart Russell (Berkeley), Thomas Dietterich (AAAI President), Francesca Rossi (IJCAI President), Bart Selman (Cornell), Tom Mitchell (CMU), Murray Shanahan (Imperial College)
- Industry: Representatives from Google DeepMind, Vicarious
- Existential Risk Organizations: FHI, CSER, MIRI representatives
- Technology Leaders: Elon Musk, Vernor Vinge
The conference produced an open letter on AI safety that was subsequently signed by Stephen Hawking, Elon Musk, and many leading AI researchers. Following the conference, Musk announced his $10 million donation to fund FLI’s research grants program.
Asilomar Conference and AI Principles (2017)
The Beneficial AI 2017 conference, held January 5-8, 2017, at the Asilomar Conference Grounds in California, was a sequel to the 2015 Puerto Rico conference. More than 100 thought leaders and researchers in AI, economics, law, ethics, and philosophy met to address and formulate principles for beneficial AI development. The conference was not open to the public, with attendance curated to include influential figures who could shape the field’s direction.
| Aspect | Details |
|---|---|
| Dates | January 5-8, 2017 |
| Location | Asilomar Conference Center, Pacific Grove, California |
| Attendees | 100+ AI researchers, industry leaders, philosophers |
| Outcome | 23 Asilomar AI Principles published January 30, 2017 |
| Signatories | 1,797 AI/robotics researchers + 3,923 others (5,700+ total) |
Notable Participants:
| Category | Participants |
|---|---|
| AI Researchers | Stuart Russell (Berkeley), Bart Selman (Cornell), Yoshua Bengio (Montreal), Ilya Sutskever (OpenAI/DeepMind), Yann LeCun (Facebook), Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Viktoriya Krakovna (DeepMind/FLI), Stefano Ermon (Stanford) |
| Industry Leaders | Elon Musk (Tesla/SpaceX), Demis Hassabis (DeepMind CEO), Ray Kurzweil (Google) |
| Philosophers & Authors | Nick Bostrom (FHI), David Chalmers (NYU), Sam Harris |
| FLI Leadership | Jaan Tallinn, Max Tegmark, Richard Mallah |
The 23 Asilomar AI Principles are organized into three categories:
Research Issues (5 principles):
- Research Goal: Create beneficial, not undirected intelligence
- Research Funding: Include safety research alongside capability research
- Science-Policy Link: Constructive exchange between researchers and policymakers
- Research Culture: Foster cooperation, trust, and transparency
- Race Avoidance: Avoid corner-cutting on safety for competitive advantage
Ethics and Values (13 principles):
- Safety: AI systems should be safe and secure
- Failure Transparency: Capability to determine causes of harm
- Judicial Transparency: Explanations for legal decisions
- Responsibility: Designers and builders are stakeholders in implications
- Value Alignment: AI goals should align with human values
- Human Values: Designed to be compatible with human dignity, rights, freedoms
- Personal Privacy: Control over data access for AI systems
- Liberty and Privacy: AI should not unreasonably curtail liberty
- Shared Benefit: Benefits should be broadly distributed
- Shared Prosperity: Economic prosperity should be broadly shared
- Human Control: Humans should choose how to delegate decisions
- Non-subversion: Power from AI should respect social processes
- AI Arms Race: Lethal autonomous weapons race should be avoided
Longer-term Issues (5 principles):
- Capability Caution: Avoid strong assumptions about upper limits
- Importance: Advanced AI could be profound change; plan accordingly
- Risks: Catastrophic or existential risks require commensurate effort
- Recursive Self-Improvement: Subject to strict safety and control
- Common Good: Superintelligence should benefit all humanity
Legacy and Influence: The Asilomar Principles have been cited in policy discussions worldwide. Key themes (human-centric AI, transparency, robustness) appear in later legislation including the EU AI Act. Notable signatories included Stephen Hawking, Elon Musk, Anthony D. Romero (ACLU Executive Director), Demis Hassabis, Ilya Sutskever, Yann LeCun, Yoshua Bengio, and Stuart Russell.
“Pause Giant AI Experiments” Letter (2023)
The open letter “Pause Giant AI Experiments” was published by FLI on March 22, 2023—one week after OpenAI released GPT-4. The letter called for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” citing concerns about AI-generated propaganda, extreme automation of jobs, human obsolescence, and society-wide loss of control. The timing was strategic: GPT-4 demonstrated capabilities that surprised even AI researchers, and public attention to AI risk was at an all-time high.
| Aspect | Details |
|---|---|
| Published | March 22, 2023 (one week after GPT-4 release) |
| Signatories | 33,000+ total |
| Notable Signatories | Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari, Gary Marcus |
| Request | 6-month pause on training AI systems more powerful than GPT-4 |
| Media Coverage | Extensive worldwide coverage; US Senate hearing cited the letter |
Key Arguments in the Letter:
- Contemporary AI systems are becoming “human-competitive at general tasks”
- AI labs are locked in an “out-of-control race” that “no one—not even their creators—can understand, predict, or reliably control”
- Profound risks to society including “flooding our information channels with propaganda and untruth,” “automating away all jobs,” and “loss of control of our civilization”
- The pause should be used to develop “shared safety protocols” verified by independent experts
Reactions and Criticism:
| Critic/Supporter | Position | Argument |
|---|---|---|
| Timnit Gebru, Emily Bender, Margaret Mitchell | Critical | Letter is “sensationalist,” amplifies “dystopian sci-fi scenario” while ignoring current algorithmic harms |
| Bill Gates | Did not sign | “Asking one particular group to pause doesn’t solve the challenges” |
| Sam Altman (OpenAI CEO) | Critical | Letter is “missing most technical nuance”; OpenAI was not training GPT-5 as claimed in early drafts |
| Reid Hoffman | Critical | Called it “virtue signalling” with no real impact |
| Eliezer Yudkowsky | Critical (from other direction) | Wrote in Time: “shut it all down”—letter doesn’t go far enough |
| European Parliament | Engaged | Issued formal response; EU policymakers cited letter in AI Act discussions |
| US Senate | Engaged | Hearing on AI safety cited the letter |
Actual Outcomes: The requested pause was not implemented. As FLI noted on the letter’s one-year anniversary, AI companies instead “directed vast investments in infrastructure to train ever-more giant AI systems.” However, FLI’s policy director Mark Brakel noted that the response exceeded expectations: “The reaction has been intense. We feel that it has given voice to a huge undercurrent of concern about the risks of high-powered AI systems not just at the public level, but top researchers in AI and other topics, business leaders, and policymakers.”
The letter did contribute to a significant shift in public discourse. AI safety became a mainstream media topic, government inquiries accelerated, and phrases like “existential risk from AI” entered common vocabulary. Whether this attention will translate to effective governance remains contested.
Autonomous Weapons Advocacy: Slaughterbots
Beyond AI safety, FLI has been a leading advocate for international regulation of lethal autonomous weapons systems (LAWS). Their most visible campaign is the Slaughterbots video series, produced in collaboration with Stuart Russell.
Slaughterbots (2017): Released November 13, 2017, this arms-control advocacy video presents a dramatized near-future scenario where swarms of inexpensive microdrones use facial recognition and AI to assassinate political opponents. The script was written by Stuart Russell; production was funded by FLI. According to Russell: “What we were trying to show was the property of autonomous weapons to turn into weapons of mass destruction automatically because you can launch as many as you want.”
| Video | Release Date | Views / Status | Key Message |
|---|---|---|---|
| Slaughterbots | November 13, 2017 | 2M+ within weeks | Microdrones as WMDs; need for regulation |
| if human: kill() | November 30, 2021 | Sequel | Depicts failed ban, technical errors, eventual treaty |
| Artificial Escalation | 2022 | Ongoing series | AI in nuclear command and control |
UN Engagement: FLI representatives regularly attend UN Convention on Certain Conventional Weapons (CCW) meetings in Geneva. FLI’s Anna Hehir has spoken at these forums about the “proliferation and escalation risks of autonomous weapons,” arguing these weapons are “unpredictable, unreliable, and unexplainable.”
Related Resources: FLI operates autonomousweapons.org (case for regulation) and autonomousweaponswatch.org (database of weapons systems with concerning autonomy levels developed globally).
Founders and Leadership
Max Tegmark (President)
| Aspect | Details |
|---|---|
| Role | Co-founder, President |
| Background | MIT Professor of Physics (cosmology specialty) |
| Education | PhD Physics, UC Berkeley (1994); BA Physics & Economics, Stockholm School of Economics (1990) |
| Books | Life 3.0: Being Human in the Age of AI (2017), Our Mathematical Universe (2014) |
| Media | Web Summit 2024 (Lisbon), numerous science documentaries, TED talks |
| Research | Cosmology, foundations of physics, consciousness, AI safety |
Tegmark is the most public face of FLI, frequently appearing in media to discuss AI risks. His 2017 book Life 3.0 was widely read in technology circles and helped popularize concepts like “AI alignment” to general audiences. Tegmark has testified before the European Parliament on AI regulation and regularly engages with policymakers.
Anthony Aguirre (Executive Director)
| Aspect | Details |
|---|---|
| Role | Co-founder, Executive Director |
| Background | Faggin Presidential Professor for the Physics of Information, UC Santa Cruz |
| Education | PhD Astronomy, Harvard University (2000) |
| Other Roles | Co-founder, Foundational Questions Institute (FQXi, 2006); Co-founder, Metaculus (2015) |
| Books | Cosmological Koans (2019); Keep The Future Human (March 2025) |
| Research | Theoretical cosmology, gravitation, statistical mechanics, AI governance |
Aguirre has shifted FLI’s focus toward more direct policy engagement in recent years. His March 2025 essay Keep The Future Human: Why and How We Should Close the Gates to AGI and Superintelligence proposes an international regulatory scheme for AI. He has appeared on the AXRP podcast discussing FLI’s strategy and the organization’s evolution from academic grantmaking to policy advocacy.
Jaan Tallinn (Co-founder, Board Member)
| Aspect | Details |
|---|---|
| Role | Co-founder, Board Member |
| Background | Founding engineer of Skype and Kazaa |
| Philanthropy | Founder, Survival and Flourishing Fund; co-founder, Cambridge Centre for the Study of Existential Risk (CSER) |
| Estimated Giving | $100M+ to existential risk organizations |
| Focus | AI safety funding, existential risk ecosystem building |
Tallinn is one of the largest individual funders of existential risk research globally. His network of organizations (SFF, CSER, FLI) forms a significant portion of the AI safety funding landscape. He participated as a panelist at both the 2015 Puerto Rico and 2017 Asilomar conferences.
Other Founders and Key Staff
| Person | Role | Background |
|---|---|---|
| Viktoriya Krakovna | Co-founder | Research scientist at DeepMind; AI safety research (specification gaming, impact measures) |
| Meia Chita-Tegmark | Co-founder | Previously at Tufts University; organizer and researcher |
| Risto Uuk | Head of EU Policy and Research | Leads FLI’s EU AI policy work, including AI Act engagement |
| Mark Brakel | Director of Policy | Led response to pause letter; government relations |
| Anna Hehir | Policy (Autonomous Weapons) | UN Geneva CCW representative |
| Emilia Javorsky | Policy | Vienna Autonomous Weapons Conference 2025 representative |
Staff Structure: FLI has grown to 20+ full-time staff members globally, primarily organized into Policy, Outreach, and Grantmaking teams. Staff backgrounds span machine learning, medicine, government, and industry.
Funding and Financials
FLI’s funding history includes several transformative donations that have shaped the organization’s trajectory and independence.
Major Donors
| Donor | Amount | Year | Purpose |
|---|---|---|---|
| Vitalik Buterin | $665.8M (cryptocurrency) | 2021 | Largest donation; partial endowment, grantmaking |
| Elon Musk | $10M | 2015 | First AI safety research grants program |
| Coefficient Giving (formerly Open Philanthropy) | $1.9M total | Various | Supplemental grant funding, operational support |
| Survival and Flourishing Fund | $500K | Various | Operational support |
| Jaan Tallinn | Ongoing | 2014-present | Founding support, strategic direction |
The Buterin Donation
In 2021, Ethereum co-founder Vitalik Buterin donated $665.8 million in cryptocurrency to FLI—the largest single donation in the organization’s history and one of the largest cryptocurrency donations to any nonprofit. The donation was “large and unconditional,” with FLI converting a significant portion to an endowment to ensure long-term organizational independence. FLI did not publicly acknowledge Buterin as its “largest donor by far” until May 2023, when it updated the finances page on its website.
The donation has been used for:
- Endowment: Long-term organizational sustainability
- 2021 Grant Program: $25 million announced for AI safety research
- Operational Deficit Coverage: FLI’s 2023 income was only $624,714; the Buterin endowment covers operating shortfalls
- Asset Transfers: Between December 11-30, 2022, FLI transferred $368 million to three related entities governed by the same four people (Max Tegmark, Meia Chita-Tegmark, Anthony Aguirre, Jaan Tallinn)
Financial Overview
| Metric | Value | Notes |
|---|---|---|
| 2023 Income | $624,714 | $600K from single individual donor |
| 2024 Income | €83,241 | Limited fundraising year |
| EU Advocacy Spending | €446,619/year | Includes staff and Dentons Global Advisors |
| Total Grants Distributed | $25M+ | Across all grant programs |
| Grant Size Range | $22,000 - $1.5M | Historical range |
| Donations Received | 1,500+ | “Various sizes from wide variety of donors” since founding |
Institutional Funders
| Funder | Amount | Purpose |
|---|---|---|
| Coefficient Giving (formerly Open Philanthropy) | $1.186M (2015) | Supplement to Musk grants (high-quality proposals exceeded funding) |
| Coefficient Giving (formerly Open Philanthropy) | Additional grants | Various operational support |
| Survival and Flourishing Fund | $500K | Operational support |
Policy Work and Government Engagement
FLI maintains active policy engagement across multiple jurisdictions, with dedicated staff for EU, UN, and US advocacy.
European Union
FLI’s EU work focuses on two priorities: (1) promoting beneficial AI development and (2) regulating lethal autonomous weapons. Their most significant achievement was advocating for the inclusion of foundation models (general-purpose AI systems) in the scope of the EU AI Act.
| Initiative | Status | FLI Role |
|---|---|---|
| EU AI Act (Foundation Models) | Adopted | Successfully pushed for inclusion of general-purpose systems; advocated for adoption |
| Definition of Manipulation | Ongoing | Recommending broader definition to include any manipulatory technique and societal harm |
| Autonomous Weapons Treaty | Advocacy | Encouraging EU member states to support international treaty |
EU Advocacy Details:
- Budget: €446,619 annually (includes staff salaries and Dentons Global Advisors consulting)
- Lead: Risto Uuk (Head of EU Policy and Research)
- Key Achievement: Foundation models included in AI Act scope
United Nations
FLI advocates at the UN for a legally binding international instrument on autonomous weapons and a new international agency to govern AI.
| Activity | Forum | Outcome |
|---|---|---|
| Autonomous Weapons Treaty | CCW (Convention on Certain Conventional Weapons) Geneva | Ongoing advocacy; FLI agrees with ICRC recommendation for legally binding rules |
| 2018 Letter on Lethal Autonomous Weapons | Global | FLI drafted letter calling for laws against lethal autonomous weapons |
| Digital Cooperation Roadmap | UN Secretary-General | FLI (with France and Finland) served as civil society champion; recommendations (3C) on AI governance were adopted |
| Slaughterbots Screening | UN CCW | 2017 video shown to delegates |
United States
| Activity | Details |
|---|---|
| Congressional Testimony | Max Tegmark and others have testified before Congress on AI risk |
| Senate Hearings | 2023 pause letter cited in AI safety hearings |
| Policy Research | Analysis supporting US AI governance frameworks |
Public Education and Outreach
| Medium | Activities |
|---|---|
| Podcasts | Interviews with researchers, policymakers; AXRP appearance by Anthony Aguirre |
| Articles and Reports | Explainers on AI risk, policy analysis, technical summaries |
| Videos | Slaughterbots series, educational content on AI safety |
| Websites | futureoflife.org, autonomousweapons.org, autonomousweaponswatch.org |
| Newsletters | Regular updates on AI safety and policy developments |
| Social Media | Ongoing communication; significant following |
| Conferences | Web Summit 2024 (Tegmark), Vienna Autonomous Weapons Conference 2025 (Javorsky) |
Research and Fellowship Programs
| Program | Description |
|---|---|
| AI Safety Grants | Direct research funding (see grants section) |
| PhD Fellowships | Technical AI safety research; a US-China AI Governance fellowship was launched in 2024 |
| Convening | Conferences bringing together researchers, industry, and policymakers |
| Publications | Policy papers, technical research support |
Controversies and Criticisms
FLI has faced significant criticism from multiple directions, reflecting tensions within the AI ethics and safety communities.
Pause Letter Criticism (2023)
The 2023 pause letter was criticized from both within and outside the AI safety community:
| Critic | Affiliation | Criticism |
|---|---|---|
| Timnit Gebru | DAIR, former Google | “Sensationalist”; amplifies “dystopian sci-fi scenario” while ignoring current algorithmic harms |
| Emily Bender | University of Washington | Co-author of “On the Dangers of Stochastic Parrots”; letter ignores real present-day harms |
| Margaret Mitchell | Former Google AI Ethics | “Letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don’t have.” |
| Bill Gates | Microsoft | “Asking one particular group to pause doesn’t solve the challenges” |
| Sam Altman | OpenAI CEO | “Missing most technical nuance about where we need the pause”; disputed claims about GPT-5 training |
| Reid Hoffman | LinkedIn/Microsoft | “Virtue signalling” with no real impact |
| Eliezer Yudkowsky | MIRI | Time essay: “Shut it all down”—letter doesn’t go far enough; requested moratorium is insufficient |
Near-Term vs. Long-Term AI Risk Debate
Critics argue that FLI’s focus on long-term existential risk from hypothetical superintelligent AI distracts from immediate harms:
| Argument | Source | FLI Position |
|---|---|---|
| “Long-term AI risk arguments are speculative and downplay near-term harms” | AI ethics researchers (Gebru, Bender, Mitchell) | Both near-term and long-term risks deserve attention |
| “Provoking fear of AI serves tech billionaires who fund these groups” | Critics of effective altruism | FLI maintains editorial independence despite funding sources |
| “Current discrimination and job loss are more urgent than speculative superintelligence” | Labor and civil rights advocates | AI safety research addresses both capability and deployment risks |
TESCREALism Accusations
Philosopher Émile Torres has accused FLI of embracing “TESCREALism”—Torres’s umbrella term for a cluster of ideologies (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism) oriented toward re-engineering humanity through AI for immortality, space colonization, and post-human civilization. Torres argues that while some TESCREALists support unregulated AI development, FLI “embraces the goal but is alarmed by what can go wrong along the way.” FLI has not directly responded to these characterizations.
Controversial Grant Proposal (Nya Dagbladet Foundation)
In 2022, FLI faced controversy over a potential grant to the Nya Dagbladet Foundation (NDF):
| Timeline | Event |
|---|---|
| Initial review | FLI was “initially positive” about NDF proposal |
| Due diligence | FLI’s process “uncovered information indicating that NDF was not aligned with FLI’s values or charitable purposes” |
| November 2022 | FLI informed NDF they would not proceed with a grant |
| December 15, 2022 | Swedish media contacted FLI describing Nya Dagbladet as a “far-right extremist group” |
| Outcome | FLI issued public statement; zero funding was given to NDF |
Elon Musk Association
| Issue | Context | FLI Response |
|---|---|---|
| Initial Funding | $10M grant from Musk (2015) | Donation was earmarked for research grants; FLI has received 1,500+ donations since |
| Pause Letter Signatory | Musk among 33,000+ signatories | Many prominent researchers also signed; Musk is one of thousands |
| Perception | Some media portray FLI as “Musk-aligned” | FLI maintains editorial and programmatic independence; Buterin donation is now larger |
| Conflict of Interest Concerns | Musk’s xAI competes with OpenAI; pause letter benefits competitors | FLI points to diverse signatory list including Bengio, Russell, Hinton |
Cryptocurrency Donation Transparency
| Issue | Context |
|---|---|
| Late Disclosure | Buterin’s $665.8M donation (2021) was not publicly acknowledged as “largest donor by far” on FLI’s website until May 2023 |
| Asset Transfers | Between December 11-30, 2022, FLI transferred $368M to three entities governed by the same four people (Tegmark, Chita-Tegmark, Aguirre, Tallinn) |
| Cryptocurrency Volatility | Donation value fluctuated significantly; actual liquid value unclear |
Connection to FTX/Effective Altruism
FLI operates within the broader effective altruism ecosystem, which was significantly affected by the FTX collapse in November 2022. While FLI was not directly funded by FTX or the FTX Future Fund to the same extent as other EA organizations, the association has drawn scrutiny. FLI has not received clawback demands, but the broader EA funding crisis has affected the landscape in which FLI operates.
Comparison with Other Organizations
| Aspect | FLI | MIRI | CAIS | Coefficient Giving | CSER |
|---|---|---|---|---|---|
| Primary Focus | Advocacy + Grants + Policy | Technical AI safety research | Research + Statement of Concern | Grantmaking (broad) | Academic existential risk research |
| Public Profile | Very High | Low-Medium | Medium | Medium | Medium |
| Media Strategy | Very Active (viral videos, open letters) | Minimal | Selective (single statement) | Moderate | Academic publications |
| Policy Engagement | Very High (EU, UN, US) | Minimal | Limited | Moderate (via grantees) | Moderate |
| Grant Distribution | $25M+ | N/A (recipient) | N/A (new org) | Billions | N/A |
| Funding Model | Major donors + endowment | Donations | Donations | Good Ventures | University + grants |
| Geographic Focus | Global | US | US | Global | UK |
| Founding Year | 2014 | 2000 | 2022 | 2014 | 2012 |
| Founder Connection | Tallinn (board) | Tallinn (funded) | Hinton, Bengio, etc. | Moskovitz | Tallinn (co-founder) |
Positioning in the AI Safety Ecosystem
FLI occupies a distinct niche: high-profile public advocacy combined with grantmaking and policy engagement. While MIRI focuses on technical research and Coefficient Giving on behind-the-scenes grantmaking, FLI prioritizes visibility and discourse-shaping. This creates both advantages (media influence, policy access) and disadvantages (controversy, perception of sensationalism).
Strengths and Limitations
Strengths
| Strength | Evidence | Impact |
|---|---|---|
| Public Visibility | Pause letter: 33,000+ signatures; Slaughterbots: 2M+ views; Asilomar Principles: 5,700+ signatories | Shaped public discourse on AI risk; made “AI safety” mainstream term |
| Convening Power | Puerto Rico 2015, Asilomar 2017 brought together top AI researchers, industry leaders, philosophers | Created field of AI alignment; produced influential governance frameworks |
| Policy Access | EU AI Act engagement; UN CCW participation; US Congressional testimony | Foundation models included in AI Act; autonomous weapons on international agenda |
| Financial Resources | $665.8M Buterin donation; $25M+ in grants distributed | Long-term sustainability; significant grantmaking capacity |
| Communication | Viral videos, open letters, effective media strategy | Public awareness of AI risk dramatically increased |
| Network Effects | Tallinn connections to CSER, SFF; overlap with EA/rationalist communities | Influence across multiple organizations |
| First-Mover Advantage | Founded 2014; first AI safety grants program 2015 | Established credibility before AI became mainstream concern |
Limitations
| Limitation | Context | Consequence |
|---|---|---|
| Controversy | Pause letter criticism; TESCREALism accusations; near-term vs. long-term debate | Alienated some AI ethics researchers; credibility questioned in some circles |
| Perception Issues | Musk association; tech billionaire funding; late Buterin disclosure | Some view FLI as serving elite interests |
| Research Capacity | More advocacy than original research; relies on grantees | Dependent on others for technical work |
| Governance Concentration | Four individuals (Tegmark, Chita-Tegmark, Aguirre, Tallinn) control multiple related entities | Lack of external board diversity |
| Messaging Criticism | “Sensationalist” accusations; “dystopian sci-fi” framing | May undermine credibility with skeptics |
| Narrow Community | Closely tied to EA/rationalist/TESCREAL networks | Limited engagement with broader civil society |
| Effectiveness Unclear | Pause letter did not achieve pause; labs continued scaling | High-profile campaigns may not translate to policy change |
Timeline
| Year | Event |
|---|---|
| March 2014 | FLI founded by Tegmark, Tallinn, Aguirre, Krakovna, Chita-Tegmark |
| May 24, 2014 | Official launch at MIT; Alan Alda moderates panel |
| January 2-5, 2015 | Puerto Rico Conference: “The Future of AI: Opportunities and Challenges” |
| January 2015 | Research Priorities Open Letter; Musk announces $10M donation |
| July 1, 2015 | First AI safety grants announced: $7M to 37 projects |
| October 2016 | AI Safety Research profiles published |
| January 5-8, 2017 | Asilomar Conference; 23 AI Principles developed |
| January 30, 2017 | Asilomar AI Principles published |
| November 13, 2017 | Slaughterbots video released; 2M+ views |
| 2018 | FLI drafts letter calling for laws against lethal autonomous weapons |
| 2021 | Vitalik Buterin donates $665.8M in cryptocurrency |
| July 2021 | $25M grant program announced (Buterin funding) |
| November 30, 2021 | Slaughterbots sequel “if human: kill()” released |
| November 2022 | FLI rejects Nya Dagbladet Foundation grant; FTX collapse affects EA ecosystem |
| December 2022 | $368M transferred to three related entities |
| March 22, 2023 | “Pause Giant AI Experiments” open letter published |
| May 2023 | Buterin acknowledged as “largest donor by far” on website |
| 2024 | PhD Fellowship in US-China AI Governance launched |
| November 2024 | Max Tegmark at Web Summit (Lisbon) |
| January 2025 | Emilia Javorsky at Vienna Autonomous Weapons Conference |
| March 2025 | Anthony Aguirre publishes Keep The Future Human |
Sources and Citations
Primary Sources
Grant Programs
Conferences and Principles
Section titled “Conferences and Principles”- Puerto Rico AI Safety Conference
- Beneficial AI 2017 (Asilomar)
- Asilomar AI Principles
- Asilomar Conference on Beneficial AI - Wikipedia
Pause Letter
- Pause Giant AI Experiments: An Open Letter
- Pause Giant AI Experiments - Wikipedia
- FLI FAQs about the Pause Letter
Autonomous Weapons
Leadership
Media and Analysis
- Future of Life Institute - Wikipedia
- FLI - EA Forum Topic
- FLI - InfluenceWatch
- IEEE Spectrum: AI Pause Letter Stokes Fear and Controversy
- LessWrong: Elon Musk Donates $10M to FLI
Financial
- FLI - ProPublica Nonprofit Explorer
- FLI - GuideStar Profile
- Philanthropy News Digest: FLI Received $665M in Crypto