
Future of Life Institute (FLI)

| Dimension | Assessment | Evidence |
| --- | --- | --- |
| Focus | AI Safety Advocacy + Grantmaking | Dual approach: public campaigns and research funding |
| Grant Scale | $25M+ distributed | 2015: $7M to 37 projects; 2021: $25M program from Buterin donation |
| Public Profile | Very High | Asilomar Principles (5,700+ signatories), Pause Letter (33,000+ signatories) |
| Approach | Policy + Research + Advocacy | EU AI Act engagement, UN autonomous weapons, Slaughterbots films |
| Location | Boston, MA (global staff of 20+) | Policy teams in US and EU |
| Major Funding | $665.8M (2021 Buterin), $10M (2015 Musk) | Endowment from cryptocurrency donation |
| Key Conferences | Puerto Rico 2015, Asilomar 2017 | Considered birthplace of AI alignment field |

| Attribute | Details |
| --- | --- |
| Full Name | Future of Life Institute |
| Type | 501(c)(3) Nonprofit |
| EIN | 47-1052538 |
| Founded | March 2014 |
| Launch Event | May 24, 2014 at MIT (auditorium 10-250) |
| Founders | Max Tegmark (President), Jaan Tallinn, Anthony Aguirre (Executive Director), Viktoriya Krakovna, Meia Chita-Tegmark |
| Location | Boston, Massachusetts (headquarters); global remote staff |
| Staff Size | 20+ full-time team members |
| Teams | Policy, Outreach, Grantmaking |
| Website | futureoflife.org |
| Related Sites | autonomousweapons.org, autonomousweaponswatch.org |
| Research Grants | $25M+ distributed across multiple rounds |
| EU Advocacy Budget | €446,619 annually |

The Future of Life Institute (FLI) is a nonprofit organization dedicated to reducing existential risks from advanced technologies, with a particular focus on artificial intelligence. Founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, UC Santa Cruz physicist Anthony Aguirre, DeepMind research scientist Viktoriya Krakovna, and Tufts researcher Meia Chita-Tegmark, FLI has become one of the most publicly visible organizations in the AI safety space. The organization officially launched on May 24, 2014, at MIT’s auditorium 10-250 with a panel discussion on “The Future of Technology: Benefits and Risks,” moderated by Alan Alda and featuring panelists including Nobel laureate Frank Wilczek, synthetic biologist George Church, and Jaan Tallinn.

Unlike research-focused organizations like MIRI or Redwood Research, FLI emphasizes public advocacy, policy engagement, and awareness-raising alongside its grantmaking. This tripartite approach—combining direct research funding, high-profile public campaigns, and government engagement—has made FLI particularly effective at shaping public discourse around AI risk. The organization’s 2015 Puerto Rico conference is sometimes described as the “birthplace of the field of AI alignment,” bringing together leading AI researchers to discuss safety concerns that had previously been marginalized in academic circles. The subsequent 2017 Asilomar conference produced the 23 Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles.

FLI’s major initiatives have helped establish AI safety as a mainstream concern rather than a fringe topic. The 2023 “Pause Giant AI Experiments” open letter garnered over 33,000 signatures and generated massive media coverage, even though the requested pause was not implemented by AI labs. The organization has also been influential in autonomous weapons policy, producing the viral Slaughterbots video series and advocating for international regulation at the United Nations. FLI received a transformative $665.8 million cryptocurrency donation from Ethereum co-founder Vitalik Buterin in 2021, which has been partially converted to an endowment ensuring long-term organizational independence.

The Future of Life Institute emerged from concerns about existential risks that had been growing among a network of physicists, AI researchers, and technology entrepreneurs. Max Tegmark, an MIT cosmologist who had become increasingly concerned about AI safety after reading Nick Bostrom’s work, connected with Jaan Tallinn, who had been funding existential risk research through organizations like MIRI and the Cambridge Centre for the Study of Existential Risk (CSER). Together with Anthony Aguirre (co-founder of the Foundational Questions Institute and later Metaculus), Viktoriya Krakovna (then a PhD student, now at DeepMind), and Meia Chita-Tegmark, they formally established FLI in March 2014.

The founding team recognized a gap in the existential risk ecosystem: while organizations like MIRI focused on technical AI safety research and CSER on academic study, there was no organization specifically dedicated to public engagement, policy advocacy, and convening stakeholders across academia, industry, and government. FLI was designed to fill this gap, with a mission to “steer transformative technology towards benefiting life and away from large-scale risks.”

| Milestone | Date | Significance |
| --- | --- | --- |
| FLI Founded | March 2014 | Organization formally established |
| MIT Launch Event | May 24, 2014 | Public launch with Alan Alda moderating; panelists included George Church, Frank Wilczek, Jaan Tallinn |
| Research Priorities Open Letter | January 2015 | First major public initiative; signed by Stephen Hawking, Elon Musk, and leading AI researchers |
| Puerto Rico Conference | January 2-5, 2015 | “The Future of AI: Opportunities and Challenges”; considered birthplace of AI alignment field |
| Musk Donation Announced | January 2015 | $10M commitment to fund AI safety research |
| First Grants Announced | July 1, 2015 | $7M awarded to 37 research projects |
| Asilomar Conference | January 5-8, 2017 | Produced 23 Asilomar Principles; 100+ attendees |
| Slaughterbots Video | November 13, 2017 | 2M+ views within weeks; screened at UN |
| Buterin Donation | 2021 | $665.8M cryptocurrency donation |
| Pause Letter | March 2023 | 33,000+ signatures; massive media coverage |

FLI established the world’s first peer-reviewed grant program specifically aimed at AI safety research. The program began following the January 2015 Puerto Rico conference, when Elon Musk announced a $10 million donation to support “a global research program aimed at keeping AI beneficial to humanity.”

2015 Grant Program: FLI issued a Request for Proposals (RFP) in early 2015, receiving nearly 300 applications from research teams worldwide. The RFP sought proposals in two categories: “project grants” (typically $100,000-$500,000 over 2-3 years) for research by small teams or individuals, and “center grants” ($500,000-$1,500,000) for establishing new research centers. On July 1, 2015, FLI announced $7 million in awards to 37 research projects. Coefficient Giving (then Open Philanthropy) supplemented this with $1.186 million after determining that the pool of high-quality proposals exceeded the available funding.
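
As a quick sanity check on these figures, the sketch below computes the implied average award for the 2015 round. It uses only the publicly stated totals; the equal-split average is illustrative, not FLI's actual allocation, and individual awards listed in this article range from roughly $37K to $1.5M.

```python
# Back-of-the-envelope check on the 2015 grant round, using only the totals
# stated above. The equal-split average is illustrative only.

musk_pledge = 10_000_000   # Musk's January 2015 commitment (USD)
awarded_2015 = 7_000_000   # announced July 1, 2015
supplement = 1_186_000     # Coefficient Giving (then Open Philanthropy) top-up
projects = 37

print(f"Average award (equal split): ${awarded_2015 / projects:,.0f}")           # ~$189,189
print(f"Round total with supplement: ${awarded_2015 + supplement:,}")            # $8,186,000
print(f"Remaining Musk pledge after the round: ${musk_pledge - awarded_2015:,}") # $3,000,000
```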

| Grant Round | Amount | Projects | Source | Focus Areas |
| --- | --- | --- | --- | --- |
| 2015 Round | $7M | 37 | Elon Musk ($10M donation) | Technical AI safety, value alignment, economics, policy, autonomous weapons |
| Coefficient Giving Supplement | $1.186M | Additional projects | Coefficient Giving (then Open Philanthropy) | High-quality proposals exceeding initial funding |
| 2021 Program | $25M | Multiple | Vitalik Buterin donation | Expanded AI safety and governance research |
| 2023 Grants | Various | Multiple | Ongoing | PhD fellowships, technical research |

2015 Grant Recipients (selected examples):

| Recipient | Institution | Amount | Project Focus |
| --- | --- | --- | --- |
| Nick Bostrom | FHI Oxford | $1.5M | Strategic Research Center for AI (geopolitical challenges) |
| Stuart Russell | UC Berkeley | ≈$500K | Value alignment and inverse reinforcement learning |
| MIRI | Machine Intelligence Research Institute | $299,310 | Long-term AI safety research ($250K over 3 years) |
| Owain Evans | FHI (collaboration with MIRI) | $227,212 | Algorithms learning human preferences despite irrationalities |
| Manuela Veloso | Carnegie Mellon | ≈$200K | Explainable AI systems |
| Paul Christiano | UC Berkeley | ≈$150K | Value learning approaches |
| Ramana Kumar | Cambridge (collaboration with MIRI) | $36,750 | Self-reference in HOL theorem prover |
| Michael Webb | Stanford | ≈$100K | Economic impacts of AI |
| Heather Roff | Various | ≈$100K | Meaningful human control of autonomous weapons |

The funded projects spanned technical AI safety (ensuring advanced AI systems align with human values), economic analysis (managing AI’s labor market impacts), policy research (autonomous weapons governance), and philosophical foundations (clarifying concepts of agency and liability for autonomous systems).

The Puerto Rico AI Safety Conference (officially “The Future of AI: Opportunities and Challenges”) was held January 2-5, 2015, in San Juan. This conference is sometimes described as the “birthplace of the field of AI alignment,” as it brought together the world’s leading AI builders from academia and industry to engage with experts in economics, law, and ethics on AI safety for the first time at scale.

| Aspect | Details |
| --- | --- |
| Dates | January 2-5, 2015 |
| Location | San Juan, Puerto Rico |
| Attendees | ≈40 leading AI researchers and thought leaders |
| Outcome | Research Priorities Open Letter; Elon Musk $10M donation announcement |
| Significance | First major convening on AI safety to include mainstream AI researchers |

Notable Attendees:

  • AI Researchers: Stuart Russell (Berkeley), Thomas Dietterich (AAAI President), Francesca Rossi (IJCAI President), Bart Selman (Cornell), Tom Mitchell (CMU), Murray Shanahan (Imperial College)
  • Industry: Representatives from Google DeepMind, Vicarious
  • Existential Risk Organizations: FHI, CSER, MIRI representatives
  • Technology Leaders: Elon Musk, Vernor Vinge

The conference produced an open letter on AI safety that was subsequently signed by Stephen Hawking, Elon Musk, and many leading AI researchers. Following the conference, Musk announced his $10 million donation to fund FLI’s research grants program.

Asilomar Conference and AI Principles (2017)


The Beneficial AI 2017 conference, held January 5-8, 2017, at the Asilomar Conference Grounds in California, was a sequel to the 2015 Puerto Rico conference. More than 100 thought leaders and researchers in AI, economics, law, ethics, and philosophy met to address and formulate principles for beneficial AI development. The conference was not open to the public, with attendance curated to include influential figures who could shape the field’s direction.

| Aspect | Details |
| --- | --- |
| Dates | January 5-8, 2017 |
| Location | Asilomar Conference Center, Pacific Grove, California |
| Attendees | 100+ AI researchers, industry leaders, philosophers |
| Outcome | 23 Asilomar AI Principles published January 30, 2017 |
| Signatories | 1,797 AI/robotics researchers + 3,923 others (5,700+ total) |

Notable Participants:

| Category | Participants |
| --- | --- |
| AI Researchers | Stuart Russell (Berkeley), Bart Selman (Cornell), Yoshua Bengio (Montreal), Ilya Sutskever (OpenAI/DeepMind), Yann LeCun (Facebook), Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Viktoriya Krakovna (DeepMind/FLI), Stefano Ermon (Stanford) |
| Industry Leaders | Elon Musk (Tesla/SpaceX), Demis Hassabis (DeepMind CEO), Ray Kurzweil (Google) |
| Philosophers & Authors | Nick Bostrom (FHI), David Chalmers (NYU), Sam Harris |
| FLI Leadership | Jaan Tallinn, Max Tegmark, Richard Mallah |

The 23 Asilomar AI Principles are organized into three categories:

Research Issues (5 principles):

  1. Research Goal: Create beneficial, not undirected intelligence
  2. Research Funding: Include safety research alongside capability research
  3. Science-Policy Link: Constructive exchange between researchers and policymakers
  4. Research Culture: Foster cooperation, trust, and transparency
  5. Race Avoidance: Avoid corner-cutting on safety for competitive advantage

Ethics and Values (13 principles):

  1. Safety: AI systems should be safe and secure
  2. Failure Transparency: Capability to determine causes of harm
  3. Judicial Transparency: Explanations for legal decisions
  4. Responsibility: Designers and builders are stakeholders in implications
  5. Value Alignment: AI goals should align with human values
  6. Human Values: Designed to be compatible with human dignity, rights, freedoms
  7. Personal Privacy: Control over data access for AI systems
  8. Liberty and Privacy: AI should not unreasonably curtail liberty
  9. Shared Benefit: Benefits should be broadly distributed
  10. Shared Prosperity: Economic prosperity should be broadly shared
  11. Human Control: Humans should choose how to delegate decisions
  12. Non-subversion: Power from AI should respect social processes
  13. AI Arms Race: Lethal autonomous weapons race should be avoided

Longer-term Issues (5 principles):

  1. Capability Caution: Avoid strong assumptions about upper limits
  2. Importance: Advanced AI could be profound change; plan accordingly
  3. Risks: Catastrophic or existential risks require commensurate effort
  4. Recursive Self-Improvement: Subject to strict safety and control
  5. Common Good: Superintelligence should benefit all humanity

Legacy and Influence: The Asilomar Principles have been cited in policy discussions worldwide. Key themes (human-centric AI, transparency, robustness) appear in later legislation including the EU AI Act. Notable signatories included Stephen Hawking, Elon Musk, Anthony D. Romero (ACLU Executive Director), Demis Hassabis, Ilya Sutskever, Yann LeCun, Yoshua Bengio, and Stuart Russell.

“Pause Giant AI Experiments” Letter (2023)


The open letter “Pause Giant AI Experiments” was published by FLI on March 22, 2023—one week after OpenAI released GPT-4. The letter called for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” citing concerns about AI-generated propaganda, extreme automation of jobs, human obsolescence, and society-wide loss of control. The timing was strategic: GPT-4 demonstrated capabilities that surprised even AI researchers, and public attention to AI risk was at an all-time high.

| Aspect | Details |
| --- | --- |
| Published | March 22, 2023 (one week after GPT-4 release) |
| Signatories | 33,000+ total |
| Notable Signatories | Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari, Gary Marcus |
| Request | 6-month pause on training AI systems more powerful than GPT-4 |
| Media Coverage | Extensive worldwide coverage; US Senate hearing cited the letter |

Key Arguments in the Letter:

  1. Contemporary AI systems are becoming “human-competitive at general tasks”
  2. AI labs are locked in an “out-of-control race” that “no one—not even their creators—can understand, predict, or reliably control”
  3. Profound risks to society including “flooding our information channels with propaganda and untruth,” “automating away all jobs,” and “loss of control of our civilization”
  4. The pause should be used to develop “shared safety protocols” verified by independent experts

Reactions and Criticism:

| Critic/Supporter | Position | Argument |
| --- | --- | --- |
| Timnit Gebru, Emily Bender, Margaret Mitchell | Critical | Letter is “sensationalist,” amplifies “dystopian sci-fi scenario” while ignoring current algorithmic harms |
| Bill Gates | Did not sign | “Asking one particular group to pause doesn’t solve the challenges” |
| Sam Altman (OpenAI CEO) | Critical | Letter is “missing most technical nuance”; OpenAI was not training GPT-5 as claimed in early drafts |
| Reid Hoffman | Critical | Called it “virtue signalling” with no real impact |
| Eliezer Yudkowsky | Critical (from other direction) | Wrote in Time: “shut it all down”—letter doesn’t go far enough |
| European Parliament | Engaged | Issued formal response; EU policymakers cited letter in AI Act discussions |
| US Senate | Engaged | Hearing on AI safety cited the letter |

Actual Outcomes: The requested pause was not implemented. As FLI noted on the letter’s one-year anniversary, AI companies instead “directed vast investments in infrastructure to train ever-more giant AI systems.” However, FLI’s policy director Mark Brakel noted that the response exceeded expectations: “The reaction has been intense. We feel that it has given voice to a huge undercurrent of concern about the risks of high-powered AI systems not just at the public level, but top researchers in AI and other topics, business leaders, and policymakers.”

The letter did contribute to a significant shift in public discourse. AI safety became a mainstream media topic, government inquiries accelerated, and phrases like “existential risk from AI” entered common vocabulary. Whether this attention will translate to effective governance remains contested.

Autonomous Weapons Advocacy: Slaughterbots


Beyond AI safety, FLI has been a leading advocate for international regulation of lethal autonomous weapons systems (LAWS). Their most visible campaign is the Slaughterbots video series, produced in collaboration with Stuart Russell.

Slaughterbots (2017): Released November 13, 2017, this arms-control advocacy video presents a dramatized near-future scenario where swarms of inexpensive microdrones use facial recognition and AI to assassinate political opponents. The script was written by Stuart Russell; production was funded by FLI. According to Russell: “What we were trying to show was the property of autonomous weapons to turn into weapons of mass destruction automatically because you can launch as many as you want.”

| Video | Release Date | Views / Notes | Key Message |
| --- | --- | --- | --- |
| Slaughterbots | November 13, 2017 | 2M+ within weeks | Microdrones as WMDs; need for regulation |
| if human: kill() | November 30, 2021 | Sequel | Depicts failed ban, technical errors, eventual treaty |
| Artificial Escalation | 2022 | Ongoing series | AI in nuclear command and control |

UN Engagement: FLI representatives regularly attend UN Convention on Certain Conventional Weapons (CCW) meetings in Geneva. FLI’s Anna Hehir has spoken at these forums about the “proliferation and escalation risks of autonomous weapons,” arguing these weapons are “unpredictable, unreliable, and unexplainable.”

Related Resources: FLI operates autonomousweapons.org (case for regulation) and autonomousweaponswatch.org (database of weapons systems with concerning autonomy levels developed globally).

Max Tegmark

| Aspect | Details |
| --- | --- |
| Role | Co-founder, President |
| Background | MIT Professor of Physics (cosmology specialty) |
| Education | PhD Physics, UC Berkeley (1994); BA Physics & Economics, Stockholm School of Economics (1990) |
| Books | Life 3.0: Being Human in the Age of AI (2017), Our Mathematical Universe (2014) |
| Media | Web Summit 2024 (Lisbon), numerous science documentaries, TED talks |
| Research | Cosmology, foundations of physics, consciousness, AI safety |

Tegmark is the most public face of FLI, frequently appearing in media to discuss AI risks. His 2017 book Life 3.0 was widely read in technology circles and helped popularize concepts like “AI alignment” to general audiences. Tegmark has testified before the European Parliament on AI regulation and regularly engages with policymakers.

Anthony Aguirre

| Aspect | Details |
| --- | --- |
| Role | Co-founder, Executive Director |
| Background | Faggin Presidential Professor for the Physics of Information, UC Santa Cruz |
| Education | PhD Astronomy, Harvard University (2000) |
| Other Roles | Co-founder, Foundational Questions Institute (FQXi, 2006); Co-founder, Metaculus (2015) |
| Books | Cosmological Koans (2019); Keep The Future Human (March 2025) |
| Research | Theoretical cosmology, gravitation, statistical mechanics, AI governance |

Aguirre has shifted FLI’s focus toward more direct policy engagement in recent years. His March 2025 essay Keep The Future Human: Why and How We Should Close the Gates to AGI and Superintelligence proposes an international regulatory scheme for AI. He has appeared on the AXRP podcast discussing FLI’s strategy and the organization’s evolution from academic grantmaking to policy advocacy.

Jaan Tallinn

| Aspect | Details |
| --- | --- |
| Role | Co-founder, Board Member |
| Background | Founding engineer of Skype and Kazaa |
| Philanthropy | Founder, Survival and Flourishing Fund; co-founder, Cambridge Centre for the Study of Existential Risk (CSER) |
| Estimated Giving | $100M+ to existential risk organizations |
| Focus | AI safety funding, existential risk ecosystem building |

Tallinn is one of the largest individual funders of existential risk research globally. His network of organizations (SFF, CSER, FLI) forms a significant portion of the AI safety funding landscape. He participated as a panelist at both the 2015 Puerto Rico and 2017 Asilomar conferences.

Other Key Team Members:

| Person | Role | Background |
| --- | --- | --- |
| Viktoriya Krakovna | Co-founder | Research scientist at DeepMind; AI safety research (specification gaming, impact measures) |
| Meia Chita-Tegmark | Co-founder | Previously at Tufts University; organizer and researcher |
| Risto Uuk | Head of EU Policy and Research | Leads FLI’s EU AI policy work, including AI Act engagement |
| Mark Brakel | Director of Policy | Led response to pause letter; government relations |
| Anna Hehir | Policy (Autonomous Weapons) | UN Geneva CCW representative |
| Emilia Javorsky | Policy | Vienna Autonomous Weapons Conference 2025 representative |

Staff Structure: FLI has grown to 20+ full-time staff members globally, primarily organized into Policy, Outreach, and Grantmaking teams. Staff backgrounds span machine learning, medicine, government, and industry.

FLI’s funding history includes several transformative donations that have shaped the organization’s trajectory and independence.

| Donor | Amount | Year | Purpose |
| --- | --- | --- | --- |
| Vitalik Buterin | $665.8M (cryptocurrency) | 2021 | Largest donation; partial endowment, grantmaking |
| Elon Musk | $10M | 2015 | First AI safety research grants program |
| Coefficient Giving | $1.9M total | Various | Supplemental grant funding, operational support |
| Survival and Flourishing Fund | $500K | Various | Operational support |
| Jaan Tallinn | Ongoing | 2014-present | Founding support, strategic direction |

In 2021, Ethereum co-founder Vitalik Buterin donated $665.8 million in cryptocurrency to FLI—the largest single donation in the organization’s history and one of the largest cryptocurrency donations to any nonprofit. The donation was “large and unconditional,” with FLI converting a significant portion to an endowment to ensure long-term organizational independence. According to FLI’s finances page, Buterin was not officially acknowledged as “largest donor by far” until May 2023, when the organization updated its funding page.

The donation has been used for:

  • Endowment: Long-term organizational sustainability
  • 2021 Grant Program: $25 million announced for AI safety research
  • Operational Deficit Coverage: FLI’s 2023 income was only $624,714; the Buterin endowment covers operating shortfalls
  • Asset Transfers: Between December 11-30, 2022, FLI transferred $368 million to three related entities governed by the same four people (Max Tegmark, Meia Chita-Tegmark, Anthony Aguirre, Jaan Tallinn)

| Metric | Value | Notes |
| --- | --- | --- |
| 2023 Income | $624,714 | $600K from single individual donor |
| 2024 Income | €83,241 | Limited fundraising year |
| EU Advocacy Spending | €446,619/year | Includes staff and Dentons Global Advisors |
| Total Grants Distributed | $25M+ | Across all grant programs |
| Grant Size Range | $22,000 - $1.5M | Historical range |
| Donations Received | 1,500+ | “Various sizes from wide variety of donors” since founding |

| Funder | Amount | Purpose |
| --- | --- | --- |
| Coefficient Giving | $1.186M (2015) | Supplement to Musk grants (high-quality proposals exceeded funding) |
| Coefficient Giving | Additional grants | Various operational support |
| Survival and Flourishing Fund | $500K | Operational support |

FLI maintains active policy engagement across multiple jurisdictions, with dedicated staff for EU, UN, and US advocacy.

FLI’s EU work focuses on two priorities: (1) promoting beneficial AI development and (2) regulating lethal autonomous weapons. Their most significant achievement was advocating for the inclusion of foundation models (general-purpose AI systems) in the scope of the EU AI Act.

| Initiative | Status | FLI Role |
| --- | --- | --- |
| EU AI Act (Foundation Models) | Adopted | Successfully pushed for inclusion of general-purpose systems; advocated for adoption |
| Definition of Manipulation | Ongoing | Recommending broader definition to include any manipulatory technique and societal harm |
| Autonomous Weapons Treaty | Advocacy | Encouraging EU member states to support international treaty |

EU Advocacy Details:

  • Budget: €446,619 annually (includes staff salaries and Dentons Global Advisors consulting)
  • Lead: Risto Uuk (Head of EU Policy and Research)
  • Key Achievement: Foundation models included in AI Act scope

FLI advocates at the UN for a legally binding international instrument on autonomous weapons and a new international agency to govern AI.

| Activity | Forum | Outcome |
| --- | --- | --- |
| Autonomous Weapons Treaty | CCW (Convention on Certain Conventional Weapons), Geneva | Ongoing advocacy; FLI agrees with ICRC recommendation for legally binding rules |
| 2018 Letter on Lethal Autonomous Weapons | Global | FLI drafted letter calling for laws against lethal autonomous weapons |
| Digital Cooperation Roadmap | UN Secretary-General | FLI (with France and Finland) served as civil society champion; recommendations (3C) on AI governance were adopted |
| Slaughterbots Screening | UN CCW | 2017 video shown to delegates |

US Engagement:

| Activity | Details |
| --- | --- |
| Congressional Testimony | Max Tegmark and others have testified before Congress on AI risk |
| Senate Hearings | 2023 pause letter cited in AI safety hearings |
| Policy Research | Analysis supporting US AI governance frameworks |

Outreach and Communications:

| Medium | Activities |
| --- | --- |
| Podcasts | Interviews with researchers, policymakers; AXRP appearance by Anthony Aguirre |
| Articles and Reports | Explainers on AI risk, policy analysis, technical summaries |
| Videos | Slaughterbots series, educational content on AI safety |
| Websites | futureoflife.org, autonomousweapons.org, autonomousweaponswatch.org |
| Newsletters | Regular updates on AI safety and policy developments |
| Social Media | Ongoing communication; significant following |
| Conferences | Web Summit 2024 (Tegmark), Vienna Autonomous Weapons Conference 2025 (Javorsky) |

Programs:

| Program | Description |
| --- | --- |
| AI Safety Grants | Direct research funding (see grants section) |
| PhD Fellowships | Technical AI safety research; 2024 launched US-China AI Governance fellowship |
| Convening | Conferences bringing together researchers, industry, and policymakers |
| Publications | Policy papers, technical research support |

FLI has faced significant criticism from multiple directions, reflecting tensions within the AI ethics and safety communities.

The 2023 pause letter was criticized from both within and outside the AI safety community:

| Critic | Affiliation | Criticism |
| --- | --- | --- |
| Timnit Gebru | DAIR, former Google | “Sensationalist”; amplifies “dystopian sci-fi scenario” while ignoring current algorithmic harms |
| Emily Bender | University of Washington | Co-author of “On the Dangers of Stochastic Parrots”; letter ignores real present-day harms |
| Margaret Mitchell | Former Google AI Ethics | “Letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don’t have.” |
| Bill Gates | Microsoft | “Asking one particular group to pause doesn’t solve the challenges” |
| Sam Altman | OpenAI CEO | “Missing most technical nuance about where we need the pause”; disputed claims about GPT-5 training |
| Reid Hoffman | LinkedIn/Microsoft | “Virtue signalling” with no real impact |
| Eliezer Yudkowsky | MIRI | Time essay: “Shut it all down”—letter doesn’t go far enough; requested moratorium is insufficient |

Critics argue that FLI’s focus on long-term existential risk from hypothetical superintelligent AI distracts from immediate harms:

| Argument | Source | FLI Position |
| --- | --- | --- |
| “Long-term AI risk arguments are speculative and downplay near-term harms” | AI ethics researchers (Gebru, Bender, Mitchell) | Both near-term and long-term risks deserve attention |
| “Provoking fear of AI serves tech billionaires who fund these groups” | Critics of effective altruism | FLI maintains editorial independence despite funding sources |
| “Current discrimination and job loss are more urgent than speculative superintelligence” | Labor and civil rights advocates | AI safety research addresses both capability and deployment risks |

Philosopher Émile Torres has accused FLI of embracing “TESCREALism”—the ideology of re-engineering humanity through AI for immortality, space colonization, and post-human civilization. Torres argues that while some TESCREALists support unregulated AI development, FLI “embraces the goal but is alarmed by what can go wrong along the way.” FLI has not directly responded to these characterizations.

Controversial Grant Proposal (Nya Dagbladet Foundation)


In 2022, FLI faced controversy over a potential grant to the Nya Dagbladet Foundation (NDF):

| Timeline | Event |
| --- | --- |
| Initial review | FLI was “initially positive” about NDF proposal |
| Due diligence | FLI’s process “uncovered information indicating that NDF was not aligned with FLI’s values or charitable purposes” |
| November 2022 | FLI informed NDF they would not proceed with a grant |
| December 15, 2022 | Swedish media contacted FLI describing Nya Dagbladet as a “far-right extremist group” |
| Outcome | FLI issued public statement; zero funding was given to NDF |

Elon Musk Association

| Issue | Context | FLI Response |
| --- | --- | --- |
| Initial Funding | $10M grant from Musk (2015) | Donation was earmarked for research grants; FLI has received 1,500+ donations since |
| Pause Letter Signatory | Musk among 33,000+ signatories | Many prominent researchers also signed; Musk is one of thousands |
| Perception | Some media portray FLI as “Musk-aligned” | FLI maintains editorial and programmatic independence; Buterin donation is now larger |
| Conflict of Interest Concerns | Musk’s xAI competes with OpenAI; pause letter benefits competitors | FLI points to diverse signatory list including Bengio, Russell, Hinton |

Funding Transparency

| Issue | Context |
| --- | --- |
| Late Disclosure | Buterin’s $665.8M donation (2021) was not publicly acknowledged as “largest donor by far” on FLI’s website until May 2023 |
| Asset Transfers | Between December 11-30, 2022, FLI transferred $368M to three entities governed by the same four people (Tegmark, Chita-Tegmark, Aguirre, Tallinn) |
| Cryptocurrency Volatility | Donation value fluctuated significantly; actual liquid value unclear |

FLI operates within the broader effective altruism ecosystem, which was significantly affected by the FTX collapse in November 2022. While FLI did not depend on FTX or the FTX Future Fund to the extent that some other EA organizations did, the association has drawn scrutiny. FLI has not received clawback demands, but the broader EA funding crisis has changed the landscape in which FLI operates.

Comparison with Similar Organizations

| Aspect | FLI | MIRI | CAIS | Coefficient Giving | CSER |
| --- | --- | --- | --- | --- | --- |
| Primary Focus | Advocacy + Grants + Policy | Technical AI safety research | Research + Statement of Concern | Grantmaking (broad) | Academic existential risk research |
| Public Profile | Very High | Low-Medium | Medium | Medium | Medium |
| Media Strategy | Very Active (viral videos, open letters) | Minimal | Selective (single statement) | Moderate | Academic publications |
| Policy Engagement | Very High (EU, UN, US) | Minimal | Limited | Moderate (via grantees) | Moderate |
| Grant Distribution | $25M+ | N/A (recipient) | N/A (new org) | Billions | N/A |
| Funding Model | Major donors + endowment | Donations | Donations | Good Ventures | University + grants |
| Geographic Focus | Global | US | US | Global | UK |
| Founding Year | 2014 | 2000 | 2022 | 2014 | 2012 |
| Founder Connection | Tallinn (board) | Tallinn (funded) | Hinton, Bengio, etc. | Moskovitz | Tallinn (co-founder) |

FLI occupies a distinct niche: high-profile public advocacy combined with grantmaking and policy engagement. While MIRI focuses on technical research and Coefficient Giving on behind-the-scenes grantmaking, FLI prioritizes visibility and discourse-shaping. This creates both advantages (media influence, policy access) and disadvantages (controversy, perception of sensationalism).

| Strength | Evidence | Impact |
| --- | --- | --- |
| Public Visibility | Pause letter: 33,000+ signatures; Slaughterbots: 2M+ views; Asilomar Principles: 5,700+ signatories | Shaped public discourse on AI risk; made “AI safety” mainstream term |
| Convening Power | Puerto Rico 2015, Asilomar 2017 brought together top AI researchers, industry leaders, philosophers | Created field of AI alignment; produced influential governance frameworks |
| Policy Access | EU AI Act engagement; UN CCW participation; US Congressional testimony | Foundation models included in AI Act; autonomous weapons on international agenda |
| Financial Resources | $665.8M Buterin donation; $25M+ in grants distributed | Long-term sustainability; significant grantmaking capacity |
| Communication | Viral videos, open letters, effective media strategy | Public awareness of AI risk dramatically increased |
| Network Effects | Tallinn connections to CSER, SFF; overlap with EA/rationalist communities | Influence across multiple organizations |
| First-Mover Advantage | Founded 2014; first AI safety grants program 2015 | Established credibility before AI became mainstream concern |

| Limitation | Context | Consequence |
| --- | --- | --- |
| Controversy | Pause letter criticism; TESCREALism accusations; near-term vs. long-term debate | Alienated some AI ethics researchers; credibility questioned in some circles |
| Perception Issues | Musk association; tech billionaire funding; late Buterin disclosure | Some view FLI as serving elite interests |
| Research Capacity | More advocacy than original research; relies on grantees | Dependent on others for technical work |
| Governance Concentration | Four individuals (Tegmark, Chita-Tegmark, Aguirre, Tallinn) control multiple related entities | Lack of external board diversity |
| Messaging Criticism | “Sensationalist” accusations; “dystopian sci-fi” framing | May undermine credibility with skeptics |
| Narrow Community | Closely tied to EA/rationalist/TESCREAL networks | Limited engagement with broader civil society |
| Effectiveness Unclear | Pause letter did not achieve pause; labs continued scaling | High-profile campaigns may not translate to policy change |

Timeline

| Date | Event |
| --- | --- |
| March 2014 | FLI founded by Tegmark, Tallinn, Aguirre, Krakovna, Chita-Tegmark |
| May 24, 2014 | Official launch at MIT; Alan Alda moderates panel |
| January 2-5, 2015 | Puerto Rico Conference: “The Future of AI: Opportunities and Challenges” |
| January 2015 | Research Priorities Open Letter; Musk announces $10M donation |
| July 1, 2015 | First AI safety grants announced: $7M to 37 projects |
| October 2016 | AI Safety Research profiles published |
| January 5-8, 2017 | Asilomar Conference; 23 AI Principles developed |
| January 30, 2017 | Asilomar AI Principles published |
| November 13, 2017 | Slaughterbots video released; 2M+ views |
| 2018 | FLI drafts letter calling for laws against lethal autonomous weapons |
| 2021 | Vitalik Buterin donates $665.8M in cryptocurrency |
| July 2021 | $25M grant program announced (Buterin funding) |
| November 30, 2021 | Slaughterbots sequel “if human: kill()” released |
| November 2022 | FLI rejects Nya Dagbladet Foundation grant; FTX collapse affects EA ecosystem |
| December 2022 | $368M transferred to three related entities |
| March 22, 2023 | “Pause Giant AI Experiments” open letter published |
| May 2023 | Buterin acknowledged as “largest donor by far” on website |
| 2024 | PhD Fellowship in US-China AI Governance launched |
| November 2024 | Max Tegmark at Web Summit (Lisbon) |
| January 2025 | Emilia Javorsky at Vienna Autonomous Weapons Conference |
| March 2025 | Anthony Aguirre publishes Keep The Future Human |