
MIRI (Machine Intelligence Research Institute)

| Dimension | Assessment | Evidence |
| --- | --- | --- |
| Historical Significance | First organization to focus on ASI alignment as a technical problem | Among the first to recognize ASI as the most important event of the 21st century MIRI About |
| Current Strategy | Policy advocacy to halt AI development | Major 2024 pivot after acknowledging alignment research is “extremely unlikely to succeed in time” MIRI About |
| Research Output | Minimal recent publications | Near-zero new publications from core researchers between 2018 and 2022 LessWrong |
| Financial Status | Operating at a deficit with a ≈2-year runway | $4.97M net loss in 2024, $15.24M in net assets ProPublica |
| Field Impact | Controversial but influential | Raised awareness but faced criticism for its theoretical approach and failed research programs LessWrong |

The Machine Intelligence Research Institute (MIRI) is a 501(c)(3) nonprofit research organization based in Berkeley, California, founded in 2000 by Eliezer Yudkowsky with funding from Brian and Sabine Atkins Wikipedia. Originally named the Singularity Institute for Artificial Intelligence (SIAI), MIRI was the first organization to advocate for and work on artificial superintelligence (ASI) alignment as a technical problem MIRI About.

The organization has undergone several dramatic strategic pivots throughout its 24-year history. Initially created to accelerate AI development, MIRI shifted focus in 2005 when Yudkowsky became concerned about superintelligent AI risks Wikipedia. After two decades of technical research, MIRI announced a major strategy pivot in 2024, moving away from alignment research toward policy advocacy aimed at halting the development of increasingly general AI models MIRI About. This shift came after the organization acknowledged that its primary research initiative had “largely failed” MIRI 2024 Update.

With approximately 42 employees ProPublica and an interdisciplinary approach that deliberately hires from computer science, economics, mathematics, and philosophy backgrounds Future of Life Institute, MIRI aligns itself with the principles and objectives of the effective altruism movement Wikipedia.

MIRI was established in 2000 with a paradoxical original mission: accelerating AI development. The organization operated under this goal until 2005, when founder Eliezer Yudkowsky’s concerns about superintelligent AI risks prompted a fundamental reorientation toward AI safety Wikipedia. That same year, the organization relocated from Atlanta to Silicon Valley Wikipedia, positioning itself at the heart of the technology industry.

Beginning in 2006, MIRI organized the annual Singularity Summit to discuss AI’s future and risks, initially in cooperation with Stanford University and with funding from Peter Thiel Wikipedia. These summits became prominent venues for discussing the implications of advanced artificial intelligence and helped raise awareness of AI safety concerns within both academic and technology communities.

In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University Wikipedia, marking the end of this public outreach phase. The following month, in January 2013, the organization adopted its current name: Machine Intelligence Research Institute Wikipedia.

During this period, MIRI pursued an ambitious agenda focused on mathematical foundations of AI safety. The organization published actively on topics including logical uncertainty and probabilistic reasoning, decision theory and agent foundations, AI alignment and value learning, corrigibility and interruptibility, formal verification of AI systems, and mathematical foundations of safe AI MIRI Publications.

MIRI received significant funding during this era. Open Philanthropy provided $2,652,500 over two years in February 2019 for general support, increasing their annual support from $1.4 million in 2018 to $2.31 million in 2019 Open Philanthropy. In April 2020, Open Philanthropy awarded MIRI its largest grant to date: $7,703,750, with $6.24 million from Open Philanthropy’s main funders and $1.46 million from a partnership with BitMEX co-founder Ben Delo Open Philanthropy. At this peak, Open Philanthropy was providing approximately 60% of MIRI’s predicted budgets for 2020-2021 Open Philanthropy.

The organization’s revenue then spiked dramatically to $25.6 million in 2021 ProPublica, partly due to a donation of several million dollars’ worth of Ethereum from Vitalik Buterin Open Philanthropy.

Strategic Collapse and Pivot (2020-Present)


MIRI’s 2020 update revealed a critical turning point: the organization’s primary research initiative had “largely failed,” prompting years of regrouping MIRI 2024 Update. By 2021, MIRI announced a reduced emphasis on technical research in favor of advocacy and policy influence, citing diminishing returns on alignment progress. The retreat is visible in the publication record: core researchers produced near-zero new publications between 2018 and 2022 LessWrong.

The organization also became “more pessimistic that such work will have time to bear fruit” without policy interventions MIRI 2024 Update. This assessment culminated in the 2024 announcement of a full pivot away from alignment research: MIRI now focuses on attempting to halt the development of increasingly general AI models through discussions with policymakers about the extreme risks posed by artificial superintelligence MIRI About.

MIRI operates as a 501(c)(3) nonprofit with approximately 42 employees ProPublica. The organization deliberately hires from diverse backgrounds including computer science, economics, mathematics, and philosophy, recognizing that AI safety requires interdisciplinary perspectives Future of Life Institute.

The leadership team includes:

  • Eliezer Yudkowsky - Chair and Head Researcher ($599,970 compensation in 2024) ProPublica
  • Malo Bourgon - CEO ($241,531 compensation in 2024) ProPublica
  • Nate Soares - President ($236,614 compensation in 2024) ProPublica
  • Scott Garrabrant - Employee ($296,735 compensation in 2024) ProPublica
  • Benya Fallenstein - Research Fellow ($239,947 compensation in 2024) ProPublica

MIRI’s financial situation has deteriorated significantly from its 2021 peak. The organization reported $1,534,913 in total revenue for 2024, while expenses reached $6,508,701, resulting in a net loss of $4,973,788 ProPublica. Despite this deficit, MIRI maintains $16,493,789 in total assets and $15,242,215 in net assets ProPublica, providing approximately two years of operational runway ProPublica.

Executive compensation represented $3,132,826, or 48.1% of total expenses in 2024 ProPublica. The organization projected spending $5.6 million in 2024 and expects expenses of $6.5 million to $7 million in 2025 ProPublica.
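
The roughly two-year runway follows directly from the cited figures. The back-of-the-envelope sketch below uses only the numbers reported above and assumes that the midpoint of projected 2025 expenses approximates the annual burn rate, with future revenue treated as negligible; it also reproduces the compensation share.

```python
# Back-of-the-envelope check on the runway and compensation figures cited above.
# Assumption: the midpoint of projected 2025 expenses approximates the annual
# burn rate, and future revenue is treated as negligible relative to expenses.

net_assets_2024 = 15_242_215                  # reported net assets, 2024
expenses_2025 = (6_500_000 + 7_000_000) / 2   # midpoint of projected 2025 expenses
total_expenses_2024 = 6_508_701               # reported total expenses, 2024
exec_compensation_2024 = 3_132_826            # reported executive compensation, 2024

runway_years = net_assets_2024 / expenses_2025
compensation_share = exec_compensation_2024 / total_expenses_2024

print(f"Estimated runway: ~{runway_years:.1f} years")        # ~2.3 years
print(f"Compensation share of expenses: {compensation_share:.1%}")  # 48.1%
```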

MIRI’s current focus is on attempting to halt the development of increasingly general AI models through discussions with policymakers about the extreme risks artificial superintelligence poses MIRI About. This represents a dramatic departure from the organization’s historical emphasis on technical alignment research.

The organization acknowledges the pessimistic nature of this approach, stating that policy efforts are “very unlikely to save us, but all other plans we know of seem even less likely to succeed” MIRI 2024 Update. This reflects a belief that alignment research is “extremely unlikely to succeed in time to prevent an unprecedented catastrophe” MIRI About.

MIRI’s research output followed a clear trajectory. Between 2012 and 2016, the organization actively published on topics like logical uncertainty, decision theory, and AI alignment MIRI Publications. However, from 2018 to 2022, core researchers produced near-zero new publications LessWrong, reflecting the organization’s acknowledgment that its foundational bet on mathematical formalization had underdelivered relative to capability advances LessWrong.

MIRI’s technical work focused on six core areas, all aimed at developing mathematical foundations for safe artificial intelligence:

  1. Logical uncertainty and probabilistic reasoning - Developing frameworks for reasoning under logical uncertainty MIRI Publications
  2. Decision theory and agent foundations - Theoretical work on how rational agents should make decisions MIRI Publications
  3. AI alignment and value learning - Methods for ensuring AI systems pursue intended goals MIRI Publications
  4. Corrigibility and interruptibility - Designing systems that can be safely modified or shut down (see the toy sketch after this list) MIRI Publications
  5. Formal verification of AI systems - Mathematical proofs of system properties MIRI Publications
  6. Mathematical foundations of safe AI - Fundamental theoretical work underlying safety approaches MIRI Publications
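
Of these areas, corrigibility is the easiest to illustrate concretely. The toy sketch below is hypothetical (the `Action` class and the utility numbers are illustrative, not drawn from MIRI’s papers); it shows why a plain expected-utility maximizer that can act on its own off-switch tends to rank resisting shutdown above complying, which is the failure mode corrigibility research tries to design away.

```python
# Toy illustration (hypothetical numbers, not MIRI's formalism) of the
# corrigibility problem: a plain expected-utility maximizer that can act on
# its own off-switch will often rank "resist shutdown" above "comply".

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_utility: float  # utility the agent expects from taking this action

def naive_choice(actions):
    """A plain maximizer picks whatever scores highest, shutdown included."""
    return max(actions, key=lambda a: a.expected_utility)

actions = [
    Action("comply_with_shutdown", expected_utility=0.0),  # task ends, no further reward
    Action("resist_shutdown", expected_utility=10.0),      # keeps accruing task reward
]

print(naive_choice(actions).name)  # -> "resist_shutdown"

# Corrigibility research asks how to specify the objective so that complying
# with an operator's shutdown command is never disincentivized, e.g. by making
# the agent indifferent to whether the off-switch is pressed.
```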

Eliezer Yudkowsky assessed that “the gameboard looks ‘incredibly grim’ to him, because from his perspective the field has made almost no progress on the alignment problem” LessWrong. That verdict matches the organization’s own retrospective conclusion, noted above, that its bet on mathematical formalization underdelivered relative to capability advances LessWrong.

Despite these internal assessments, MIRI received recognition as a recommended charity from Raising for Effective Giving, which cited the organization’s impact potential in preventing “vast amounts of future suffering,” the funding gap for AI safety work, and its effective methodology with historical precedent in computer science foundations Raising for Effective Giving.

MIRI has faced allegations of cult-like dynamics, with critics claiming that “MIRI and LW [are] just an Eliezer-worshipping cult” LessWrong. A LessWrong compilation of MIRI criticisms identified Holden Karnofsky’s critique as “the best criticism of MIRI as an organisation” LessWrong.

Even major funders expressed reservations. Open Philanthropy’s 2016 evaluation included significant concerns about MIRI’s Agent Foundations research agenda, though they continued supporting the organization for other reasons Open Philanthropy.

Critics have questioned whether theoretical work can be done so far in advance of testing and experimentation LessWrong. This challenge to MIRI’s highly theoretical approach proved prescient, as the organization itself later acknowledged that its primary research initiative had “largely failed” MIRI 2024 Update.

Several core technical assumptions have faced criticism:

Generalization thesis vagueness: Critics challenge MIRI’s “generalization thesis” - the idea that sufficiently capable systems exhibiting markers of goal-directedness will generalize to dangerous varieties of it - as “unsatisfyingly vague” LessWrong.

Goal-directedness concept: Critics note that “whether coding assistants are less ‘generally goal-directed’ than a hypothetical machine that manipulates users… is actually theoretically undecidable,” questioning whether goal-directedness will cause the behaviors MIRI worries about LessWrong.

MIRI’s current advocacy for shutting down AI research has drawn significant pushback. Critics argue this position “would obviously be very difficult, and very damaging (because we don’t get the benefits of AI for all time it’s shut down)” LessWrong.

MIRI was among the first organizations to recognize the future invention of artificial superintelligence as the most important and potentially catastrophic event in the twenty-first century MIRI About. This early recognition helped establish AI safety as a legitimate field of study and influenced the development of alignment research at major AI laboratories.

The organization’s alignment with effective altruism principles Wikipedia and its role in hosting the Singularity Summit contributed to raising awareness of AI safety concerns among philanthropists, researchers, and policymakers. MIRI’s work influenced the emergence of AI safety as a funded research area, even as its own technical research program ultimately failed to achieve its goals.

The organization’s trajectory - from pioneering AI safety work to acknowledging research failure and pivoting to policy advocacy - represents a cautionary case study in the challenges of theoretical safety research conducted far in advance of the systems it aims to protect against.