MIRI (Machine Intelligence Research Institute)
- Claim: MIRI, the founding organization of AI alignment research with a >$5M annual budget, has abandoned technical research and now recommends against technical alignment careers, estimating >90% P(doom) by 2027-2030. (S: 4.5, I: 4.5, A: 4.0)
- Counterintuitive: The 'sharp left turn' scenario, in which alignment approaches work during training but break down when AI rapidly becomes superhuman, motivates MIRI's skepticism of the iterative alignment approaches used by Anthropic and other labs. (S: 3.5, I: 4.5, A: 4.0)
- Gap: After 8 years of agent foundations research (2012-2020) and 2 years attempting empirical alignment (2020-2022), MIRI concluded both approaches are fundamentally insufficient for superintelligence alignment. (S: 4.0, I: 4.0, A: 3.5)
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Historical Significance | First organization to focus on ASI alignment as a technical problem | Among the first to recognize ASI as the most important event of the 21st century MIRI About |
| Current Strategy | Policy advocacy to halt AI development | Major 2024 pivot after acknowledging alignment research “extremely unlikely to succeed in time” MIRI About |
| Research Output | Minimal recent publications | Near-zero new publications from core researchers between 2018 and 2022 LessWrong |
| Financial Status | Operating at deficit with ≈2 year runway | $4.97M net loss in 2024, $15.24M in net assets ProPublica |
| Field Impact | Controversial but influential | Raised awareness but faced criticism for theoretical approach and failed research programs LessWrong |
Overview
The Machine Intelligence Research Institute (MIRI) is a 501(c)(3) nonprofit research organization based in Berkeley, California, founded in 2000 by Eliezer Yudkowsky with funding from Brian and Sabine Atkins Wikipedia. Originally named the Singularity Institute for Artificial Intelligence (SIAI), MIRI was the first organization to advocate for and work on artificial superintelligence (ASI) alignment as a technical problem MIRI About.
The organization has undergone several dramatic strategic pivots throughout its 24-year history. Initially created to accelerate AI development, MIRI shifted focus in 2005 when Yudkowsky became concerned about superintelligent AI risks Wikipedia. After two decades of technical research, MIRI announced a major strategy pivot in 2024, moving away from alignment research toward policy advocacy aimed at halting the development of increasingly general AI models MIRI About. This shift came after the organization acknowledged that its primary research initiative had “largely failed” MIRI 2024 Update.
With approximately 42 employees ProPublica and an interdisciplinary approach that deliberately hires from computer science, economics, mathematics, and philosophy backgrounds Future of Life Institute, MIRI aligns itself with the principles and objectives of the effective altruism movement Wikipedia.
History
Founding and Early Years (2000-2005)
MIRI was established in 2000 with a paradoxical original mission: accelerating AI development. The organization operated under this goal until 2005, when founder Eliezer Yudkowsky’s concerns about superintelligent AI risks prompted a fundamental reorientation toward AI safety Wikipedia. That same year, the organization relocated from Atlanta to Silicon Valley Wikipedia, positioning itself at the heart of the technology industry.
Singularity Summit Era (2006-2012)
Beginning in 2006, MIRI organized the annual Singularity Summit to discuss AI’s future and risks, initially in cooperation with Stanford University and with funding from Peter Thiel Wikipedia. These summits became prominent venues for discussing the implications of advanced artificial intelligence and helped raise awareness of AI safety concerns within both academic and technology communities.
In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University Wikipedia, marking the end of this public outreach phase. The following month, in January 2013, the organization adopted its current name: Machine Intelligence Research Institute Wikipedia.
Technical Research Focus (2012-2020)
During this period, MIRI pursued an ambitious agenda focused on mathematical foundations of AI safety. The organization published actively on topics including logical uncertainty and probabilistic reasoning, decision theory and agent foundations, AI alignment and value learning, corrigibility and interruptibility, formal verification of AI systems, and mathematical foundations of safe AI MIRI Publications.
MIRI received significant funding during this era. Open Philanthropy provided $2,652,500 over two years in February 2019 for general support, increasing their annual support from $1.4 million in 2018 to $2.31 million in 2019 Open Philanthropy. In April 2020, Open Philanthropy awarded MIRI its largest grant to date: $7,703,750, with $6.24 million from Open Philanthropy’s main funders and $1.46 million from a partnership with BitMEX co-founder Ben Delo Open Philanthropy. At this peak, Open Philanthropy was providing approximately 60% of MIRI’s predicted budgets for 2020-2021 Open Philanthropy.
The organization also experienced a dramatic revenue spike to $25.6 million in 2021 ProPublica, driven in part by a donation of several million dollars’ worth of Ethereum from Vitalik Buterin Open Philanthropy.
Strategic Collapse and Pivot (2020-Present)
The 2020 update revealed a critical turning point: MIRI’s primary research initiative had “largely failed,” prompting years of regrouping MIRI 2024 Update. By 2021, MIRI announced a reduced emphasis on technical research in favor of advocacy and policy influence, citing diminishing returns on alignment progress. Core researchers produced near-zero new publications between 2018 and 2022 LessWrong.
The organization also became “more pessimistic that such work will have time to bear fruit” regarding technical alignment research without policy interventions MIRI 2024 Update. This assessment culminated in the 2024 announcement of a major strategy pivot away from alignment research entirely, with MIRI’s current focus now on attempting to halt the development of increasingly general AI models via discussions with policymakers about the extreme risks artificial superintelligence poses MIRI About.
Current Operations
Organizational Structure
MIRI operates as a 501(c)(3) nonprofit with approximately 42 employees ProPublica. The organization deliberately hires from diverse backgrounds including computer science, economics, mathematics, and philosophy, recognizing that AI safety requires interdisciplinary perspectives Future of Life Institute.
Key personnel and their reported 2024 compensation include:
- Eliezer Yudkowsky - Chair and Head Researcher ($599,970 compensation in 2024) ProPublica
- Malo Bourgon - CEO ($241,531 compensation in 2024) ProPublica
- Nate Soares - President ($236,614 compensation in 2024) ProPublica
- Scott Garrabrant - Employee ($296,735 compensation in 2024) ProPublica
- Benya Fallenstein - Research Fellow ($239,947 compensation in 2024) ProPublica
Financial Position
MIRI’s financial situation has deteriorated significantly from its 2021 peak. The organization reported $1,534,913 in total revenue for 2024, while expenses reached $6,508,701, resulting in a net loss of $4,973,788 ProPublica. Despite this deficit, MIRI maintains $16,493,789 in total assets and $15,242,215 in net assets ProPublica, providing approximately two years of operational runway ProPublica.
Executive compensation represented $3,132,826, or 48.1% of total expenses in 2024 ProPublica. The organization projected spending $5.6 million in 2024 and expects expenses of $6.5 million to $7 million in 2025 ProPublica.
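As a sanity check on these figures, the following minimal Python sketch recomputes the deficit, the compensation share, and the rough runway from the ProPublica numbers cited above. The runway estimate is an assumption on our part: it simply divides net assets by the upper end of the projected 2025 expenses and ignores any future revenue.

```python
# Back-of-the-envelope check of MIRI's 2024 financials (figures from ProPublica).
revenue_2024 = 1_534_913             # total revenue reported for 2024
expenses_2024 = 6_508_701            # total expenses reported for 2024
net_assets = 15_242_215              # net assets at end of 2024
exec_comp = 3_132_826                # reported executive compensation
projected_expenses_2025 = 7_000_000  # upper end of the $6.5M-$7M projection

net_loss = expenses_2024 - revenue_2024              # 4,973,788 -> the reported ~$4.97M deficit
comp_share = exec_comp / expenses_2024               # ~0.481 -> the reported 48.1% of expenses
runway_years = net_assets / projected_expenses_2025  # ~2.2 years, assuming no further revenue

print(f"Net loss:        ${net_loss:,}")
print(f"Comp share:      {comp_share:.1%}")
print(f"Runway estimate: {runway_years:.1f} years")
```

Running this reproduces the roughly two-year runway stated above; a more favorable estimate that nets continuing revenue against expenses would stretch closer to three years.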
Current Strategy
MIRI’s current focus is on attempting to halt the development of increasingly general AI models through discussions with policymakers about the extreme risks artificial superintelligence poses MIRI About. This represents a dramatic departure from the organization’s historical emphasis on technical alignment research.
The organization acknowledges the pessimistic nature of this approach, stating that policy efforts are “very unlikely to save us, but all other plans we know of seem even less likely to succeed” MIRI 2024 Update. This reflects a belief that alignment research is “extremely unlikely to succeed in time to prevent an unprecedented catastrophe” MIRI About.
Research Legacy
Publication Timeline
MIRI’s research output followed a clear trajectory. Between 2012 and 2016, the organization actively published on topics like logical uncertainty, decision theory, and AI alignment MIRI Publications. However, from 2018 to 2022, core researchers produced near-zero new publications LessWrong, reflecting the organization’s acknowledgment that its foundational bet on mathematical formalization had underdelivered relative to capability advances LessWrong.
Research Areas
MIRI’s technical work focused on six core areas, all aimed at developing mathematical foundations for safe artificial intelligence:
- Logical uncertainty and probabilistic reasoning - Developing frameworks for reasoning under logical uncertainty MIRI Publications
- Decision theory and agent foundations - Theoretical work on how rational agents should make decisions MIRI Publications
- AI alignment and value learning - Methods for ensuring AI systems pursue intended goals MIRI Publications
- Corrigibility and interruptibility - Designing systems that can be safely modified or shut down (see the toy sketch after this list) MIRI Publications
- Formal verification of AI systems - Mathematical proofs of system properties MIRI Publications
- Mathematical foundations of safe AI - Fundamental theoretical work underlying safety approaches MIRI Publications
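To make the corrigibility item above concrete, here is a small toy sketch (ours, not MIRI’s formalism) of the shutdown-incentive problem that corrigibility research targets: a naive reward maximizer prefers disabling its off-switch, while an indifference-style correction, loosely in the spirit of proposals in the corrigibility literature, removes that incentive. All rewards and probabilities below are illustrative assumptions.

```python
# Toy model (illustrative numbers, not MIRI's actual formalism): why a reward
# maximizer may resist shutdown, and how an indifference-style correction helps.
P_SHUTDOWN = 0.5    # assumed chance the overseer presses the off-switch
TASK_REWARD = 10.0  # assumed reward for completing the task
DISABLE_COST = 1.0  # assumed small effort cost of disabling the switch

def expected_reward(action: str, corrected: bool) -> float:
    """Expected reward of 'comply' vs 'disable' when shutdown may occur."""
    if action == "disable":
        # Switch disabled: the task always completes, minus the effort cost.
        return TASK_REWARD - DISABLE_COST
    # 'comply': with probability P_SHUTDOWN the agent is switched off early
    # and earns nothing from the task.
    value = (1 - P_SHUTDOWN) * TASK_REWARD
    if corrected:
        # Indifference-style patch: if shutdown occurs, credit the agent with
        # what it would have expected from continuing, so it gains nothing by
        # preventing (or causing) shutdown.
        value += P_SHUTDOWN * TASK_REWARD
    return value

for corrected in (False, True):
    scores = {a: expected_reward(a, corrected) for a in ("comply", "disable")}
    best = max(scores, key=scores.get)
    print(f"corrected={corrected}: {scores} -> prefers '{best}'")
```

With the uncorrected objective the agent prefers "disable" (9.0 > 5.0); with the correction, complying dominates (10.0 > 9.0), which is the qualitative point this line of research tries to capture.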
Assessment of Research Impact
Eliezer Yudkowsky assessed that “the gameboard looks ‘incredibly grim’ to him, because from his perspective the field has made almost no progress on the alignment problem” LessWrong. This evaluation is consistent with the organization’s own acknowledgment that its bet on mathematical formalization underdelivered relative to capability advances LessWrong.
Despite these internal assessments, MIRI received recognition as a recommended charity from Raising for Effective Giving, which cited the organization’s impact potential in preventing “vast amounts of future suffering,” the funding gap for AI safety work, and its effective methodology with historical precedent in computer science foundations Raising for Effective Giving.
Criticisms and Controversies
Organizational Criticisms
MIRI has faced allegations of cult-like dynamics, with critics claiming that “MIRI and LW [are] just an Eliezer-worshipping cult” LessWrong. A LessWrong compilation of MIRI criticisms identified Holden Karnofsky’s critique as “the best criticism of MIRI as an organisation” LessWrong.
Even major funders expressed reservations. Open Philanthropy’s 2016 evaluation included significant concerns about MIRI’s Agent Foundations research agenda, though they continued supporting the organization for other reasons Open Philanthropy.
Research Methodology Critiques
Critics have questioned whether theoretical work can be done so far in advance of testing and experimentation LessWrong. This challenge to MIRI’s highly theoretical approach proved prescient, as the organization itself later acknowledged that its primary research initiative had “largely failed” MIRI 2024 Update.
Technical Disagreements
Several core technical assumptions have faced criticism:
Generalization thesis vagueness: Critics describe MIRI’s “generalization thesis” - the idea that systems exhibiting markers of goal-directedness will, through generalization, pick up dangerous varieties of it - as “unsatisfyingly vague” LessWrong.
Goal-directedness concept: Critics note that “whether coding assistants are less ‘generally goal-directed’ than a hypothetical machine that manipulates users… is actually theoretically undecidable,” questioning whether goal-directedness will cause the behaviors MIRI worries about LessWrong.
Policy Position Critiques
MIRI’s current advocacy for shutting down AI research has drawn significant pushback. Critics argue this position “would obviously be very difficult, and very damaging (because we don’t get the benefits of AI for all time it’s shut down)” LessWrong.
Influence and Legacy
MIRI was among the first organizations to recognize the future invention of artificial superintelligence as the most important and potentially catastrophic event in the twenty-first century MIRI About. This early recognition helped establish AI safety as a legitimate field of study and influenced the development of alignment research at major AI laboratories.
The organization’s alignment with effective altruism principles Wikipedia and its role in hosting the Singularity Summit contributed to raising awareness of AI safety concerns among philanthropists, researchers, and policymakers. MIRI’s work influenced the emergence of AI safety as a funded research area, even as its own technical research program ultimately failed to achieve its goals.
The organization’s trajectory - from pioneering AI safety work to acknowledging research failure and pivoting to policy advocacy - represents a cautionary case study in the challenges of theoretical safety research conducted far in advance of the systems it aims to protect against.
Sources and Further Reading
Primary Sources
- MIRI About Page - Official organizational overview
- MIRI 2024 Mission and Strategy Update - Announcement of policy pivot
- All MIRI Publications - Complete research output
Financial and Organizational Information
- MIRI on ProPublica Nonprofit Explorer - Detailed financial filings
- Open Philanthropy - MIRI General Support (2019) - Major grant details
Critical Analyses
- Steelmanning MIRI Critics - LessWrong - Compilation of major criticisms
- MIRI: The Danger of Good Intentions - Future of Life Institute - Profile and assessment
General Background
- Machine Intelligence Research Institute - Wikipedia - Historical overview
- MIRI - Raising for Effective Giving - Charity evaluation