Longterm Wiki

Machine Intelligence Research Institute (MIRI)

Safety Organization
Founded: Jan 2000 · HQ: Berkeley, CA · intelligence.org

Also known as: MIRI, Singularity Institute, Singularity Institute for Artificial Intelligence, SIAI


The Machine Intelligence Research Institute (MIRI) is one of the oldest organizations focused on AI existential risk, founded in 2000 as the Singularity Institute for Artificial Intelligence (SIAI).

Revenue
$1.5M
as of 2024
Headcount
42
as of 2024
Total Funding Raised
$55M
as of 2025
Annual Expenses
$6.5M
as of 2024
Net Assets
$15M
as of 2024

Key Metrics

Revenue (ARR)

$1.5M (2024)
[Chart: annual run rate declined from $26M in 2021 to $1.5M in 2024.]

Headcount

42 (2024)
[Chart: employees grew from 28 in 2023 to 42 in 2024.]

Facts

10 entries
Financial
Total Funding Raised: $55M
Headcount: 42
Annual Expenses: $6.5M
Revenue: $1.5M
Net Assets: $15M
General
Website: http://intelligence.org/
Organization
Founded Date: Jan 2000
Headquarters: Berkeley, CA
Legal Structure: 501(c)(3) nonprofit
People

Other Data

Entity Assessments
5 entries
| Dimension | Rating | Evidence | Assessor |
| --- | --- | --- | --- |
| current-strategy | Policy advocacy to halt AI development | Major 2024 pivot after acknowledging alignment research "extremely unlikely to succeed in time" ([MIRI About](https://intelligence.org/about/)) | editorial |
| field-impact | Controversial but influential | Raised awareness but faced criticism for its theoretical approach and failed research programs ([LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics)) | editorial |
| financial-status | Operating at a deficit with ~2-year runway | $4.97M net loss in 2024, $15.24M in net assets ([ProPublica](https://projects.propublica.org/nonprofits/organizations/582565917)) | editorial |
| historical-significance | First organization to treat ASI alignment as a technical problem | Among the first to recognize ASI as the most important event of the 21st century ([MIRI About](https://intelligence.org/about/)) | editorial |
| research-output | Minimal recent publications | Near-zero new publications from core researchers between 2018 and 2022 ([LessWrong](https://www.lesswrong.com/posts/rfNHWe5JWhGuSqMHN/steelmanning-miri-critics)) | editorial |
Entity Events
9 entries
| Title | Date | Event Type | Description | Significance |
| --- | --- | --- | --- | --- |
| Strategic pivot away from alignment research | 2024 | pivot | 2024 announcement; current focus on attempting to halt development of increasingly general AI models via discussions with policymakers about extreme risks. | major |
| $4.3M Ethereum donation from Vitalik Buterin | 2021-05 | funding | Contributed to a revenue spike to $25.6M in 2021. | major |
| Largest single Open Philanthropy grant ($7.7M) | 2020-04 | funding | $6.24M from main OP funders plus $1.46M from BitMEX co-founder Ben Delo. At this peak, OP provided ~60% of MIRI's projected budgets for 2020-2021. | major |
| Open Philanthropy two-year general support grant ($2.65M) | 2019-02 | funding | Provided $2,652,500 over two years; OP support grew from $1.4M (2018) to $2.31M (2019). | moderate |
| Renamed to Machine Intelligence Research Institute | 2013-01 | pivot | | major |
| Sold name, web domain, and Singularity Summit to Singularity University | 2012-12 | pivot | Marked the end of the public-outreach phase. | major |
| First Singularity Summit | 2006 | launch | Annual summit organized in cooperation with Stanford University, with funding from Peter Thiel. | moderate |
| Reorientation toward AI safety | 2005 | pivot | Yudkowsky's concerns about superintelligent AI risks prompted a fundamental reorientation toward AI safety; the organization also relocated from Atlanta to Silicon Valley that year. | major |
| Singularity Institute for Artificial Intelligence founded | 2000 | founding | Founded by Eliezer Yudkowsky with the original (paradoxical) mission of accelerating AI development. | major |

Divisions

1 entry

Team: Core technical research on mathematical foundations of AI alignment, including agent foundations and decision theory.

Prediction Markets

10 active

Related Wiki Pages

Top Related Pages

Approaches

AI Alignment
Agent Foundations

Analysis

Instrumental Convergence Framework
AI Safety Multi-Actor Strategic Landscape
Donations List Website
Timelines Wiki

Policy

Executive Order 14179: Removing Barriers to American Leadership in AI

Key Debates

AI Alignment Research Agendas
AI Accident Risk Cruxes
Why Alignment Might Be Hard

Risks

Corrigibility Failure

Other

Corrigibility
Nate Soares
AI Control

Organizations

Redwood Research

Concepts

Existential Risk from AI
EA Epistemic Failures in the FTX Era
Situational Awareness
Autonomous Coding

Historical

Deep Learning Revolution Era