MIT AI Risk Repository
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Coverage | Comprehensive | 1,700+ risks from 74+ frameworks |
| Data Freshness | Quarterly updates | Regular additions since August 2024 |
| Accessibility | High | Free access via Google Sheets/OneDrive |
| Methodology | Rigorous | Systematic review, expert consultation |
| Target Audience | Broad | Industry, policymakers, academics, auditors |
| Maintenance | Active | MIT FutureTech team |
Project Details
| Attribute | Details |
|---|---|
| Name | MIT AI Risk Repository |
| Organization | MIT FutureTech / MIT AI Risk Initiative |
| Lead Researcher | Peter Slattery |
| Team | Alexander Saeri, Michael Noetel, Jess Graham, Neil Thompson |
| Website | airisk.mit.edu |
| Paper | arXiv:2408.12622 |
| License | CC BY 4.0 |
| Data Access | Google Sheets, OneDrive |
Overview
The MIT AI Risk Repository is a living database cataloging over 1,700 AI risks extracted from 74+ published frameworks and taxonomies. It represents the first comprehensive attempt to curate, analyze, and organize AI risk frameworks into a publicly accessible, categorized database.
The repository serves multiple stakeholders:
- Industry: Identifying risks for product development and compliance
- Policymakers: Understanding the risk landscape for regulation
- Academics: Research foundation and gap analysis
- Risk Evaluators: Structured framework for auditing AI systems
The Problem It Solves
Before the repository, AI risk knowledge was fragmented across dozens of separate frameworks, each with different terminology, scope, and categorization schemes. This fragmentation made it difficult to:
- Compare risks across frameworks
- Identify gaps in coverage
- Develop comprehensive risk management strategies
- Coordinate across organizations and jurisdictions
The repository provides a unified view, extracting risks from existing frameworks and organizing them using consistent taxonomies.
Dual Taxonomy Structure
The repository uses two complementary classification systems:
Causal Taxonomy
Classifies risks by how, when, and why they occur:
| Dimension | Categories | Examples |
|---|---|---|
| Entity | Human, AI | Who causes the risk |
| Intentionality | Intentional, Unintentional | Malicious vs. accidental |
| Timing | Pre-deployment, Post-deployment | When risk manifests |
This enables filtering by causal pathway: for example, "show me all unintentional, AI-caused, post-deployment risks."
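As a minimal illustration, such a filter is a few lines of pandas over a CSV export of the database. The file name and the column names (`Entity`, `Intentionality`, `Timing`) are assumptions drawn from the taxonomy dimensions above, not the sheet's confirmed headers:

```python
import pandas as pd

# Hypothetical local CSV export of the repository's Google Sheet.
risks = pd.read_csv("ai_risk_repository.csv")

# Filter by causal pathway: unintentional, AI-caused, post-deployment risks.
# Column names mirror the causal taxonomy dimensions; the real export
# may label them differently.
subset = risks[
    (risks["Entity"] == "AI")
    & (risks["Intentionality"] == "Unintentional")
    & (risks["Timing"] == "Post-deployment")
]
print(f"{len(subset)} risks match this causal pathway")
```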
Domain Taxonomy
Organizes risks into 7 domains and 24 subdomains:
| Domain | Subdomains | Examples |
|---|---|---|
| Discrimination & Toxicity | Bias, unfairness, harmful content | Algorithmic discrimination |
| Privacy & Security | Data breaches, surveillance | Model inversion attacks |
| Misinformation | Deepfakes, manipulation | AI-generated disinformation |
| Malicious Actors & Misuse | Cyberattacks, weapons | Autonomous weapons |
| Human-Computer Interaction | Overreliance, manipulation | Automation bias |
| Socioeconomic & Environmental | Job displacement, energy use | Labor market disruption |
| AI System Safety | Failures, alignment issues | Goal misgeneralization |
The April 2025 update added a new subdomain: multi-agent risks.
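Under the same assumed schema, a quick way to see where the catalog concentrates is to count entries per domain and subdomain. The `Domain` and `Subdomain` column names are again guesses at the export's headers:

```python
import pandas as pd

risks = pd.read_csv("ai_risk_repository.csv")  # same hypothetical export as above

# Count catalogued risks per (domain, subdomain) pair, largest first.
coverage = (
    risks.groupby(["Domain", "Subdomain"])
    .size()
    .sort_values(ascending=False)
)
print(coverage.head(10))
```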
Database Evolution
| Version | Date | Frameworks | Risks | Key Additions |
|---|---|---|---|---|
| v1 | August 2024 | 43 | ≈770 | Initial release |
| v2 | December 2024 | 56 | ≈1,070 | +13 frameworks, +300 risks |
| v3 | April 2025 | 65+ | 1,612 | +22 frameworks, multi-agent subdomain |
| Current | December 2025 | 74+ | 1,700+ | Ongoing additions |
Methodology
Framework Identification
Researchers used multiple methods to identify source frameworks:
- Systematic search strategy: Academic databases, grey literature
- Forward/backward searching: References within identified frameworks
- Expert consultation: Input from AI safety researchers
Risk Extraction
For each framework (a schematic record sketch follows this list):
- Extract individual risk categories
- Normalize terminology to consistent vocabulary
- Classify using both taxonomies
- Link to source material
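One way to picture the output of these steps is a record type that carries both taxonomy classifications plus a pointer back to the source framework. This is an illustrative sketch of such a normalized entry, not the team's published schema:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One extracted risk, classified under both taxonomies.

    Field names are illustrative assumptions, not the repository's
    actual column layout.
    """
    description: str      # normalized risk description
    entity: str           # causal taxonomy: "Human" or "AI"
    intentionality: str   # "Intentional" or "Unintentional"
    timing: str           # "Pre-deployment" or "Post-deployment"
    domain: str           # one of the 7 domains
    subdomain: str        # one of the 24 subdomains
    source: str           # citation linking back to the source framework

# Example entry, with hypothetical values.
entry = RiskEntry(
    description="Model pursues a proxy objective after distribution shift",
    entity="AI",
    intentionality="Unintentional",
    timing="Post-deployment",
    domain="AI System Safety",
    subdomain="Goal misgeneralization",
    source="Example framework (placeholder citation)",
)
```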
Quality Assurance
- Best fit framework synthesis: Iterative refinement of taxonomies
- Expert review: Validation by AI safety researchers
- Regular updates: Quarterly incorporation of new frameworks
Use Cases
Risk Management
Organizations use the repository to:
| Use Case | Application |
|---|---|
| Gap Analysis | Identify risks not covered by current policies (sketched below) |
| Compliance Mapping | Match internal categories to regulatory frameworks |
| Audit Checklists | Structured approach to AI system review |
| Training Materials | Comprehensive risk awareness resources |
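A gap analysis can be as simple as a set difference between the subdomains catalogued in the repository and the categories an organization's internal policy already covers. A minimal sketch, reusing the hypothetical CSV export and column names from earlier:

```python
import pandas as pd

risks = pd.read_csv("ai_risk_repository.csv")  # hypothetical export, as above

# All subdomains represented in the repository.
repository_subdomains = set(risks["Subdomain"].dropna().unique())

# Categories an internal AI policy currently addresses (illustrative values).
policy_coverage = {"Bias", "Data breaches", "Cyberattacks"}

# Subdomains with no corresponding internal policy: candidate gaps.
for subdomain in sorted(repository_subdomains - policy_coverage):
    print(f"Uncovered: {subdomain}")
```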
Research
Academics leverage the database for:
- Systematic reviews: Foundation for literature analysis
- Taxonomy development: Building on established categorization
- Comparative analysis: Understanding how frameworks differ
- Trend identification: Tracking emerging risk categories
Policy Development
Policymakers reference the repository for:
- Regulatory scope: Understanding what risks exist to regulate
- International coordination: Common vocabulary across jurisdictions
- Framework comparison: Evaluating existing approaches
Related Work: Risk Mitigations
In December 2025, the MIT team extended the repository with a Risk Mitigations database, mapping interventions to the risks they address. This enables:
- Identifying which risks lack adequate mitigations (sketched after this list)
- Comparing mitigation strategies across domains
- Prioritizing intervention research
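Finding risks with no mapped intervention is then a simple anti-join between the two databases. The sketch below assumes two CSV exports sharing a `risk_id` column, which is a guess at the schema rather than the published format:

```python
import pandas as pd

# Hypothetical exports: one row per risk, one row per (risk, mitigation) pair.
risks = pd.read_csv("risks.csv")              # assumed to include "risk_id"
mitigations = pd.read_csv("mitigations.csv")  # assumed to include "risk_id"

# Anti-join: risks whose risk_id never appears in the mitigations table.
unmitigated = risks[~risks["risk_id"].isin(mitigations["risk_id"])]
print(f"{len(unmitigated)} risks have no mapped mitigation")
```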
Strengths and Limitations
Strengths
| Strength | Evidence |
|---|---|
| Comprehensive coverage | 1,700+ risks from 74+ frameworks |
| Rigorous methodology | Systematic review, expert validation |
| Dual taxonomy | Enables multiple analysis perspectives |
| Regular updates | Quarterly additions of new frameworks |
| Open access | CC BY 4.0 license, free database access |
| Institutional backing | MIT credibility and resources |
Limitations
| Limitation | Impact |
|---|---|
| Framework-dependent | Only captures risks identified in published sources |
| No quantification | Doesn’t assess likelihood or severity |
| Extraction methodology | Interpretation decisions affect categorization |
| English-language focus | May miss non-English frameworks |
| Static snapshots | Individual risks don’t track evolution over time |
| Aggregation challenges | Similar risks may appear duplicated across frameworks |
Comparison with Other Resources
| Resource | Focus | Coverage | Updates |
|---|---|---|---|
| MIT AI Risk Repository | Comprehensive catalog | 1,700+ risks | Quarterly |
| NIST AI RMF | Risk management process | Process-focused | Periodic |
| EU AI Act Categories | Regulatory compliance | Regulatory risks | Legislative cycle |
| AISafety.info | Public education | Conceptual | Community-driven |
| This Wiki (Longterm) | Prioritization analysis | X-risk focused | Ongoing |