MIT AI Risk Repository

At a glance:

| Dimension | Assessment | Evidence |
| --- | --- | --- |
| Coverage | Comprehensive | 1,700+ risks from 65+ frameworks |
| Data Freshness | Quarterly updates | Regular additions since August 2024 |
| Accessibility | High | Free access via Google Sheets/OneDrive |
| Methodology | Rigorous | Systematic review, expert consultation |
| Target Audience | Broad | Industry, policymakers, academics, auditors |
| Maintenance | Active | MIT FutureTech team |

| Attribute | Details |
| --- | --- |
| Name | MIT AI Risk Repository |
| Organization | MIT FutureTech / MIT AI Risk Initiative |
| Lead Researcher | Peter Slattery |
| Team | Alexander Saeri, Michael Noetel, Jess Graham, Neil Thompson |
| Website | airisk.mit.edu |
| Paper | arXiv:2408.12622 |
| License | CC BY 4.0 |
| Data Access | Google Sheets, OneDrive |

The MIT AI Risk Repository is a living database cataloging over 1,700 AI risks extracted from 65+ published frameworks and taxonomies. It represents the first comprehensive attempt to curate, analyze, and organize AI risk frameworks into a publicly accessible, categorized database.

The repository serves multiple stakeholders:

  • Industry: Identifying risks for product development and compliance
  • Policymakers: Understanding the risk landscape for regulation
  • Academics: Research foundation and gap analysis
  • Risk Evaluators: Structured framework for auditing AI systems

Before the repository, AI risk knowledge was fragmented across dozens of separate frameworks, each with different terminology, scope, and categorization schemes. This fragmentation made it difficult to:

  • Compare risks across frameworks
  • Identify gaps in coverage
  • Develop comprehensive risk management strategies
  • Coordinate across organizations and jurisdictions

The repository provides a unified view, extracting risks from existing frameworks and organizing them using consistent taxonomies.

The repository uses two complementary classification systems:

The causal taxonomy classifies risks by how, when, and why they occur:

| Dimension | Categories | What it captures |
| --- | --- | --- |
| Entity | Human, AI | Who causes the risk |
| Intentionality | Intentional, Unintentional | Malicious vs. accidental |
| Timing | Pre-deployment, Post-deployment | When the risk manifests |

This enables filtering by causal pathway—e.g., “show me all unintentional AI-caused post-deployment risks.”
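
For instance, with the database exported from Google Sheets to a local CSV, such a filter is a few lines of pandas. This is a minimal sketch: the file name and the column labels ("Entity", "Intentionality", "Timing") are assumptions about the export's schema, not the sheet's actual headers.

```python
import pandas as pd

# Hypothetical local export of the repository's main risk sheet.
risks = pd.read_csv("ai_risk_repository.csv")

# "Show me all unintentional AI-caused post-deployment risks."
subset = risks[
    (risks["Entity"] == "AI")
    & (risks["Intentionality"] == "Unintentional")
    & (risks["Timing"] == "Post-deployment")
]
print(f"{len(subset)} matching risks")
```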

The domain taxonomy organizes risks into 7 domains and 24 subdomains:

| Domain | Subdomains | Examples |
| --- | --- | --- |
| Discrimination & Toxicity | Bias, unfairness, harmful content | Algorithmic discrimination |
| Privacy & Security | Data breaches, surveillance | Model inversion attacks |
| Misinformation | Deepfakes, manipulation | AI-generated disinformation |
| Malicious Actors & Misuse | Cyberattacks, weapons | Autonomous weapons |
| Human-Computer Interaction | Overreliance, manipulation | Automation bias |
| Socioeconomic & Environmental | Job displacement, energy use | Labor market disruption |
| AI System Safety | Failures, alignment issues | Goal misgeneralization |

The April 2025 update added a new subdomain: multi-agent risks.
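
Using the same hypothetical export, a quick tally shows where the catalog concentrates across these domains (the "Domain" column name is again an assumption about the sheet's headers):

```python
import pandas as pd

risks = pd.read_csv("ai_risk_repository.csv")  # hypothetical export, as above

# Count risks per domain, largest first.
print(risks.groupby("Domain").size().sort_values(ascending=False))
```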

| Version | Date | Frameworks | Risks | Key Additions |
| --- | --- | --- | --- | --- |
| v1 | August 2024 | 43 | ≈770 | Initial release |
| v2 | December 2024 | 56 | ≈1,070 | +13 frameworks, +300 risks |
| v3 | April 2025 | 65+ | 1,612 | +22 frameworks, multi-agent subdomain |
| Current | December 2025 | 74+ | 1,700+ | Ongoing additions |

Researchers used multiple methods to identify source frameworks:

  1. Systematic search strategy: Academic databases, grey literature
  2. Forward/backward searching: References within identified frameworks
  3. Expert consultation: Input from AI safety researchers

For each framework, extraction proceeds in four steps (a sketch of the resulting record follows this list):

  1. Extract individual risk categories
  2. Normalize terminology to consistent vocabulary
  3. Classify using both taxonomies
  4. Link to source material
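
A sketch of what one extracted record might look like after these steps; the field names and the example values are illustrative, not the repository's actual schema.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    framework: str        # source framework (step 4 links back to it)
    raw_text: str         # risk as worded in the source (step 1)
    normalized_text: str  # terminology mapped to a consistent vocabulary (step 2)
    entity: str           # causal taxonomy: "Human" or "AI" (step 3)
    intentionality: str   # "Intentional" or "Unintentional"
    timing: str           # "Pre-deployment" or "Post-deployment"
    domain: str           # one of the 7 domains
    subdomain: str        # one of the 24 subdomains

# A purely hypothetical entry, for illustration.
entry = RiskEntry(
    framework="Example Framework (2023)",
    raw_text="Models may reproduce societal biases present in training data",
    normalized_text="Algorithmic bias in generated outputs",
    entity="AI",
    intentionality="Unintentional",
    timing="Post-deployment",
    domain="Discrimination & Toxicity",
    subdomain="Bias",
)
```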

Quality is maintained through:

  • Best fit framework synthesis: Iterative refinement of taxonomies
  • Expert review: Validation by AI safety researchers
  • Regular updates: Quarterly incorporation of new frameworks

Organizations use the repository to:

| Use Case | Application |
| --- | --- |
| Gap Analysis | Identify risks not covered by current policies (sketched below) |
| Compliance Mapping | Match internal categories to regulatory frameworks |
| Audit Checklists | Structured approach to AI system review |
| Training Materials | Comprehensive risk awareness resources |
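
A minimal sketch of the gap-analysis use case flagged above, assuming an organization keys its internal risk register to the repository's subdomain labels (the "Subdomain" column name and the register contents are illustrative):

```python
import pandas as pd

risks = pd.read_csv("ai_risk_repository.csv")  # hypothetical local export
repository_subdomains = set(risks["Subdomain"].dropna().unique())

# Hypothetical internal register, expressed in repository subdomain labels.
internal_register = {"Bias", "Data breaches", "Job displacement"}

uncovered = repository_subdomains - internal_register
print(f"{len(uncovered)} subdomains not covered by current policies:")
for name in sorted(uncovered):
    print(" -", name)
```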

Academics leverage the database for:

  • Systematic reviews: Foundation for literature analysis
  • Taxonomy development: Building on established categorization
  • Comparative analysis: Understanding how frameworks differ
  • Trend identification: Tracking emerging risk categories

Policymakers reference the repository for:

  • Regulatory scope: Understanding what risks exist to regulate
  • International coordination: Common vocabulary across jurisdictions
  • Framework comparison: Evaluating existing approaches

In December 2025, the MIT team extended the repository with a Risk Mitigations database, mapping interventions to the risks they address. This enables:

  • Identifying which risks lack adequate mitigations (a query sketch follows this list)
  • Comparing mitigation strategies across domains
  • Prioritizing intervention research
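
A sketch of that first use, finding risks with no mapped mitigation. The file names and the shared risk_id key are assumptions about how the two databases link; the actual mapping format may differ.

```python
import pandas as pd

risks = pd.read_csv("ai_risk_repository.csv")
mitigations = pd.read_csv("ai_risk_mitigations.csv")

# Left-join risks to mitigations; unmatched rows have no mapped mitigation.
merged = risks.merge(mitigations, on="risk_id", how="left", indicator=True)
unmitigated = merged[merged["_merge"] == "left_only"]
print(f"{len(unmitigated)} risks have no mapped mitigation")
```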

Strengths:

| Strength | Evidence |
| --- | --- |
| Comprehensive coverage | 1,700+ risks from 65+ frameworks |
| Rigorous methodology | Systematic review, expert validation |
| Dual taxonomy | Enables multiple analysis perspectives |
| Regular updates | Quarterly additions of new frameworks |
| Open access | CC BY 4.0 license, free database access |
| Institutional backing | MIT credibility and resources |

Limitations:

| Limitation | Impact |
| --- | --- |
| Framework-dependent | Only captures risks identified in published sources |
| No quantification | Doesn't assess likelihood or severity |
| Extraction methodology | Interpretation decisions affect categorization |
| English-language focus | May miss non-English frameworks |
| Static snapshots | Individual risks don't track evolution over time |
| Aggregation challenges | Similar risks may appear duplicated across frameworks |

Compared with related resources:

| Resource | Focus | Coverage | Updates |
| --- | --- | --- | --- |
| MIT AI Risk Repository | Comprehensive catalog | 1,700+ risks | Quarterly |
| NIST AI RMF | Risk management process | Process-focused | Periodic |
| EU AI Act Categories | Regulatory compliance | Regulatory risks | Legislative cycle |
| AISafety.info | Public education | Conceptual | Community-driven |
| This Wiki (Longterm) | Prioritization analysis | X-risk focused | Ongoing |