Longterm Wiki

AI Safety Institutes (AISIs)

Introduced: UK (2023), US (2024), others planned

Government-run institutions dedicated to evaluating frontier AI systems for dangerous capabilities and safety properties. Pioneered by the UK AISI in 2023, with analogues in the US (US AISI), EU, Japan, and elsewhere. They play a key role in pre-deployment evaluations and in setting responsible scaling policy thresholds.

Related Pages

Top Related Pages

Organizations

US AI Safety Institute
UK AI Safety Institute
METR

Risks

Bioweapons Risk
Cyberweapons Risk

Approaches

AI Governance Coordination Technologies
AI Safety Intervention Portfolio

Analysis

AI Safety Intervention Effectiveness Matrix
AI Lab Whistleblower Dynamics Model

Policy

Bletchley Declaration
Singapore Consensus on AI Safety Research Priorities
International AI Safety Summit Series

Concepts

State Capacity and AI Governance
Self-Improvement and Recursive Enhancement

Safety Research

AI Evaluations

Key Debates

AI Governance and Policy
AI Structural Risk Cruxes

Other

Elizabeth Kelly

Quick Facts

Introduced: UK (2023), US (2024), others planned

Sources