AI Safety Institutes (AISIs)
Government-run institutions dedicated to evaluating frontier AI systems for dangerous capabilities and safety properties. Pioneered by the UK AISI in 2023, with analogues in the US (US AISI, housed within NIST), EU, Japan, and others. They play a key role in pre-deployment evaluations and in setting responsible scaling policy thresholds.
Related Pages
International Coordination Mechanisms
International coordination on AI safety involves multilateral treaties, bilateral dialogues, and institutional networks to manage AI risks globally.
International Compute Regimes
Multilateral coordination mechanisms for AI compute governance, exploring pathways from non-binding declarations to comprehensive treaties.
AI Whistleblower Protections
Legal and institutional frameworks for protecting AI researchers and employees who report safety concerns.
Model Registries
Centralized databases of frontier AI models that enable governments to track development, enforce safety requirements, and coordinate internationally.
AI Lab Safety Culture
Analyzes interventions to improve safety culture within AI labs; evidence from 2024-2025 shows significant gaps across companies.
Quick Facts
- Introduced: UK (2023), US (2024), others planned
Sources
- UK AI Safety Institute (UK Government)
- US AI Safety Institute (NIST)
- Inspect Framework (UK AISI)
- Seoul Declaration on AISI Network (Summit Participants)