Government AI Safety Organizations (Overview)
Overview
Governments have begun establishing dedicated institutions to address AI safety risks, with AI Safety Institutes (AISIs) emerging as a key organizational model since 2023. These bodies operate with public mandates and budgets that distinguish them from the largely philanthropically funded nonprofit landscape, though they face different constraints, including political cycles and bureaucratic processes.
National AI Safety Institutes

| Organization | Country | Founded | Focus | Budget |
|---|---|---|---|---|
| UK AI Safety Institute | UK | 2023 | Frontier model evaluation, safety research | ≈$65M |
| US AI Safety Institute | US | 2024 | Standards development, model evaluation | ≈$47.7M requested |
| NIST AI | US | Ongoing | AI Risk Management Framework, standards | Part of NIST budget |
The UK AISI was the first national AI Safety Institute, established following the AI Safety Summit at Bletchley Park in November 2023 (it was renamed the AI Security Institute in February 2025). The US AISI was established within NIST in 2024. Both conduct pre-deployment evaluations of frontier AI models.
Intergovernmental Bodies
Global Partnership on Artificial Intelligence (GPAI): Multilateral initiative with 29 member countries working on responsible AI development and governance
International Network of AI Safety Institutes
As of early 2026, 11+ countries have established or announced AI Safety Institutes, forming a growing network for international safety coordination. Members share evaluation methodologies, coordinate on frontier model assessments, and develop common safety benchmarks. Key members include the UK, US, Japan, Canada, France, and South Korea, with India joining in 2026.
Key Dynamics
Political vulnerability: Government AI safety bodies are subject to changes in political leadership and priorities. The US AISI's mandate and funding depend on congressional and executive support, which can shift between administrations.
Relationship with labs: AISIs must balance cooperative relationships with frontier labs (needed for access to models) against independent oversight mandates. The UK AISI has voluntary agreements with major labs for pre-deployment access.
Complementarity with nonprofits: Government bodies focus on standards, regulation, and institutional evaluation capacity, while nonprofit safety organizations (like METR and Apollo Research) conduct more specialized technical research. There is increasing collaboration between the two.