# AI Safety Summit (Bletchley Park)

*Historical*

International summits convening governments and AI labs to address AI safety.
## Related

### Organizations

- **UK AI Safety Institute** — renamed the AI Security Institute in February 2025; operates with ~30 technical staff and a £50M annual budget, conducting frontier model evaluations.
- **GovAI** — AI policy research organization with ~15–20 staff, funded primarily by Coefficient Giving ($1.8M+ in 2023–2024), that has trained 100+ governance researchers through fellowships.
- **Japan AI Safety Institute** — Japan's national AI safety institute, established in February 2024 under the Ministry of Economy, Trade and Industry (METI); focuses on AI standards research and evaluation frameworks.
- **Singapore AI Safety Institute** — Singapore's national AI safety testing institute, established in February 2024 under the Infocomm Media Development Authority (IMDA); focuses on testing frameworks and regional AI safety coordination.
- **Canadian AI Safety Institute** — Canada's national AI safety institute, launched in November 2024; received C$50M (~$37M USD) in initial pledged funding from the Canadian government.
- **EU AI Office** — the European Commission's dedicated office for AI governance, established in February 2024; responsible for enforcing the EU AI Act and developing standards and codes of practice.
### Key Debates

- **AI Governance and Policy** — analysis of AI governance mechanisms estimating a 30–50% probability of meaningful regulation by 2027 and 5–25% x-risk reduction potential through coordinated international approaches.
- **Open vs Closed Source AI** — analysis of the open- vs closed-source AI debate, documenting that the open-model performance gap narrowed from 8% to 1.7% in 2024, with 1.2B+ Llama downloads by April 2025.
### Policy

- **China AI Regulatory Framework** — analysis of China's AI regulatory framework covering 5+ major regulations affecting 50,000+ companies, with enforcement focused on content control and social stability.
- **Compute Governance** — overview of U.S. AI chip export controls policy, documenting the evolution from blanket restrictions to case-by-case licensing.
- **US Executive Order on Safe, Secure, and Trustworthy AI** — Executive Order 14110 (Oct 2023) established compute thresholds (10^26 FLOP general, 10^23 biological) and created AISI, but was revoked after 15 months with ~85% completion.
### Risks

- **Multipolar Trap (AI Development)** — analysis of coordination failures in AI development using game theory, documenting how competitive dynamics between nations shape investment (US $109B vs China $9.3B in 2024, per Stanford HAI 2025).
- **AI Authoritarian Tools** — analysis documenting AI-enabled authoritarian tools across surveillance (350M+ cameras in China analyzing 25.9M faces daily per district) and censorship (22+ countries).
### Approaches

- **AI Governance Coordination Technologies** — analysis of coordination mechanisms for AI safety, showing racing dynamics could compress safety timelines by 2–5 years, with $500M+ in government investment in AI Safety Institutes.
- **AI Safety Cases** — structured arguments adapted from nuclear and aviation practice to justify AI system safety, with UK AISI publishing templates in 2024 and 3 of 4 frontier labs committing to implementation.
- **AI Evaluation** — overview of AI evaluation methods spanning dangerous capability assessment, safety properties, and deception detection.
### Analysis

- **AI Media-Policy Feedback Loop Model** — system dynamics model analyzing feedback loops between media coverage, public concern, and AI policy using coupled differential equations; finds a 6–18 month lag from coverage spikes to regulatory response.
- **OpenAI Foundation Governance Paradox** — the OpenAI Foundation holds Class N shares giving it exclusive power to appoint and remove all OpenAI Group PBC board members, yet 7 of 8 Foundation board members also serve on the for-profit board.
### Concepts

- **Dual-Use AI Technology** — technologies and research with both beneficial and harmful applications.
### Other

- **Yoshua Bengio** — biographical overview of Bengio's transition from deep learning pioneer (Turing Award 2018) to AI safety advocate, including his 2020 pivot at Mila toward safety research.
- **Stuart Russell** — British computer scientist (born 1962) and UC Berkeley professor who co-authored the dominant AI textbook *Artificial Intelligence: A Modern Approach*, used in over 1,500 universities.