Importance: 76.5/100 (High). Research Value: 18.5/100 (Minimal).
Dual-Use AI Technology
Concept
Technologies and research with both beneficial and harmful applications
Approaches
- AI Governance Coordination Technologies (Approach, quality 91/100): Comprehensive analysis of coordination mechanisms for AI safety showing racing dynamics could compress safety timelines by 2-5 years, with $500M+ government investment in AI Safety Institutes achi...
- AI Safety Cases (Approach, quality 91/100): Safety cases are structured arguments adapted from nuclear/aviation to justify AI system safety, with UK AISI publishing templates in 2024 and 3 of 4 frontier labs committing to implementation. Apo...
- AI Evaluation (Approach, quality 72/100): Comprehensive overview of AI evaluation methods spanning dangerous capability assessment, safety properties, and deception detection, with categorized frameworks from industry (Anthropic Constituti...
Risks
- Multipolar Trap (AI Development) (Risk, quality 91/100): Analysis of coordination failures in AI development using game theory, documenting how competitive dynamics between nations (US $109B vs China $9.3B investment in 2024 per Stanford HAI 2025) and ...
- AI Authoritarian Tools (Risk, quality 91/100): Comprehensive analysis documenting AI-enabled authoritarian tools across surveillance (350M+ cameras in China analyzing 25.9M faces daily per district), censorship (22+ countries mandating AI conte...
- AI-Driven Institutional Decision Capture (Risk, quality 73/100): Comprehensive analysis of how AI systems could capture institutional decision-making across healthcare, criminal justice, hiring, and governance through systematic biases. Documents 85% racial bias...
- Compute Concentration (Risk, quality 70/100): All six major AI infrastructure spenders (Amazon, Alphabet, Microsoft, Meta, Oracle, xAI) are US companies subject to CLOUD Act and FISA 702, giving the US government effective legal access to the ...
Analysis
- AI Proliferation Risk Model (Analysis, quality 65/100): Quantitative model of AI capability diffusion across 5 actor tiers, documenting compression from 24-36 months (2020) to 12-18 months (2024) with projections of 6-12 months by 2025-2026. Identifies ...
- AI Media-Policy Feedback Loop Model (Analysis, quality 53/100): System dynamics model analyzing feedback loops between media coverage, public concern, and AI policy using coupled differential equations. Finds 6-18 month lag from coverage spikes to regulatory re...
- OpenAI Foundation Governance Paradox (Analysis, quality 75/100): The OpenAI Foundation holds Class N shares giving it exclusive power to appoint/remove all OpenAI Group PBC board members. However, 7 of 8 Foundation board members also serve on the for-profit boar...
- Long-Term Benefit Trust (Anthropic) (Analysis, quality 70/100): Anthropic's Long-Term Benefit Trust represents an innovative but potentially limited governance mechanism where financially disinterested trustees can appoint board members to balance public benefi...
Policy
- US Executive Order on Safe, Secure, and Trustworthy AI (Policy, quality 91/100): Executive Order 14110 (Oct 2023) established compute thresholds (10^26 FLOP general, 10^23 biological) and created AISI, but was revoked after 15 months with ~85% completion. The 10^26 threshold wa...
- Voluntary AI Safety Commitments (Policy, quality 91/100): Comprehensive empirical analysis of voluntary AI safety commitments showing 53% mean compliance rate across 30 indicators (ranging from 13% for Apple to 83% for OpenAI), with strongest adoption in ...
Organizations
- US AI Safety Institute (Organization, quality 91/100): The US AI Safety Institute (AISI), established November 2023 within NIST with $10M budget (FY2025 request $82.7M), conducted pre-deployment evaluations of frontier models through MOUs with OpenAI...
- OpenAI (Organization, quality 62/100): Comprehensive organizational profile of OpenAI documenting evolution from 2015 non-profit to Public Benefit Corporation, with detailed analysis of governance crisis, 2024-2025 ownership restructuri...
Key Debates
- Open vs Closed Source AI (Crux, quality 60/100): Comprehensive analysis of open vs closed source AI debate, documenting that open model performance gap narrowed from 8% to 1.7% in 2024, with 1.2B+ Llama downloads by April 2025 and DeepSeek R1 dem...
- Government Regulation vs Industry Self-Governance (Crux, quality 54/100): Comprehensive comparison of government regulation versus industry self-governance for AI, documenting that US federal AI regulations doubled to 59 in 2024 while industry lobbying surged 141% to 648...
Historical
- AI Safety Summit (Bletchley Park) (Historical): International summits convening governments and AI labs to address AI safety
Other
- Yoshua Bengio (Person, quality 39/100): Comprehensive biographical overview of Yoshua Bengio's transition from deep learning pioneer (Turing Award 2018) to AI safety advocate, documenting his 2020 pivot at Mila toward safety research, co...
- Stuart Russell (Person, quality 30/100): Stuart Russell (born 1962) is a British computer scientist and UC Berkeley professor who co-authored the dominant AI textbook 'Artificial Intelligence: A Modern Approach' (used in over 1,500 univer...