AI Executive Order
Policy
Biden AI Executive Order
US executive orders establishing AI safety requirements and oversight
Related
Organizations
- US AI Safety Institute: The US AI Safety Institute (AISI), established November 2023 within NIST with $10M budget (FY2025 request $82.7M), conducted pre-deployment evaluations of frontier models through MOUs with OpenAI...
- GovAI: AI policy research organization with ~15-20 staff, funded primarily by Coefficient Giving ($1.8M+ in 2023-2024), that has trained 100+ governance researchers through fellowships and cu...
- Effective Institutions Project: Research and advocacy organization working to improve institutional effectiveness.
- Center for Global Development: Independent think tank focused on reducing global poverty and inequality.
- RAND Corporation: Nonprofit global policy think tank. Active in AI policy, security studies, and technology assessment.
- Legal Priorities Project: Research organization studying legal questions relevant to reducing existential and catastrophic risks.
- Simon Institute for Longterm Governance: Geneva-based think tank supporting international governance of AI and other emerging technologies.
Risks
- AI Proliferation: AI proliferation accelerated dramatically as the capability gap narrowed from 18 to 6 months (2022-2024), with open-source models like DeepSeek R1 now matching frontier performance. US export contr...
- AI-Driven Economic Disruption: Comprehensive survey of AI labor displacement evidence showing 40-60% of jobs in advanced economies exposed to automation, with IMF warning of inequality worsening in most scenarios and 13% early-c...
Analysis
- AI Regulatory Capacity Threshold Model: Quantitative model estimating current US/UK regulatory capacity at 0.15-0.25 versus 0.4-0.6 threshold needed, with capacity ratio declining from 0.20 to 0.02 by 2028 under baseline assumptions. Con...
- AI Media-Policy Feedback Loop Model: System dynamics model analyzing feedback loops between media coverage, public concern, and AI policy using coupled differential equations. Finds 6-18 month lag from coverage spikes to regulatory re...
Key Debates
- AI Governance and Policy: Comprehensive analysis of AI governance mechanisms estimating 30-50% probability of meaningful regulation by 2027 and 5-25% x-risk reduction potential through coordinated international approaches. ...
- Government Regulation vs Industry Self-Governance: Comprehensive comparison of government regulation versus industry self-governance for AI, documenting that US federal AI regulations doubled to 59 in 2024 while industry lobbying surged 141% to 648...
Policy
- Safe and Secure Innovation for Frontier Artificial Intelligence Models Act: California's SB 1047 required safety testing, shutdown capabilities, and third-party audits for AI models exceeding 10^26 FLOP or $100M training cost; it passed the legislature (Assembly 45-11, Se...
- China AI Regulatory Framework: Comprehensive analysis of China's AI regulatory framework covering 5+ major regulations affecting 50,000+ companies, with enforcement focusing on content control and social stability rather than ca...
- EU AI Act: Comprehensive overview of the EU AI Act's risk-based regulatory framework, particularly its two-tier approach to foundation models that distinguishes between standard and systemic risk AI systems. ...
Other
- Geoffrey Hinton: Comprehensive biographical profile of Geoffrey Hinton documenting his 2023 shift from AI pioneer to safety advocate, estimating 10-20% extinction risk in 5-20 years. Covers his media strategy, poli...
- Yoshua Bengio: Comprehensive biographical overview of Yoshua Bengio's transition from deep learning pioneer (Turing Award 2018) to AI safety advocate, documenting his 2020 pivot at Mila toward safety research, co...
Concepts
- Dual-Use AI Technology: Technologies and research with both beneficial and harmful applications