Worldviews
Overview
People working on AI safety hold diverse worldviews that lead to different risk assessments and priorities. Understanding these worldviews helps explain disagreements and enables more productive dialogue.
Major Worldviews
Doomer
Believes AI existential risk is very high (often >50% p(doom)):
Alignment is fundamentally hard
Current approaches are inadequate
We may not get many chances to get it right
Often associated with: MIRI, Eliezer Yudkowsky, some AI safety researchers
Long Timelines
Believes transformative AI is further away (20+ years):
More time to solve alignment
Current risks are more speculative
Near-term concerns deserve more attention
Associated with: Some ML researchers, AI ethics community
Governance Focused
Prioritizes policy and institutional solutions:
Technical alignment is necessary but insufficient
Racing dynamics are the key problem
International coordination is critical
Associated with: GovAI, policy researchers, some EA organizations
Optimistic
Believes alignment is likely to succeed:
Current research is making real progress
Markets and institutions create safety incentives
Past technology fears were often overblown
Associated with: Some lab researchers, techno-optimists
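These four positions can be compared along two rough axes: estimated existential risk and expected time to transformative AI. The sketch below encodes that comparison as structured data, using the approximate P(doom) ranges and timeline assumptions given on the linked worldview pages; the numbers are coarse characterizations of each camp, not precise estimates, and the `Worldview` class is purely illustrative.

```python
# A minimal sketch comparing the worldviews described above.
# Numeric ranges are the rough characterizations from the linked
# worldview pages, not precise or authoritative estimates.
from dataclasses import dataclass


@dataclass
class Worldview:
    name: str
    p_doom: tuple[float, float]  # rough existential-risk range (low, high)
    agi_timeline: str            # rough time-to-AGI assumption
    key_claim: str


WORLDVIEWS = [
    Worldview("Doomer", (0.30, 0.90), "10-15 years",
              "Alignment is fundamentally hard; we may not get many chances"),
    Worldview("Long Timelines", (0.05, 0.20), "20-40+ years",
              "More time to solve alignment; near-term concerns matter"),
    Worldview("Governance Focused", (0.10, 0.30), "varies",
              "Coordination, not just technical alignment, is the bottleneck"),
    Worldview("Optimistic", (0.00, 0.05), "varies",
              "Current research and institutional incentives are working"),
]

for w in WORLDVIEWS:
    low, high = w.p_doom
    print(f"{w.name:20s} P(doom) {low:.0%}-{high:.0%}  AGI: {w.agi_timeline}")
```

Note that the ranges overlap: a 20% P(doom), for instance, is consistent with both the long-timelines and governance-focused camps, which is one reason the same evidence can support different priorities.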
Related
Organizations: GovAI
Other: Eliezer Yudkowsky
Concepts: AI Doomer Worldview
Analysis: Worldview-Intervention Mapping