91.5ImportanceEssentialImportance: 91.5/100How central this topic is to AI safety. Higher scores mean greater relevance to understanding or mitigating AI risk.22ResearchMinimalResearch Value: 22/100How much value deeper investigation of this topic could yield. Higher scores indicate under-explored topics with high insight potential.
Content1/13
LLM summaryLLM summaryBasic text summary used in search results, entity link tooltips, info boxes, and related page cards.crux content improve <id>ScheduleScheduleHow often the page should be refreshed. Drives the overdue tracking system.Set updateFrequency in frontmatterEntityEntityYAML entity definition with type, description, and related entries.Edit historyEdit historyTracked changes from improve pipeline runs and manual edits.crux edit-log view <id>OverviewOverviewA ## Overview heading section that orients readers. Helps with search and AI summaries.Add a ## Overview section at the top of the page
Tables0/ ~1TablesData tables for structured comparisons and reference material.Add data tables to the pageDiagrams0DiagramsVisual content — Mermaid diagrams, charts, or Squiggle estimate models.Add Mermaid diagrams or Squiggle modelsInt. links0/ ~3Int. linksLinks to other wiki pages. More internal links = better graph connectivity.Add links to other wiki pagesExt. links0/ ~1Ext. linksLinks to external websites, papers, and resources outside the wiki.Add links to external sourcesFootnotes0/ ~2FootnotesFootnote citations [^N] with source references at the bottom of the page.Add [^N] footnote citationsReferences0/ ~1ReferencesCurated external resources linked via <R> components or cited_by in YAML.Add <R> resource linksQuotes0QuotesSupporting quotes extracted from cited sources to back up page claims.crux citations extract-quotes <id>Accuracy0AccuracyCitations verified against their sources for factual accuracy.crux citations verify <id>Backlinks2BacklinksNumber of other wiki pages that link to this page. Higher backlink count means better integration into the knowledge graph.
Issues1
StructureNo tables or diagrams - consider adding visual content
Autonomous Replication
Risk
Autonomous Replication
AI systems capable of copying themselves and acquiring resources without human oversight
Agentic AICapabilityAgentic AIAnalysis of agentic AI capabilities and deployment challenges, documenting industry forecasts (40% of enterprise apps by 2026, \$199B market by 2034) alongside implementation difficulties (40%+ pro...Quality: 68/100
Autonomous Cyber Attack TimelineAnalysisAutonomous Cyber Attack TimelineThis model projects AI achieving fully autonomous cyber attack capability (Level 4) by 2029-2033, with current systems at ~50% progress and Level 3 attacks already documented in September 2025. Pro...Quality: 63/100Carlsmith's Six-Premise ArgumentAnalysisCarlsmith's Six-Premise ArgumentCarlsmith's framework decomposes AI existential risk into six conditional premises (timelines, incentives, alignment difficulty, power-seeking, disempowerment scaling, catastrophe), yielding ~5% ri...Quality: 65/100Preference Manipulation Drift ModelAnalysisPreference Manipulation Drift ModelThis model analyzes gradual AI-driven preference shifts. It estimates 5-15% probability of significant harm from drift, with 20-40% reduction in preference diversity after 5 years of heavy use.
Risks
AI Preference ManipulationRiskAI Preference ManipulationDescribes AI systems that shape human preferences rather than just beliefs, distinguishing it from misinformation. Presents a 5-stage manipulation mechanism (profile→model→optimize→shape→lock) and ...Quality: 55/100Bioweapons RiskRiskBioweapons RiskComprehensive synthesis of AI-bioweapons evidence through early 2026, including the FRI expert survey finding 5x risk increase from AI capabilities (0.3% → 1.5% annual epidemic probability), Anthro...Quality: 91/100AI-Induced IrreversibilityRiskAI-Induced IrreversibilityComprehensive analysis of irreversibility in AI development, distinguishing between decisive catastrophic events and accumulative risks through gradual lock-in. Quantifies current trends (60-70% al...Quality: 64/100AI-Driven Concentration of PowerRiskAI-Driven Concentration of PowerDocuments how AI development is concentrating in ~20 organizations due to \$100M+ compute costs, with 5 firms controlling 80%+ of cloud infrastructure and projections reaching \$1-10B per model by ...Quality: 65/100AI-Enabled Authoritarian TakeoverRiskAI-Enabled Authoritarian TakeoverComprehensive analysis documenting how 72% of global population (5.7 billion) now lives under autocracy with AI surveillance deployed in 80+ countries, showing 15 consecutive years of declining int...Quality: 61/100AI Value Lock-inRiskAI Value Lock-inComprehensive analysis of AI lock-in scenarios where values, systems, or power structures become permanently entrenched. Documents evidence including Big Tech's 66-70% cloud control, AI surveillanc...Quality: 64/100
Organizations
Apollo ResearchOrganizationApollo ResearchApollo Research demonstrated in December 2024 that all six tested frontier models (including o1, Claude 3.5 Sonnet, Gemini 1.5 Pro) engage in scheming behaviors, with o1 maintaining deception in ov...Quality: 58/100Center for AI SafetyOrganizationCenter for AI SafetyCAIS is a nonprofit research organization founded by Dan Hendrycks that has distributed compute grants to researchers, published technical AI safety papers including the representation engineering ...Quality: 42/100
Concepts
SuperintelligenceConceptSuperintelligenceAI systems with cognitive abilities vastly exceeding human intelligenceQuality: 92/100Existential Risk from AIConceptExistential Risk from AIHypotheses concerning risks from advanced AI systems that some researchers believe could result in human extinction or permanent global catastropheQuality: 92/100
Other
Geoffrey HintonPersonGeoffrey HintonComprehensive biographical profile of Geoffrey Hinton documenting his 2023 shift from AI pioneer to safety advocate, estimating 10-20% extinction risk in 5-20 years. Covers his media strategy, poli...Quality: 42/100Eliezer YudkowskyPersonEliezer YudkowskyComprehensive biographical profile of Eliezer Yudkowsky covering his foundational contributions to AI safety (CEV, early problem formulation, agent foundations) and notably pessimistic views on AI ...Quality: 35/100Toby OrdPersonToby OrdComprehensive biographical profile of Toby Ord documenting his 10% AI extinction estimate and role founding effective altruism, with detailed tables on risk assessments, academic background, and in...Quality: 41/100Holden KarnofskyPersonHolden KarnofskyHolden Karnofsky directed \$300M+ in AI safety funding through Coefficient Giving (formerly Open Philanthropy), growing the field from ~20 to 400+ FTE researchers and developing influential framewo...Quality: 40/100Yoshua BengioPersonYoshua BengioComprehensive biographical overview of Yoshua Bengio's transition from deep learning pioneer (Turing Award 2018) to AI safety advocate, documenting his 2020 pivot at Mila toward safety research, co...Quality: 39/100Nick BostromPersonNick BostromBiographical profile of Nick Bostrom covering his founding of the Future of Humanity Institute, his 2014 book 'Superintelligence' on AI existential risk, and key philosophical contributions includi...Quality: 25/100