Summary
Presents two core cruxes in the AI x-risk debate: whether advanced AI would develop dangerous goals (instrumental convergence vs. trainable safety) and whether we'll get warning signs (gradual failures vs. deception/fast takeoff). No quantitative analysis, primary sources, or novel framing provided.
Is AI Existential Risk Real?
Crux
Related

- Should We Pause AI Development? (Crux): Comprehensive synthesis of the AI pause debate showing moderate expert support (35-40% of 2,778 researchers) and high public support (72%) but very low implementation feasibility, with all major la... (Quality: 47/100)
- The Case For AI Existential Risk (Argument): Comprehensive formal argument that AI poses 5-14% median extinction risk by 2100 (per 2,788 researcher survey), structured around four premises: capabilities will advance, alignment is hard (with d... (Quality: 66/100)
- Is Scaling All You Need? (Crux): Comprehensive survey of the 2024-2025 scaling debate, documenting the shift from pure pretraining to 'scaling-plus' approaches after o3 achieved 87.5% on ARC-AGI-1 but GPT-5 faced 2-year delays. Ex... (Quality: 42/100)
- The Case Against AI Existential Risk (Argument): Comprehensive synthesis of skeptical arguments against AI x-risk from prominent researchers (LeCun, Marcus, Ng, Brooks), concluding x-risk probability is <5% (likely ~2%) based on challenges to sca... (Quality: 58/100)
- When Will AGI Arrive? (Crux): Comprehensive survey of AGI timeline predictions ranging from 2025-2027 (ultra-short) to never with current approaches, with median expert estimates around 2032-2037. Key cruxes include whether sca... (Quality: 33/100)
Analysis
- Carlsmith's Six-Premise Argument (Analysis): Carlsmith's framework decomposes AI existential risk into six conditional premises (timelines, incentives, alignment difficulty, power-seeking, disempowerment scaling, catastrophe), yielding ~5% ri... (Quality: 65/100)
Organizations
- Future of Humanity Institute: The Future of Humanity Institute (2005-2024) was a pioneering Oxford research center that founded existential risk studies and AI alignment research, growing from 3 to ~50 researchers and receiving... (Quality: 51/100)
- University of Oxford: Historic UK research university, home to the Global Priorities Institute and formerly the Future of Humanity Institute.
- University of Cambridge: Historic UK research university. Home to CSER.
- Berkeley Existential Risk Initiative: Nonprofit supporting university-based existential risk research by providing operational and financial support.
- Global Catastrophic Risk Institute: Think tank focused on analysis of global catastrophic risks including AI, nuclear, biological, and environmental threats.
- Alliance to Feed the Earth in Disasters: Research organization focused on food supply resilience during global catastrophic events.