Longterm Wiki
Updated 2026-03-13
Summary

Presents two core cruxes in the AI x-risk debate: whether advanced AI would develop dangerous goals (instrumental convergence vs. trainable safety) and whether we'll get warning signs (gradual failures vs. deception/fast takeoff). No quantitative analysis, primary sources, or novel framing provided.

Is AI Existential Risk Real?

Crux


AI Existential Risk Debate

Question: Does AI pose genuine existential risk?
Stakes: Determines priority of AI safety work
Expert Consensus: Significant disagreement

This is the foundational question in AI safety. Everything else depends on whether you believe AI could actually pose existential risk.

Key Cruxes

What would change your mind on this debate?

Key Questions

  • If we built human-level AI, would it naturally develop dangerous goals?

    Yes - instrumental convergence applies. Power-seeking emerges from almost any goal, and training won't reliably prevent it. Implication: x-risk is real, and alignment is critical. (Confidence: medium)

    No - we can train safe systems. Goals come from training; we can instill safe goals and verify them. Implication: x-risk is manageable with standard safety engineering. (Confidence: medium)

  • Will we get warning signs before catastrophe?

    Yes - problems will be visible first. Weaker systems will fail in detectable ways, letting us iterate toward safety. Implication: we can learn from experience, so the problem is less urgent. (Confidence: low)

    No - deception or fast takeoff prevents warning. A sufficiently capable AI might hide its misalignment, or capabilities might jump directly to dangerous levels. Implication: we must solve alignment before building dangerous AI. (Confidence: medium)

Related Pages


Key Debates

  • Should We Pause AI Development?
  • The Case For AI Existential Risk
  • Is Scaling All You Need?
  • The Case Against AI Existential Risk
  • When Will AGI Arrive?

Analysis

Carlsmith's Six-Premise Argument

Organizations

  • Future of Humanity Institute
  • University of Oxford
  • University of Cambridge
  • Berkeley Existential Risk Initiative
  • Global Catastrophic Risk Institute
  • Alliance to Feed the Earth in Disasters