# Novel / Unknown Approaches
Analyzes probability (1-15%) of novel AI paradigms emerging before transformative AI, systematically reviewing historical prediction failures (expert AGI timelines shifted 43 years in 4 years, 13 years in one survey cycle) and comparing alternative approaches like neuro-symbolic (8-15% probability), SSMs (5-12%), and NAS (15-30%). Concludes current paradigm faces quantified limits (data exhaustion ~2028, compute costs approaching economic constraints) but near-term timelines favor incumbent approaches.
This category represents the probability mass we should assign to approaches not yet discovered or not included in our current taxonomy. History shows that transformative technologies often come from unexpected directions, and intellectual humility requires acknowledging this. The field of AI has undergone cyclical periods of growth and decline, known as AI summers and winters, with each cycle bringing unexpected architectural innovations. We are currently in the third AI summer, characterized by the transformer paradigm, but historical patterns suggest eventual disruption.
The challenge of forecasting AI development is well-documented. According to 80,000 Hours' analysis of expert forecasts, mean estimates on Metaculus for when AGI will be developed plummeted from 50 years to 5 years between 2020 and 2024. The AI Impacts 2023 survey found machine learning researchers expected AGI by 2047, compared to 2060 in the 2022 survey. This 13-year shift in a single survey cycle demonstrates the difficulty of prediction in this domain.
Beyond the "known unknowns" such as scaling limits and alignment challenges, we face a vast terrain of "unknown unknowns": emergent capabilities, unforeseen risks, and transformative shifts that defy prediction. The technology itself is evolving so rapidly that even experts struggle to predict its capabilities 6 months ahead.
**Estimated probability of being dominant at transformative AI:** 1-15% (range reflects timeline uncertainty; shorter timelines favor current paradigms, longer timelines favor novel approaches)
The history of technology provides crucial context for estimating the probability of paradigm shifts. As documented by research on technological paradigm shifts, notable figures consistently fail to predict transformative changes. Wilbur Wright famously said in 1901 that "man would not fly for 50 years"; two years later, he and his brother achieved flight.
A paradigm shift in AI development would have profound implications for AI safety research. The Stanford HAI AI Index 2025 notes that safety research investment trails capability investment by approximately 10:1. A novel paradigm could either invalidate existing safety research or provide new opportunities for alignment.
## Why Novel Approaches Are Concerning

| Concern | Explanation | Risk Level | Mitigation Difficulty |
|---------|-------------|------------|-----------------------|
| Unpredictability | Can't prepare for unknown risks | High | Very High |
| Rapid capability jumps | New paradigm might be much more capable | Very High | High |
| Different failure modes | Safety research might not transfer | High | Medium |
| Misplaced confidence | We might assume current understanding applies | Medium | Low |
| Compressed timelines | Less time to develop safety measures | Very High | Very High |
| Open-source proliferation | Novel techniques spread faster than safety measures | — | — |
| Timeframe | Probability | Rationale | Confidence |
|-----------|-------------|-----------|------------|
| — | — | Data/compute limits start binding; research progresses | Medium |
| By 2035 | 10-20% | Current paradigm hits fundamental limits | Low |
| By 2040 | 15-30% | Long timeline allows paradigm maturation | Low |
| By 2050+ | 25-45% | Historical base rate of paradigm shifts | Very Low |
## Why 1-15% Range Is Reasonable

The range reflects uncertainty about timelines and paradigm persistence:

- **Lower bound (1%):** If transformative AI arrives within 3-5 years via current paradigm scaling, novel approaches have insufficient time to mature. The median Metaculus estimate of AGI by ~2027 supports this scenario.
- **Upper bound (15%):** If the current paradigm hits hard limits (data exhaustion, scaling saturation) before transformative AI, alternative approaches become necessary. Epoch AI projections of 2028 data exhaustion support this possibility.
- **Central estimate (5-8%):** Accounts for the historical base rate of paradigm shifts (~1 per decade in computing), current research momentum in alternatives, and uncertainty in both timelines and scaling projections.
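The reasoning behind these bounds can be sketched as a simple decomposition: condition on when transformative AI arrives, then weight each scenario's conditional probability that a novel paradigm is dominant by that date. The timeline weights and conditional probabilities below are illustrative assumptions (loosely tracking the midpoints of the ranges above), not figures asserted by this page.

```python
# Illustrative decomposition of P(novel paradigm dominant at TAI).
# Each entry: (assumed P(TAI arrives in this window),
#              assumed P(novel paradigm dominant | TAI in window)).
# All numbers are hypothetical weights for the sketch, not page estimates.
tai_scenarios = {
    "by 2030": (0.40, 0.03),   # short timelines favor the incumbent paradigm
    "by 2035": (0.30, 0.15),   # midpoint of the 10-20% range
    "by 2040": (0.20, 0.22),   # midpoint of the 15-30% range
    "2050+":   (0.10, 0.35),   # midpoint of the 25-45% range
}

# Law of total probability over the arrival-time scenarios.
p_novel = sum(w * p for w, p in tai_scenarios.values())
print(f"Weighted P(novel paradigm dominant): {p_novel:.1%}")
# prints: Weighted P(novel paradigm dominant): 13.6%
```

With these particular weights the estimate lands near the upper end of the 1-15% range; shifting probability mass toward the early-arrival scenario pulls it toward the 5-8% central estimate.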
## Critical Questions

| Uncertainty | Scenarios | Current Evidence | Resolution Timeline |
|-------------|-----------|------------------|---------------------|
| How locked-in is the current paradigm? | Fundamental (like the wheel) vs. Transitional (like vacuum tubes) | — | — |
## References

- A comprehensive review of expert predictions on Artificial General Intelligence (AGI) from multiple groups, showing converging views that AGI could arrive before 2030. Different expert groups, including AI company leaders, researchers, and forecasters, show shortened and increasingly similar estimates.
- Epoch AI is a research organization collecting and analyzing data on AI model training compute, computational performance, and technological trends in artificial intelligence.
- Metaculus is an online forecasting platform that allows users to predict future events and trends across areas like AI, biosecurity, and climate change. It provides probabilistic forecasts on a wide range of complex global questions.