AGI Timeline

| Dimension | Assessment | Evidence |
|---|---|---|
| Median Expert Forecast (2026) | 2040-2047 (50% HLMI) | AI Impacts 2023 Survey found 50% probability of HLMI by 2047, down 13 years from 2022 |
| Prediction Markets | 2027-2031 median | Metaculus forecasters predict a median of November 2027 (1,700+ forecasters) |
| Lab Leader Estimates | 2026-2029 | Sam Altman, Dario Amodei, and Demis Hassabis converge on the late 2020s |
| Timeline Trend | Rapidly shortening | Expert median dropped from 2061 (2018) → 2059 (2022) → 2047 (2023); Metaculus dropped from 50 years to 5 years since 2020 |
| Uncertainty Range | Very high (±15-20 years) | 80% confidence intervals span 2026-2045+ across forecasts |
| Definition Sensitivity | High | Different AGI definitions shift predictions by 10-20 years |
| Confidence Level | Low-Medium | Expert surveys show framing effects of 15+ years; historical predictions have consistently been too pessimistic |

AGI timeline predictions represent attempts to forecast when artificial intelligence will match or exceed human cognitive abilities across all domains. Recent expert surveys suggest a 50% probability of AGI arriving between roughly 2040 and 2050, though estimates vary widely based on AGI definitions and measurement criteria.

Recent surveys show accelerating timelines compared to historical predictions. The 2023 AI Impacts survey found a median expert prediction of 2047 for “High-Level Machine Intelligence,” while Metaculus forecasters place the median for the first general AI system in late 2027, with stricter operationalizations landing in the 2030s. However, significant uncertainty remains around capability thresholds, measurement methodologies, and potential discontinuous progress.

| Factor | Assessment | Timeline Impact | Source |
|---|---|---|---|
| Expert Survey Median | 2040-2050 | Baseline estimate | AI Impacts 2023 |
| Prediction Market Aggregate | 2040-2045 | Market consensus | Metaculus |
| Lab Leader Statements | 2025-2035 | Optimistic bound | OpenAI, DeepMind |
| Scaling Limitations | 2050+ | Conservative bound | Epoch AI |

| Survey | Year | Sample Size | Median AGI Timeline | Key Finding | Source |
|---|---|---|---|---|---|
| AI Impacts ESPAI | 2023 | 2,778 experts | 2047 (HLMI) | 13-year drop from 2060 in 2022 | AI Impacts |
| Digital Minds Survey | 2025 | 67 experts | 2050 (50% probability) | 20% by 2030, 40% by 2040 | Digital Minds Report |
| AI Multiple Meta-Analysis | 2026 | 9,300 predictions | 2040 (aggregated) | Synthesized all public forecasts | AI Multiple |
| Metaculus Community | 2026 | 1,700+ forecasters | Nov 2027 median | 80% CI: July 2026 - Feb 2031 | Metaculus |
| Samotsvety Superforecasters | 2023 | 15 forecasters | 28% by 2030 | Professional forecasters more conservative | 80,000 Hours |

Expert timelines have consistently shortened over the past decade, with dramatic acceleration since 2022:

| Year | Expert Median (HLMI) | Metaculus Median | Change from Previous |
|---|---|---|---|
| 2018 | 2061 | 2070+ | Baseline |
| 2022 | 2059-2060 | 2055 | -2 years |
| 2023 | 2045-2047 | 2040 | -13 to -15 years |
| 2024 | ≈2040 | 2035 | -5 years |
| 2025 | ≈2035 | 2030 | -5 years |
| 2026 | Varied | Nov 2027 | -3 years |

The 80,000 Hours analysis notes that “in four years, the mean estimate on Metaculus for when AGI will be developed has plummeted from 50 years to five years.” Historical expert predictions have consistently been too pessimistic—in 2022, researchers thought AI wouldn’t write simple Python code until ~2027, but AI met that threshold by 2023-2024.
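
To make the compression concrete, here is a minimal, illustrative sketch that computes the year-over-year shift of the expert median from the table above (using midpoints where the table gives ranges; the 2026 "Varied" entry is omitted):

```python
# Illustrative only: expert median HLMI forecasts from the trend table above,
# with midpoints substituted for the ranges reported in 2022 and 2023.
expert_medians = {2018: 2061, 2022: 2059.5, 2023: 2046, 2024: 2040, 2025: 2035}

years = sorted(expert_medians)
for prev, curr in zip(years, years[1:]):
    shift = expert_medians[curr] - expert_medians[prev]   # negative = shorter timeline
    elapsed = curr - prev
    print(f"{prev}->{curr}: median moved {shift:+.1f} years "
          f"({shift / elapsed:+.1f} forecast years per calendar year)")
```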

Leading AI researchers increasingly cite rapid scaling of language models and emergent capabilities as evidence for shorter timelines.

| Question | Current Prediction | Confidence Interval | Forecasters | Source |
|---|---|---|---|---|
| First General AI Announced | Nov 30, 2027 median | July 2026 - Feb 2031 (80%) | 1,700+ | Metaculus |
| Weakly General AI | Nov 2033 | Dec 2028 - Sep 2045 | 1,800+ | Metaculus |
| Transformative AI | 2031 median | 2027-2045 (80%) | 1,000+ | AGI Dashboard |
| AGI by 2030 | ≈40% probability | 25-55% range | Aggregated | Market consensus |
| AGI by 2040 | ≈75% probability | 60-85% range | Aggregated | Market consensus |

| Platform | AGI Median | 50% Probability Year | Key Difference |
|---|---|---|---|
| Metaculus | Mid-2030 | 2030-2031 | Stricter definition requiring robotics |
| Manifold | 2028 | ≈50% before 2028 | More aggressive, market-based |
| Polymarket | 2029-2030 | ≈45% by 2029 | Real-money incentives |
| Expert Surveys | 2040-2047 | 2040-2045 | Academic conservatism |

Prediction markets show several notable patterns:

  • Dramatic shortening: Metaculus dropped from 50 years to 5 years median since 2020
  • Volatility spikes following major capability announcements (GPT-4, Claude 3, o1, o3)
  • Shorter timelines in technical communities vs. academic surveys (10-15 year gap)
  • Definition sensitivity with different AGI operationalizations varying by 10-20 years
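
The “AGI by 2030 / by 2040” rows above are cumulative probabilities read off a forecast distribution over arrival dates. A minimal sketch of that conversion, using a hypothetical hand-specified distribution rather than actual Metaculus or market data:

```python
# Hypothetical probability mass over AGI arrival years (illustrative, not real market data).
# The remaining 0.10 of mass represents "not before 2070 / never" in this toy example.
arrival_pmf = {2027: 0.10, 2028: 0.10, 2029: 0.10, 2030: 0.10,
               2035: 0.20, 2040: 0.15, 2050: 0.15, 2070: 0.10}

def prob_by(year: int) -> float:
    """Cumulative probability of arrival at or before `year`."""
    return sum(p for y, p in arrival_pmf.items() if y <= year)

print(f"P(AGI by 2030) ~= {prob_by(2030):.0%}")  # ~40% in this toy distribution
print(f"P(AGI by 2040) ~= {prob_by(2040):.0%}")  # ~75% in this toy distribution
```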

## Industry Timeline Claims (Updated January 2026)

| Organization | Leader | Claimed Timeline | Key Statement | Source |
|---|---|---|---|---|
| OpenAI | Sam Altman | 2025-2028 | “We are now confident we know how to build AGI”; 2026 models will “amaze us” | Sam Altman Blog |
| Anthropic | Dario Amodei | 2026-2027 | “AI may surpass humans in most tasks by 2027”; “rapidly running out of convincing blockers” | Lex Fridman Interview |
| DeepMind | Demis Hassabis | “Within this decade” (by 2030) | | Nature interview 2024; internal planning |
| DeepMind | Shane Legg | 50% by 2028 | “Minimal AGI” prediction (January 2026) | DeepMind cofounder |
| Meta | Yann LeCun | “Many decades away” | Skeptical of current paradigm reaching AGI | Public statements 2024 |
| xAI | Elon Musk | 2026 | AI “smarter than any single human” | Public statements |

Several labs’ public roadmaps suggest aggressive acceleration:

| Metric | 2024 | 2025 | 2026 | 2027 | Source |
|---|---|---|---|---|---|
| Training Run Cost | ≈$100M | ≈$1B | $10B+ | $100B clusters | Dario Amodei |
| Compute per Training Run | Baseline | 3-10x | 30-100x | 300-1000x | Scaling projections |
| Data Center Power | 100-500 MW | 500 MW-1 GW | 1-5 GW | 5-10 GW | Industry reports |
| Researcher FTEs | 5,000+ | 10,000+ | 20,000+ | 50,000+ | Lab hiring plans |
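
A minimal sketch of the arithmetic implied by the table: converting the 2024→2027 multipliers into approximate annual growth rates. The multipliers come from the table above (midpoints where ranges are given); this is illustrative, not an independent projection.

```python
# Illustrative arithmetic on the scaling table above.
multipliers_2024_to_2027 = {
    "training run cost": 1000,        # ~$100M -> ~$100B clusters
    "compute per training run": 650,  # midpoint of the 300-1000x row
}

for metric, factor in multipliers_2024_to_2027.items():
    annual = factor ** (1 / 3)  # three years: 2024 -> 2027
    print(f"{metric}: ~{factor}x overall, ~{annual:.0f}x per year")
```
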
Timelines also depend heavily on which definition of AGI is used; different operationalizations shift estimates by a decade or more:

| AGI Definition | Timeline Range | Key Challenge |
|---|---|---|
| Human-level performance | 2030-2040 | Benchmark gaming |
| Economic substitution | 2040-2060 | Deployment lags |
| Scientific breakthrough | 2035-2050 | Discovery vs. automation |
| Consciousness/sentience | 2050+ | Hard problem of consciousness |

Current limitations that may extend timelines:

  • Reasoning capabilities: Current models struggle with complex multi-step reasoning
  • Long-horizon planning: Limited ability for extended autonomous operation
  • Robustness: Brittleness to distribution shifts and adversarial examples
  • Sample efficiency: Still require massive training data compared to humans

| Constraint Type | Impact on Timeline | Mitigation Strategies |
|---|---|---|
| Compute hardware | +5-10 years if limits are hit | Advanced chip architectures |
| Data availability | +3-5 years | Synthetic data generation |
| Energy requirements | +2-5 years | Efficiency improvements |
| Regulatory barriers | +5-15 years | International coordination |
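
One crude way to combine these constraints is an expected-delay calculation that weights each bottleneck's midpoint timeline impact by a subjective probability that it actually binds. Only the delay midpoints come from the table; the probabilities below are invented for illustration.

```python
# (midpoint delay in years from the table above, invented probability that the constraint binds)
constraints = {
    "compute hardware":    (7.5, 0.3),
    "data availability":   (4.0, 0.4),
    "energy requirements": (3.5, 0.4),
    "regulatory barriers": (10.0, 0.2),
}

# Simplistic additive model: treat delays as independent and additive.
expected_delay = sum(delay * p for delay, p in constraints.values())
print(f"Expected additional delay under these assumptions: ~{expected_delay:.1f} years")
```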

Recent capabilities suggest accelerating progress toward AGI:

  • Multi-modal integration: Vision, text, and code in single models
  • Tool use: Effective API calls and workflow automation
  • Emergent reasoning: Chain-of-thought and constitutional approaches
  • Scientific research: Automated hypothesis generation and testing

Different forecasting methodologies also yield different pictures of where capabilities stand by 2030:

| Approach | 2030 Prediction | Methodology | Limitations |
|---|---|---|---|
| Scaling laws | 85% human performance | Extrapolate compute trends | May hit diminishing returns |
| Expert elicitation | 60% probability | Survey aggregation | Bias and overconfidence |
| Benchmark tracking | 90% on specific tasks | Performance trajectory | Narrow evaluation |
| Economic modeling | 40% job automation | Labor substitution | Deployment friction |
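
Such heterogeneous forecasts can be pooled into a single estimate, for example with a geometric mean of odds, a standard aggregation rule. The sketch below uses placeholder P(AGI by 2030) values loosely inspired by the table, not figures from the cited sources.

```python
import math

# Placeholder P(AGI by 2030) estimates from different forecasting approaches
# (illustrative values only, not taken from the cited sources).
forecasts = {"scaling extrapolation": 0.50, "expert elicitation": 0.25,
             "benchmark tracking": 0.45, "economic modeling": 0.15}

def pool_geo_mean_odds(probs):
    """Geometric mean of odds: average the log-odds, then map back to a probability."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean))

print(f"Pooled P(AGI by 2030) ~= {pool_geo_mean_odds(forecasts.values()):.0%}")
```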

Timeline Pessimists (2050+) argue:

  • Current paradigms (transformers, scaling) will hit fundamental limits
  • Alignment difficulty will require extensive safety research before deployment
  • Economic and regulatory barriers will slow deployment
  • Key cognitive capabilities (long-horizon planning, true reasoning) may require architectural breakthroughs

Timeline Optimists (2025-2035) contend:

  • Scaling laws will continue with current paradigms through 2030+
  • Emergent capabilities from larger models will bridge remaining capability gaps
  • Competitive pressure and $100B+ investments will accelerate development
  • Recent progress (o1, o3 reasoning, agents) shows faster-than-expected capability gains

| Question | Impact on Timeline | Current Evidence | Optimist View | Pessimist View |
|---|---|---|---|---|
| Will scaling laws continue? | ±10 years | Mixed signals since GPT-4 | Compute scaling to $100B clusters will unlock new capabilities | Diminishing returns visible; new paradigms needed |
| Can transformers achieve AGI? | ±15-20 years | Chain-of-thought, o1/o3 reasoning | Architecture is sufficient with scale | Fundamental limits on reasoning and planning |
| How hard is alignment? | ±10-15 years | Constitutional AI, RLHF improvements | Tractable with current approaches | Requires solving deep, currently open problems |
| Will regulation slow progress? | ±5-15 years | EU AI Act, compute governance | Light touch will prevail | Precautionary regulation inevitable |
| Is AGI a single threshold? | ±10 years | Definitional debates | Continuous capability improvement | Discrete capability jumps required |

Different timelines imply varying urgency for:

  • Safety research: Shorter timelines require immediate focus on alignment solutions
  • Governance frameworks: International coordination becomes critical
  • Economic preparation: Labor market disruption planning
  • Coordination mechanisms: Preventing dangerous racing dynamics

Timeline uncertainty affects regulation approaches:

  • Precautionary principle: Plan for shortest reasonable timelines
  • Adaptive governance: Build flexible frameworks for multiple scenarios
  • Research prioritization: Balance capability and safety advancement

| Category | Source | Key Contribution |
|---|---|---|
| Expert Surveys | AI Impacts 2023 Survey | Largest expert survey (2,778 respondents) |
| Prediction Markets | Metaculus AGI Questions | Continuous probability tracking (1,700+ forecasters) |
| Technical Analysis | Epoch AI Scaling Reports | Compute and training cost projections |
| Industry Perspectives | OpenAI Planning Documents | Lab development roadmaps |
| Meta-Analysis | 80,000 Hours Timeline Review | Synthesis of forecaster disagreements |

| Source | Date | Key Finding | URL |
|---|---|---|---|
| Sam Altman “Gentle Singularity” | Jan 2025 | “We know how to build AGI”; 2026 will see “systems that figure out novel insights” | Blog |
| Dario Amodei Lex Fridman Interview | Nov 2024 | “Rapidly running out of convincing blockers”; 2026-2027 possible | Transcript |
| AI Multiple Meta-Analysis | Jan 2026 | 9,300 predictions analyzed; aggregated median ≈2040 | Analysis |
| Digital Minds Forecasting | 2025 | 67 experts: 20% by 2030, 50% by 2050 | Report |
| AGI Timelines Dashboard | Jan 2026 | Combined forecasts: 2031 median (80% CI: 2027-2045) | Dashboard |

| Organization | Focus Area | Key Resources |
|---|---|---|
| AI Impacts | Expert surveys and trend analysis | Annual ESPAI survey reports |
| Metaculus | Prediction markets | AGI timeline questions, AGI Horizons tournament |
| Epoch AI | Compute trends and scaling laws | Technical reports, training cost projections |
| Future of Humanity Institute | Long-term forecasting | Academic papers (now closed) |
| Samotsvety Forecasting | Superforecaster aggregation | AGI probability estimates |

  • Scaling debates: See scaling law discussion
  • Capability analysis: Review core capabilities development
  • Timeline uncertainty: Explore forecasting methodology
  • Risk implications: Consider takeoff dynamics scenarios