Longterm Wiki

Epoch AI: AI Trends & Metrics Dashboard

Type: web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Epoch AI

Epoch AI's trends dashboard is a key empirical reference for AI safety researchers assessing capability trajectories and compute scaling; useful for grounding risk assessments and policy arguments in observed data.

Metadata

Importance: 62/100 · Tags: tool page, dataset

Summary

Epoch AI's trends page provides data-driven tracking of key metrics in AI development, including compute scaling, model capabilities, and training trends. It serves as a quantitative reference for understanding the trajectory of AI progress across multiple dimensions. The resource aggregates empirical data to help researchers and policymakers assess the pace and direction of AI advancement.

Key Points

  • Tracks historical and current trends in AI training compute, model size, and benchmark performance over time
  • Provides empirical data relevant to forecasting AI capability thresholds and potential risk timelines
  • Useful for grounding AI safety and governance discussions in quantitative evidence about development pace
  • Covers trends across multiple modalities and model types, offering a broad view of the AI landscape
  • Maintained by Epoch AI, a research organization focused on understanding and forecasting AI progress

Cited by 5 pages

Page | Type | Quality
Large Language Models | Concept | 62.0
AI Compute Scaling Metrics | Analysis | 78.0
AI Capability Threshold Model | Analysis | 72.0
Epoch AI | Organization | 51.0
Compute Thresholds | Concept | 91.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 10 KB
Trends in Artificial Intelligence | Epoch AI

What drives AI progress? The story is dominated by scale. Training AI systems with more compute, power, and data has consistently led to better performance. Since 2010, the compute used to train notable AI models has increased 4.5× per year. Meanwhile, researchers have made the underlying algorithms far more efficient: each year, the same performance can be achieved with 3× less compute.

This massive scale-up in training compute comes from three sources: deploying more chips in parallel, running training for longer, and leveraging increasingly powerful AI processors. The consequences are striking. Training costs are climbing by 2.5× annually, while power requirements double each year. Today's cutting-edge AI training runs consume tens to hundreds of megawatts, comparable to a medium-sized power plant. These trends appear set to continue through 2030.
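The headline rates above compound quickly. A minimal sketch of the implied multipliers over a five-year horizon, using only the figures quoted on the page (illustrative arithmetic, not an official Epoch AI projection; the combined "effective compute" rate assumes hardware scaling and algorithmic efficiency multiply):

```python
# Headline annual growth rates quoted on the Epoch AI trends page:
COMPUTE_GROWTH = 4.5   # training compute, x/year since 2010
ALGO_EFFICIENCY = 3.0  # same performance with 3x less compute each year
COST_GROWTH = 2.5      # training cost, x/year
POWER_GROWTH = 2.0     # power requirements double each year

def project(factor_per_year: float, years: float) -> float:
    """Multiplier after `years` of compound growth at `factor_per_year`."""
    return factor_per_year ** years

# Hardware scaling and algorithmic progress compound multiplicatively,
# giving an effective-compute growth rate of roughly 4.5 * 3.0 = 13.5x/year.
effective_growth = COMPUTE_GROWTH * ALGO_EFFICIENCY

years = 5  # e.g. 2025 -> 2030
print(f"raw compute:       {project(COMPUTE_GROWTH, years):,.0f}x")
print(f"effective compute: {project(effective_growth, years):,.0f}x")
print(f"training cost:     {project(COST_GROWTH, years):.0f}x")
print(f"power draw:        {project(POWER_GROWTH, years):.0f}x")
```

At these rates, raw training compute grows by roughly three orders of magnitude in five years, while power draw grows 32×, which is why the page flags megawatt-scale training runs.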

Each growth metric below is given in three equivalent units: increase per year, doubling time, and orders of magnitude (OOM) per year.

  • LLM inference prices: falling ~40×/year (halving every ~2 months; ~2 OOM/year). The cost to run inference on an LLM at a fixed level of performance has fallen rapidly, but unevenly across tasks. 90% CI (across model performance levels): 10× to 900× per year (1 to 4 months; 1 to 3 OOM).
  • Compute stock growth: the total computing power of the stock of NVIDIA chips is growing at 2.3×/year (doubling every 10 months; 0.36 OOM/year). 90% CI (across AI companies): 2.2× to 2.5× (9 to 11 months; 0.34 to 0.40 OOM).
  • Training compute: training compute for frontier language models has been growing at 5×/year since 2020 (doubling every 5.2 months; 0.7 OOM/year). 90% CI (across training runs): 4× to 6× (4.6 to 6.0 months; 0.6 to 0.8 OOM).
  • Algorithmic progress: pre-training compute efficiency is improving at roughly 3.0×/year (doubling every 7.6 months; 0.5 OOM/year). 90% CI (across training runs): 2.8× to 4.4× (5.6 to 8.1 months; 0.4 to 0.6 OOM).
  • Largest AI data center: the largest known AI data center has computing power equivalent to 600,000 NVIDIA H100 chips. 90% CI (across data centers): 400k to 900k H100e.
  • FLOP/s per dollar: AI chip performance per dollar has improved by 37% per year (1.37×/year; doubling every 2.2 years; 0.14 OOM/year).
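The three unit variants the dashboard reports (increase per year, doubling time, OOM per year) are interchangeable. A quick sketch of the conversions, using standard logarithm identities (not Epoch AI's code), reproduces the dashboard's figures:

```python
import math

def doubling_time_months(factor_per_year: float) -> float:
    """Doubling time implied by a compound annual growth factor."""
    return 12 * math.log(2) / math.log(factor_per_year)

def oom_per_year(factor_per_year: float) -> float:
    """Orders of magnitude (powers of 10) gained per year."""
    return math.log10(factor_per_year)

# Cross-check against the dashboard's reported values:
for name, g in [("NVIDIA compute stock", 2.3),      # ~10 months, ~0.36 OOM
                ("frontier training compute", 5.0),  # ~5.2 months, ~0.7 OOM
                ("pre-training efficiency", 3.0)]:   # ~7.6 months, ~0.5 OOM
    print(f"{name}: {g}x/year = doubling every "
          f"{doubling_time_months(g):.1f} months = {oom_per_year(g):.2f} OOM/year")
```

For example, 2.3×/year gives 12·ln(2)/ln(2.3) ≈ 10.0 months and log₁₀(2.3) ≈ 0.36 OOM/year, matching the compute-stock row above.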

 

... (truncated, 10 KB total)
Resource ID: b029bfc231e620cc | Stable ID: ZDMyMmEwOW