Longterm Wiki

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Epoch AI

Epoch AI's compute database is a widely cited empirical resource for understanding AI scaling trends; it is used frequently in governance and forecasting discussions to ground claims about AI progress trajectories.

Metadata

Importance: 62/100 · dataset · tool

Summary

An interactive visualization tool from Epoch AI displaying their database of AI training compute, tracking how computational resources used in notable ML models have evolved over time. It provides empirical data on training compute trends, enabling researchers and forecasters to analyze the scaling of AI systems.

Key Points

  • Tracks training compute (measured in FLOPs) across hundreds of notable AI models over time
  • Enables visual analysis of compute scaling trends, doubling times, and shifts in hardware usage
  • Supports AI forecasting and governance work by grounding predictions in empirical compute data
  • Data spans decades of ML history, illustrating the dramatic acceleration in compute investment
  • Useful for identifying inflection points in AI development trajectories
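
The doubling-time framing in the points above can be sketched numerically. The snippet below is a minimal illustration, not Epoch AI's methodology: it infers the doubling time implied by two hypothetical (date, training FLOP) observations, assuming exponential growth between them. The function name and data points are invented for illustration.

```python
import math
from datetime import date

def doubling_time_years(d0, flop0, d1, flop1):
    """Doubling time implied by two (date, training-FLOP) observations,
    assuming exponential growth in between. Names are illustrative."""
    years = (d1 - d0).days / 365.25
    doublings = math.log2(flop1 / flop0)  # how many 2x steps occurred
    return years / doublings

# Hypothetical data points: compute grows 10x over roughly two years,
# implying a doubling time of about 0.6 years.
t = doubling_time_years(date(2020, 1, 1), 1e23, date(2022, 1, 1), 1e24)
```

The same calculation generalizes to a regression over many models, which is how trend lines on plots like Epoch AI's are usually summarized.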

Cited by 1 page

Page | Type | Quality
Epoch AI | Organization | 51.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 8 KB
Data on AI Models | Epoch AI

Updated Apr. 8, 2026

AI Models

 Our comprehensive database of over 3500 models tracks key factors driving machine learning progress.

Download this data · Notable AI models · Frontier AI models · Large-scale AI models · All AI models

[Interactive graph with Graph, Table, and Settings views: plots Training compute (FLOP), Parameters, Training dataset size (total), Training compute cost (2023 USD), Training time (days), or Training power draw (W) on either axis, defaulting to training compute vs. publication date. Points can be colored by domain, organization, country, or accessibility. Display options include highlighting the Deep Learning Era and showing regression trend lines (as Nx/year, OOM/year, or doubling time, with a configurable confidence-interval width); filters allow searching by text or showing only the top N models.]

More about this dataset
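
The graph's regression display options (Nx/year, OOM/year, doubling time) are equivalent summaries of one fitted slope. The sketch below shows the conversion using hypothetical (year, log10 FLOP) observations and a least-squares fit; the real values would come from Epoch AI's downloadable dataset, and none of these variable names are from their code.

```python
import numpy as np

# Hypothetical log-scale observations (year, log10 of training FLOP).
years = np.array([2012.0, 2014.0, 2016.0, 2018.0, 2020.0])
log10_flop = np.array([17.0, 18.5, 20.0, 21.5, 23.0])

# Least-squares line through log-compute vs. time.
slope, intercept = np.polyfit(years, log10_flop, 1)

oom_per_year = slope                  # orders of magnitude per year
nx_per_year = 10.0 ** slope           # multiplicative growth factor per year
doubling_time = np.log10(2) / slope   # years per doubling
```

Reading the same slope three ways is purely a presentation choice; a 0.75 OOM/year trend, for instance, is the same claim as roughly 5.6x/year or a doubling time of about five months.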

Documentation · Downloads · Citations · FAQ

Documentation

 Models in this dataset have been collected from various sources, including literature reviews, Papers With Code, historical accounts, highly-cited publications, proceedings of top conferences, and suggestions from individuals. The list of models is non-exhaustive, but aims to cover most models that were state-of-the-art when released, have over 1000 citations, have over one million monthly active users, or have an equivalent level of historical significance. Additional information about our approach to measuring parameter counts, dataset size, and training compute can be found in the accompanying documentation.

 Read the complete documentation

Frequently asked questions

 What is a notable model?
 A notable model meets any of the following criteria: (i) state-of-the-art improvement on a recognized benchmark; (ii) highly cited (over 1000 citations); (iii) historical relevance; (iv) significant use.
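
Since notability is a disjunction of concrete criteria, it can be expressed as a simple predicate. This is an illustrative sketch only; the field names below are hypothetical and are not the dataset's actual column names.

```python
def is_notable(model: dict) -> bool:
    """Any one criterion suffices for notability (field names hypothetical)."""
    return (
        model.get("sota_on_recognized_benchmark", False)  # (i) SOTA improvement
        or model.get("citations", 0) > 1000               # (ii) highly cited
        or model.get("historically_relevant", False)      # (iii) historical relevance
        or model.get("significant_use", False)            # (iv) significant use
    )
```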

 How was the AI Models dataset created?
 The dataset was originally created for the report “Compute Trends Across Three Eras of Machine Learning” and has been continually expanded since then.

 What are notable, frontier, and large-scale models?
 We flag models as notable if they advanced the state of the art, achieved many citations in an academic publication, had over a million monthly users, were highly significant historically, or were developed at a cost of over one million dollars. You can learn more about these notability criteria by reading our AI Models Documentation.

... (truncated, 8 KB total)
Resource ID: 835981f69d1bf99a | Stable ID: Y2EzOGU5Nj