Longterm Wiki

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Our World in Data

A frequently cited interactive chart used in AI governance and safety discussions to illustrate compute scaling trends; draws on Epoch AI data and is useful for contextualizing arguments about compute thresholds and regulatory triggers.

Metadata

Importance: 62/100 · Type: dataset

Summary

An interactive data visualization tracking the computational resources (measured in FLOPs) used to train notable AI systems over time. It illustrates the dramatic exponential growth in training compute across decades, highlighting key milestones and trends in AI capability scaling.

Key Points

  • Training compute for leading AI models has grown exponentially, roughly doubling every few months in recent years.
  • Computational requirements are measured in floating-point operations (FLOPs), providing a hardware-agnostic metric for comparing training runs.
  • The chart spans decades of AI development, showing inflection points corresponding to deep learning and large language model eras.
  • Massive compute scaling is closely tied to capability jumps, underpinning debates about compute as a leading indicator of AI progress.
  • Data is sourced from published research and Epoch AI's dataset, making it a widely cited reference in AI governance and policy discussions.
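The exponential growth described in the key points can be made concrete with a short sketch. This is an illustrative calculation, not from the source; the six-month doubling time used in the example is a hypothetical input, and the chart itself should be consulted for empirically observed rates.

```python
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Multiplicative growth in training compute over `years`,
    given a constant doubling time (compound growth: 2^(t/T))."""
    return 2.0 ** (years / doubling_time_years)


# With a hypothetical doubling time of 6 months (0.5 years),
# compute grows 4x per year and ~1000x over five years:
print(growth_factor(1, 0.5))  # -> 4.0
print(growth_factor(5, 0.5))  # -> 1024.0
```

The steep exponent is why even modest changes in the assumed doubling time produce very different long-run projections, which matters for compute-threshold debates.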

Review

This source provides an informative overview of the computational requirements of artificial intelligence, focusing on how training compute is measured and why it is so large. It notes that training computation is quantified in petaFLOPs, with one petaFLOP representing one quadrillion (10¹⁵) floating-point operations, underscoring the scale of modern AI training. The analysis identifies several factors that influence training computation, including dataset size, model architecture complexity, and parallel processing capabilities. While not presenting original research findings, the source offers a foundational picture of the computational landscape of machine learning, which is useful context for understanding the resources and infrastructure needed to develop advanced AI systems.
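To illustrate the kind of estimate behind such charts, the following sketch uses a common rule of thumb from the scaling-law literature (not stated in this source): training FLOPs ≈ 6 × parameters × tokens. The model sizes here are hypothetical, and the 6ND rule is only a first-order approximation.

```python
def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough estimate via the 6ND heuristic: ~6 FLOPs per parameter
    per training token (forward pass ~2, backward pass ~4)."""
    return 6.0 * n_params * n_tokens


def to_petaflops(flops: float) -> float:
    """Convert FLOPs to petaFLOP; one petaFLOP = 1e15 operations."""
    return flops / 1e15


# Hypothetical 1B-parameter model trained on 20B tokens:
flops = estimate_training_flops(1e9, 20e9)
print(to_petaflops(flops))  # roughly 1.2e5 petaFLOP
```

Estimates like this, aggregated across published models, are what allow a hardware-agnostic comparison of training runs on a single axis.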

Cited by 1 page

Page      Type          Quality
Epoch AI  Organization  51.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 9 KB
Computation used to train notable artificial intelligence systems, by domain - Our World in Data

 See all data and research on: Artificial Intelligence

 What you should know about this indicator

 In the context of artificial intelligence (AI), training computation is predominantly measured using floating-point operations or “FLOP”. One FLOP represents a single arithmetic operation involving floating-point numbers, such as addition, subtraction, multiplication, or division. To adapt to the vast computational demands of AI systems, the measurement unit of petaFLOP is commonly used. One petaFLOP stands as a staggering one quadrillion FLOPs, underscoring the magnitude of computational operations within AI.
 Modern AI systems are rooted in machine learning and deep learning techniques. These methodologies are notorious for their computational intensity, involving complex mathematical processes and algorithms. During the training phase, AI models process large volumes of data, while continuously adapting and refining their parameters to optimize performance, rendering the training process computationally intensive.
 Many factors influence the magnitude of training computation within AI systems. Notably, the size of the dataset employed for training significantly impacts the computational load. Larger datasets necessitate more processing power. The complexity of the model's architecture also plays a pivotal role; more intricate models lead to more computations. Parallel processing, involving the simultaneous use of multiple processors, also has a substantial effect. Beyond these factors, specific design choices and other variables further contribute to the complexity and scale of training computation within AI.
 Computation is measured in total petaFLOP, which is 10¹⁵ floating-point operations, estimated from the AI literature, albeit with some uncertainty.

 Source: Epoch AI (2025) – with major processing by Our World in Data. Last updated March 12, 2025. Next expected update May 2026.

 Explore charts that include this data:

 Computation used to train notable AI systems, by affiliation of researchers 
 Training computation vs. parameters in notable AI systems, by domain 
 Training computation vs. parameters in notable AI systems, by researcher affiliation 
 Training computation vs. dataset size in notable AI systems, by researcher affiliation 
 Sources and processing

 This data is based on the following sources

 Epoch AI – Parameter, Compute and Data Trends in Machine Learning

 Retrieved on March 7, 2026, from https://epoch.ai/mlinputs/visualization

 Citation: This is the citation of the original data obtained from the source, prior to any processing or adaptation by Our World in Data. To cite data downloaded f

... (truncated, 9 KB total)
Resource ID: 87ae03cc6eaca6c6 | Stable ID: MTRhNThkOT