Longterm Wiki

Epoch AI OpenAI compute spend

web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Epoch AI

Useful reference for understanding the financial and resource scale of frontier AI development; relevant to governance discussions about compute as a lever for AI oversight and policy.

Metadata

Importance: 55/100 · Tags: organizational report, analysis

Summary

Epoch AI estimates OpenAI spent approximately $5 billion on R&D compute and $2 billion on inference compute in 2024. The analysis suggests that the majority of compute expenditure went toward experimental and unreleased model training rather than deployed products, highlighting the scale of frontier AI development investments.

Key Points

  • OpenAI's total 2024 compute spend estimated at ~$7 billion, with $5B for R&D and $2B for inference.
  • Most R&D compute was likely used for experimental or unreleased model training, not publicly deployed systems.
  • The inference-to-R&D ratio (~2:5) reflects heavy investment in capability development relative to current deployment.
  • Analysis provides rare quantitative insight into the resource scale of a leading frontier AI lab.
  • Large compute expenditures signal continued rapid scaling efforts at OpenAI despite high costs.

Review

The Epoch AI analysis provides a comprehensive breakdown of OpenAI's computational expenditure in 2024, revealing significant investments in cloud computing infrastructure. Drawing on reports from The Information and The New York Times, the researchers estimated OpenAI's total compute spending at approximately $7 billion, with $5 billion dedicated to research and development and $2 billion to inference compute.

The methodology involves detailed estimates of training compute costs for models like GPT-4.5, GPT-4o, and Sora Turbo, using confidence intervals and assumptions about cluster sizes, training durations, and GPU costs. The analysis highlights that most of OpenAI's compute resources were likely allocated to experimental and unreleased model training runs rather than the final training runs of released models. This offers rare transparency into the computational resources required for cutting-edge AI development and underscores the scale of investment needed to maintain leadership in frontier AI.

Cited by 2 pages

Page                Type      Quality
Dense Transformers  Concept   58.0
Compute Thresholds  Concept   91.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 7 KB
Most of OpenAI’s 2024 compute went to experiments | Epoch AI

 OpenAI spent billions of dollars on compute in 2024, in the form of renting cloud compute from Microsoft. This reportedly included around $5 billion in research and development compute, including all training and research compute, and around $2 billion in inference compute.

 Based on our compute and cost estimates for OpenAI’s released models from Q2 2024 through Q1 2025, the majority of OpenAI’s R&D compute in 2024 was likely allocated to research, experimental training runs, or training runs for unreleased models, rather than the final, primary training runs of released models like GPT-4.5, GPT-4o, and o3.

 
Epoch's work is free to use, distribute, and reproduce provided the source and authors are credited under the Creative Commons BY license.

 We estimate the breakdown of OpenAI’s cloud compute spending in 2024, based on reports breaking down OpenAI’s total compute expenses, as well as our estimates of the cloud compute cost to train the models OpenAI published or released from Q2 of 2024 through the end of Q1 2025. OpenAI did not own significant amounts of AI compute in 2024, so this spending represents all of OpenAI’s compute resources in 2024. For these released models, we estimate the cost of the final training run (i.e. the training run that produced the version OpenAI released), not including experimental runs (often called “derisking runs”) used to prepare for the final training run.

Analysis

 OpenAI’s overall compute expenses 

Data for OpenAI’s overall compute spending in 2024 come from Epoch’s AI Companies dataset, ultimately via reports in The Information and The New York Times. These reports indicate that OpenAI spent $3 billion on training compute, $1.8 billion on inference compute, and $1 billion on research compute amortized over “multiple years”. For the purpose of this visualization, we estimate that the amortization schedule for research compute was two years, for $2 billion in research compute expenses incurred in 2024. These are OpenAI’s cloud compute expenses in 2024, not the upfront capital cost of the data centers: OpenAI currently relies on cloud companies to access compute.

 How exactly OpenAI distinguishes training and research compute is not known, so we group these expenses as $5 billion in total “R&D” (research and development) compute expenses.
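The arithmetic behind these figures can be sketched as follows. The dollar amounts come from the reports cited above; the two-year amortization schedule is Epoch's stated assumption, not a reported figure.

```python
# Reported OpenAI 2024 cloud compute expenses, in $ billions
# (per The Information and The New York Times, via Epoch AI).
training = 3.0            # training compute
inference = 1.8           # inference compute
research_amortized = 1.0  # research compute, as amortized per year

# Epoch assumes a two-year amortization schedule, so the research compute
# expense actually incurred in 2024 is twice the amortized annual figure.
amortization_years = 2
research_incurred = research_amortized * amortization_years  # $2B

# Training and research are grouped as a single "R&D" category.
rnd = training + research_incurred  # $5B
total = rnd + inference             # $6.8B, reported as ~$7B

print(f"R&D: ${rnd:.0f}B, inference: ${inference:.1f}B, total: ~${total:.1f}B")
```

This reproduces the ~$7B total and the $5B/$2B R&D-versus-inference split quoted in the summary.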

 Final training cost of released OpenAI models 

 For the training expenses of OpenAI models, we looked at OpenAI’s released or announced models from the end of Q1 2024 through the end of Q1 2025, as a rough heuristic to account for the delay between when a model was trained and its release/announcement date. Significant models from this time period include GPT-4.5, GPT-4o, o3 (December preview), and Sora Turbo.
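As a rough illustration of the kind of final-training-run cost estimate described here, cloud cost scales as cluster size × training duration × GPU-hour rental price. All numbers below are hypothetical placeholders chosen for illustration, not Epoch's estimates for any particular model.

```python
# Illustrative final-training-run cost model: cost ≈ GPUs × hours × $/GPU-hour.
# Every input is a hypothetical placeholder, not an Epoch figure.
cluster_gpus = 30_000      # assumed cluster size
training_days = 90         # assumed duration of the final run
usd_per_gpu_hour = 3.0     # assumed cloud rental rate

gpu_hours = cluster_gpus * training_days * 24
cost_usd = gpu_hours * usd_per_gpu_hour

print(f"Estimated final-run cost: ${cost_usd / 1e6:.0f}M")
```

In practice, Epoch's estimates attach confidence intervals to each of these inputs rather than treating them as point values.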

 We estimated the cost of the final training runs for these mo

... (truncated, 7 KB total)
Resource ID: e5457746f2524afb | Stable ID: sid_W1IF6KcU8q