Can AI Scaling Continue Through 2030? (Epoch AI)
Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Epoch AI
Epoch AI is a leading research organization tracking AI trends; this analysis is widely cited in discussions about future AI capabilities trajectories and is relevant to forecasting transformative AI timelines.
Metadata
Importance: 72/100 | blog post | analysis
Summary
Epoch AI analyzes the key constraints and bottlenecks that could limit continued AI scaling through 2030, examining factors such as compute availability, energy infrastructure, data availability, and algorithmic progress. The analysis assesses whether current scaling trends in large language models and other AI systems can realistically be sustained over the next several years.
Key Points
- Examines multiple potential bottlenecks to AI scaling: compute supply chains, energy infrastructure, training data exhaustion, and financial constraints.
- Assesses whether current exponential growth in training compute can realistically continue at historical rates through 2030.
- Considers algorithmic efficiency improvements as a potential offset to hardware and data limitations.
- Evaluates geopolitical and supply chain risks (e.g., chip manufacturing) that could constrain scaling trajectories.
- Provides quantitative projections and scenario analysis for AI capabilities development over the next several years.
Cited by 7 pages
| Page | Type | Quality |
|---|---|---|
| The Case For AI Existential Risk | Argument | 66.0 |
| Is Scaling All You Need? | Crux | 42.0 |
| AGI Development | -- | 52.0 |
| Novel / Unknown Approaches | Capability | 53.0 |
| AI Risk Critical Uncertainties Model | Crux | 71.0 |
| Epoch AI | Organization | 51.0 |
| Long-Timelines Technical Worldview | Concept | 91.0 |
Cached Content Preview
HTTP 200 | Fetched Apr 9, 2026 | 98 KB
Can AI scaling continue through 2030? | Epoch AI
Introduction
In recent years, the capabilities of AI models have significantly improved. Our research suggests that growth in computational resources accounts for a significant portion of these performance improvements.1 The consistent and predictable improvements from scaling have led AI labs to aggressively expand the scale of training, with training compute growing at a rate of approximately 4x per year.
To put this 4x annual growth in AI training compute into perspective, it outpaces even some of the fastest technological expansions in recent history. It surpasses the peak growth rates of mobile phone adoption (2x/year, 1980-1987), solar energy capacity installation (1.5x/year, 2001-2010), and human genome sequencing (3.3x/year, 2008-2015).
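To make the comparison concrete, here is a minimal Python sketch that compounds each of the growth rates quoted above over a common horizon. The seven-year window is an arbitrary illustrative choice, not a figure from the report:

```python
# Compare how different annual growth multipliers compound over a
# fixed horizon. The rates are the peak figures quoted above; the
# 7-year horizon is chosen purely for illustration.

rates = {
    "AI training compute": 4.0,
    "Human genome sequencing": 3.3,
    "Mobile phone adoption": 2.0,
    "Solar capacity installation": 1.5,
}

YEARS = 7
for name, rate in rates.items():
    total = rate ** YEARS  # cumulative growth factor after YEARS years
    print(f"{name}: {rate}x/year -> ~{total:,.0f}x over {YEARS} years")
```

At these rates, training compute grows roughly 16,000x over seven years, versus about 4,000x for genome sequencing, 128x for mobile adoption, and 17x for solar capacity, which is why a 4x annual multiplier dominates even historically fast expansions.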
Here, we examine whether it is technically feasible for the current rapid pace of AI training scaling—approximately 4x per year—to continue through 2030. We investigate four key factors that might constrain scaling: power availability, chip manufacturing capacity, data scarcity, and the “latency wall”, a fundamental speed limit imposed by unavoidable delays in AI training computations.
Our analysis incorporates the expansion of production capabilities, investment, and technological advancements. This includes, among other factors, examining planned growth in advanced chip packaging facilities, construction of additional power plants, and the geographic spread of data centers to leverage multiple power networks. To account for these changes, we incorporate projections from various public sources: semiconductor foundries’ planned expansions, electricity providers’ capacity growth forecasts, other relevant industry data, and our own research.
We find that training runs of 2e29 FLOP will likely be feasible by the end of this decade. In other words, by 2030 it will very likely be possible to train models that exceed GPT-4 in scale to the same degree that GPT-4 exceeds GPT-2 in scale.2 If pursued, we might see advances in AI by the end of the decade as drastic as the difference between the rudimentary text generation of GPT-2 in 2019 and the sophisticated problem-solving abilities of GPT-4 in 2023.
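The arithmetic behind this claim can be sketched in a few lines. Note that the GPT-2 and GPT-4 training-compute figures below are rough outside estimates, not numbers stated in this excerpt:

```python
# Back-of-the-envelope check of the 2e29 FLOP claim. The GPT-2 and
# GPT-4 figures are assumed rough estimates for illustration only.

GPT2_FLOP = 1.5e21   # assumed estimate, GPT-2 (2019)
GPT4_FLOP = 2e25     # assumed estimate, GPT-4 (2023)
TARGET_FLOP = 2e29   # scale the report finds feasible by 2030

print(f"GPT-4 / GPT-2 scale-up:    ~{GPT4_FLOP / GPT2_FLOP:.0e}x")
print(f"2030 run / GPT-4 scale-up: ~{TARGET_FLOP / GPT4_FLOP:.0e}x")

# Sanity check against the 4x/year trend: seven years of 4x growth
# from GPT-4's 2023 scale lands slightly above the 2e29 target.
print(f"2e25 * 4**7 = {GPT4_FLOP * 4**7:.1e} FLOP")
```

Both scale-ups come out near 10,000x, and extrapolating the 4x/year trend from 2023 yields roughly 3e29 FLOP by 2030, consistent with the report's finding that a 2e29 FLOP run is within reach.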
Whether AI developers will actually pursue this level of scaling depends on their willingness to invest hundreds of billions of dollars in AI expansion over the coming years. While we briefly discuss the economics of AI investment later, a thorough analysis of investment decisions is beyond the scope of this report.
Figure 1: Estimates of the scale constraints imposed by the most important bottlenecks to scale. Each estimate is based on historical projections. The dark shaded box corresponds to an interquartile range and the light shaded region to an 80% confidence interval.
For each bottleneck we offer a conservative estimate of the relevant supply and the largest
... (truncated, 98 KB total)
Resource ID: 9587b65b1192289d | Stable ID: NTZlMGFiY2