Epoch AI Brief, October 2025 (https://epochai.substack.com/p/the-epoch-ai-brief-october-2025)
The Epoch AI Brief - October 2025
Epoch AI
Report on decentralized training, new Epoch Capabilities Index for tracking AI progress, FrontierMath evaluations of leading models, revenue insights on OpenAI, and hiring for two open positions.
Epoch AI & various writers · Oct 31, 2025
Hi! In this edition of the Epoch AI brief:
We published a report analyzing whether decentralized training could help solve power bottlenecks, and found that 10 GW training runs spanning thousands of kilometers are feasible.
We launched the Epoch Capabilities Index (ECI) , a new unified metric that combines scores from dozens of AI benchmarks into a single “general capability” scale to track long-term AI progress trends.
We’ve published four new Data Insights covering open-weight vs SotA models on the ECI, OpenAI’s rapid revenue growth, how OpenAI spends its compute, and steady AI capability improvements.
We’ve published four new Gradient Updates, including analysis of FrontierMath difficulty bounds, OpenAI’s unprecedented revenue projections, and potential deployment of digital workers.
We benchmarked the most compute-intensive settings of leading LLMs on FrontierMath and released many interviews with the benchmark’s contributors.
We’re hiring a new Researcher for our Data Team to accelerate our work studying the future of AI, and a Lead Editor to help communicate our work.
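The ECI item above describes combining scores from dozens of benchmarks into one "general capability" scale. As a minimal illustration of the idea only — ECI’s actual methodology is described in Epoch’s own report, and all benchmark names and scores below are made up — one naive approach is to z-score each benchmark across models and average:

```python
# Illustrative sketch, NOT ECI's actual method: normalize each benchmark's
# scores across models (z-scores), then average per model to get a single
# capability index. Benchmark names and scores are hypothetical.
from statistics import mean, pstdev

def combined_index(scores_by_benchmark: dict[str, dict[str, float]]) -> dict[str, float]:
    """scores_by_benchmark maps benchmark -> {model: score}.
    Returns model -> averaged z-score across benchmarks."""
    z: dict[str, list[float]] = {}
    for bench_scores in scores_by_benchmark.values():
        mu = mean(bench_scores.values())
        sigma = pstdev(bench_scores.values()) or 1.0  # guard against zero spread
        for model, s in bench_scores.items():
            z.setdefault(model, []).append((s - mu) / sigma)
    return {m: mean(vals) for m, vals in z.items()}

# Hypothetical data: two benchmarks, two models.
scores = {
    "math_bench": {"model_a": 0.9, "model_b": 0.5},
    "code_bench": {"model_a": 0.7, "model_b": 0.3},
}
idx = combined_index(scores)
print(idx)
```

A real unified index has to handle missing model-benchmark pairs and benchmark saturation, which is why a simple average like this is only a starting point.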
Publications & Announcements
Could decentralized training solve AI’s power problem?
Conventional wisdom in AI holds that large-scale pretraining must happen in massive contiguous datacenter campuses. But is this true? Our research suggests that conducting 10 GW training runs across 23 sites, linked by a network spanning 4,800 km, is feasible and could help alleviate power bottlenecks.
While this approach requires substantial network bandwidth (over 25x that of the highest-capacity transatlantic fiber cable for training a 72-trillion-parameter model), the incremental cost is manageable, at an estimated 0.5% of datacenter construction costs.
The bottom line is that large decentralized training runs are perfectly possible without a large increase in either training time or budget. However, distributed clusters have many downsides, and we expect AI companies will prefer to scale single campuses as far as they can, resorting to distributed clusters only to go beyond the scale that utilities are willing to supply through the grid.
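The bandwidth requirement cited above comes from the full report. As a rough sketch of the shape of that calculation — not the report’s actual model — one can compute the aggregate cross-site bandwidth needed to exchange a full set of gradients between sites. Everything below except the 72-trillion-parameter count is a hypothetical assumption (bf16 gradients, one synchronization per minute):

```python
# Back-of-envelope: bandwidth needed to ship one full copy of the
# gradients between sites every sync interval. Inputs other than the
# parameter count are illustrative assumptions, not the report's figures.

def required_bandwidth_tbps(n_params: float, bytes_per_param: float,
                            sync_interval_s: float) -> float:
    """Aggregate bandwidth in Tbps to exchange n_params gradients
    (bytes_per_param each) once every sync_interval_s seconds."""
    bits = n_params * bytes_per_param * 8
    return bits / sync_interval_s / 1e12

# 72T parameters (as in the report), 2-byte (bf16) gradients,
# a hypothetical one synchronization every 60 seconds.
bw = required_bandwidth_tbps(72e12, 2, 60)
print(f"{bw:.0f} Tbps")  # prints "19 Tbps"
```

The real requirement depends heavily on the synchronization scheme (e.g. how often sites exchange updates and whether gradients are compressed), which is why the report’s 25x-transatlantic-cable figure can differ greatly from a naive estimate like this one.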
Find detailed analysis, calculations, and sources in the full article.