Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Our World in Data

Data Status

Full text fetched Dec 28, 2025

Summary

Our World in Data provides analysis of GPU computational performance, measuring calculations per dollar for AI training hardware. The data focuses on GPUs used in large AI models, adjusted for inflation.

Key Points

  • Measures GPU computational performance in FLOP/s per inflation-adjusted dollar
  • Focuses on GPUs used in major AI model training
  • Recognizes importance of both hardware and software improvements

Review

This source analyzes GPU price-performance, measuring how many floating-point operations per second (FLOP/s) can be achieved per dollar of hardware investment. By tracking GPUs specifically used to train large AI models (over 1 billion parameters), the research offers insight into the evolving landscape of AI computational infrastructure. The methodology is notably careful in acknowledging that raw hardware metrics tell only part of the story: software and algorithmic advances can deliver substantial performance improvements independent of hardware upgrades. By standardizing on 32-bit precision measurements, while noting that real-world performance may differ when lower-precision calculations are used, the source provides a balanced and forward-looking perspective on AI computational capabilities.
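The metric described above can be sketched as a small calculation: peak FLOP/s divided by the GPU's release price converted into base-year dollars via a price index. The GPU figures and CPI values below are illustrative assumptions, not data from the source.

```python
def flops_per_dollar(peak_flops, release_price, cpi_release, cpi_base):
    """FLOP/s per inflation-adjusted dollar.

    Converts the release price into base-year dollars using the ratio of
    consumer price index (CPI) values, then divides peak FLOP/s by it.
    """
    price_in_base_year_dollars = release_price * (cpi_base / cpi_release)
    return peak_flops / price_in_base_year_dollars

# Hypothetical example: a GPU with 30 TFLOP/s (FP32 peak) priced at $1,500
# at release, when CPI was 260, expressed in base-year dollars (CPI 300).
metric = flops_per_dollar(30e12, 1500, cpi_release=260, cpi_base=300)
print(f"{metric:.2e} FLOP/s per inflation-adjusted dollar")
```

Inflation adjustment matters here because comparing a 2015 GPU's sticker price directly with a 2024 one would overstate the cost decline; normalizing to one base year isolates the hardware trend itself.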
Resource ID: 84cf97372586911e | Stable ID: Y2M0N2QyNz