Longterm Wiki

Kaplan et al. (2020)

paper

Authors

Jared Kaplan·Sam McCandlish·Tom Henighan·Tom B. Brown·Benjamin Chess·Rewon Child·Scott Gray·Alec Radford·Jeffrey Wu·Dario Amodei

Credibility Rating

3/5

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Foundational empirical study establishing power-law scaling relationships for language model loss across model size, dataset size, and compute, which is critical for understanding AI capability development and resource requirements in AI safety research.

Paper Details

Citations
7,388
546 influential
Year
2020

Metadata

arXiv preprint · primary source

Abstract

We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.
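The abstract's point about allocating a fixed compute budget can be made concrete with the paper's reported finding that the compute-optimal parameter count grows roughly as N ∝ C^0.73. A minimal sketch under that assumption (the reference model size and budget below are invented for illustration, not taken from the paper):

```python
# Hedged illustration of compute-optimal allocation: Kaplan et al. report
# that as the compute budget C grows, the optimal parameter count scales
# roughly as N ∝ C^0.73, so most additional compute should go into model
# size rather than more data or more steps. The reference point (n_ref,
# c_ref) is made up for this demo.
def optimal_params(c_pf_days, n_ref=1.3e9, c_ref=1.0, exponent=0.73):
    """Scale a reference model size to a new compute budget."""
    return n_ref * (c_pf_days / c_ref) ** exponent

# 10x more compute -> roughly 5.4x more parameters (10**0.73 ~ 5.37)
ratio = optimal_params(10.0) / optimal_params(1.0)
print(round(ratio, 2))  # → 5.37
```

Because the exponent is close to 1, compute growth translates mostly into parameter growth, which is the quantitative basis for the abstract's "train very large models on modest data" prescription.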

Summary

Kaplan et al. (2020) empirically characterize scaling laws for language model performance, demonstrating that cross-entropy loss follows power-law relationships with model size, dataset size, and compute budget, with some trends spanning more than seven orders of magnitude. The study finds that architectural details such as width and depth have minimal impact within a wide range, while overfitting and training speed follow simple, predictable equations. Crucially, larger models are significantly more sample-efficient, so optimally compute-efficient training involves training very large models on relatively modest datasets and stopping well before convergence.
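The power-law form the summary describes, L(N) = (N_c / N)^α, can be recovered from loss measurements by linear regression in log-log space. A minimal sketch in Python, using the paper's reported size-scaling exponent α_N ≈ 0.076 and scale N_c ≈ 8.8e13 to generate synthetic data; the fitting code itself is illustrative, not from the paper:

```python
import numpy as np

def fit_power_law(n, loss):
    """Fit L(N) = (N_c / N)**alpha via least squares in log-log space.

    log L = alpha * log N_c - alpha * log N, so the slope of the
    log-log fit is -alpha and the intercept is alpha * log N_c.
    Returns (alpha, N_c).
    """
    slope, intercept = np.polyfit(np.log(n), np.log(loss), 1)
    alpha = -slope
    n_c = np.exp(intercept / alpha)
    return alpha, n_c

# Synthetic losses generated from the paper's reported constants
# (alpha_N ~ 0.076, N_c ~ 8.8e13); real runs would supply measured losses.
n = np.logspace(5, 9, 20)            # model sizes, 1e5 .. 1e9 parameters
loss = (8.8e13 / n) ** 0.076
alpha, n_c = fit_power_law(n, loss)
print(round(alpha, 3))               # → 0.076
```

On clean power-law data the fit recovers the exponent exactly; on real measurements the same regression gives the slope of the trend line seen in the paper's log-log plots.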

Cited by 8 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 98 KB
[2001.08361] Scaling Laws for Neural Language Models

Scaling Laws for Neural Language Models

 
 
 
Jared Kaplan∗, Johns Hopkins University and OpenAI (jaredk@jhu.edu)
Sam McCandlish∗, OpenAI (sam@openai.com)
Tom Henighan, OpenAI (henighan@openai.com)
Tom B. Brown, OpenAI (tom@openai.com)
Benjamin Chess, OpenAI (bchess@openai.com)
Rewon Child, OpenAI (rewon@openai.com)
Scott Gray, OpenAI (scott@openai.com)
Alec Radford, OpenAI (alec@openai.com)
Jeffrey Wu, OpenAI (jeffwu@openai.com)
Dario Amodei, OpenAI (damodei@openai.com)

∗Equal contribution.

Contributions: Jared Kaplan and Sam McCandlish led the research. Tom Henighan contributed the LSTM experiments. Tom Brown, Rewon Child, Scott Gray, and Alec Radford developed the optimized Transformer implementation. Jeff Wu, Benjamin Chess, and Alec Radford developed the text datasets. Dario Amodei provided guidance throughout the project.

 
 Abstract

We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.

 
 

 
 
 1 Introduction

 
Language provides a natural domain for the study of artificial intelligence, as the vast majority of reasoning tasks can be efficiently expressed and evaluated in language, and the world’s text provides a wealth of data for unsupervised learning via generative modeling. Deep learning has recently seen rapid progress in language modeling, with state-of-the-art models [RNSS18, DCLT18, YDY+19, LOG+19, RSR+19] approaching human-level performance on many specific tasks [WPN+19], including the composition of coherent multi-paragraph prompted text samples [RWC+19].

 
 
One might expect language modeling performance to depend on model architecture, the size of neural models, the computing power used to train them, and the data available for this training process. In this work we will empirically investigate the dependence of language modeling loss on all of these factors, focusing on the Transformer architecture [VSP+17, LSP+18]. The high ceiling and low floor for performance on language tasks allow us to study trends over more than seven orders of magnitude in scale.

 
 
 Throughout we will observe precise power-law sc

... (truncated, 98 KB total)
Resource ID: 85f66a6419d173a7 | Stable ID: sid_AnJXaIj5sg