Longterm Wiki

Training Language Models to Follow Instructions with Human Feedback

paper

Authors

Long Ouyang·Jeff Wu·Xu Jiang·Diogo Almeida·Carroll L. Wainwright·Pamela Mishkin·Chong Zhang·Sandhini Agarwal·Katarina Slama·Alex Ray·John Schulman·Jacob Hilton·Fraser Kelton·Luke Miller·Maddie Simens·Amanda Askell·Peter Welinder·Paul Christiano·Jan Leike·Ryan Lowe

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Data Status

Not fetched

Abstract

Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.

Cited by 8 pages

| Page | Type | Quality |
| --- | --- | --- |
| Dense Transformers | Concept | 58.0 |
| AI Safety Defense in Depth Model | Analysis | 69.0 |
| AI Safety Intervention Effectiveness Matrix | Analysis | 73.0 |
| OpenAI | Organization | 62.0 |
| AI Alignment | Approach | 91.0 |
| Reward Modeling | Approach | 55.0 |
| RLHF | Capability | 63.0 |
| Optimistic Alignment Worldview | Concept | 91.0 |

Cached Content Preview

HTTP 200 · Fetched Feb 26, 2026 · 98 KB

# Training language models to follow instructions with human feedback

Long Ouyang, Jeff Wu∗, Xu Jiang∗, Diogo Almeida∗, Carroll L. Wainwright∗, Pamela Mishkin∗, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano∗†, Jan Leike∗, Ryan Lowe∗

OpenAI

∗Primary authors. This was a joint project of the OpenAI Alignment team. RL and JL are the team leads. Corresponding author: lowe@openai.com.
†Work done while at OpenAI. Current affiliations: AA: Anthropic; PC: Alignment Research Center.

###### Abstract

Making language models bigger does not inherently make them better at following a user’s intent.
For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user.
In other words, these models are not _aligned_ with their users.
In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback.
Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning.
We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback.
We call the resulting models _InstructGPT_.
In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters.
Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets.
Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.
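The second stage the abstract describes, fine-tuning a reward model on human rankings of model outputs, is commonly trained with a pairwise ranking loss: the reward assigned to the preferred output is pushed above the reward of the rejected one. A minimal sketch of that loss, assuming scalar rewards and a Bradley-Terry-style formulation (the function name is illustrative, not OpenAI's code):

```python
import math

def pairwise_ranking_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise ranking loss for reward modeling from human preferences.

    loss = -log(sigmoid(r_chosen - r_rejected))

    The loss shrinks as the reward margin between the human-preferred
    output and the rejected output grows, and equals log(2) when the
    model is indifferent (zero margin).
    """
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With no margin the loss is log 2 ≈ 0.693; a positive margin (preferred output scored higher) drives it toward zero, which is what pushes the reward model to agree with the labelers' rankings.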

![Refer to caption](https://ar5iv.labs.arxiv.org/html/2203.02155/assets/x1.png)Figure 1: Human evaluations of various models on our API prompt distribution, evaluated by how often outputs from each model were preferred to those from the 175B SFT model.
Our InstructGPT models (PPO-ptx) as well as its variant trained without pretraining mix (PPO) significantly outperform the GPT-3 baselines (GPT, GPT prompted); outputs from our 1.3B PPO-ptx model are preferred to those from the 175B GPT-3. Error bars throughout the paper are 95% confidence intervals.
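The caption reports how often each model's outputs were preferred, with 95% confidence intervals as error bars. A minimal sketch of one common way to compute such an interval, assuming a normal approximation to the binomial proportion (the paper does not specify its exact method; the function name is illustrative):

```python
import math

def win_rate_ci(wins: int, total: int, z: float = 1.96):
    """Preference win rate with a normal-approximation 95% CI.

    Given `wins` pairwise comparisons won out of `total`, returns
    (rate, lower, upper), clamped to [0, 1].
    """
    p = wins / total
    half_width = z * math.sqrt(p * (1.0 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)
```

For example, 60 wins out of 100 comparisons gives a win rate of 0.6 with an interval of roughly ±0.096; narrower intervals require more labeled comparisons.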

## 1 Introduction

Large language models (LMs) can be “prompted” to perform a range of natural language processing (NLP) tasks, given some examples of the task as input. However, these models often express unintended behaviors such as making up facts, generating biased or toxic text, or simply not following user instructions (Bender et al.,, [2021](https://ar5iv.labs.arxiv.org/html/2

... (truncated, 98 KB total)
Resource ID: 1098fc60be7ca2b0 | Stable ID: MjZiZTdhMW