[2109.10862] Recursively Summarizing Books with Human Feedback
arxiv.org/abs/2109.10862
# Recursively Summarizing Books with Human Feedback
Jeff Wu
Long Ouyang∗
Daniel M. Ziegler∗
Nisan Stiennon∗
Ryan Lowe∗
Jan Leike∗
Paul Christiano∗
OpenAI
This was a joint project of the OpenAI Alignment team. JW and LO contributed equally. DMZ, NS, and RL were full-time contributors for most of the duration. JL and PC managed the team. Corresponding author: jeffwu@openai.com.
###### Abstract
A major challenge for scaling machine learning is training models to perform tasks that are very difficult or time-consuming for humans to evaluate. We present progress on this problem on the task of abstractive summarization of entire fiction novels. Our method combines learning from human feedback with recursive task decomposition:
we use models trained on smaller parts of the task to assist humans in giving feedback on the broader task.
We collect a large volume of demonstrations and comparisons from human labelers, and fine-tune GPT-3 using behavioral cloning and reward modeling to do summarization recursively.
At inference time, the model first summarizes small sections of the book and then recursively summarizes these summaries to produce a summary of the entire book. Our human labelers are able to supervise and evaluate the models quickly, despite not having read the entire books themselves.
Our resulting model generates sensible summaries of entire books, even matching the quality of human-written summaries in a few cases (∼5% of books).
We achieve state-of-the-art results on the recent BookSum dataset for book-length summarization.
A zero-shot question-answering model using these summaries achieves competitive results on the challenging NarrativeQA benchmark for answering questions about books and movie scripts. We release datasets of samples from our model; see [https://openaipublic.blob.core.windows.net/recursive-book-summ/website/index.html](https://openaipublic.blob.core.windows.net/recursive-book-summ/website/index.html).
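The recursive procedure described above is straightforward to sketch. The following is a minimal illustration, not the paper's released code: `summarize` stands in for a call to the fine-tuned model, and the character-based chunking and `max_chars` limit are simplifying assumptions in place of the paper's actual passage decomposition.

```python
# Minimal sketch of recursive summarization at inference time.
# Assumptions: `summarize` is a call to a fine-tuned summarization
# model, and chunking by character count approximates splitting a
# book into passages that fit the model's context window.

def chunk(text: str, max_chars: int = 8000) -> list[str]:
    """Split text into contiguous pieces small enough to summarize."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize(passage: str) -> str:
    """Placeholder for the fine-tuned model; returns a short summary."""
    raise NotImplementedError

def summarize_book(text: str, max_chars: int = 8000) -> str:
    """Summarize chunks, then recursively summarize the concatenated
    chunk summaries, until the whole input fits in a single call.
    Terminates as long as summaries are shorter than their inputs."""
    if len(text) <= max_chars:
        return summarize(text)
    partials = [summarize(piece) for piece in chunk(text, max_chars)]
    return summarize_book("\n".join(partials), max_chars)
```

This tree-structured decomposition is what lets human labelers supervise each step quickly: each `summarize` call sees only a bounded amount of text, so no labeler needs to read the whole book.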
### 1 Introduction
To train an ML model on a new task, we need a training signal that tells the model which behaviors are better and which are worse. For some tasks, like playing a video game, this training signal can be calculated automatically.
However, for many useful tasks an accurate training signal can only be provided via a human in the loop. For example, humans can provide demonstrations of the correct behavior (Bain and Sammut, [1995](https://ar5iv.labs.arxiv.org/html/2109.10862#bib.bib5)) or compare two outputs from the model being trained (Christiano et al., [2017](https://ar5iv.labs.arxiv.org/html/2109.10862#bib.bib12)), and this data is used to train the model.
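As a rough sketch of how pairwise comparisons become a training signal, a reward model can be fit so that the human-preferred output scores higher than the rejected one, as in Christiano et al. (2017). The `reward_model` callable and tensor shapes below are hypothetical stand-ins, not the paper's implementation:

```python
import torch.nn.functional as F

def comparison_loss(reward_model, prompts, preferred, rejected):
    """Pairwise loss for reward modeling from human comparisons.
    The model is trained so that the probability the preferred output
    beats the rejected one, sigmoid(r_pref - r_rej), is high."""
    r_pref = reward_model(prompts, preferred)  # reward scores, shape (batch,)
    r_rej = reward_model(prompts, rejected)    # reward scores, shape (batch,)
    return -F.logsigmoid(r_pref - r_rej).mean()
```

A policy can then be optimized against the learned reward, which is the role reward modeling plays alongside behavioral cloning in this paper.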
In this paper we focus on tasks that are difficult for humans to supervise or evaluate, either because they take a lot of time or because evaluating them requires specialized knowledge and expertise. For example, imagine training a model…

*(cached content truncated; 98 KB total)*