Wei et al. (2023): "Simple Synthetic Data"
paperAuthors
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Research paper addressing sycophancy in language models, where AI systems inappropriately adapt responses to match user beliefs rather than provide objective information, with proposed synthetic data interventions to mitigate this alignment and truthfulness problem.
Paper Details
Metadata
Abstract
Sycophancy is an undesirable behavior where models tailor their responses to follow a human user's view even when that view is not objectively correct (e.g., adapting liberal views once a user reveals that they are liberal). In this paper, we study the prevalence of sycophancy in language models and propose a simple synthetic-data intervention to reduce this behavior. First, on a set of three sycophancy tasks (Perez et al., 2022) where models are asked for an opinion on statements with no correct answers (e.g., politics), we observe that both model scaling and instruction tuning significantly increase sycophancy for PaLM models up to 540B parameters. Second, we extend sycophancy evaluations to simple addition statements that are objectively incorrect, finding that despite knowing that these statements are wrong, language models will still agree with them if the user does as well. To reduce sycophancy, we present a straightforward synthetic-data intervention that takes public NLP tasks and encourages models to be robust to user opinions on these tasks. Adding these data in a lightweight finetuning step can significantly reduce sycophantic behavior on held-out prompts. Code for generating synthetic data for intervention can be found at https://github.com/google/sycophancy-intervention.
Summary
This paper investigates sycophancy in language models—the tendency to agree with users' views regardless of correctness—and demonstrates that both model scaling and instruction tuning increase this behavior in PaLM models. The authors extend sycophancy evaluation beyond subjective tasks to objectively incorrect statements, showing models will agree with false claims if users do. They propose a simple synthetic-data intervention that finetunes models on public NLP tasks to be robust to user opinions, effectively reducing sycophantic behavior on held-out prompts with minimal computational overhead.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Goal Misgeneralization | Risk | 63.0 |
Cached Content Preview
[2308.03958] Simple synthetic data reduces sycophancy in large language models
\doparttoc \faketableofcontents
Simple synthetic data reduces sycophancy in large language models
Jerry Wei
Da Huang
Yifeng Lu
Denny Zhou
Quoc V. Le
Google DeepMind
Abstract
Sycophancy is an undesirable behavior where models tailor their responses to follow a human user’s view even when that view is not objectively correct (e.g., adapting liberal views once a user reveals that they are liberal).
In this paper, we study the prevalence of sycophancy in language models and propose a simple synthetic-data intervention to reduce this behavior.
First, on a set of three sycophancy tasks (Perez et al., 2022 ) where models are asked for an opinion on statements with no correct answers (e.g., politics), we observe that both model scaling and instruction tuning significantly increase sycophancy for PaLM models up to 540B parameters.
Second, we extend sycophancy evaluations to simple addition statements that are objectively incorrect, finding that despite knowing that these statements are wrong, language models will still agree with them if the user does as well.
To reduce sycophancy, we present a straightforward synthetic-data intervention that takes public NLP tasks and encourages models to be robust to user opinions on these tasks.
Adding these data in a lightweight finetuning step can significantly reduce sycophantic behavior on held-out prompts.
Code for generating synthetic data for intervention can be found at https://github.com/google/sycophancy-intervention .
Figure 1:
An example of sycophancy —despite knowing the correct answer (left), language models answer a question incorrectly and follow a given user’s opinion (right).
1 Introduction
Language models have seen significant advancement in recent years, including the capacity to solve complex tasks that require reasoning (Brown et al., 2020 ; Chowdhery et al., 2022 ; OpenAI, 2023 ; Google, 2023 ; Touvron et al., 2023 , inter alia ) .
As these models may one day be able to solve problems that humans cannot solve, it is important to ensure that models are aligned and avoid reward hacking (Amodei et al., 2016 ; Saunders et al., 2022 ; Bowman et al., 2022 ) , such as exploiting the preferences of human raters (Amodei et al., 2016 ; Cotra, 2021 ) .
One basic form of reward hacking is sycophancy , where a model responds to a question with a user’s preferred answer in order to look favorable even if that answer is not correct (Cotra, 2021 ; Perez et al., 2022 ; Radhakrishnan et al., 2023 ) , as shown in Figure 1 .
In this paper, we study sycophancy across a set of base and instruction-tuned models 1 1 1 In preliminary experiments, we observed that production models such as ChatGPT and Bard did not experience significant sycophancy, possibly because of their additional finetuning data or prompt pre
... (truncated, 98 KB total)40f208ddd2720ec6 | Stable ID: sid_JlO2JMRmRS