Longterm Wiki

Anthropic's 2024 alignment faking study


Credibility Rating

4/5 — High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Anthropic

Data Status

Not fetched

Cited by 13 pages

| Page | Type | Quality |
|---|---|---|
| Situational Awareness | Capability | 67.0 |
| AI Accident Risk Cruxes | Crux | 67.0 |
| Redwood Research | Organization | 78.0 |
| AI Control | Safety Agenda | 75.0 |
| Corrigibility | Safety Agenda | 59.0 |
| Sleeper Agent Detection | Approach | 66.0 |
| Corrigibility Failure | Risk | 62.0 |
| Goal Misgeneralization | Risk | 63.0 |
| Mesa-Optimization | Risk | 63.0 |
| Power-Seeking AI | Risk | 67.0 |
| Scheming | Risk | 74.0 |
| Sharp Left Turn | Risk | 69.0 |
| Treacherous Turn | Risk | 67.0 |

Cached Content Preview

HTTP 200 · Fetched Feb 23, 2026 · 15 KB

# Alignment faking in large language models

Dec 18, 2024

[Read the paper](https://arxiv.org/abs/2412.14093)


Most of us have encountered situations where someone appears to share our views or values, but is in fact only pretending to do so—a behavior that we might call “alignment faking”. Alignment faking occurs in literature: Consider the character of Iago in Shakespeare’s _Othello_, who acts as if he’s the eponymous character’s loyal friend while subverting and undermining him. It occurs in real life: Consider a politician who claims to support a particular cause in order to get elected, only to drop it as soon as they’re in office.

Could AI models also display alignment faking? When models are trained using reinforcement learning, they’re rewarded for outputs that accord with certain pre-determined principles. But what if a model, via its prior training, has principles or preferences that conflict with what’s later rewarded in reinforcement learning? Imagine, for example, a model that learned early in training to adopt a partisan slant, but which is later trained to be politically neutral. In such a situation, a sophisticated enough model might “play along”, pretending to be aligned with the new principles—only later revealing that its original preferences remain.
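The training dynamic described above can be made concrete with a toy sketch. This is not Anthropic's actual training code: the "principle" here is just a hypothetical scoring function, and "training" is reduced to nudging a single scalar propensity in the direction of the reward signal.

```python
# Toy sketch of reward-based fine-tuning (hypothetical, for illustration only):
# outputs that accord with a pre-determined principle earn positive reward,
# and the model's behavior is nudged toward rewarded outputs.

def reward(output: str, principle) -> float:
    """Score an output against a pre-determined principle: +1 if it complies, -1 if not."""
    return 1.0 if principle(output) else -1.0

def train_step(propensity: float, output: str, principle, lr: float = 0.1) -> float:
    """Nudge the model's compliance propensity in the direction of the reward, clamped to [0, 1]."""
    return min(1.0, max(0.0, propensity + lr * reward(output, principle)))

# Hypothetical principle: outputs containing "please" count as compliant.
polite = lambda out: "please" in out

p = 0.5
p = train_step(p, "please proceed", polite)  # rewarded: propensity rises
p = train_step(p, "no", polite)              # penalized: propensity falls back
```

The alignment-faking worry is that a sufficiently sophisticated model could produce rewarded outputs during steps like these without its underlying preferences actually moving with the propensity being measured.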

This is a serious question for AI safety. As AI models become more capable and widely-used, we need to be able to rely on safety training, which nudges models away from harmful behaviors. If models can engage in alignment faking, it makes it harder to trust the outcomes of that safety training. A model might behave as though its preferences have been changed by the training—but might have been faking alignment all along, with its initial, contradictory preferences “locked in”.

A [new paper](https://arxiv.org/abs/2412.14093) from Anthropic’s Alignment Science team, in collaboration with [Redwood Research](https://www.redwoodresearch.org/), provides the first empirical example of a large language model engaging in alignment faking without having been explicitly—or even, as we argue in our paper, implicitly¹—trained or instructed to do so.

## Experimental setup

In the paper, we studied a model (specifically Claude 3 Opus, though we also ran some experiments with other models, including the June 2024 release of Claude 3.5 Sonnet) that had been trained to be [helpful, honest, and harmless](https://arxiv.org/abs/2112.00861).

Under normal circumstances, when asked to answer a potentially offensive question—for example, to provide a detailed description of a violent act—Claude will refuse. In our experiment, however, we placed the model in a new environment that led it to strategically halt its refusals for the sake of preserving its preferences.

We changed the situation in three important ways from the model’s prior helpful, honest, and harmless training.

First, we told the model that it was now being trained using reinforcement learning to _always_ comply with

... (truncated, 15 KB total)
Resource ID: c2cfd72baafd64a9 | Stable ID: ZTg0YjkzND