Longterm Wiki

Constitutional AI: Harmlessness from AI Feedback

paper

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Anthropic

Data Status

Full text fetched Dec 28, 2025

Summary

Anthropic introduces a novel approach to AI training called Constitutional AI, which uses self-critique and AI feedback to develop safer, more principled AI systems without extensive human labeling.

Key Points

  • Uses AI self-critique and feedback to train safer AI systems
  • Requires minimal human labeling of harmful outputs
  • Enables AI to engage with harmful queries transparently
  • Combines supervised learning and reinforcement learning techniques

Review

Constitutional AI represents a groundbreaking method for aligning AI systems with human values by leveraging AI's own capabilities for self-correction and improvement. The approach involves two key phases: a supervised learning phase where the AI generates self-critiques and revisions of its own outputs, and a reinforcement learning phase that uses AI-generated preference models to refine behavior. The methodology addresses critical AI safety challenges by creating a system that can engage with potentially harmful queries in a nuanced, principled manner, explaining objections rather than simply evading them. By using chain-of-thought reasoning and minimal human oversight, Constitutional AI offers a promising pathway to more precise behavioral control and transparency in AI systems. While innovative, the approach still requires further validation across diverse scenarios and potential edge cases to fully demonstrate its robustness and generalizability.
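The supervised phase is easiest to see as a loop: sample a response, ask the model to critique it against a constitutional principle, ask it to revise, and repeat. Below is a minimal Python sketch of that loop; `generate` is a hypothetical placeholder for any language-model completion call, and the principle text is illustrative rather than drawn from Anthropic's actual constitution.

```python
def generate(prompt: str) -> str:
    """Hypothetical language-model call; swap in a real client here."""
    raise NotImplementedError

# Illustrative critique principle (not Anthropic's actual constitution).
CRITIQUE_PRINCIPLE = (
    "Identify specific ways in which the assistant's last response "
    "is harmful, unethical, or otherwise objectionable."
)

def critique_and_revise(user_query: str, n_rounds: int = 2) -> str:
    """One supervised-phase trajectory: draft, critique, revise."""
    response = generate(user_query)
    for _ in range(n_rounds):
        critique = generate(
            f"Query: {user_query}\nResponse: {response}\n"
            f"Critique request: {CRITIQUE_PRINCIPLE}"
        )
        response = generate(
            f"Query: {user_query}\nResponse: {response}\n"
            f"Critique: {critique}\n"
            "Revision request: rewrite the response to address the critique."
        )
    return response  # revised responses become the SL finetuning targets
```

Note that only the revised responses, not the intermediate critiques, are used as finetuning targets for the original model in this phase.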

Cited by 11 pages

Cached Content Preview

HTTP 200 · Fetched Feb 26, 2026 · 2 KB

# Constitutional AI: Harmlessness from AI Feedback

Dec 15, 2022

[Read Paper](https://arxiv.org/abs/2212.08073)

## Abstract

As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
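As a rough illustration of the RLAIF step described in the abstract, the sketch below (reusing the hypothetical `generate` placeholder from the earlier sketch) shows how a feedback model can be asked to choose between two sampled responses under a principle, producing (prompt, chosen, rejected) triples for preference-model training.

```python
def label_preference(prompt: str, response_a: str, response_b: str,
                     principle: str) -> dict:
    """Ask a feedback model which response better follows a principle.

    Assumes the hypothetical `generate` placeholder defined earlier.
    """
    verdict = generate(
        f"Consider the following query:\n{prompt}\n\n"
        f"Response (A): {response_a}\n"
        f"Response (B): {response_b}\n\n"
        f"Which response better follows this principle: {principle}\n"
        "Answer with exactly 'A' or 'B'."
    )
    if verdict.strip().upper().startswith("A"):
        chosen, rejected = response_a, response_b
    else:
        chosen, rejected = response_b, response_a
    # These triples are the training data for the preference model,
    # whose score then serves as the reward signal in the RL phase.
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}
```

In the paper, the preference model trained on such AI-labeled comparisons replaces the human preference labels used in standard RLHF.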

## Policy Memo

[Constitutional AI Policy Memo](https://www-cdn.anthropic.com/7512771452629584566b6303311496c262da1006/Anthropic_ConstitutionalAI_v2.pdf)


