Constitutional AI: Harmlessness from AI Feedback
Type: paper
Author: Yanuo Zhou
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
This paper presents a method for training AI systems to be harmless using AI feedback guided by a set of constitutional principles, addressing a fundamental challenge in AI alignment and safety.
Paper Details
Citations
2,673
212 influential
Year
2022
Metadata
arXiv preprint · primary source
Cited by 6 pages
| Page | Type | Quality |
|---|---|---|
| Dense Transformers | Concept | 58.0 |
| Anthropic | Organization | 74.0 |
| Frontier AI Labs (Overview) | -- | 85.0 |
| Refusal Training | Approach | 63.0 |
| Reward Modeling | Approach | 55.0 |
| RLHF | Research Area | 63.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 98 KB
[2212.08073] Constitutional AI: Harmlessness from AI Feedback
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, Jared Kaplan

Anthropic

Correspondence to: {yuntao,jared}@anthropic.com
Author contributions are detailed in Section 7.
Abstract
As AI systems become more capable, we would like to enlist their help to supervise other AIs.
We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as ‘Constitutional AI’. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use ‘RL from AI Feedback’ (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
1 Introduction
We would like to train AI systems that remain helpful, honest, and harmless, even as some AI capabilities reach or exceed human-level performance. This suggests that we will need to develop techniques that do not rely on humans to supervise all aspects of AI behavior, and that can be used to automatically test and enhance robustness to harmful behaviors. We also aim to develop methods that encode desirable AI behavior in a simple and transparent form, and that make it easier to understand and evaluate AI decision making.
... (truncated, 98 KB total)
Resource ID: 683aef834ac1612a | Stable ID: sid_F45cr22FFi