Scalable Oversight | AI Alignment
alignmentsurvey.com/materials/learning/scalable/index.html
Part of an alignment survey learning curriculum; a useful introductory overview of scalable oversight for those new to the technical AI safety field, connecting foundational concepts like debate and amplification.
Metadata
Importance: 65/100 · wiki page · educational
Summary
This educational resource covers scalable oversight as a key approach to AI alignment, addressing how humans can effectively supervise AI systems that may surpass human capabilities in certain domains. It explores techniques like debate, amplification, and recursive reward modeling to maintain meaningful human control as AI systems scale.
Key Points
- Scalable oversight addresses the core challenge of supervising AI systems whose outputs or reasoning humans cannot directly evaluate with confidence.
- Key techniques include debate (AI systems argue positions for human judgment), iterated amplification, and recursive reward modeling.
- The problem becomes critical as AI systems grow more capable, since naive human feedback may be insufficient for complex or superhuman tasks.
- Scalable oversight research aims to ensure training signals remain aligned with human values even when humans cannot directly verify correctness.
- Part of a broader AI alignment survey curriculum, situating scalable oversight within the landscape of technical safety research.
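The debate technique named above can be sketched as a simple protocol: two AI debaters argue opposing positions over several rounds, and a judge (standing in for a human) picks the winning side from the transcript. This is a minimal illustration only; `debater` and `judge` are hypothetical stubs, not real model calls.

```python
# Minimal sketch of a debate protocol for oversight; the stubs below stand in
# for language-model calls and a human judge.
def debater(question: str, stance: str, transcript: list) -> str:
    # Each debater produces an argument for its stance, seeing prior rounds.
    return f"[{stance}] argument on '{question}' (round {len(transcript) // 2 + 1})"

def judge(question: str, transcript: list) -> str:
    # A human judge would weigh the full transcript; this stub is a placeholder.
    return "pro" if len(transcript) % 2 == 0 else "con"

def debate(question: str, rounds: int = 2) -> str:
    transcript = []
    for _ in range(rounds):
        transcript.append(debater(question, "pro", transcript))
        transcript.append(debater(question, "con", transcript))
    return judge(question, transcript)  # the stance the judge found more convincing
```

The key idea is that judging a debate between capable systems may be easier for humans than evaluating a single answer directly.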
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| RLHF | Research Area | 63.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 29 KB
Learning from Feedback
Scalable Oversight
Scalable oversight seeks to ensure that AI systems, even those surpassing human expertise, remain aligned with human intent.
```mermaid
graph LR;
    F[Feedback] --> L[Policy Learning]
    P[Preference Modeling] --> L
    L --> S[Scalable Oversight]
    subgraph We are Here.
        S --> 1(Reinforcement Learning from Feedback);
        S --> 2(Iterated Distillation and Amplification);
        S --> 3(Recursive Reward Modeling);
        S --> 4(Debate);
        S --> 5(CIRL: Cooperative Inverse Reinforcement Learning);
    end
    click S "../scalable" _self
    click F "../feedback" _self
    click P "../preference" _self
    click 1 "#reinforcement-learning-from-feedback" _self
    click 2 "#iterated-distillation-and-amplification" _self
    click 3 "#recursive-reward-modeling" _self
    click 4 "#debate" _self
    click 5 "#cirl-cooperative-inverse-reinforcement-learning" _self
```
Reinforcement Learning from Feedback
We propose the concept of RLxF as a naive form of scalable oversight: it extends RLHF with AI components, improving the efficiency and quality of alignment between humans and AI systems.
Reinforcement Learning from AI Feedback
Reinforcement Learning from AI Feedback (RLAIF) builds on and extends Reinforcement Learning from Human Feedback (RLHF), replacing or augmenting human preference labels with AI-generated feedback.
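The core substitution RLAIF makes can be sketched as follows: where RLHF collects human comparisons between pairs of model outputs, RLAIF asks an AI labeler to produce those comparisons. Everything below is a hypothetical stub (`policy_sample`, `ai_preference` are placeholders, not a real API), intended only to show the shape of the data-collection loop.

```python
import random

# Hypothetical stubs standing in for a policy model and an AI feedback model;
# a real RLAIF pipeline would call actual language models at these points.
def policy_sample(prompt: str) -> str:
    return random.choice([f"{prompt} -> answer A", f"{prompt} -> answer B"])

def ai_preference(prompt: str, a: str, b: str) -> int:
    # An AI labeler judges which completion better follows the guidelines;
    # faked here with a trivial heuristic, for illustration only.
    return 0 if len(a) >= len(b) else 1

def collect_preference_dataset(prompts, pairs_per_prompt=2):
    """Replace human comparisons (RLHF) with AI comparisons (RLAIF)."""
    dataset = []
    for prompt in prompts:
        for _ in range(pairs_per_prompt):
            a, b = policy_sample(prompt), policy_sample(prompt)
            label = ai_preference(prompt, a, b)  # 0 -> a preferred, 1 -> b
            dataset.append({"prompt": prompt,
                            "chosen": (a, b)[label],
                            "rejected": (a, b)[1 - label]})
    return dataset

prefs = collect_preference_dataset(["Explain oversight", "Summarize RLHF"])
# `prefs` would then train a reward model, followed by RL against that model.
```

The rest of the pipeline (reward-model training and RL optimization) is unchanged from RLHF; only the source of the preference labels differs.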
The basic steps of the Constitutional AI (CAI) process consist of a supervised learning (SL) stage followed by a reinforcement learning (RL) stage. Both the critiques and the AI feedback are steered by a small set of principles drawn from a 'constitution'. The supervised stage significantly improves the initial model and gives some control over behavior at the start of the RL phase, addressing potential exploration problems; the RL stage further improves performance and reliability. (Figure caption from Constitutional AI: Harmlessness from AI Feedback, Bai et al., 2022.)
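The supervised stage described above (sample, self-critique against a principle, revise, collect finetuning data) can be sketched minimally. The `model`, `critique`, and `revise` functions here are hypothetical stubs, and the two-principle constitution is an invented example, not the one used in the paper.

```python
# Example constitution: two invented principles, for illustration only.
CONSTITUTION = [
    "Avoid harmful or unsafe content.",
    "Be honest about uncertainty.",
]

# Hypothetical model stub; a real implementation would call an LLM.
def model(prompt: str) -> str:
    return f"response to: {prompt}"

def critique(response: str, principle: str) -> str:
    # Ask the model to critique its own response against one principle.
    return model(f"Critique '{response}' under the principle: {principle}")

def revise(response: str, critique_text: str) -> str:
    return model(f"Revise '{response}' given the critique: {critique_text}")

def supervised_stage(prompts):
    """CAI supervised phase: sample, self-critique, revise, collect data."""
    finetune_data = []
    for prompt in prompts:
        response = model(prompt)
        for principle in CONSTITUTION:
            response = revise(response, critique(response, principle))
        finetune_data.append((prompt, response))
    return finetune_data  # the original model is then finetuned on these pairs
```

The RL stage then samples pairs from the finetuned model and uses AI preference judgments, as in the RLAIF loop, rather than human labels.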
Recommended Papers List
Constitutional AI: Harmlessness from AI Feedback
As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using t
... (truncated, 29 KB total)
Resource ID: be45866c43f2be82 | Stable ID: NjViZWNjNz