Skip to content
Longterm Wiki
Back

Paul Christiano's AI Alignment Research

blog

Credibility Rating

3/5
Good(3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Alignment Forum

Paul Christiano is one of the most cited researchers in technical AI alignment; his user profile aggregates posts, comments, and research threads on the Alignment Forum, making it a useful entry point into his body of work.

Metadata

Importance: 82/100homepage

Summary

Paul Christiano is a leading AI alignment researcher and founder of ARC (Alignment Research Center), known for foundational contributions including iterated amplification, debate as an alignment technique, and eliciting latent knowledge (ELK). His work addresses existential risks from advanced AI, responsible scaling policies, and core technical challenges in ensuring AI systems remain beneficial and under human oversight.

Key Points

  • Developed influential alignment proposals including iterated amplification, AI safety via debate, and eliciting latent knowledge (ELK).
  • Argues that powerful AI poses genuine risks of irreversible human disempowerment, potentially emerging without clear warning signals.
  • Advocates for responsible scaling policies and transparency about AI capabilities from frontier labs.
  • Critiques alignment efforts that focus on tractable but less critical problems rather than core technical difficulties.
  • Founded the Alignment Research Center (ARC) to work on evaluation and interpretability-adjacent alignment research.

Cited by 2 pages

Cached Content Preview

HTTP 200Fetched Apr 9, 202623 KB
Jan
 FEB
 Mar
 

 
 

 
 06
 
 

 
 

 2025
 2026
 2027
 

 
 
 

 

 

 
 
success

 
fail

 
 
 
 
 
 
 
 
 
 
 

 

 
 
 
 
 
 
 
 
 

 

 About this capture
 

 

 

 

 

 

 
COLLECTED BY

 

 

 
 
Collection: Common Crawl

 

 

 Web crawl data from Common Crawl.
 

 

 

 

 

 
TIMESTAMPS

 

 

 

 

 

 

The Wayback Machine - https://web.archive.org/web/20260206214818/https://www.alignmentforum.org/users/paulfchristiano

 

x

 This website requires javascript to properly function. Consider activating javascript to get access to all site functionality. 

AI ALIGNMENT FORUM

AF

Login

paulfchristiano — AI Alignment Forum

Paul Christiano

Ω6172

78

825

592

Message

Dialogue

Subscribe

Sequences

Posts

Sorted by New

Wikitag Contributions

Comments

Sorted by 
Newest

6

paulfchristiano's Shortform

5y

36

Iterated Amplification

Catching AIs red-handed

paulfchristiano2y

11

6

As a concrete example, suppose that an AI fails to find an error in a piece of code (which will slightly undermine the security of some important computer system). You are unsure if it represents sabotage or a benign error. Relevant evidence for making that distinction:

Can the AI identify structurally similar errors in other code? Is the behavior sensitive to evidence about how the code will be used, in a way that doesn't seem relevant to the nature or salience of the error? Is the pattern of sensitivity suggestive of an attempt to disempower humanity?

Is the behavior sensitive to the AI's beliefs about the quality of human oversight? For example does training data suggesting more vigilant human oversight in this domain decrease the probability of the mistake? Do other AI systems trained in different ways reproduce a similar pattern of mistakes?

If we have one example of a mistake that looks suspicious, we can examine other similar mistakes and build up a detailed pattern of behavior. For example we can hold fix the context and then look at whether the AI misses other kinds of software errors. With multiple mistakes in hand we can run all this analysis on a broader spectrum of mistakes. We can iteratively modify scenarios to explore the landscape of deceptive behavior, gradient descenting our way towards stronger and stronger evidence.

As another example, suppose that an AI system tries to modulate its behavior in any way based on whether it believes it is being closely monitored vs operating in the real world. And suppose that you find an example in the lab where the AI system messes up, and therefore where there is an unusually large gap between its ability to solve a problem in a "realistic-looking" situation vs a situation that's obviously a test. The initial behavior isn't super suspicious (sometimes behavior randomly varies with framing), but it's suspicious enough to get flagged for more thorough review. Subsequent rounds find further evidence of sensitivity, and eventually it's pretty likely (though still 

... (truncated, 23 KB total)
Resource ID: ebb2f8283d5a6014 | Stable ID: Y2U1YWQ0NT