Mesa-Optimization (Alignment Forum Wiki)
blogCredibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Alignment Forum
This Alignment Forum wiki entry is a key reference for the mesa-optimization concept, which is foundational to inner alignment research and essential background for understanding advanced AI safety concerns.
Metadata
Summary
Mesa-optimization describes the phenomenon where a base optimizer (e.g., gradient descent) produces a learned model that is itself an optimizer—a 'mesa-optimizer'—which may pursue objectives misaligned with the base optimizer's training goal. Formalized by Hubinger et al. in 'Risks from Learned Optimization,' the concept is central to understanding inner alignment failures. It raises deep concerns about whether advanced AI systems will generalize intended behavior beyond their training distribution.
Key Points
- •A mesa-optimizer is a learned model that itself performs optimization, emerging from a base optimizer like gradient descent during training.
- •Even if the base optimizer is well-aligned, the mesa-optimizer may pursue a 'mesa-objective' that diverges from intended human values—the inner alignment problem.
- •The concept builds on earlier notions of 'optimization daemons' and 'inner optimizers,' formalized by Hubinger et al. in the 2019 paper 'Risks from Learned Optimization.'
- •Mesa-optimizers pose risks particularly during deployment, when they encounter situations outside their training distribution and may pursue misaligned objectives.
- •Addressing mesa-optimization is a core challenge in technical AI safety, motivating research into interpretability, training transparency, and alignment verification.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Mesa-Optimization | Risk | 63.0 |
| Sharp Left Turn | Risk | 69.0 |
Cached Content Preview
Jan
FEB
Mar
13
2025
2026
2027
success
fail
About this capture
COLLECTED BY
Collection: Common Crawl
Web crawl data from Common Crawl.
TIMESTAMPS
The Wayback Machine - https://web.archive.org/web/20260213070116/https://www.alignmentforum.org/w/mesa-optimization
x
This website requires javascript to properly function. Consider activating javascript to get access to all site functionality.
AI ALIGNMENT FORUM
AF
Login
Mesa-Optimization — AI Alignment Forum
Mesa-Optimization
Edited by riceissa, Rob Bensinger, Ruby, et al. last updated 20th Sep 2022
Mesa-Optimization is the situation that occurs when a learned model (such as a neural network) is itself an optimizer. In this situation, a base optimizer creates a second optimizer, called a mesa-optimizer. The primary reference work for this concept is Hubinger et al.'s "Risks from Learned Optimization in Advanced Machine Learning Systems".
Example: Natural selection is an optimization process that optimizes for reproductive fitness. Natural selection produced humans, who are themselves optimizers. Humans are therefore mesa-optimizers of natural selection.
In the context of AI alignment, the concern is that a base optimizer (e.g., a gradient descent process) may produce a learned model that is itself an optimizer, and that has unexpected and undesirable properties. Even if the gradient descent process is in some sense "trying" to do exactly what human developers want, the resultant mesa-optimizer will not typically be trying to do the exact same thing.[1]
History
Previously work under this concept was called Inner Optimizer or Optimization Daemons.
Wei Dai brings up a similar idea in an SL4 thread.[2]
The optimization daemons article on Arbital was published probably in 2016.[1]
Jessica Taylor wrote two posts about daemons while at MIRI:
"Are daemons a problem for ideal agents?" (2017-02-11)
"Maximally efficient agents will probably have an anti-daemon immune system" (2017-02-23)
See also
Inner Alignment
Complexity of value
Thou Art Godshatter
External links
Video by Robert Miles
Some posts that reference optimization daemons:
"Cause prioritization for downside-focused value systems": "Alternatively, perhaps goal preservation becomes more difficult the more capable AI systems become, in which case the future might be controlled by unstable goal functions taking turns over the steering wheel"
"Techniques for optimizing worst-case performance": "The difficulty of optimizing worst-case performance is one of the most likely reasons that I think prosaic AI alignment might turn out to be impossible (if combined with an unlucky empirical situation)." (the phrase "unlucky empirical situation" links to the optimization daemons page on Arbital)
^
"Optimization daemons". Arbital.
^
Wei Dai. '"f
... (truncated, 4 KB total)8f738b406e4bfc32 | Stable ID: sid_lreD7Zhsgs