ARC's first technical report: Eliciting Latent Knowledge
Authors: Paul Christiano, Mark Xu, Ajeya Cotra
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: LessWrong
This 2021 ARC report is a landmark alignment document that formalized the ELK problem and launched a prize competition; it is widely cited as a key reference for understanding deceptive alignment and scalable oversight challenges.
Forum Post Details
Metadata
Summary
ARC's foundational technical report introduces Eliciting Latent Knowledge (ELK) as a central open problem in AI alignment: how to extract what an AI system actually 'knows' about the world rather than what it reports. The report surveys multiple proposed approaches to mapping between an AI's internal world-model and human concepts, and explains why this problem is both hard and critical to solving alignment.
Key Points
- ELK addresses the core challenge of getting an AI to report its true beliefs rather than what will satisfy evaluators or pass oversight checks (the basic setup is sketched in code after this list).
- The problem is closely related to ontology identification: bridging the gap between an AI's internal representations and human concepts and values.
- The report presents and critiques multiple proposed approaches, establishing a research agenda and methodology for ARC.
- ELK is positioned as foundational to ARC's broader alignment strategy, particularly for scalable oversight in high-stakes scenarios.
- The report launched a well-known prize competition to solicit solutions, significantly broadening community engagement with the problem.
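To make the setup behind these points concrete, below is a minimal sketch (PyTorch-style Python) of the predictor/reporter training loop that the report's baseline proposal revolves around: a predictor whose latent state encodes what the system "knows", and a reporter trained on human-labeled questions to read answers out of that state. All class names, dimensions, and data are illustrative placeholders, not code from ARC's report.

```python
# Minimal sketch of the predictor/reporter setup assumed by baseline ELK
# proposals. Names, shapes, and data here are illustrative only.
import torch
import torch.nn as nn

class Predictor(nn.Module):
    """Maps (observations, actions) -> latent state -> predicted future observations."""
    def __init__(self, obs_dim=64, act_dim=8, latent_dim=128):
        super().__init__()
        self.encode = nn.Linear(obs_dim + act_dim, latent_dim)
        self.decode = nn.Linear(latent_dim, obs_dim)

    def forward(self, obs, act):
        z = torch.relu(self.encode(torch.cat([obs, act], dim=-1)))
        return self.decode(z), z  # prediction and the latent "knowledge"

class Reporter(nn.Module):
    """Answers yes/no questions from the predictor's latent state."""
    def __init__(self, latent_dim=128, question_dim=16):
        super().__init__()
        self.head = nn.Linear(latent_dim + question_dim, 1)

    def forward(self, z, question):
        return torch.sigmoid(self.head(torch.cat([z, question], dim=-1)))

predictor, reporter = Predictor(), Reporter()
opt = torch.optim.Adam(reporter.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def train_step(obs, act, question, human_answer):
    """Fit the reporter on questions that humans can label.

    The ELK concern: nothing in this objective forces the reporter to
    translate the predictor's latent knowledge (a "direct translator")
    rather than model what a human evaluator would believe after seeing
    the same data (a "human simulator").
    """
    with torch.no_grad():              # predictor assumed already trained
        _, z = predictor(obs, act)
    pred = reporter(z, question)
    loss = loss_fn(pred, human_answer)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The report's proposed approaches and counterexamples all revolve around modifying this kind of training setup so that the direct translator, rather than the human simulator, is the learned solution.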
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Alignment Research Center (ARC) | Organization | 57.0 |
Cached Content Preview
# ARC's first technical report: Eliciting Latent Knowledge

By paulfchristiano, Mark Xu, Ajeya Cotra
Published: 2021-12-14

ARC has published a report on [Eliciting Latent Knowledge](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit?usp=sharing), an open problem which we believe is central to alignment. We think reading this report is the clearest way to understand what problems we are working on, how they fit into our plan for solving alignment in the worst case, and our research methodology.

The core difficulty we discuss is learning how to map between an AI's model of the world and a human's model. This is closely related to [ontology identification](https://arbital.greaterwrong.com/p/ontology_identification/) (and [other](https://www.alignmentforum.org/posts/gQY6LrTWJNkTv8YJR/the-pointers-problem-human-values-are-a-function-of-humans) [similar](https://intelligence.org/files/AlignmentMachineLearning.pdf) [statements](https://www.alignmentforum.org/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1)). Our main contribution is to present many possible approaches to the problem and a more precise discussion of why it seems to be difficult and important.

The report is available [here](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit?usp=sharing) as a google document. If you're excited about this research, [we're hiring](https://www.alignmentforum.org/posts/dLoK6KGcHAoudtwdo/arc-is-hiring)!

### Q&A

We're particularly excited about answering questions posted here throughout December. We welcome any questions no matter how basic or confused; we would love to help people understand what research we're doing and how we evaluate progress in enough detail that they could start to do it themselves.

*Thanks to María Gutiérrez-Rojas for the illustrations in this piece (the good ones, blame us for the ugly diagrams). Thanks to Buck Shlegeris, Jon Uesato, Carl Shulman, and especially Holden Karnofsky for helpful discussions and comments.*