Longterm Wiki

Coherent extrapolated volition - Wikipedia

reference

Data Status

Not fetched

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Eliezer Yudkowsky | Person | 35.0 |

Cached Content Preview

HTTP 200 · Fetched Feb 23, 2026 · 12 KB

From Wikipedia, the free encyclopedia

Ideal AI behavior if humans were maximally rational and knowledgeable

> This article incorporates text from a [large language model](https://en.wikipedia.org/wiki/Wikipedia:LLM "Wikipedia:LLM"). It may include [hallucinated](https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) "Hallucination (artificial intelligence)") information, copyright violations, claims not verified in cited sources, original research, or fictitious references. _(October 2025)_

**Coherent extrapolated volition** ( **CEV**) is a theoretical framework in the field of [AI alignment](https://en.wikipedia.org/wiki/AI_alignment "AI alignment") describing an approach by which an [artificial superintelligence](https://en.wikipedia.org/wiki/Artificial_superintelligence "Artificial superintelligence") (ASI) would act on a benevolent supposition of what humans would want if they were more knowledgeable and more rational, had more time to think, and had matured together as a society, as opposed to humanity's current individual or collective preferences.[\[1\]](https://en.wikipedia.org/wiki/Coherent_extrapolated_volition#cite_note-1) It was proposed by [Eliezer Yudkowsky](https://en.wikipedia.org/wiki/Eliezer_Yudkowsky "Eliezer Yudkowsky") in 2004 as part of his work on [friendly AI](https://en.wikipedia.org/wiki/Friendly_artificial_intelligence "Friendly artificial intelligence").[\[2\]](https://en.wikipedia.org/wiki/Coherent_extrapolated_volition#cite_note-Yudkowsky2004-2)

## Concept


CEV proposes that an advanced AI system should derive its goals by extrapolating the idealized volition of humanity. This means aggregating and projecting human preferences into a coherent utility function that reflects what people would desire under ideal [epistemic](https://en.wikipedia.org/wiki/Epistemology "Epistemology") and moral

... (truncated, 12 KB total)
Resource ID: 5d9024c39c4f4102 | Stable ID: Yzg2OTA3NW