entity
Evan Hubinger
Metadata
| Source Table | entities |
| Source ID | evan-hubinger |
| Entity Type | person |
| Description | AI safety researcher at Anthropic (Head of Alignment Stress-Testing), formerly at MIRI, known for the influential Risks from Learned Optimization framework, which introduced mesa-optimization, and for the concept of deceptive alignment. Author of key foundational work on inner alignment and corrigibility. |
| Wiki ID | E129 |
| Children | 4 total (4 facts) |
| Created | Apr 14, 2026, 7:10 PM |
| Updated | Apr 14, 2026, 7:10 PM |
| Synced | Apr 14, 2026, 7:10 PM |
Record Data
| id | evan-hubinger |
| wikiId | E129 |
| stableId | Evan Hubinger(person) |
| entityType | person |
| title | Evan Hubinger |
| description | AI safety researcher at Anthropic (Head of Alignment Stress-Testing), formerly at MIRI, known for the influential Risks from Learned Optimization framework, which introduced mesa-optimization, and for the concept of deceptive alignment. Author of key foundational work on inner alignment and corrigibility. |
| website | — |
| tags | — |
| clusters | — |
| status | — |
| lastUpdated | — |
| customFields | — |
| relatedEntries | — |
| metadata | {
"expertRole": "Head of Alignment Stress-Testing",
"affiliation": "anthropic",
"expertPositions": [
{
"date": "2019",
"view": "Possible",
"topic": "likelihood-of-deceptive-alignment",
"source": "Risks from Learned Optimization",
"estimate": "40%",
"conf… |