Our claim
- Subject
- Jan Leike
- Property
- Description
- Value
- German AI safety researcher; VP of Alignment Science at Anthropic since 2024 after leaving OpenAI, where he co-led the Superalignment team with Ilya Sutskever. His research focuses on scalable oversight, recursive reward modeling, and AI alignment. Prominent public voice on AI safety resource allocation.
- As Of
- April 2026
Source evidence
1 src · 1 check
Note: The source confirms: (1) German background (University of Freiburg), (2) joined Anthropic in May 2024, (3) left OpenAI, (4) co-led Superalignment team with Ilya Sutskever. However, the source does NOT confirm: (1) his specific title at Anthropic as 'VP of Alignment Science', (2) his specific research focus areas (scalable oversight, recursive reward modeling), (3) characterization as 'prominent public voice on AI safety resource allocation.' The Wikipedia article is more general and does not provide these specific details. The claim is partially supported but includes unverified details not present in the source.