Skip to content
Longterm Wiki
Back

Jan Leike - Wikipedia

reference

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Wikipedia

Useful background reference for understanding key figures in AI safety; Leike's public resignation from OpenAI was a significant event in AI safety discourse regarding organizational priorities at frontier labs.

Metadata

Importance: 45/100 · wiki page · reference

Summary

Wikipedia biography of Jan Leike, a prominent AI safety researcher known for his work at DeepMind and OpenAI, where he co-led the Superalignment team focused on alignment of superintelligent AI systems. He departed OpenAI in 2024 citing concerns about safety culture, becoming a notable voice on organizational AI safety practices.

Key Points

  • Jan Leike served as alignment research lead at OpenAI and co-led the Superalignment team alongside Ilya Sutskever
  • Previously worked at DeepMind on AI safety and reinforcement learning research
  • Resigned from OpenAI in May 2024, publicly stating safety culture had been deprioritized in favor of capabilities
  • His departure and public statements drew significant attention to internal safety practices at frontier AI labs
  • Subsequently joined Anthropic to continue AI safety research

2 FactBase facts citing this source

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 6 KB
Jan Leike - Wikipedia

From Wikipedia, the free encyclopedia
 AI alignment researcher 
 
 Jan Leike (born 1986 or 1987)[1] is an AI alignment researcher who has worked at DeepMind and OpenAI. He joined Anthropic in May 2024.

 
 Education

 Jan Leike obtained his undergraduate degree from the University of Freiburg in Germany. After earning a master's degree in computer science, he pursued a PhD in machine learning at the Australian National University under the supervision of Marcus Hutter.[2]

 Career

 Leike completed a six-month postdoctoral fellowship at the Future of Humanity Institute before joining DeepMind to focus on empirical AI safety research,[2] where he collaborated with Shane Legg.[1]

 OpenAI

 In 2021, Leike joined OpenAI.[1] In June 2023, he and Ilya Sutskever became co-leaders of the newly introduced "superalignment" project, which aimed to determine within four years how to align future artificial superintelligences to ensure their safety. The project involved automating AI alignment research using relatively advanced AI systems. At the time, Sutskever was OpenAI's Chief Scientist and Leike was the Head of Alignment.[3][1] Leike was featured in Time's list of the 100 most influential people in AI in both 2023[1] and 2024.[4] In May 2024, Leike announced his resignation from OpenAI, following the departure of Sutskever, Daniel Kokotajlo, and several other AI safety employees from the company. Leike wrote that "Over the past years, safety culture and processes have taken a backseat to shiny products", and that he had "gradually lost trust" in OpenAI's leadership.[5][6][7]

 In May 2024, Leike joined Anthropic, an AI company founded by former OpenAI employees.[8]

 References

 
 ^ a b c d e "TIME100 AI 2023: Jan Leike". Time. 7 September 2023. Archived from the original on 19 May 2024. Retrieved 19 May 2024.

 ^ a b "An AI safety researcher on how to become an AI safety researcher". 80,000 Hours. Archived from the original on 19 May 2024. Retrieved 19 May 2024.

 ^ Leike, Jan; Sutskever, Ilya (5 July 2023). "Introducing Superalignment". OpenAI. Archived from the original on 25 May 2024. Retrieved 20 May 2024.

 ^ Booth, Harry (5 September 2024). "TIME100 AI 2024: Jan Leike". TIME. Archived from the original on 8 September 2024. Retrieved 8 September 2024.

 ^ Samuel, Sigal (17 May 2024). ""I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded". Vox. Archived from the original on 18 May 2024. Retrieved 20 May 2024.

 ^ Bastian, Matthias (18 May 2024). "OpenAI's AI safety teams lost at least seven researchers in re

... (truncated, 6 KB total)
Resource ID: kb-74085d6827c291f5 | Stable ID: sid_bDf8KxO1Rm