Longterm Wiki

Jan Leike resigns, posts "safety culture has taken a backseat to shiny products"

This news article covers a pivotal moment in AI safety governance history when OpenAI's Superalignment co-lead publicly criticized the company's safety culture upon resigning in May 2024, sparking widespread debate about institutional commitments to safety.

Metadata

Importance: 72/100 | news article | news

Summary

Jan Leike, co-lead of OpenAI's Superalignment team, publicly resigned in May 2024, stating that safety culture and processes had been deprioritized in favor of product development. His departure, alongside Ilya Sutskever's, marked a significant exodus of safety-focused leadership from OpenAI. Leike's public statement raised concerns about whether OpenAI was living up to its stated safety commitments.

Key Points

  • Jan Leike resigned as co-head of OpenAI's Superalignment team, the group tasked with solving alignment for superintelligent AI.
  • Leike publicly stated that "safety culture has taken a backseat to shiny products" at OpenAI.
  • His resignation came alongside Ilya Sutskever's departure, representing a major loss of safety-focused leadership at OpenAI.
  • Leike cited disagreements over compute resources, priorities, and organizational direction as contributing factors.
  • The resignations intensified public debate about whether OpenAI's commercial pressures undermine its safety mission.

Cited by 1 page

Page | Type | Quality
Corporate Influence on AI Policy | Crux | 66.0

Cached Content Preview

HTTP 200 | Fetched Apr 9, 2026 | 6 KB
OpenAI leader Jan Leike resigns, says safety has "taken a backseat to shiny products" - CBS San Francisco 
 
 A former OpenAI leader who resigned from the company earlier this week said on Friday that safety has "taken a backseat to shiny products" at the influential artificial intelligence company. 

 Jan Leike, who ran OpenAI's "Super Alignment" team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research. 

 "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," wrote Leike, whose last day was Thursday. 

 An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including on things like safety and analyzing the societal impacts of such technologies. He said building "smarter-than-human machines is an inherently dangerous endeavor" and that the company "is shouldering an enormous responsibility on behalf of all of humanity."

 "OpenAI must become a safety-first AGI company," wrote Leike using the abbreviated version of artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.

 Leike's resignation came after OpenAI co-founder and chief scientist Ilya Sutskever said Tuesday that he was leaving the company after nearly a decade. Sutskever was one of four

... (truncated, 6 KB total)
Resource ID: e751ccb632c5857b | Stable ID: sid_zdhec5IYfo