Longterm Wiki

Reporting on OpenAI Superalignment team departures, May-July 2024


Key news coverage documenting the 2024 collapse of OpenAI's Superalignment team, widely cited as evidence of organizational safety culture failures at a leading AI lab.

Metadata

Importance: 62/100 · Type: news article · Tags: news

Summary

Vox's Kelsey Piper reports on the wave of departures from OpenAI's Superalignment team in mid-2024, including co-founder Ilya Sutskever and team co-lead Jan Leike, raising concerns about whether OpenAI is genuinely prioritizing AI safety. The piece examines internal tensions, restrictive offboarding agreements, and what the departures signal about OpenAI's organizational commitment to safety research.

Key Points

  • Key Superalignment team members including Ilya Sutskever and Jan Leike departed OpenAI in May 2024, citing safety concerns.
  • Leike publicly stated that safety culture at OpenAI had been consistently deprioritized in favor of product development.
  • OpenAI's offboarding documents reportedly included non-disparagement clauses that could threaten departing employees' equity.
  • Sam Altman acknowledged the offboarding agreements were problematic and pledged to revise them after public backlash.
  • The departures raised broader questions about whether AI labs can maintain genuine safety commitments under commercial pressures.

Cited by 1 page

Page                       | Type     | Quality
AI Talent Market Dynamics  | Analysis | 52.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 15 KB
OpenAI departures: Why can’t former employees talk, but the new ChatGPT release can? | Vox
 Future Perfect
 ChatGPT can talk, but OpenAI employees sure can’t

 Why is OpenAI’s superalignment team imploding?

 by Kelsey Piper | Updated May 18, 2024, 11:31 PM UTC
 Sam Altman (left), CEO of artificial intelligence company OpenAI, and the company’s co-founder and then-chief scientist Ilya Sutskever, speak together at Tel Aviv University in Tel Aviv on June 5, 2023. Jack Guez/AFP via Getty Images

 Kelsey Piper is a contributing editor at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

 Editor’s note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman’s tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.

 On Monday, OpenAI announced exciting new product news: ChatGPT can now talk like a human.

 It has a cheery, slightly ingratiating feminine voice that sounds impressively non-robotic, and a bit familiar if you’ve seen a certain 2013 Spike Jonze film. “Her,” tweeted OpenAI CEO Sam Altman, referencing the movie in which a man falls in love with an AI assistant voiced by Scarlett Johansson.

 But the product release of GPT-4o was quickly overshadowed by much bigger news out of OpenAI: the resignation of the company’s co-founder and chief scientist, Ilya Sutskever, who also led its superalignment team, as well as that of his co-team leader Jan Leike (whom we put on the Future Perfect 50 list last year).

 The resignations didn’t come as a total surprise. Sutskever had been involved in the boardroom revolt that led to Altman’s temporary firing last year, before the CEO quickly returned to his perch. Sutskever publicly regretted his actions and backed Altman’s return, but he’s been mostly absent from the company since, even as other members of OpenAI’s policy, alignment, and safety teams have departed.

 But what really stirred speculation was the radio silence from former employees. Sutskever posted a pretty typical resignation message, saying “I’m confident that OpenAI will build AGI that is both safe and benef

... (truncated, 15 KB total)
Resource ID: 401a476b413c707b | Stable ID: sid_iR4j2y6vdH