Longterm Wiki

OpenAI's Safety Departures: Ethics, Culture, and Accountability Concerns


Relevant to discussions of AI lab governance, safety culture, and whether organizational incentives at frontier labs adequately support safety research; part of broader 2024 coverage of OpenAI's internal tensions.

Metadata

Importance: 52/100 · Type: news article · Tags: news

Summary

A Vox analysis examining the wave of high-profile departures from OpenAI, focusing on concerns raised by departing employees about the company's commitment to safety and ethics under Sam Altman's leadership. The piece explores what these exits signal about internal culture and whether safety priorities are being subordinated to commercial pressures.

Key Points

  • Multiple prominent researchers and safety-focused employees left OpenAI in 2024, citing concerns about the company's direction and commitment to responsible AI development.
  • Departures included members of OpenAI's superalignment and safety teams, raising questions about whether safety research is being adequately prioritized.
  • Departing employees described a culture where raising safety concerns was discouraged or deprioritized relative to product speed and commercial goals.
  • The exodus reflects broader tensions in AI labs between competitive pressures to ship products quickly and careful, safety-conscious development practices.
  • These departures raise governance questions about accountability mechanisms at frontier AI labs and whether self-governance can be trusted.

Cited by 1 page

Page | Type | Quality
Multipolar Trap Dynamics Model | Analysis | 61.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 15 KB
OpenAI departures: Why can’t former employees talk, but the new ChatGPT release can? | Vox
 Future Perfect
 ChatGPT can talk, but OpenAI employees sure can’t

 Why is OpenAI’s superalignment team imploding?

 by Kelsey Piper | Updated May 18, 2024, 11:31 PM UTC

 Sam Altman (left), CEO of artificial intelligence company OpenAI, and the company’s co-founder and then-chief scientist Ilya Sutskever speak together at Tel Aviv University in Tel Aviv on June 5, 2023. Jack Guez/AFP via Getty Images

 Kelsey Piper is a contributing editor at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

 Editor’s note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman’s tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.

 On Monday, OpenAI announced exciting new product news: ChatGPT can now talk like a human.

 It has a cheery, slightly ingratiating feminine voice that sounds impressively non-robotic, and a bit familiar if you’ve seen a certain 2013 Spike Jonze film. “Her,” tweeted OpenAI CEO Sam Altman, referencing the movie in which a man falls in love with an AI assistant voiced by Scarlett Johansson.

 But the product release of GPT-4o was quickly overshadowed by much bigger news out of OpenAI: the resignation of the company’s co-founder and chief scientist, Ilya Sutskever, who also led its superalignment team, as well as that of his co-team leader Jan Leike (whom we put on the Future Perfect 50 list last year).

 The resignations didn’t come as a total surprise. Sutskever had been involved in the boardroom revolt that led to Altman’s temporary firing last year, before the CEO quickly returned to his perch. Sutskever publicly regretted his actions and backed Altman’s return, but he’s been mostly absent from the company since, even as other members of OpenAI’s policy, alignment, and safety teams have departed.

 But what really stirred speculation was the radio silence from former employees. Sutskever posted a pretty typical resignation message, saying “I’m confident that OpenAI will build AGI that is both safe and benef

... (truncated, 15 KB total)
Resource ID: a9d4263acec736d0 | Stable ID: sid_5L0QsEmUDA