OpenAI dissolves Superalignment AI safety team
Credibility Rating
3/5 (Good). Good quality: reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: CNBC
This news event is frequently cited as a notable indicator of organizational tensions between safety priorities and product development at OpenAI, and is relevant to discussions of AI lab governance and safety culture.
Metadata
Importance: 72/100
Type: news article
Tags: news
Summary
OpenAI disbanded its Superalignment team in May 2024, less than a year after launching it with a pledge of 20% compute resources toward controlling advanced AI. The dissolution followed the departures of team leaders Ilya Sutskever and Jan Leike, with Leike publicly criticizing OpenAI's safety culture as subordinated to product development.
Key Points
- OpenAI dissolved its Superalignment team in May 2024, less than a year after announcing it with significant resource commitments.
- Both team leaders, Ilya Sutskever and Jan Leike, departed days before the team was disbanded.
- Jan Leike publicly stated that OpenAI's safety culture and processes had "taken a backseat to shiny products."
- OpenAI had originally committed 20% of its compute to the Superalignment initiative over four years.
- Team members were reassigned to other internal teams rather than continuing the group's dedicated long-term safety mission.
Review
The dissolution of OpenAI's Superalignment team represents a significant setback for the organization's commitment to AI safety research. The team was launched in 2023 with a pledge to dedicate 20% of the company's computing power to controlling superintelligent AI systems; its dismantling signals a potential shift in OpenAI's strategic priorities and in its approach to the existential risks posed by advanced artificial intelligence.
The departure of team leaders Jan Leike and Ilya Sutskever highlights deeper internal conflicts over the company's direction. Leike explicitly criticized OpenAI's safety culture, arguing that "safety culture and processes have taken a backseat to shiny products," and expressed concern about the trajectory of AI development. His criticism points to a growing tension between rapid technological advancement and careful, responsible AI development, with significant implications for the broader AI safety landscape and for how potentially transformative AI technologies are managed.
Cited by 4 pages
| Page | Type | Quality (0–100) |
|---|---|---|
| Frontier AI Labs (Overview) | -- | 85.0 |
| AI-Assisted Alignment | Approach | 63.0 |
| Corporate Influence on AI Policy | Crux | 66.0 |
| AI Alignment Research Agendas | Crux | 69.0 |
Cached Content Preview
HTTP 200 | Fetched Apr 7, 2026 | 7 KB
OpenAI dissolves Superalignment AI safety team
Key Points
OpenAI has disbanded its team focused on the long-term risks of artificial intelligence, a person familiar with the situation confirmed to CNBC.
The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup.
OpenAI's Superalignment team, announced in 2023, has been working to achieve "scientific and technical breakthroughs to steer and control AI systems much smarter than us."
At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.
[Photo caption: Sam Altman, CEO of OpenAI, speaks at the Hope Global Forums annual meeting in Atlanta on Dec. 11, 2023. Dustin Chambers | Bloomberg | Getty Images]
OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.
The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.
The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI's "safety culture and processes have taken a backseat to shiny products."
OpenAI's Superalignment team, announced last year, has focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us." At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.
OpenAI did not provide a comment and instead directed CNBC to co-founder and CEO Sam Altman's recent post on X, where he shared that he was sad to see Leike leave and that the company had more work to do. On Saturday, OpenAI co-founder Greg Brockman posted a statement attributed to both himself and Altman on X, asserting that the company has "raised awareness of the risks and opportunities of AGI so that the world can better prepare for it."
News of the team's dissolution was first reported by Wired.
Sutskever and Leike on Tuesday announced their departures on social media platform X, hours apart, but on Friday, Leike shared more details about why he left the startup.
"I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X . "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."
Leike wrote that he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.
"These problems are quite hard to get right, and I am concerned we a
... (truncated, 7 KB total)
Resource ID: 33a4513e1449b55d | Stable ID: sid_GJ1z6zfNYK