OpenAI dissolves Superalignment AI safety team
Credibility Rating
3/5 — Good. Good quality: reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: CNBC
Data Status
Full text fetched — Dec 28, 2025
Summary
OpenAI has disbanded its Superalignment team, which was dedicated to controlling advanced AI systems. The move follows the departure of key team leaders Ilya Sutskever and Jan Leike, who raised concerns about the company's safety priorities.
Key Points
- OpenAI's Superalignment team, focused on AI safety, has been disbanded after just one year
- Key team leaders Leike and Sutskever departed, citing concerns about safety priorities
- The move raises questions about OpenAI's commitment to long-term AI risk management
Review
The dissolution of OpenAI's Superalignment team represents a significant setback in the organization's commitment to AI safety research. Originally launched in 2023 with a pledge to dedicate 20% of computing power to controlling superintelligent AI systems, the team's dismantling signals potential shifts in OpenAI's strategic priorities and approach to potential existential risks posed by advanced artificial intelligence.
The departure of team leaders Jan Leike and Ilya Sutskever highlights deeper internal conflicts about the company's direction. Leike explicitly criticized OpenAI's safety culture, arguing that "safety culture and processes have taken a backseat to shiny products" and expressing concern about the trajectory of AI development. This points to a growing tension between rapid technological advancement and careful, responsible AI development, with significant implications for the broader AI safety landscape and for how potentially transformative AI technologies are managed.
Cited by 3 pages
| Page | Type | Quality |
|---|---|---|
| AI-Assisted Alignment | Approach | 63.0 |
| Corporate Influence on AI Policy | Crux | 66.0 |
| AI Alignment Research Agendas | Crux | 69.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 23, 2026 · 6 KB
OpenAI dissolves Superalignment AI safety team
Key Points OpenAI has disbanded its team focused on the long-term risks of artificial intelligence, a person familiar with the situation confirmed to CNBC.
The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup.
OpenAI's Superalignment team, announced in 2023, has been working to achieve "scientific and technical breakthroughs to steer and control AI systems much smarter than us."
At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.
[Photo: Sam Altman, CEO of OpenAI, speaks at the Hope Global Forums annual meeting in Atlanta on Dec. 11, 2023. Credit: Dustin Chambers | Bloomberg | Getty Images]
OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.
The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.
The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI's "safety culture and processes have taken a backseat to shiny products."
OpenAI's Superalignment team, announced last year, has focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us." At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.
OpenAI did not provide a comment and instead directed CNBC to co-founder and CEO Sam Altman's recent post on X, where he shared that he was sad to see Leike leave and that the company had more work to do. On Saturday, OpenAI co-founder Greg Brockman posted a statement attributed to both himself and Altman on X, asserting that the company has "raised awareness of the risks and opportunities of AGI so that the world can better prepare for it."
News of the team's dissolution was first reported by Wired.
Sutskever and Leike on Tuesday announced their departures on social media platform X, hours apart, but on Friday, Leike shared more details about why he left the startup.
"I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X . "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."
Leike wrote that he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.
"These problems are quite hard to get right, and I am concerned we a
... (truncated, 6 KB total)
Resource ID: 33a4513e1449b55d | Stable ID: N2M1ZDFiMj