OpenAI disbands another safety team, as head advisor for 'AGI Readiness' resigns
Web Credibility Rating: 3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: CNBC
This news article documents a significant organizational shift at OpenAI in late 2024, relevant to tracking institutional commitments to AI safety and the growing tension between capabilities advancement and safety oversight at frontier AI labs.
Metadata
Importance: 62/100 · news article · news
Summary
In October 2024, OpenAI disbanded its 'AGI Readiness' team and lost its head policy and safety advisor Miles Brundage to resignation. This continued a pattern of safety team dissolutions and prominent researcher departures, fueling concerns that OpenAI is deprioritizing safety as it accelerates toward AGI development.
Key Points
- OpenAI disbanded its 'AGI Readiness' team in October 2024, the latest in a series of safety-focused team dissolutions.
- Miles Brundage, a prominent AI safety and policy researcher, resigned as head advisor for AGI Readiness.
- This followed earlier departures of key safety figures including Ilya Sutskever and the dissolution of the Superalignment team.
- The pattern of safety team disbandments raises questions about whether commercial pressures are overriding safety commitments at OpenAI.
- Critics argue the repeated loss of safety infrastructure signals a structural de-emphasis of safety culture within the organization.
Cited by 3 pages
| Page | Type | Quality |
|---|---|---|
| Corporate Influence on AI Policy | Crux | 66.0 |
| AI Lab Safety Culture | Approach | 62.0 |
| AI Alignment Research Agendas | Crux | 69.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 7, 2026 · 10 KB
OpenAI disbands another safety team, head advisor resigns
Key Points OpenAI is disbanding its "AGI Readiness" team, which advised the company on OpenAI's capacity to handle artificial intelligence that could potentially equal or surpass human intellect and the world's readiness to manage such technology.
Miles Brundage, senior advisor for AGI Readiness, announced his departure from the company and wrote that he believes his research will be more impactful externally.
In May, OpenAI decided to disband its Superalignment team, which focused on technology to control and steer superintelligent AI, just one year after it announced the group.
Didem Mente | Anadolu | Getty Images
OpenAI is disbanding its "AGI Readiness" team, which advised the company on OpenAI's capacity to handle increasingly powerful artificial intelligence and the world's readiness to manage that technology, according to the head of the team.
On Wednesday, Miles Brundage, senior advisor for AGI Readiness, announced his departure from the company via a Substack post. He wrote that his primary reasons were that the opportunity cost of staying had become too high, that he thought his research would be more impactful externally, that he wanted to be less biased, and that he had accomplished what he set out to do at OpenAI.
Artificial general intelligence, or AGI, is a branch of AI pursuing technology that equals or surpasses human intellect on a wide range of tasks. AGI is a hotly debated topic, with some leaders saying we're close to attaining it and some saying it's not possible at all.
In his post, Brundage also wrote, "Neither OpenAI nor any other frontier lab is ready, and the world is also not ready."
Brundage said he plans to start his own nonprofit, or join an existing one, to focus on AI policy research and advocacy. "AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so," he said.
Former AGI Readiness team members will be reassigned to other teams, according to Brundage's post.
"We fully support Miles' decision to pursue his policy research outside industry and are deeply grateful for his contributions," an OpenAI spokesperson told CNBC. "His plan to go all-in on independent research on AI policy gives him the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact. We're confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government."
In May, OpenAI disbanded its Superalignment team — which OpenAI said focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us" to prevent them "from going rogue" — just one year after it announced the group,
... (truncated, 10 KB total)
Resource ID: 0a8da5fe117a4c50 | Stable ID: sid_793sV1SU2V