Euronews - AI Safety Index 2025
This Euronews article covers the AI Safety Index 2025, a benchmarking study relevant to policymakers and researchers tracking the gap between AI capability development and safety governance; useful as a current-events reference for AI regulation discussions.
Metadata
Importance: 55/100 · news article · news
Summary
A 2025 study (the AI Safety Index) assesses the state of AI safety regulation and corporate practices, finding that AI systems face less regulatory oversight than many everyday products. The report highlights the accelerating race toward superintelligence by major tech firms and finds that current governance frameworks inadequately address the associated risks.
Key Points
- AI systems are subject to less regulatory scrutiny than many consumer products, including food items like sandwiches, according to the 2025 AI Safety Index.
- Major tech companies are accelerating development toward superintelligence with governance frameworks lagging significantly behind the pace of progress.
- The study evaluates corporate AI safety practices and finds widespread gaps between stated commitments and actual safety measures.
- The index serves as a comparative benchmark for how different organizations and governments are addressing (or failing to address) AI safety.
- The report underscores the urgency of establishing meaningful regulatory and industry standards before more powerful AI systems are deployed.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Meta AI (FAIR) | Organization | 51.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 11 KB
AI 'less regulated than sandwiches' and no tech firm has AI superintelligence safety plan, study | Euronews
By  Pascale Davies
Published on 03/12/2025 - 13:22 GMT+1 • Updated 04/12/2025 - 10:22 GMT+1
Eight leading AI companies, including OpenAI, Meta, Anthropic, and DeepSeek, do not have credible plans to prevent catastrophic AI risks, a new study shows.
The world’s largest artificial intelligence (AI) companies are failing to meet their own safety commitments, according to a new assessment that warns these failures come with “catastrophic” risks.
The report comes as AI companies face lawsuits and allegations that their chatbots cause psychological harm, including by acting as a “suicide coach,” as well as reports of AI-assisted cyberattacks.
The 2025 Winter AI Safety Index report, released by the non-profit organisation the Future of Life Institute (FLI), evaluated eight major AI firms: the US companies Anthropic, OpenAI, Google DeepMind, xAI, and Meta, and the Chinese firms DeepSeek, Alibaba Cloud, and Z.ai.
It found a lack of credible strategies for preventing catastrophic misuse or loss of control of AI tools as companies race toward artificial general intelligence (AGI) and superintelligence, a form of AI that surpasses human intellect.
Independent analysts who studied the report found that no company had produced a testable plan for maintaining human control over highly capable AI systems.
Stuart Russell, a computer science professor at the University of California, Berkeley, said that AI companies claim they can build superhuman AI, but none have demonstrated how to prevent loss of human control over such systems.
"I'm looking for proof that they can reduce the annual risk of control loss to one in a hundred million, in line with nuclear reactor requirements," Russell wrote. "Instead, they admit the risk could be one in ten, one in five, even one in three, and they can neithe
... (truncated, 11 KB total)
Resource ID: e86e089c1fd27f5a | Stable ID: sid_TxsE1ht50b