Futurism: Google AI Boss Says AI Is an Existential Threat
A brief news item useful for tracking how AI industry leaders publicly frame existential risk; limited analytical depth, but it signals insider acknowledgment of catastrophic risk concerns.
Metadata
Importance: 30/100 · news article · news
Summary
A news report covering statements by a senior Google AI executive acknowledging that artificial intelligence poses an existential threat to humanity. The article highlights the significance of such an admission coming from within one of the world's leading AI development organizations, reflecting growing concern among AI insiders about long-term risks.
Key Points
- A top Google AI executive publicly acknowledged that AI represents a potential existential threat to humanity.
- The statement is notable because it comes from inside a major AI lab actively developing frontier AI systems.
- It reflects a broader trend of AI industry leaders voicing concern about the catastrophic risks of their own technology.
- The admission raises questions about the gap between stated safety concerns and continued rapid AI development.
- Such public statements from industry insiders can influence public discourse and policy discussions around AI governance.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Demis Hassabis | Person | 45.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 10, 2026 · 7 KB
Google AI Boss Says AI Is an Existential Threat to Humankind
Image: Mark Stevenson/Getty Images
Red Alert
Does the development of artificial intelligence pose a threat to humanity?
Google’s head of AI in the United Kingdom, Demis Hassabis, thinks so, likening it to climate change in an interview with The Guardian and citing fears that humans could develop a superintelligent system that goes rogue, among other dangers.
“We must take the risks of AI as seriously as other major global challenges, like climate change,” he told the paper, also citing the possibility that AI could make it easy to create bioweapons. “It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI.”
While acknowledging that AI could benefit many sectors, such as medicine, he called for an independent body to govern AI, akin to the United Nations’ Intergovernmental Panel on Climate Change (IPCC), a view that former Google chief executive Eric Schmidt also espouses.
In fact, a day after Hassabis’ interview was published, Google, Microsoft, OpenAI, and Anthropic announced a $10 million AI Safety Fund, meant “to advance research into the ongoing development of the tools for society to effectively test and evaluate the most capable AI models.”
Hassabis praised the move in a post on X, formerly known as Twitter, saying, “We’re at a pivotal moment in the history of AI.”
Risky Futures
Despite all this very public handwringing from AI gurus, just how serious are people like Hassabis and companies like Google about AI safety and ethics? Remember that Google fired renowned AI ethicist Timnit Gebru in late 2020, and AI researcher Margaret Mitchell shortly afterward, because top AI brass said a controversial paper they co-authored “didn’t meet our bar for publication.”
The paper laid out several risks of AI that now feel extremely prescient: its environmental impact, its potential harms to marginalized communities, biases in training data, data sets so large that auditing them is difficult, and the possibility that such models could be used to deceive people.
The paper has since gained mythic status among AI watchers because it predicted so many of the debates we’re now having about AI. And what about this possible superintelligent AI that might not share our best interests?
Even as Hassabis warns about a rogue AGI, he is intent on building one — a common and head-scratching contradiction in the AI business
... (truncated, 7 KB total)
Resource ID: ff1464f3f5f237a0 | Stable ID: sid_DNHZELtXLE