Fortune: Google DeepMind 145-page paper predicts AGI by 2030 (Apr 2025)
Credibility Rating
3/5
Good (3). Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Fortune
This Fortune article summarizes a major DeepMind technical report; readers should seek the primary 145-page paper for full detail, as news coverage may simplify or sensationalize specific claims about AGI timelines and risk levels.
Metadata
Importance: 62/100 | Type: news article
Summary
A Fortune article covering Google DeepMind's comprehensive 145-page technical report predicting the arrival of AGI by 2030. The paper outlines potential risks including catastrophic and existential threats to humanity, while also detailing DeepMind's safety research agenda and frameworks for managing advanced AI development.
Key Points
- Google DeepMind's 145-page paper forecasts AGI could be achieved by 2030, a significant near-term timeline prediction from a leading AI lab.
- The report explicitly acknowledges risks that AGI could 'destroy humanity,' a notable public warning from a major AI developer.
- DeepMind outlines safety frameworks and alignment research directions intended to mitigate catastrophic risks from advanced AI systems.
- The paper represents one of the most detailed public disclosures by a frontier AI lab on both AGI timelines and associated existential risks.
- The report highlights the dual challenge of advancing AI capabilities while simultaneously developing adequate safety measures before AGI is reached.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Demis Hassabis | Person | 45.0 |
Cached Content Preview
HTTP 200 | Fetched Apr 9, 2026 | 13 KB
Google DeepMind 145-page paper predicts AGI matching top human skills could arrive by 2030 | Fortune
Google DeepMind 145-page paper predicts AGI will match human skills by 2030 — and warns of existential threats that could ‘permanently destroy humanity’
By Beatrice Nolan, Tech Reporter | April 4, 2025, 12:07 PM ET
Photo: Google DeepMind CEO Demis Hassabis. Researchers at the AI lab have just put out a paper saying that human-like "artificial general intelligence" could arrive by 2030 and pose an existential risk to humanity. Stefan Wermuth / Bloomberg via Getty Images
DeepMind’s latest 145-page safety paper warns AGI could arrive by 2030 and cause “severe harm.” However, some experts say the concept of AGI is still too vague and the timeline too uncertain to be properly evaluated.
Google DeepMind says in a new research paper that human-level AI could plausibly arrive by 2030 and “permanently destroy humanity.”
In a discussion of the spectrum of risks posed by Artificial General Intelligence, or AGI, the paper states, “existential risks … that permanently destroy humanity are clear examples of severe harm. In between these ends of the spectrum, the question of whether a given harm is severe isn’t a matter for Google DeepMind to decide; instead it is the purview of society, guided by its collective risk tolerance and conceptualisation of harm. Given the massive potential impact of AGI, we expect that it too could pose potential risk of severe harm.”
The statements are contained in a 145-page paper outlining Google DeepMind’s approach to AI safety as it attempts to build advanced systems that may one day surpass human intelligence.
The paper’s co-authors, who include DeepMind co-founder Shane Legg, did not specifically say how AGI might result in human extinction. Most of the paper is focused on the steps Google DeepMind thinks it and other AI labs should take to reduce the threat that AGI results in what the researchers called “severe harm.”
Legg has for decades said that his “median forecast” for AGI’s arrival is 2028. Last month, Legg’s cofounder, DeepMind CEO Demis Hassabis, told NBC News that he thought AGI would likely arrive in the next “five to 10 years,” putting 2030 at the earlier end of that range.
The paper separates the risks of advanced AI into four major categories: misuse, which refers to people intentionally using AI for harm; misalignment, meaning systems developing unintended harmful behavior; mistakes, categorized as unexpected failures due to design or training flaws; and structural risks, which arise from conflicting incentives between multiple parties, such as countries, companies, or possibly multiple AI systems.
The researchers also
... (truncated, 13 KB total)
Resource ID: efd391c3a048b7c8 | Stable ID: sid_7kyEx1e5Bv