Longterm Wiki

Andrew Ng Says Threat of AI Causing Human Extinction Is Overblown

web

Represents a skeptical industry perspective on existential AI risk from a high-profile AI researcher, useful for understanding the debate around prioritizing near-term vs. long-term AI safety concerns and the politics of AI regulation.

Metadata

Importance: 38/100 · Type: news article · Tags: news

Summary

Google Brain founder Andrew Ng argues that fears of AI causing human extinction are exaggerated and potentially counterproductive, suggesting that existential risk narratives distract from more immediate, concrete AI harms. He contends that the focus on speculative long-term risks may benefit incumbent AI companies by discouraging competition and open-source development.

Key Points

  • Andrew Ng dismisses AI extinction risk as overblown, arguing such fears are not grounded in near-term technical realities.
  • He suggests existential risk narratives may serve incumbent AI companies by creating regulatory moats that stifle open-source and smaller competitors.
  • Ng advocates focusing on concrete, near-term AI harms rather than speculative long-term catastrophes.
  • His position represents a prominent dissenting voice within the AI community against mainstream AI safety concerns.
  • The article reflects broader industry debates about whether existential risk framing helps or hinders responsible AI development.

Cited by 1 page

Page | Type | Quality
The Case Against AI Existential Risk | Argument | 58.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 7 KB
Google Brain founder Andrew Ng says threat of AI causing human extinction is overblown - SiliconANGLE
UPDATED 22:21 EDT / OCTOBER 31, 2023

by James Farrell
 Andrew Ng, a world leader in the development of artificial intelligence, said in an interview this week that he believes current talk about AI being an existential threat to humankind is vastly exaggerated.

Ng, currently a professor at Stanford University, co-founded Google Brain and later served as chief scientist of Baidu Inc.'s Artificial Intelligence Group; he also co-founded DeepLearning.AI and Coursera. During his time at Stanford, Ng taught machine learning to OpenAI LP co-founder Sam Altman. The two now have very different outlooks on the kind of danger AI poses to human existence.

In May this year, Altman signed a letter, along with 375 other computer scientists, academics and business leaders, holding that "mitigating the risk of extinction from AI" should be a first-order priority. Altman's concerns mirror those of other leaders in the AI industry who also signed the letter.

Speaking with The Australian Financial Review, Ng said a kind of doom myth has been promulgated in the tech industry: the conviction that we humans are meddling with a powerful tool that could end civilization. He called this a "bad idea," one paired with a second bad idea, that the remedy is to "impose burdensome licensing requirements" on AI development.

 “When you put those two bad ideas together, you get the massively, colossally dumb idea of policy proposals that try to require licensing of AI,” he explained. “It would crush innovation. There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction.” He says he’s all for “thoughtful regulation,” but not regulation that stunts development.

In this regard, he agrees with venture capitalist Marc Andreessen, one of the more outspoken advocates of the benefits AI could bring to society. One person who certainly doesn't agree with Ng is tech billionaire Elon Musk, who has repeatedly whipped up storms with his doomsday comments about AI. Today, Musk responded to Ng's remarks, stating, "Giant supercomputer clusters that cost billions of dollars are the risk, not some startup in a garage."

 The so-called Godfather of AI, G

... (truncated, 7 KB total)
Resource ID: c8ec6e4903275345 | Stable ID: sid_0Bi2QXgAKl