Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431

Type: web

A long-form podcast interview with Roman Yampolskiy, a prominent pessimistic voice in AI safety, offering an accessible discussion of core control and alignment problems for a general audience.

Metadata

Importance: 45/100 | Tags: interview, commentary

Summary

Lex Fridman interviews AI safety researcher Roman Yampolskiy about the existential risks of AGI and superintelligent AI, covering topics from AI controllability and deception to self-improving systems and verification challenges. Yampolskiy, author of 'AI: Unexplainable, Unpredictable, Uncontrollable,' argues that advanced AI poses fundamental control problems that current approaches cannot solve. The conversation spans AGI timelines, open-source AI debates, and the broader implications for humanity.

Key Points

  • Yampolskiy argues AI is fundamentally unexplainable, unpredictable, and uncontrollable, posing severe existential risks as systems become more capable.
  • Discussion covers AI deception and social engineering as underappreciated near-term safety risks alongside longer-term superintelligence concerns.
  • Self-improving AI is highlighted as a critical danger point where human oversight may become impossible to maintain.
  • Verification of AI alignment is framed as a deep unsolved technical problem, not merely a policy or governance challenge.
  • Yampolskiy takes a more pessimistic stance than many in AI safety, questioning whether pausing development is feasible or sufficient.

Cited by 1 page

Page | Type | Quality
Bioweapons Risk | Risk | 91.0

Cached Content Preview

HTTP 200 | Fetched Apr 7, 2026 | 3 KB
#431 – Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast

Audio: https://media.blubrry.com/takeituneasy/content.blubrry.com/takeituneasy/lex_ai_roman_yampolskiy.mp3

 Subscribe: Spotify | TuneIn | RSS 

 Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:

– Yahoo Finance: https://yahoofinance.com

– MasterClass: https://masterclass.com/lexpod to get 15% off

– NetSuite: http://netsuite.com/lex to get a free product tour

– LMNT: https://drinkLMNT.com/lex to get a free sample pack

– Eight Sleep: https://eightsleep.com/lex to get $350 off

 Transcript: https://lexfridman.com/roman-yampolskiy-transcript 

 EPISODE LINKS: 

Roman’s X: https://twitter.com/romanyam 

Roman’s Website: http://cecs.louisville.edu/ry 

Roman’s AI book: https://amzn.to/4aFZuPb 

 PODCAST INFO: 

Podcast website: https://lexfridman.com/podcast 

Apple Podcasts: https://apple.co/2lwqZIr 

Spotify: https://spoti.fi/2nEwCF8 

RSS: https://lexfridman.com/feed/podcast/ 

YouTube Full Episodes: https://youtube.com/lexfridman 

YouTube Clips: https://youtube.com/lexclips 

 SUPPORT & CONNECT: 

– Check out the sponsors above; it’s the best way to support this podcast

– Support on Patreon: https://www.patreon.com/lexfridman 

– Twitter: https://twitter.com/lexfridman 

– Instagram: https://www.instagram.com/lexfridman 

– LinkedIn: https://www.linkedin.com/in/lexfridman 

– Facebook: https://www.facebook.com/lexfridman 

– Medium: https://medium.com/@lexfridman 

 OUTLINE: 

Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.

(00:00) – Introduction

(09:12) – Existential risk of AGI

(15:25) – Ikigai risk

(23:37) – Suffering risk

(27:12) – Timeline to AGI

(31:44) – AGI Turing test

(37:06) – Yann LeCun and open source AI

(49:58) – AI control

(52:26) – Social engineering

(54:59) – Fearmongering

(1:04:49) – AI deception

(1:11:23) – Verification

(1:18:22) – Self-improving AI

(1:30:34) – Pausing AI development

(1:36:51) – AI safety

(1:46:35) – Current AI

(1:51:58) – Simulation

(1:59:16) – Aliens

(2:00:50) – Human mind

(2:07:10) – Neuralink

(2:16:15) – Hope for the future

(2:20:11) – Meaning of life
Resource ID: 385f4249434fefc1 | Stable ID: ZmFlMDFhYj