Nick Bostrom has argued
A November 2023 UnHerd interview with Nick Bostrom, one of the foundational thinkers on existential risk; an accessible overview of his views on AI-enabled tyranny and extinction risk for general audiences.
Metadata
Importance: 42/100 · opinion piece · commentary
Summary
An interview with Oxford philosopher Nick Bostrom discussing existential risk, AI-enabled surveillance dystopias, and the possibility of human extinction. Bostrom explains how advanced AI could enable permanent global totalitarianism or civilizational collapse, and reflects on how his long-standing concerns about AI have moved from fringe speculation to mainstream debate.
Key Points
- Existential risk includes not just extinction but permanent lock-in to a radically suboptimal state, such as a global totalitarian surveillance dystopia.
- Bostrom distinguishes collapse scenarios (potentially recoverable) from true existential catastrophes (indefinitely bad and irreversible).
- AI's rapid progression from science fiction to near-reality has validated decades of warnings from thinkers like Bostrom.
- Governments exploiting AI for surveillance is highlighted as a concrete near-term pathway to catastrophic outcomes.
- The interview situates AI risk within broader concerns about institutional erosion and civilizational instability.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Value Lock-in | Risk | 64.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 18 KB
Nick Bostrom: Will AI lead to tyranny? - UnHerd
Nick Bostrom: Will AI lead to tyranny? We are entering an age of existential risk
How worried should we be? (Tom Pilston for The Washington Post via Getty Images)
Artificial Intelligence · Science · Tech & Data
Flo Read
Nov 12 2023, 12:00am · 9 min read
In the last year, artificial intelligence has progressed from a science-fiction fantasy to an impending reality. We can see its power in everything from online gadgets to whispers of a new, “post-singularity” tech frontier — as well as in renewed fears of an AI takeover.
One intellectual who anticipated these developments decades ago is Nick Bostrom, a Swedish philosopher at Oxford University and director of its Future of Humanity Institute. He joined UnHerd’s Florence Read to discuss the AI era, how governments might exploit its power for surveillance, and the possibility of human extinction.
Florence Read: You’re particularly well-known for your work on “existential risk” — what do you mean by that?
Nick Bostrom: The concept of existential risk refers to ways that the human story could end prematurely. That might mean literal extinction. But it could also mean getting ourselves permanently locked into some radically suboptimal state: you could imagine some kind of global totalitarian surveillance dystopia that you could never overthrow. If it were sufficiently bad, that could also count as an existential catastrophe. Now, as for collapse scenarios, many of those might not be existential catastrophes, because civilisations have risen and fallen, and empires have come and gone. If our own contemporary civilisation totally collapsed, perhaps out of the ashes would eventually rise another civilisation hundreds or thousands of years from now. So for something to be an existential catastrophe it would not just have to be bad, but have some sort of indefinite longevity.
FR: It might be too extreme, but to many people it feels that a state of semi-anarchy has already descended.
NB: I think there has been a general sense in the last few years tha
... (truncated, 18 KB total)
Resource ID: 713ad72e6bc4d52a | Stable ID: sid_UEhq1kHT1C