Carl Shulman and colleagues
Credibility Rating
Good (3/5). Good quality: reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: 80,000 Hours
This is Part 2 of a two-part 80,000 Hours podcast interview with Carl Shulman, a prominent researcher on AGI risk and long-run futures; Part 1 covers more technical AI trajectory topics.
Metadata
Importance: 72/100 · podcast episode · analysis
Summary
Carl Shulman discusses how advanced AI could transform governance, including AI advisory systems for policymakers, risks of value lock-in from early AGI deployment, and mechanisms for maintaining democratic resilience. The episode addresses international coordination, AI forecasting capabilities, and why Shulman opposes enforced pauses on AI research.
Key Points
- AI advisory systems could improve policy decisions by providing better forecasting and analysis, with COVID-19 used as a concrete example of where AI could have helped.
- Value lock-in is a central concern: early AGI deployment could entrench particular values or power structures in ways that are difficult or impossible to reverse.
- Democratic institutions may need new mechanisms to resist AI-enabled threats like coups or authoritarian consolidation of power.
- International coordination and auditing frameworks are essential for managing AGI transitions safely across competing nation-states.
- Shulman argues against enforced AI research pauses, believing governance adaptation and careful deployment are more tractable than halting development.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Value Lock-in | Risk | 64.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 7, 2026 · 98 KB
Carl Shulman on government and society after AGI (Part 2) | 80,000 Hours
On this page:
Introduction
1 Highlights
2 Articles, books, and other media discussed in the show
3 Transcript
3.1 Cold open [00:00:00]
3.2 Rob's intro [00:01:16]
3.3 The interview begins [00:03:24]
3.4 COVID-19 concrete example [00:11:18]
3.5 Sceptical arguments against the effect of AI advisors [00:24:16]
3.6 Value lock-in [00:33:59]
3.7 How democracies avoid coups [00:48:08]
3.8 Where AI could most easily help [01:00:25]
3.9 AI forecasting [01:04:30]
3.10 Application to the most challenging topics [01:24:03]
3.11 How to make it happen [01:37:50]
3.12 International negotiations and coordination and auditing [01:43:54]
3.13 Opportunities for listeners [02:00:09]
3.14 Why Carl doesn't support enforced pauses on AI research [02:03:58]
3.15 How Carl is feeling about the future [02:15:47]
3.16 Rob's outro [02:17:37]
4 Learn more
5 Related episodes
The AI advisor would point out all of these places where the system's top-level objective of getting a vaccine quickly is going wrong, and clarify which changes will make it happen quicker. “If you replace person X with person Y; if you cancel this regulation, these outcomes will happen, and you’ll get the vaccine earlier. People’s lives will be saved, the economy will be rebooted,” et cetera.
There’s just all kinds of ways in which the thing is self-destructive, and only sustainable by deep epistemic failures and the corruption of the knowledge system that very often happens to human institutions. But making it as easy as possible to avoid that would improve it. And then going forward, I think these same sorts of systems could advise us to change our society such that we will never again have a pandemic like that, and we would be robust even to an engineered pandemic and the like.
— Carl Shulman
This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!
If we develop artificial general intelligence that’s reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone’s pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?
It’s common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today’s conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.
As Carl explains, today the most important questions we face as a society remain in the “realm of subjective judgement”
… (truncated, 98 KB total)
Resource ID: 297ced45b445881c | Stable ID: sid_5IHloh1uFG