Conversation with Robin Hanson
aiimpacts.org/conversation-with-robin-hanson/
Data Status: Not fetched
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Timelines | Concept | 95.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 26, 2026 · 159 KB
Conversation with Robin Hanson – AI Impacts

AI Impacts talked to economist Robin Hanson about his views on AI risk and timelines. With his permission, we have posted and transcribed this interview.

Participants

- Robin Hanson – Associate Professor of Economics, George Mason University
- Asya Bergal – AI Impacts
- Robert Long – AI Impacts

Summary

We spoke with Robin Hanson on September 5, 2019. Here is a brief summary of that conversation:

- Hanson thinks that now is the wrong time to put a lot of effort into addressing AI risk: we will know more about the problem later, and there is an opportunity cost to spending resources now versus later, so there has to be a compelling reason to spend resources now instead.
- Hanson is not compelled by the existing arguments he has heard for spending resources now:
  - Hanson famously disagrees with the theory that AI will appear very quickly and in a very concentrated way, which would suggest that we need to spend resources now because we won't have time to prepare.
  - Hanson views the AI risk problem as essentially continuous with existing principal-agent problems, and disagrees that the key difference (the agents being smarter) should clearly worsen such problems.
  - Hanson thinks that we will see concrete signatures of problems before it's too late; he is skeptical that there are big things that have to be coordinated ahead of time. Relatedly, he thinks useful work anticipating problems in advance usually happens with concrete designs, not with abstract descriptions of systems.
  - Hanson thinks we are still too far away from AI for field-building to be useful.
- Hanson thinks AI is probably at least a century, perhaps multiple centuries, away:
  - Hanson thinks the mean estimate for when human-level AI arrives is a long time from now, and he thinks AI progress is unlikely to be 'lumpy' enough for it to happen without much warning:
    - Hanson is interested in how 'lumpy' progress in AI is likely to be: whether progress is likely to come in large chunks or in a slower, steadier stream.
    - Measured in terms of how much a given paper is cited, academic progress is not lumpy in any field.
    - The literature on innovation suggests that innovation is not lumpy: most innovation consists of lots of little things, though once in a while there are a few bigger things.
  - From an outside-view perspective, the current AI boom does not seem different from previous AI booms.
  - We don't have a good sense of how much research needs to be done to get to human-level AI. If we don't expect progress to be particularly lumpy, and we don't have a good sense of exactly how close we are, we have good reason to think we are not, e.g., five years away rather than halfway (a rough illustrative calculation follows this summary).
- Hanson thinks we shouldn't believe it when AI researchers give 50-year timescales:
  - Rephrasing the question in different ways, e.g. "When will most people lose their jobs?", causes people to give different timescales.
  - People consistently give overconfident estimates when they're estimating things that are abs
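One way to make the "five years away vs. halfway" point concrete is a delta-t style calculation. This is an editorial sketch, not part of the cached page and not a model Hanson states in the conversation: it assumes we sit at a uniformly random fraction f of the total research effort required, and it plugs in a purely hypothetical t = 60 years of elapsed AI research.

```latex
% Illustrative assumptions (not from the source):
%   f ~ Uniform(0,1) = fraction of total effort already completed
%   t  = years elapsed so far;  R = years remaining = t(1-f)/f
% Then P(R <= r) = P(f >= t/(t+r)) = r/(t+r),
% and the median of R solves r/(t+r) = 1/2, i.e. r = t.
\[
  R = \frac{t\,(1-f)}{f}, \qquad
  P(R \le r) = \frac{r}{t+r}, \qquad
  \operatorname{median}(R) = t .
\]
% With the hypothetical t = 60: P(R <= 5) = 5/65 \approx 8\%,
% while the median remaining time is another 60 years.
```

Under this prior, the median forecast is "about as long again as has already elapsed", which matches the summary's "halfway" intuition and makes "five years away" a low-probability tail.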
... (truncated, 159 KB total)

Resource ID: 6d739d1ec6c11123 | Stable ID: OGMyMGMwMz