Longterm Wiki

What Are Reasonable AI Fears?

blog

Data Status: Not fetched

Cited by 2 pages

| Page | Type | Quality |
| --- | --- | --- |
| AI Timelines | Concept | 95.0 |
| Robin Hanson | Person | 53.0 |

Cached Content Preview

HTTP 200 · Fetched Feb 26, 2026 · 237 KB
[linkpost] "What Are Reasonable AI Fears?" by Robin Hanson, 2023-04-23 — EA Forum

Posted by Arjun Panickssery · Apr 14 2023 · 4 min read

Tags: AI safety, Existential risk, Forecasting, AI risk skepticism, AI alignment, AI forecasting, Robin Hanson, Criticism of longtermism and existential risk studies, Public communication on AI safety, AI governance

This is a linkpost for https://quillette.com/2023/04/14/what-are-reasonable-ai-fears/

Selected quotes (all emphasis mine):

"Why are we so willing to “other” AIs? Part of it is probably prejudice: some recoil from the very idea of a metal mind. We have, after all, long speculated about possible future conflicts with robots. But part of it is simply fear of change, inflamed by our ignorance of what future AIs might be like. Our fears expand to fill the vacuum left by our lack of knowledge and understanding. The result is that AI doomers entertain many different fears, and addressing them requires discussing a great many different scenarios. Many of these fears, however, are either unfounded or overblown. I will start with the fears I take to be the most reasonable, and end with the most overwrought horror stories, wherein AI threatens to destroy humanity.

As an economics professor, I naturally build my analyses on economics, treating AIs as comparable to both laborers and machines, depending on context. You might think this is mistaken since AIs are unprecedentedly different, but economics is rather robust. Even though it offers great insights into familiar human behaviors, most economic theory is actually based on the abstract agents of game theory, who always make exactly the best possible move. Most AI fears seem understandable in economic terms; we fear losing to them at familiar games of economic and political power."
He separates a few concerns:

"Doomers worry about AIs developing “misaligned” values. But in this scenario, the “values” implicit in AI actions are roughly chosen by the organisations who make them and by the customers who use them. Such value choices are constantly revealed in typical AI behaviors, and tested by trying them in unusual situations."

"Some fear that, in this scenario, many disliked conditions of our world—environmental destruction, income inequality, and othering of humans—might continue and even increase. Militaries and police might integrate AIs into their surveillance and weapons. It is true that AI may not solve these problems, and may even empower those who exacerbate them. On the other hand, AI may also empower those seeking solutions. AI just doesn’t seem to be the fundamental problem here."

"A related fear is that allowing technical and social change to continue indefinitely might eventually take civilization to places that we don’t want to be. Looking backward, we have benefit

... (truncated, 237 KB total)