My highly personal skepticism braindump on existential risk from artificial intelligence.
Author
NunoSempere
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: EA Forum
An EA Forum post offering a skeptical personal take on AI existential risk arguments from within the effective altruism community; useful for understanding internal dissent and debate around mainstream AI safety narratives.
Metadata
Importance: 45/100
Summary
An EA Forum post presenting a personal, skeptical perspective on mainstream AI existential risk arguments, questioning key assumptions about AI development trajectories, timelines, and the likelihood of catastrophic outcomes. The author offers informal but substantive critiques of common x-risk framings from within the EA community.
Key Points
- Challenges widely held assumptions in the AI safety community about the likelihood and nature of existential risk from AI systems.
- Offers a personal, informal critique of popular x-risk narratives, including concerns about overconfidence in specific doom scenarios.
- Raises questions about whether current AI safety framings are well-calibrated or driven by motivated reasoning within EA circles.
- Represents an internal EA-community skeptical voice, providing a counterpoint to consensus views on AI catastrophe risk.
- Useful for understanding the diversity of opinion within the EA/AI safety ecosystem on foundational risk assumptions.
Cached Content Preview
HTTP 200 | Fetched Apr 9, 2026 | 27 KB
# My highly personal skepticism braindump on existential risk from artificial intelligence.
By NunoSempere
Published: 2023-01-23
**Summary**
-----------
This document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like
* selection effects at the level of which arguments are discovered and distributed
* community epistemic problems, and
* increased uncertainty due to chains of reasoning with imperfect concepts
as real and important.
I still think that existential risk from AGI is important. But I don’t view it as certain or close to certain, and I think that something is going wrong when people see it as all but assured.
**Discussion of weaknesses**
----------------------------
I think that this document was important for me personally to write up. However, I also think that it has some significant weaknesses:
1. There is some danger in verbalization leading to rationalization.
2. It alternates controversial points with points that are dead obvious.
3. It is to a large extent a reaction to my imperfectly digested understanding of a worldview pushed around the [ESPR](https://espr-camp.org/)/CFAR/MIRI/LessWrong cluster from 2016-2019, which perhaps nobody still holds.
In response to these weaknesses:
1. I want to keep in mind that I do want to give some weight to my gut feeling, and that I might want to update on the feeling of uneasiness itself rather than on its accompanying reasoning or rationalizations.
2. Readers might want to keep in mind that parts of this post may look like a [bravery debate](https://slatestarcodex.com/2013/05/18/against-bravery-debates/). On the other hand, I've seen that which points people consider obvious and uncontroversial varies from person to person, so I don't think there is much more I can do on my end for the effort I'm willing to spend.
3. Readers might want to keep in mind that actual AI safety people and AI safety proponents may hold more nuanced views, and that to a large extent I am arguing against a “Nuño of the past” view.
Despite these flaws, I think that this text was personally important for me to write up, and it might also have some utility to readers.
**Uneasiness about chains of reasoning with imperfect concepts**
----------------------------------------------------------------
### **Uneasiness about conjunctiveness**
It’s not clear to me how conjunctive AI doom is. Proponents will argue that it is very disjunctive, that there are a lot of ways that things could go wrong. I’m not so sure.
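To make the distinction concrete, here is a minimal sketch of the two framings. All probabilities below are made-up placeholders for illustration, not Carlsmith's (or anyone's) actual figures:

```python
import math

# Conjunctive framing: doom requires several steps to ALL hold,
# so the estimate is the product of per-step probabilities.
step_probabilities = [0.8, 0.7, 0.6, 0.5, 0.6]  # hypothetical P(step | earlier steps)
p_doom_conjunctive = math.prod(step_probabilities)

# Disjunctive framing: doom occurs if ANY of several independent
# failure paths fires, so the estimate is 1 - P(every path fails).
path_probabilities = [0.10, 0.08, 0.05, 0.12]  # hypothetical P(path i causes doom)
p_doom_disjunctive = 1 - math.prod(1 - p for p in path_probabilities)

print(f"conjunctive estimate: {p_doom_conjunctive:.2f}")  # ~0.10
print(f"disjunctive estimate: {p_doom_disjunctive:.2f}")  # ~0.31
```

Even with loosely comparable inputs, the two framings land far apart, which is part of why the question of how conjunctive doom really is matters so much for the final estimate.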
In particular, when you see that a parsimonious decomposition (like Carlsmith’s) tends to generate lower estimates, you can conclude:
1. That the method is producing a biased result, and try to account for that, or
2. That the topic under discussion is, in itself, conjunctive: that there are several steps that need to be satisfied. For example, “AI causing a big
... (truncated, 27 KB total)
Resource ID: 766dc38ac9aa91c8 | Stable ID: sid_ICobujZcT4