
Biology-Inspired AGI Timelines: The Trick That Never Works

by Eliezer Yudkowsky, 1st Dec 2021, AI Alignment Forum

- 1988 -

Hans Moravec: Behold my book Mind Children. Within, I project that, in 2010 or thereabouts, we shall achieve strong AI. I am not calling it "Artificial General Intelligence" because this term will not be coined for another 15 years or so.

Eliezer (who is not actually on the record as saying this, because the real Eliezer is, in this scenario, 8 years old; this version of Eliezer has all the meta-heuristics of Eliezer from 2021, but none of that Eliezer's anachronistic knowledge): Really? That sounds like a very difficult prediction to make correctly, since it is about the future, which is famously hard to predict.

Imaginary Moravec: Sounds like a fully general counterargument to me.

Eliezer: Well, it is, indeed, a fully general counterargument against futurism. Successfully predicting the unimaginably far future - that is, more than 2 or 3 years out, or sometimes less - is something that human beings seem to be quite bad at, by and large.

Moravec: I predict that, 4 years from this day, in 1992, the Sun will rise in the east.

Eliezer: Okay, let me qualify that. Humans seem to be quite bad at predicting the future whenever we need to predict anything at all new and unfamiliar, rather than the Sun continuing to rise every morning until it finally gets eaten. I'm not saying it's impossible to ever validly predict something novel! Why, even if that was impossible, how could I know it for sure? By extrapolating from my own personal inability to make predictions like that? Maybe I'm just bad at it myself.
But any time somebody claims that some particular novel aspect of the far future is predictable, they justly have a significant burden of prior skepticism to overcome. More broadly, we should not expect a good futurist to give us a generally good picture of the future. We should expect a great futurist to single out a few rare narrow aspects of the future which are, somehow, exceptions to the usual rule about the future not being very predictable.

I do agree with you, for example, that we shall at some point see Artificial General Intelligence. This seems like a rare predictable fact about the future, even though it is about a novel thing which has not happened before: we keep trying to crack this problem, we make progress albeit slowly, the problem must be solvable in principle because human brains solve it, eventually it will be solved; this is not a logical necessity, but it sure seems like the way to bet. "AGI eventually" is predictable in a way that it is not pr

... (truncated, 2381 KB total)