Longterm Wiki

Semi-Informative Priors Over AI Timelines


Data Status: Not fetched

Cited by 1 page

Page: AI Timelines
Type: Concept
Quality: 95.0

Cached Content Preview

HTTP 200 · Fetched Feb 26, 2026 · 843 KB
Semi-Informative Priors Over AI Timelines

March 25, 2021 · By Tom Davidson

Editor's note: This article was published under our former name, Open Philanthropy. Some content may be outdated. You can see our latest writing here.

One of Open Phil's major focus areas is technical research and policy work aimed at reducing potential risks from advanced AI. As part of this, we aim to anticipate and influence the development and deployment of advanced AI systems. To inform this work, I have written a report developing one approach to forecasting when artificial general intelligence (AGI) will be developed. This is the full report. An accompanying blog post starts with a short non-mathematical summary of the report, and then contains a long summary.

Introduction

Executive summary

The goal of this report is to reason about the likely timing of the development of artificial general intelligence (AGI). By AGI, I mean computer program(s) that can perform virtually any cognitive task as well as any human, [1] for no more money than it would cost for a human to do it. The field of AI is largely held to have begun in Dartmouth in 1956, and since its inception one of its central aims has been to develop AGI. [2]

[1] Notice that this definition applies equally whether it is a single artificial agent that can perform all these tasks, or a collection of narrower systems working together. The 'single agent' perspective is the focus of Bostrom's Superintelligence, while Drexler (2019) argues that general …

[2] The proposal for the Dartmouth conference states that 'The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find …

I forecast when AGI might be developed using a simple Bayesian framework, and choose the inputs to this framework using commonsense intuitions and reference classes from historical technological developments. The probabilities in the report represent reasonable degrees of belief, not objective chances.

One rough-and-ready way to frame our question is this: suppose you had gone into isolation in 1956 and since then only received annual updates about the inputs to AI R&D (e.g. the number of researcher-years, the amount of compute [3] used in AI R&D) and the binary fact that we have not yet built AGI. What would be a reasonable pr(AGI by year X) for you to have in 2021?

[3] 'Compute' means computation. In this report I operationalize this as the number of floating point operations (FLOP).

There are many ways one could go about trying to determine pr(AGI by year X). Some are very judgment-driven and involve taking stances on difficult questions like "since AI research began in 1956, what percentage of the way are we to developing AGI?" or "what steps are needed to build

... (truncated, 843 KB total)
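The excerpt frames the question as updating pr(AGI by year X) on the binary observation that AGI has not been built since 1956. The report's actual framework is not shown in this excerpt; as an illustration only, here is a minimal sketch of one of the simplest Bayesian updates of this kind, Laplace's rule of succession, treating each calendar year since 1956 as one failed Bernoulli trial under a uniform prior (both modeling choices are my assumptions, not the report's).

```python
# Hypothetical sketch: Laplace's rule of succession applied to "trial-years"
# since 1956. Assumptions (mine, not the report's): one trial per calendar
# year, all trials so far failed, uniform prior on the per-trial success
# probability.

def laplace_pr_success_next_trial(failures: int) -> float:
    """P(success on next trial | `failures` failures, 0 successes) = 1/(n+2)."""
    return 1.0 / (failures + 2)

def pr_agi_by_year(start_year: int, current_year: int, target_year: int) -> float:
    """P(AGI by target_year), chaining the rule-of-succession update
    forward one trial-year at a time, conditioning on continued failure."""
    pr_no_agi = 1.0
    failures = current_year - start_year  # failed trial-years observed so far
    for _ in range(target_year - current_year):
        pr_no_agi *= 1.0 - laplace_pr_success_next_trial(failures)
        failures += 1
    return 1.0 - pr_no_agi

# Observer who has watched 1956-2021 with no AGI, asking about 2100.
print(round(pr_agi_by_year(1956, 2021, 2100), 3))  # -> 0.545
```

The product telescopes, so the 2021 observer's pr(AGI by 2100) is 1 − 66/145 ≈ 0.545. Such uninformative priors are known to give surprisingly high probabilities, which is one motivation for the "semi-informative" priors the report develops instead.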
Resource ID: 8fe422457a2c2560 | Stable ID: ODYyYjc2MD