Superforecasting the Premises in 'Is Power-Seeking AI an Existential Risk?'
Author
Joseph Carlsmith
Abstract
A comparison of Carlsmith's estimates with superforecasters' on the six premises of the AI x-risk argument. Reveals key cruxes: superforecasters place a higher probability on alignment difficulty (P3) but are far more skeptical of high-impact failures (P4) and disempowerment (P5).
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Carlsmith's Six-Premise Argument | Analysis | 65.0 |
Cached Content Preview
HTTP 200 | Fetched Feb 23, 2026 | 6 KB
Superforecasting the premises in "Is power-seeking AI an existential risk?" - Joe Carlsmith
Published: 10.18.2023 | Last updated: 10.18.2023
Good Judgment has solicited reviews and forecasts from superforecasters regarding my report "Is power-seeking AI an existential risk?", along with forecasts on three additional questions regarding timelines to AGI, and on one regarding the probability of existential catastrophe from out-of-control AGI.
A summary of the results is available on the Good Judgment website here, as are links to the individual reviews. Good Judgment has also prepared more detailed summaries of superforecaster comments and forecasts here (re: my report) and here (re: the other timelines and X-risk questions). I've copied key graphics below, along with a screenshot of a public spreadsheet of the probabilities from each forecaster. 1
This project was funded by Open Philanthropy, my employer. 2 The superforecasters completed a survey very similar to the one completed by other reviewers of my report (see here for links), except with an additional question (see footnote) about the "multiple stage fallacy." 3
Relative to my original report, the May 2023 superforecaster aggregation places higher probabilities on the first three premises – Timelines (80% relative to my 65%), Incentives (90% relative to my 80%), and Alignment Difficulty (58% relative to my 40%) – but substantially lower probabilities on the last three premises – High-Impact Failures (25% relative to my 65%), Disempowerment (5% relative to my 40%), Catastrophe (40% relative to my 95%). And their overall probability on all the premises being true – that is, roughly, on existential catastrophe from power-seeking AI by 2070 – is 1% compared to my 5% in the report. 4 (Though in the supplemental questions included in the second part of the project, they give a 6% probability to existential catastrophe from out-of-control AGI by 2200, conditional on AGI by 2070; and a 40% chance of AGI by 2070. 5)
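To make the multiplication explicit, here is a minimal sketch (not from the original post) that multiplies the six aggregated premise probabilities quoted above; the premise labels and variable names are my own shorthand.

```python
# Minimal sketch, assuming the six-premise argument treats the overall
# probability as the product of the six conditional premise probabilities.
# Figures are the May 2023 aggregates quoted above.
from math import prod

premises = ["Timelines", "Incentives", "Alignment Difficulty",
            "High-Impact Failures", "Disempowerment", "Catastrophe"]
carlsmith = [0.65, 0.80, 0.40, 0.65, 0.40, 0.95]
superforecasters = [0.80, 0.90, 0.58, 0.25, 0.05, 0.40]

# Side-by-side comparison of the per-premise estimates.
for name, c, s in zip(premises, carlsmith, superforecasters):
    print(f"{name:<22} Carlsmith {c:.0%} vs superforecasters {s:.0%}")

print(f"\nCarlsmith product:       {prod(carlsmith):.1%}")        # ~5.1%
print(f"Superforecaster product: {prod(superforecasters):.2%}")   # ~0.21%
```

Note that the product of the per-premise superforecaster aggregates (~0.2%) is lower than the 1% overall figure reported above; in general, the aggregate of each forecaster's own six-way product need not equal the product of the per-premise aggregates.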
To the extent the superforecasters and I disagree, especially re: the overall probability of existential risk from power-seeking AI, I haven't updated heavily in their direction, at least thus far (though I have updated somewhat). 6 This is centrally because:
- My engagement thus far with the written arguments in the reviews (which I encourage folks to check out – see links in the spreadsheet) hasn't moved me much. 7
- I remain unsure how much to defer to raw superforecaster numbers (especially for longer-term questions where their track record is less proven) absent object-level arguments I find persuasive. 8
- I had already priced in some amount of "I think that AI risk is higher than do many other thoughtful people who've thought about it at least somewhat."
In this sense, perhaps, I am similar to some of the
... (truncated, 6 KB total)

Resource ID: 8d9f2fea7c1b4e3a | Stable ID: NTFjYzA1MW