Longterm Wiki

Davidson On Takeoff Speeds

web

Data Status

Not fetched

Cited by 1 page

Page          Type     Quality
AI Timelines  Concept  95.0

Cached Content Preview

HTTP 200 · Fetched Feb 26, 2026 · 324 KB
Davidson On Takeoff Speeds - by Scott Alexander, Astral Codex Ten
Machine Alignment Monday 6/19/23 · Jun 20, 2023

The face of Mt. Everest is gradual and continuous; for each point on the mountain, the points 1 mm away aren't too much higher or lower. But you still wouldn't want to ski down it.

I thought about this when reading What A Compute-Centric Framework Says About Takeoff Speeds, by Tom Davidson. Davidson tries to model what some people (including me) have previously called "slow AI takeoff". He thinks this is a misnomer. Like skiing down the side of Mount Everest, progress in AI capabilities can be simultaneously gradual, continuous, fast, and terrifying. Specifically, he predicts it will take about three years to go from AIs that can do 20% of all human jobs (weighted by economic value) to AIs that can do 100%, with significantly superhuman AIs within a year after that.

As penance for my previous mistake, I'll try to describe Davidson's forecast in more depth.

Raising The Biological Anchors

Last year I wrote about Open Philanthropy's Biological Anchors, a math-heavy model of when AI might arrive. It calculated how fast the amount of compute available for AI training runs was increasing and how much compute a human-level AI might take, and estimated when we might get human-level AI (originally ~2050; an update says ~2040).

[Figure: The basic Bio Anchors model]

Compute-Centric Framework (from here on CCF) updates Bio Anchors to include feedback loops: what happens when AIs start helping with AI research?

In some sense, AIs already help with this. Probably some people at OpenAI use Codex or other programmer-assisting AIs to help write their software. That means they finish their software a little faster, which makes the OpenAI product cycle a little faster. Let's say Codex "does 1% of the work" in creating a new AI. Maybe some more advanced AI could do 2%, 5%, or 50%. And by definition, an AGI - one that can do anything humans do - could do 100%. AI works a lot faster than humans. And you can spin up millions of instances much cheaper than you can train millions of employees. What happens when this feedback loop starts kicking in?

You get what futurists call a "takeoff".

The first graph shows a world with no takeoff. Past AI progress doesn't speed up future AI progress. The field moves forward at some constant rate.

The second graph shows a world with a gradual "slow" takeoff. Early AIs (eg Codex) speed up AI progress a little. Intermediate AIs (eg an AI that can help predict optimal parameter values) might speed up AI research more. Later AIs (eg autonomous near-human-level AIs) could do the vast majority of AI research work, speeding it up many times. We would expect the early stages of this process to take slightly less time than we would naively expect, and the latter stages to take much less time, since AIs are doing most of the work.

The third graph shows a world with a sudden "fast" take

... (truncated, 324 KB total)
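The feedback loop described in the excerpt is concrete enough to sketch. Below is a minimal toy simulation, not Davidson's actual CCF model: it assumes capability grows with cumulative research effort, that an AI at capability level c can automate a fraction f ≈ c of AI research work, and that automating a fraction f multiplies research speed by 1/(1 − f), as if automated tasks take negligible time. The `automated_fraction` mapping and the baseline rate are illustrative assumptions, not figures from the report.

```python
# Toy illustration of the "takeoff" feedback loop described in the excerpt.
# NOT Davidson's actual compute-centric model -- a minimal sketch under the
# assumptions stated above.

def automated_fraction(capability: float) -> float:
    """Hypothetical mapping from capability (0..1) to the share of
    AI-research work AIs can do. Capped below 1 to avoid dividing by zero."""
    return min(capability, 0.999)

def simulate(years: float, feedback: bool, dt: float = 0.01):
    """Integrate capability over time. Without feedback, research moves at
    a constant baseline rate; with feedback, progress made so far speeds up
    further progress. Returns a list of (year, capability) points."""
    baseline_rate = 0.05  # capability per year of human-only research (assumed)
    t, capability, trajectory = 0.0, 0.0, []
    while t < years and capability < 1.0:
        f = automated_fraction(capability) if feedback else 0.0
        speedup = 1.0 / (1.0 - f)  # crude substitution assumption
        capability = min(1.0, capability + baseline_rate * speedup * dt)
        t += dt
        trajectory.append((t, capability))
    return trajectory

def crossing_time(trajectory, level):
    """First simulated year at which capability reaches `level`."""
    return next(t for t, c in trajectory if c >= level)

# Compare the first two "graphs" in the excerpt: constant-rate progress
# versus gradual-but-accelerating progress from the feedback loop.
for label, feedback in [("no takeoff  ", False), ("slow takeoff", True)]:
    traj = simulate(40, feedback)
    t20, t100 = crossing_time(traj, 0.2), crossing_time(traj, 1.0)
    print(f"{label}: 20% at year {t20:5.1f}, 100% at year {t100:5.1f}, "
          f"gap {t100 - t20:4.1f} years")
```

Even this crude sketch reproduces the qualitative shape the excerpt describes: progress is continuous the whole way, yet the interval from "AI does 20% of the work" to "AI does 100%" comes out well under half of what constant-rate extrapolation gives, and it shrinks further the more aggressively the speedup function is chosen. Davidson's actual report adds training-compute requirements and explicit limits on how far AI labor can substitute for human labor, among other refinements.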
Resource ID: 4dccc8117fbd2235 | Stable ID: MjY4OWRhYT