Longterm Wiki

Context: Current AI trends and uncertainties

Type: web

Data Status: Not fetched

Cited by 1 page

Page           Type     Quality
AI Timelines   Concept  95.0

Cached Content Preview

HTTP 200 | Fetched Feb 26, 2026 | 212 KB
Context: Current AI trends and uncertainties - Centre for Future Generations
This section outlines the technical, social, and geopolitical trends and developments underlying our scenarios. It presents the key trends, assumptions, and uncertainties that shape our view of plausible AI futures—explaining why we consider each scenario plausible. For a more detailed description of how we created these scenarios, please see Methodology: How we came up with these scenarios.

Technical Background – What's driving the AI boom—and could it stall?

Four reinforcing trends have supercharged AI progress so far

AI capabilities have broadened rapidly in recent years—from static question answering and pattern recognition to complex reasoning, coding, and autonomous task completion. This leap has been driven by four mutually reinforcing trends: better hardware, more efficient algorithms and data use, rising investment, and the emergence of new training paradigms enabled by an increasingly powerful AI tech stack.

Trend 1: Hardware scaling. Training compute has been increasing by 4–5× every year since 2010, according to Epoch AI estimates [1]. This trend has remained remarkably consistent over time. More compute allows larger models to be trained on more data points and accelerates algorithmic experimentation at scale. The result is a virtuous cycle: better hardware enables bigger models, which in turn motivate further investment in hardware. Recent releases from OpenAI and DeepSeek [2] also illustrate how scaling inference compute—used when models are deployed rather than trained—can further improve outputs. This shift enables models to "think longer" at inference time, generating higher-quality results and opening up possibilities for more demanding downstream applications. In turn, high-quality model outputs can serve as improved training data for future systems, creating a bootstrapping loop that may help address data availability bottlenecks [3].

Trend 2: Algorithmic and data efficiency. Algorithms are becoming significantly more efficient, improving by approximately 3× per year [4]. Smarter model architectures, better training objectives, and innovations in data selection and augmentation all contribute to higher performance per FLOP. Algorithmic progress also supports the growing use of synthetic data, where models generate high-quality training examples for themselves—augmenting scarce real-world data and potentially overcoming looming "data wall" concerns [5]. Nonetheless, whether such bootstrapping will be sufficient to sustain long-term progress remains an open question
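
As a rough illustration of how these two growth rates compound, a minimal Python sketch follows (not from the source). It treats the 4-5x/year hardware trend and the 3x/year algorithmic-efficiency trend as multiplicative and uses a 4.5x midpoint for the hardware figure; both the midpoint and the multiplicative composition are simplifying assumptions made here for illustration only.

    # Illustrative arithmetic only: compound growth of "effective" training
    # compute, combining the two trends cited above. The 4.5x/year hardware
    # figure (midpoint of Epoch AI's 4-5x estimate) and the 3x/year
    # algorithmic-efficiency figure come from the text; treating them as
    # multiplicative is an assumption of this sketch, not a claim of the source.

    HARDWARE_GROWTH = 4.5   # physical training compute, x per year (Epoch AI, 4-5x)
    ALGO_GROWTH = 3.0       # performance per FLOP, x per year

    def effective_compute_multiplier(years: float) -> float:
        """Combined multiplier on 'effective' compute after `years` years,
        assuming both trends hold and compose multiplicatively."""
        return (HARDWARE_GROWTH * ALGO_GROWTH) ** years

    if __name__ == "__main__":
        for y in (1, 2, 5):
            print(f"{y} year(s): ~{effective_compute_multiplier(y):,.0f}x effective compute")
        # Prints roughly: 1 year ~14x, 2 years ~182x, 5 years ~448,403x

Under these assumptions, effective compute grows about 13.5x per year, which is why even a few years of sustained progress on both fronts would dwarf the gains from either trend alone.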

... (truncated, 212 KB total)
Resource ID: 86a10195012598d2 | Stable ID: ZjdkYTcyZT