Longterm Wiki

some researchers note

web

A journalistic overview from HEC Paris summarizing the current debate around scaling law limits and AGI trajectories; useful for context on mainstream discourse but not a primary research source.

Metadata

Importance: 38/100 · blog post · commentary

Summary

This HEC Paris article examines the apparent slowdown in AI scaling laws and the debate over what comes next, contrasting views of AI as a normal technology versus a path to superintelligence. It surveys expert disagreements on reasoning models, AGI timelines, and the potential for large-scale job displacement, while questioning whether progress narratives are driven by genuine breakthroughs or investment incentives.

Key Points

  • Frontier LLMs appear to have hit a ceiling, with simply adding more data and compute showing diminishing returns toward AGI.
  • Reasoning models built on post-training reinforcement learning (e.g., OpenAI o1, DeepSeek R1) are debated as either a new scaling frontier or an 'illusion of thinking'.
  • Sam Altman predicts AI will automate 30–40% of jobs worldwide by 2030, while skeptics argue AI will automate tasks rather than entire occupations.
  • Prominent researchers like Yann LeCun and Michael Jordan argue LLMs alone cannot achieve AGI and that new fundamental breakthroughs are needed.
  • The article frames the slowdown as opening a broader debate between viewing AI as a disruptive superintelligence vs. a more limited, specialized technology.
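For context, the "scaling laws" at issue are empirical power-law fits relating model loss to parameter count, data, and compute. A minimal sketch of the parameter-count form (the exponent is the commonly cited fit from Kaplan et al. 2020, not a figure from this article):

```latex
% Loss as a power law in non-embedding parameter count N
% (Kaplan et al., 2020; N_c and \alpha_N are empirical fit constants)
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076
```

Because loss falls only as a small power of N, each additional order of magnitude of parameters buys a fixed multiplicative (and in absolute terms shrinking) loss reduction; this is the "diminishing returns" the article describes.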

Cited by 1 page

Page | Type | Quality
Is Scaling All You Need? | Crux | 42.0

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 6 KB
AI Beyond the Scaling Laws | HEC Paris 
Tech & AI

AI Beyond the Scaling Laws
Hype, Limits, and What Comes Next
December 1st, 2025

Maud Clerc
Editor-in-Chief of Dare, HEC Media Hub

 AI’s exponential progress shows signs of slowing. On the road to AGI, two visions emerge: AI as a normal technology — or as a humanlike superintelligent, disruptive force. 

 It is a well-kept secret in the AI industry: for over a year now, frontier models appear to have reached their ceiling. The scaling laws that powered the exponential progress of Large Language Models (LLMs) like GPT-4, and that fueled bold predictions of Artificial General Intelligence (AGI) by 2026 from Sam Altman (OpenAI) and Dario Amodei (Anthropic), have started to show diminishing returns. Inside labs, the consensus is growing that simply adding more data and compute will not create the “all-knowing digital gods” once promised (TechCrunch). Many respected voices, from Yann LeCun to Michael Jordan, have long argued that LLMs will not get us to AGI. Instead, progress will require new breakthroughs, as the curve of innovation flattens. The disappointment and backlash surrounding the release of GPT-5 have only made this ceiling more visible. 

 I found it particularly fascinating how this turning point has opened a wider debate on what to expect, or fear, from AI in the years ahead. It even gives me arguments when facing my children’s difficult questions: ‘Mummy, will robots replace humans?’ 

 With reasoning models built on post-training techniques (reinforcement learning) such as GPT-o1, Claude 3.7 Sonnet, or DeepSeek R1, are we witnessing “the emergence of new scaling laws” (Satya Nadella)? Or just the “illusion of thinking” (Apple Machine Learning Research), a critique of a narrative designed to keep investment flowing into ever-larger compute infrastructures and massive data center projects? The stakes are high in this winner-takes-all race toward AGI. 

 What can we expect from AI in the year to come, and what will be the economic and societal impacts? Will half of all entry-level white-collar jobs vanish within the next five years due to AI automation, as Dario Amodei from Anthropic predicted? Sam Altman, the CEO of OpenAI, foresees AI automating 30–40% of jobs worldwide by 2030. Both leaders envision massive disruption ahead, with the potential for job displacement on an unprecedented scale. 

 Skeptics counter that there is little empirical evidence of large-scale job

... (truncated, 6 KB total)
Resource ID: 40560014cfc7663d | Stable ID: sid_qyMrA8qvMw