This page documents Yann LeCun’s public predictions and claims to assess his epistemic track record.
| Category | Count | Notes |
|---|---|---|
| Clearly Correct | 4-5 | Neural networks, RL limited impact, radiology timeline, AlphaGo not AGI, cake analogy |
| Partially Correct | 2-3 | ChatGPT as writing assistant, some capability limits |
| Pending/Testable | 6-8 | LLMs “dead end,” 5-year obsolescence, JEPA superiority, decade of robotics |
| Likely Wrong/Overstated | 3-4 | GPT-3 dismissal, “cannot reason” absolutism |
| Unfalsifiable | 2-3 | Existential risk dismissals (only testable via catastrophe) |
Overall pattern: Strong on long-term architectural intuitions; tends to underestimate near-term LLM capabilities and overstate their limitations in absolute terms.
| Date | LeCun Quote | Type | Source |
|---|---|---|---|
| Apr 2023 | ”Scaremongering about an asteroid that doesn’t actually exist (even if you think it does) is going to depress people for no reason.” | Twitter debate | LessWrong |
| Apr 2023 | ”Stop it, Eliezer. Your scaremongering is already hurting some people. You’ll be sorry if it starts getting people killed.” | Heated exchange | Same |
| Apr 2023 | ”A high-school student actually wrote to me saying that he got into a deep depression after reading prophecies of AI-fueled apocalypse.” | Twitter debate | Same |
| Apr 2023 | ”The ‘hard take-off’ scenario is utterly impossible.” | Bold claim | Same |
| Apr 2023 | ”To guarantee that a system satisfies objectives, you make it optimize those objectives at run time. That solves the problem of aligning behavior to objectives.” | Alignment claim | Same |
| 2024 | ”The goal of MIRI (the radical AI doomers institute) is nothing less than to shut down research in AI. But they seem to have communication and credibility issues: this puts them in the same bag as countless apocalyptic and survivalist cults.” | Twitter | Same |
| Detail | Information |
|---|---|
| Topic | ”Be it Resolved: AI research and development poses an existential threat” |
| LeCun’s team | Against (with Melanie Mitchell) |
| Opposing team | For (Yoshua Bengio, Max Tegmark) |
| Initial audience | 67% pro-risk, 33% anti |
| Final audience | 61% pro-risk, 39% anti (LeCun’s side gained ground) |
| LeCun’s argument | ”The best solution for bad actors with AI is good actors with AI” |
| Date | LeCun Quote | Type | Source |
|---|---|---|---|
| May 2024 | ”Expressing an ambitious vision for the future is great. But telling the public blatantly false predictions (‘AGI next year’, ‘1 million robotaxis by 2020’, ‘AGI will kill us all, lets pause’…) is very counterproductive (also illegal in some cases).” | Twitter | CNBC |
| May 2024 | Listed 80+ technical papers he had published since Jan 2022, after Musk questioned his recent scientific contributions | Twitter | Same |
| Dec 2025 | In a dispute with Demis Hassabis over the definition of AGI, Musk sided with Hassabis: “Demis is right” | Twitter | Benzinga |
| LeCun’s claim | Hassabis response | Musk verdict |
|---|---|---|
| ”There is no such thing as general intelligence” | ”I pretty much disagree with most of those comments” | ”Demis is right” |
| Date | Quote | Type | Source |
|---|---|---|---|
| Jan 2023 | ”ChatGPT is ‘not particularly innovative’—based on established techniques. Meta has had this for years.” | Interview | Digital Trends |
| Feb 2023 | ”My unwavering opinion on current (auto-regressive) LLMs: 1. They are useful as writing aids. 2. They are ‘reactive’ & don’t plan nor reason.” | LinkedIn | LinkedIn |
| Mar 2023 | ”Auto-Regressive LLMs are exponentially diverging diffusion processes… Errors accumulate.” (this error-accumulation argument is sketched after the table) | Twitter | X/Twitter |
| Sep 2023 | ”Auto-Regressive LLMs can’t plan (and can’t really reason). They are ‘dumb’ and ‘merely produce one word after the other’” | Twitter | X/Twitter |
| Mar 2024 | ”LLMs are basically an off-ramp, a distraction, a dead end” for achieving human-level AI | Podcast | Lex Fridman #416 |
| Oct 2024 | ”Today’s AI models ‘are really just predicting the next word in a text,’ and because of their enormous memory capacity, they can seem to be reasoning, when in fact they’re merely regurgitating information.” | Interview | WSJ/TechCrunch |
| Jan 2025 | ”LLMs are good at manipulating language, but not at thinking.” | Davos 2025 | TechCrunch |
| 2025 | ”If you are interested in human-level AI, don’t work on LLMs.” | Conference advice | VentureBeat |
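LeCun’s “errors accumulate” claim (Mar 2023 row above) is usually formalized as a simple independence argument: if each generated token has some small probability e of derailing the answer, the probability that an n-token answer stays entirely on track is (1 − e)^n, which decays exponentially with length. The snippet below only illustrates that arithmetic; the error rates are made-up values, and the per-token independence assumption is exactly the part critics of the argument dispute.

```python
# Illustrative arithmetic only: LeCun's "errors accumulate" argument under the
# assumption that each token independently has probability e of going wrong.
def p_fully_correct(per_token_error: float, n_tokens: int) -> float:
    """Probability an n-token answer contains no error, assuming independence."""
    return (1.0 - per_token_error) ** n_tokens

for e in (0.01, 0.02, 0.05):          # made-up per-token error rates
    for n in (10, 100, 1000):         # answer lengths in tokens
        print(f"e={e:.2f}, n={n:4d}: P(no error) = {p_fully_correct(e, n):.4f}")
```

Defenders of LLMs reply that token errors are not independent and that sampling, verification, and chain-of-thought can lower the effective per-token error rate, which is part of why the assessment tables below treat the absolutist “can’t reason” claims as overstated rather than settled.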
| Date | Quote | Source |
|---|---|---|
| Feb 2024 | ”So maybe [with neural networks] we are at the size of a cat. But why aren’t those systems as smart as a cat? A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs.” | World Government Summit Dubai |
| May 2024 | ”We need to have the beginning of a hint of a design for a system smarter than a house cat” before worrying about controlling superintelligent AI | Twitter |
| Oct 2024 | ”Felines have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning. None of these qualities are present in today’s ‘frontier’ AIs.” | Wall Street Journal |
| Nov 2024 | ”We don’t even have a machine as smart as a cat” | Queen Elizabeth Prize roundtable |
| Statement | Source |
|---|---|
| ”Current AI systems are mostly based on System 1 thinking, which is fast and intuitive, but it’s also brittle and can make mistakes that humans would never make.” | Various |
| ”An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that’s clearly System 1—it’s reactive, right? There’s no reasoning.” (see the decode-loop sketch after this table) | Interviews |
| Chain-of-thought reasoning is “at best, System 1.1,” not true deliberative reasoning | Interviews |
| ”System 2 requires a model of the world to reason and plan over multiple timescales and abstraction levels to find the optimal answer” | Technical talks |
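The “fixed amount of computation” point above becomes concrete in a toy autoregressive decode loop: every new token costs one identical forward pass regardless of how hard the question is, so any extra deliberation has to come from emitting more tokens rather than from spending more compute on a single step. Everything in this sketch (the toy vocabulary, `toy_forward`) is a stand-in, not any real model’s API.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_forward(tokens):
    """Stand-in for a transformer forward pass: one fixed-size computation per call."""
    return [random.random() for _ in VOCAB]

def generate(prompt, max_new_tokens=5):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        scores = toy_forward(tokens)                      # same work for every token
        tokens.append(VOCAB[scores.index(max(scores))])   # greedy pick of next token
    return tokens

print(generate(["the", "cat"]))
```

Chain-of-thought and “reasoning” models add total compute by generating more (often hidden) tokens rather than by thinking longer per step, which is roughly the move the “System 1.1” remark above refers to.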
| Date | Claim | Type | What Happened | Status | Source |
|---|---|---|---|---|---|
| 1980s-90s | Neural networks will eventually prove valuable despite mainstream skepticism | Research | Deep learning became dominant paradigm by 2010s | ✅ Correct | History of Data Science |
| Mar 2016 | AlphaGo victory is “not true artificial intelligence”; we still need big breakthroughs for AGI | Interview | AlphaGo/AlphaZero remained narrow AI | ✅ Largely correct | Information Age |
| Dec 2016 | RL is “cherry on the cake”—bulk of progress will come from unsupervised/self-supervised learning | Twitter | Self-supervised learning (transformers, BERT, GPT) became dominant | ✅ Mostly correct | LeCun on X |
| 2016 | Radiologists would NOT be replaced in 5 years (contradicting Hinton) | Twitter | By 2022, no radiologists replaced; only ≈11% used AI | ✅ Correct | LeCun on X |
| 2016 | ”Cake Analogy”: Bulk of AI progress from unsupervised/self-supervised learning | Conference | Self-supervised learning became dominant | ✅ Largely correct | NIPS 2016 |
| Oct 2020 | Said people have “completely unrealistic expectations” about GPT-3; compared scaling to “building high-altitude airplanes to go to the moon” | Social media | GPT-4, Claude significantly exceeded GPT-3 | ❌ Too dismissive | Futurism |
| Jan 2023 | ChatGPT is “not particularly innovative”—based on established techniques | Interview | Technically true, but missed revolutionary practical impact | ⚠️ Technically correct, missed the point | Digital Trends |
| Date | Claim | Type | Testable By | Current Status | Source |
|---|---|---|---|---|---|
| Jan 2025 | ”Within 5 years, nobody in their right mind would use [LLMs] anymore, at least not as the central component of an AI system” | Davos statement | ≈2030 | LLMs remain dominant as of 2026 | TechCrunch |
| Jan 2025 | New AI paradigm (world models) will emerge within 3-5 years with “some level of common sense” | Davos statement | ≈2028-2030 | JEPA research ongoing; no paradigm shift yet | Same |
| Jan 2025 | Coming years will be the “decade of robotics” | Davos statement | ≈2035 | Early - physical AI investment increasing | Same |
| 2022-25 | LLMs are a “dead end” for human-level AI; autoregressive prediction fundamentally cannot reach AGI | Multiple | When/if AGI achieved | Central thesis; his career now bet on this | Newsweek |
| 2023-25 | Hallucinations cannot be fixed within LLM paradigm—requires architectural change | Multiple | Ongoing | Hallucinations persist; whether fixable debated | LeCun on X |
| 2024-25 | Human-level AI is “at least a decade and probably much more” away | Interview | ≈2035+ | Contrasts with Altman/Amodei 2-5 year predictions | LeCun on X |
| Date | Prediction | Type | Status | Source |
|---|---|---|---|---|
| Jun 2022 | Published “A Path Towards Autonomous Machine Intelligence” proposing JEPA architecture as alternative to LLMs (the core latent-prediction idea is sketched after this table) | Paper | Foundational paper | OpenReview |
| Jan 2025 | ”The world model is going to become the key component of future AI systems” | Davos 2025 | Pending | TechCrunch |
| Nov 2025 | Left Meta to found AMI Labs, betting career on world models over LLMs | Career decision | Career-defining bet | TechCrunch |
| Dec 2025 | AMI Labs seeking $3.5B valuation to build “first world model for business” | Startup pitch | Pending | Sifted |
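For context on the Jun 2022 paper row above, the core JEPA idea is to predict the embedding of a target signal from the embedding of a context signal, instead of reconstructing the target in input space, so unpredictable detail can simply be absent from the representation. The sketch below is a heavily simplified, assumed rendering of that idea: the layer sizes, module names, and the plain stop-gradient are illustrative choices, and published variants such as I-JEPA use an EMA-updated target encoder and masking strategies not shown here.

```python
import torch
import torch.nn as nn

dim = 128  # illustrative embedding size

# Two encoders and a predictor; in practice these are deep networks (e.g. ViTs).
context_encoder = nn.Sequential(nn.Linear(784, dim), nn.ReLU(), nn.Linear(dim, dim))
target_encoder  = nn.Sequential(nn.Linear(784, dim), nn.ReLU(), nn.Linear(dim, dim))
predictor       = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

def jepa_loss(x_context: torch.Tensor, x_target: torch.Tensor) -> torch.Tensor:
    s_x = context_encoder(x_context)        # embed the observed context
    with torch.no_grad():                   # no gradient through the target branch
        s_y = target_encoder(x_target)      # embed the target (e.g. a masked region)
    s_y_pred = predictor(s_x)               # predict the target *embedding*
    return ((s_y_pred - s_y) ** 2).mean()   # loss lives in latent space, not pixel space

# Toy usage: two flattened "views" of the same scene, batch of 8.
loss = jepa_loss(torch.randn(8, 784), torch.randn(8, 784))
loss.backward()
```

The contrast with an autoregressive LLM is in the training target: the model is never asked to generate the raw signal, only to predict its abstract representation, which is the property LeCun argues world models need.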
| Detail | Information |
|---|---|
| Announced | November 19, 2025 |
| Reason for leaving Meta | Philosophical divergence - Meta “doubled down on scaling LLaMA” while LeCun wanted world models |
| CEO | Alexandre Lebrun (founder of Nabla) |
| HQ | Paris, France |
| Valuation target | ≈$3.5 billion |
| Investors in talks | Cathay Innovation, Greycroft, Hiro Capital, 20VC, Bpifrance |
| Goal | Build “world model” AI that understands physics, has persistent memory, can reason and plan |
| LeCun quote | ”The AI industry is completely LLM-pilled” |
| Date | Claim | Type | What Happened | Assessment | Source |
|---|---|---|---|---|---|
| Sep 2023 | ”Auto-Regressive LLMs can’t plan (and can’t really reason)”—they are “dumb” | Twitter | o1 reached the 89th percentile on Codeforces competitive programming and placed among the top 500 US students on the AIME, a USA Math Olympiad qualifier | Overstated—LLMs demonstrate reasoning-adjacent capabilities | LeCun on X |
| 2022-25 | Scaling will hit diminishing returns; “you cannot just assume that more data and more compute means smarter AI” | Multiple | Scaling continued producing improvements through GPT-4, Claude 3/3.5, o1 | Too early—scaling worked longer than predicted | The Decoder |
| Oct 2020 | Compared LLM scaling to “building high-altitude airplanes to go to the moon” | Social media | LLMs became the dominant AI paradigm | Wrong analogy—scaling was not a dead end (at least not yet) | Analytics Drift |
| Claim | Date | Type | Assessment | Source |
|---|---|---|---|---|
| AI existential risk is “complete B.S.” (said in French) | Oct 2024 | Interview | Unfalsifiable—unless catastrophe occurs | TechCrunch |
| ”Intelligence has nothing to do with a desire to dominate” | 2024 | Interview | Unfalsifiable—theoretical claim | LeCun on X |
| AI systems can be designed to remain submissive | 2023 | Twitter debate | Unfalsifiable—depends on alignment being solved | LessWrong |
| ”Looking at the political scene today, it’s not clear that intelligence is actually such a major factor” (for domination) | 2024 | Interview | Unfalsifiable | Newsweek |
| Date | Target | Quote | Source |
|---|---|---|---|
| Oct 2023 | OpenAI, DeepMind, Anthropic | Accused Sam Altman, Demis Hassabis, and Dario Amodei of “massive corporate lobbying” and “attempting to regulate the AI industry in their favor under the guise of safety” | The Decoder |
| 2024 | General | ”The proposals that have existed would have resulted in regulatory capture by a small number of companies” | Various |
| 2024 | Doomers | ”Scare the hell out of the public and their representatives with prophecies of doom” to achieve regulatory capture | Twitter |
| Quote | Source |
|---|---|
| ”The distortion is due to their inexperience, naiveté on how difficult the next steps in AI will be, wild overestimates of their employer’s lead and their ability to make fast progress.” | VentureBeat |
| ”Does SB 1047… spell the end of the Californian technology industry?” | Same |
| ”Without open-source AI, there is no AI start-up ecosystem and no academic research on large models. Meta will be fine, but AI start-ups will just die.” | Same |
| SB 1047 is based on an “illusion of ‘existential risk’ pushed by a handful of delusional think-tanks” | Same |
Outcome: Bill vetoed by Governor Newsom on September 29, 2024
| Quote | Source |
|---|---|
| ”To people who see the performance of DeepSeek and think: ‘China is surpassing the US in AI.’ You are reading this wrong. The correct reading is: ‘Open source models are surpassing proprietary ones.’” | LinkedIn/X |
| ”DeepSeek has profited from open research and open source (e.g. PyTorch and Llama from Meta). They came up with new ideas and built them on top of other people’s work.” | Same |
| The $1 trillion market sell-off was “woefully unjustified” based on a “major misunderstanding” about AI infrastructure costs | Same |
Where LeCun tends to be right:
- Long-term architectural intuitions (neural networks in the 80s-90s, self-supervised learning)
- Identifying limitations of specific approaches (pure RL, narrow AI claims)
- Predictions on longer timescales (5+ years)
- Skepticism about hype cycles and premature claims
Where LeCun tends to be wrong:
- Near-term capability assessments of LLMs (consistently underestimates)
- Absolute statements about what LLMs “cannot” do
- Dismissing practical utility even when philosophical critiques may hold
- Predicting when scaling will stop working
Confidence calibration:
- Expresses very high confidence even on contested claims
- Rarely acknowledges uncertainty or conditions under which he’d update
- Uses strong language (“complete B.S.”, “dead end”, “cannot”) that ages poorly when capabilities improve
Unlike some figures whose views shift significantly, LeCun has been remarkably consistent:
| Topic | Position | Consistency |
|---|---|---|
| LLM limitations | Skeptical since GPT-3 | Very consistent |
| AI existential risk | Dismissive | Very consistent |
| Open-source AI | Strong advocate | Very consistent |
| World models as alternative | Advocate since 2022 | Consistent |
| Scaling skepticism | Skeptical | Very consistent |
Notable: LeCun has not meaningfully updated his views despite LLM capabilities exceeding his stated expectations. This could indicate either (a) strong conviction based on deep understanding, or (b) insufficient responsiveness to evidence.
By 2028-2030:
- Whether LLMs remain central to AI systems (his “5 years” prediction)
- Whether JEPA/world models achieve capabilities LLMs cannot
- Whether hallucinations are addressed within or outside the LLM paradigm
- Whether human-level AI arrives (testing his “decade+” timeline vs others’ 2-5 year predictions)
His departure from Meta to found AMI Labs represents a career-defining bet on these predictions.