## Summary

Comprehensive compilation of Yann LeCun's predictions showing he was correct on long-term architectural intuitions (neural networks, self-supervised learning dominance, radiologists not replaced by 2022) but consistently underestimated near-term LLM capabilities (dismissing GPT-3, claiming LLMs "cannot reason"). His track record shows 4-5 clearly correct predictions, 3-4 likely wrong/overstated claims, and 6-8 pending predictions, including his career-defining bet that LLMs will be obsolete within 5 years (by ~2030).
# Yann LeCun: Track Record

This page documents Yann LeCun's public predictions and claims to assess his epistemic track record.
## Prediction Summary

| Category | Count | Examples |
|---|---|---|
| Clearly Correct | 4-5 | Neural networks' eventual value, self-supervised learning dominance, radiologists not replaced, ChatGPT as writing assistant, some capability limits |
| Pending/Testable | 6-8 | LLMs "dead end," 5-year obsolescence, JEPA superiority, decade of robotics |
| Likely Wrong/Overstated | 3-4 | GPT-3 dismissal, "cannot reason" absolutism |
| Unfalsifiable | 2-3 | Existential risk dismissals (only testable via catastrophe) |

**Overall pattern:** Strong on long-term architectural intuitions; tends to underestimate near-term LLM capabilities and overstate their limitations in absolute terms.
## Major Debates

### Yudkowsky-LeCun Twitter Debate (April 2023)
| Date | LeCun Quote | Type | Source |
|---|---|---|---|
| Apr 2023 | "Scaremongering about an asteroid that doesn't actually exist (even if you think it does) is going to depress people for no reason." | Heated exchange | Same |
| Apr 2023 | "Stop it, Eliezer. Your scaremongering is already hurting some people. You'll be sorry if it starts getting people killed." | Heated exchange | Same |
| Apr 2023 | "A high-school student actually wrote to me saying that he got into a deep depression after reading prophecies of AI-fueled apocalypse." | Twitter debate | Same |
| Apr 2023 | "The 'hard take-off' scenario is utterly impossible." | Bold claim | Same |
| Apr 2023 | "To guarantee that a system satisfies objectives, you make it optimize those objectives at run time. That solves the problem of aligning behavior to objectives." | Alignment claim | Same |
| 2024 | "The goal of MIRI (the radical AI doomers institute) is nothing less than to shut down research in AI. But they seem to have communication and credibility issues: this puts them in the same bag as countless apocalyptic and survivalist cults." | Twitter | Same |
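LeCun's "optimize those objectives at run time" claim can be made concrete with a toy inference-time action-selection sketch. Everything below (the action set, the task objective, the guardrail cost, and the weight) is invented for illustration; it is a minimal caricature of the idea, not LeCun's actual proposal.

```python
# Toy sketch of run-time objective optimization: instead of trusting trained
# behavior, score every candidate action at inference time against the task
# objective plus a weighted guardrail cost, and pick the minimizer.

def pick_action(actions, task_cost, guardrail_cost, weight=10.0):
    """Choose the action minimizing task cost plus weighted guardrail cost."""
    return min(actions, key=lambda a: task_cost(a) + weight * guardrail_cost(a))

actions = [-2.0, -0.5, 0.0, 0.5, 2.0]
task = lambda a: (a - 2.0) ** 2           # task objective prefers a = 2.0
guard = lambda a: max(0.0, abs(a) - 1.0)  # guardrail: penalize |a| > 1.0

print(pick_action(actions, task, guard))  # 0.5, the best action the guardrail allows
```

The point of contention in the debate is whether writing down `guardrail_cost` correctly is itself the hard part of alignment; the optimization step is the easy part.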
### Munk Debate (June 22, 2023)

| Detail | Information |
|---|---|
| Topic | "Be it Resolved: AI research and development poses an existential threat" |
| LeCun's team | Against (with Melanie Mitchell) |
| Opposing team | For (Yoshua Bengio, Max Tegmark) |
| Initial audience vote | 67% pro-risk, 33% anti |
| Final audience vote | 61% pro-risk, 39% anti (LeCun's side gained ground) |
| LeCun's argument | "The best solution for bad actors with AI is good actors with AI" |
### Elon Musk Feud (May-June 2024)

| Date | LeCun Quote | Type | Source |
|---|---|---|---|
| May 2024 | "Expressing an ambitious vision for the future is great. But telling the public blatantly false predictions ('AGI next year', '1 million robotaxis by 2020', 'AGI will kill us all, lets pause'...) is very counterproductive (also illegal in some cases)." | Twitter | Same |
| May 2024 | Noted he had posted 80+ technical papers since Jan 2022 after Musk questioned his recent scientific contributions | Twitter | Same |
| Dec 2025 | On the AGI definition dispute with Demis Hassabis (Musk sided with Hassabis: "Demis is right"): "Today's AI models 'are really just predicting the next word in a text,' and because of their enormous memory capacity, they can seem to be reasoning, when in fact they're merely regurgitating information." | Twitter | Same |
### Cat Intelligence Comparisons

| Date | LeCun Quote | Source |
|---|---|---|
| — | "So maybe [with neural networks] we are at the size of a cat. But why aren't those systems as smart as a cat? A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs." | World Government Summit Dubai |
| May 2024 | "We need to have the beginning of a hint of a design for a system smarter than a house cat" before worrying about controlling superintelligent AI | Twitter |
| Oct 2024 | "Felines have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning. None of these qualities are present in today's 'frontier' AIs." | Wall Street Journal |
| Nov 2024 | "We don't even have a machine as smart as a cat" | Queen Elizabeth Prize roundtable |
## System 1 / System 2 Analysis

| Statement | Source |
|---|---|
| "Current AI systems are mostly based on System 1 thinking, which is fast and intuitive, but it's also brittle and can make mistakes that humans would never make." | Various |
| "An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that's clearly System 1—it's reactive, right? There's no reasoning." | Interviews |
| Chain-of-thought reasoning is "at best, System 1.1", not true deliberative reasoning | Interviews |
| "System 2 requires a model of the world to reason and plan over multiple timescales and abstraction levels to find the optimal answer" | Technical talks |
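The "fixed amount of computation to produce a token" point can be sketched with a toy autoregressive generator. The bigram table below is a made-up stand-in for a real LLM; the structural point it illustrates is that compute scales with output length, never with question difficulty.

```python
# Toy sketch of autoregressive generation: one fixed-cost "forward pass"
# per emitted token, regardless of how hard the prompt is.

BIGRAMS = {  # hypothetical toy transition table, not a trained model
    "the": "cat", "cat": "sat", "sat": "on", "on": "the",
}

def generate(prompt: str, n_tokens: int) -> tuple[list[str], int]:
    """Emit n_tokens one at a time; count forward passes (compute units)."""
    tokens = prompt.split()
    forward_passes = 0
    for _ in range(n_tokens):
        forward_passes += 1                  # exactly one pass per token
        tokens.append(BIGRAMS.get(tokens[-1], "the"))
    return tokens, forward_passes

tokens, passes = generate("the", 4)
print(tokens)   # ['the', 'cat', 'sat', 'on', 'the']
print(passes)   # 4: compute tracks output length, not task difficulty
```

Chain-of-thought complicates this picture by letting the model buy more passes via longer outputs, which is roughly what LeCun concedes with his "System 1.1" label.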
## Predictions: Resolved

| Date | Claim | Type | What Happened | Status | Source |
|---|---|---|---|---|---|
| 1980s-90s | Neural networks will eventually prove valuable despite mainstream skepticism | — | Deep learning now underpins OpenAI, DeepMind, and Anthropic | Correct | — |
## Regulatory Capture Claims

| Date | Claim | Source |
|---|---|---|
| — | Accused Sam Altman, Demis Hassabis, and Ilya Sutskever of "massive corporate lobbying" and "attempting to regulate the AI industry in their favor under the guise of safety" | — |
| — | "The proposals that have existed would have resulted in regulatory capture by a small number of companies" | Various |
| 2024 | Doomers "scare the hell out of the public and their representatives with prophecies of doom" to achieve regulatory capture | Twitter |
### SB 1047 Debate (September 2024)

| Quote | Source |
|---|---|
| "The distortion is due to their inexperience, naiveté on how difficult the next steps in AI will be, wild overestimates of their employer's lead and their ability to make fast progress." | — |
| "Does SB 1047... spell the end of the Californian technology industry?" | Same |
| "Without open-source AI, there is no AI start-up ecosystem and no academic research on large models. Meta will be fine, but AI start-ups will just die." | Same |
| SB 1047 is based on an "illusion of 'existential risk' pushed by a handful of delusional think-tanks" | Same |

**Outcome:** Bill vetoed by Governor Newsom on September 29, 2024.
## DeepSeek R1 Reaction (January 2025)

| Quote | Source |
|---|---|
| "To people who see the performance of DeepSeek and think: 'China is surpassing the US in AI.' You are reading this wrong. The correct reading is: 'Open source models are surpassing proprietary ones.'" | LinkedIn/X |
| "DeepSeek has profited from open research and open source (e.g. PyTorch and Llama from Meta). They came up with new ideas and built them on top of other people's work." | Same |
| The $1 trillion market sell-off was "woefully unjustified" based on a "major misunderstanding" about AI infrastructure costs | Same |
## Accuracy Analysis

**Where LeCun tends to be right:**

- Long-term architectural intuitions (neural networks in the 80s-90s, self-supervised learning)
- Identifying limitations of specific approaches (pure RL, narrow AI claims)
- Predictions on longer timescales (5+ years)
- Skepticism about hype cycles and premature claims

**Where LeCun tends to be wrong:**

- Near-term capability assessments of LLMs (consistently underestimated)
- Absolute statements about what LLMs "cannot" do
- Dismissing practical utility even when his philosophical critiques may hold
- Predicting when scaling will stop working

**Confidence calibration:**

- Expresses very high confidence even on contested claims
- Rarely acknowledges uncertainty or the conditions under which he would update
- Uses strong language ("complete B.S.", "dead end", "cannot") that ages poorly when capabilities improve
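The calibration critique can be quantified with a proper scoring rule such as the Brier score, which penalizes confident misses much more than hedged ones. The probabilities and outcomes below are hypothetical stand-ins chosen to illustrate the mechanism; they are not LeCun's actual stated odds.

```python
# Brier score: mean squared error between stated probability and outcome
# (0 = perfectly calibrated and correct, 1 = maximally confident and wrong).

def brier(forecasts: list[tuple[float, bool]]) -> float:
    """forecasts: (stated probability the claim holds, did it hold?)"""
    return sum((p - float(o)) ** 2 for p, o in forecasts) / len(forecasts)

# Two hypothetical forecasters with the same 50% hit rate:
confident = [(0.99, False), (0.99, True)]   # absolute "cannot" style claims
hedged    = [(0.70, False), (0.70, True)]   # same calls, hedged odds

print(round(brier(confident), 4))  # 0.4901: a 99% call that misses is costly
print(round(brier(hedged), 4))     # 0.29
```

This is why absolutist phrasing ages badly even when the underlying directional call is defensible: the scoring penalty comes from the stated certainty, not the direction.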
## Position Consistency

Unlike some figures whose views shift significantly, LeCun has been remarkably consistent:

| Topic | Position | Consistency |
|---|---|---|
| LLM limitations | Skeptical since GPT-3 | Very consistent |
| AI existential risk | Dismissive | Very consistent |
| Open-source AI | Strong advocate | Very consistent |
| World models as alternative | Advocate since 2022 | Consistent |
| Scaling skepticism | Skeptical | Very consistent |

**Notable:** LeCun has not meaningfully updated his views despite LLM capabilities exceeding his stated expectations. This could indicate either (a) strong conviction based on deep understanding, or (b) insufficient responsiveness to evidence.
## Key Testable Claims to Watch

**By 2028-2030:**

- Whether LLMs remain central to AI systems (his "5 years" prediction)