Summary
Comprehensive documentation of Elon Musk's prediction track record showing systematic overoptimism on timelines (FSD predictions missed by 6+ years across 15+ instances, AGI predictions shift forward annually, Dojo project failed after 6 years). Early AI safety warnings (2014-2017) were prescient and influenced mainstream discourse, but product/capability predictions consistently miss by 3-6+ years with high stated confidence.
Elon Musk: Track Record
This page documents Elon Musk's public predictions and claims to assess his epistemic track record. For biographical information, controversies, and full context, see the main Elon Musk page.
Summary Assessment
| Category | Count | Notes |
| --- | --- | --- |
| Clearly Correct | 2-3 | Early AI safety warnings, need for regulation discussion |
| Pending | 4-5 | AGI timelines, job displacement predictions |
| Clearly Wrong | 15+ | FSD timelines (nearly all missed by years), Dojo project |
| Shifting Goalposts | Many | AGI predictions move forward each year as deadlines pass |
Overall pattern: Prescient on directional safety concerns; consistently overoptimistic on specific product timelines by 3-6+ years; AGI predictions shift annually.
Full Self-Driving Predictions (Extensively Wrong)
This is the best-documented area of Musk's prediction track record. From 2014 to 2025, he predicted virtually every year that "full self-driving" would arrive "by end of year" or "next year."
"Autonomy Day" Million Robotaxi Claim (April 2019)
Quote: "From our standpoint, if you fast forward a year, maybe a year and three months, but next year for sure, we'll have over a million robotaxis on the road."
Reality: As of January 2026, Tesla has ≈32 robotaxis in Austin. Off by approximately 6 years and 999,968 vehicles.
Legal Note: A securities fraud lawsuit alleging misleading FSD statements was dismissed in September 2024. Judge ruled Musk's statements were "corporate puffery."
"We, Robot" Event (October 2024)
| Date | Claim | Type | Outcome |
| --- | --- | --- | --- |
| Oct 2024 | "This will be one for the history books" | Twitter | Event started 53 minutes late; stock dropped 8% the next day |
| | Grok 4 Heavy was "smarter than GPT-5 two weeks ago" | Twitter | Jab at OpenAI |
AGI Timeline Predictions (Shifting Goalposts)
Early AI Safety Warnings (Largely Correct)
Among these, his warnings about racing dynamics in AI development are now widely acknowledged.
Assessment: Musk was among the first high-profile technology leaders to raise AI safety concerns publicly, years before they became mainstream. By 2023, over 350 tech executives and researchers had signed a statement declaring AI extinction risk a "global priority."
AI Pause Letter Contradiction (March 2023)
| Date | Action | Type | What Actually Happened |
| --- | --- | --- | --- |
| March 2023 | Signed open letter calling for a 6-month pause on AI development more powerful than GPT-4 | Open letter | Was secretly investing "tens of millions" in Twitter's AI projects |
| July 2023 | Launched xAI roughly four months after signing the pause letter | Action | Max Tegmark defended him: "as long as there isn't [a pause], he feels he has to also stay in the game" |
Existential Risk Estimate
Musk has publicly put the chance that AI "goes bad" for humanity at roughly 10-20%.
Comparison: Lower than Yudkowsky (≈99%), similar to Amodei (≈25% chance things go "really badly").
Accuracy Analysis
Where Musk tends to be right:
- Directional AI safety concerns (raised years before mainstream)
- General trajectory of AI importance
- Need for regulatory discussion

Where Musk tends to be wrong:
- Specific product timelines (FSD off by 6+ years consistently)
- Capability deployment dates
- Scaling predictions (Neuralink, robotaxis, Dojo)

Confidence calibration:
- Expresses extreme confidence ("100% confident," "for sure") on predictions that miss by years
- Rarely acknowledges past prediction failures
- Shifts goalposts without addressing missed deadlines
Pattern recognition:
Courts have characterized his FSD predictions as "corporate puffery" rather than binding commitments. This suggests a known pattern of aspirational statements not intended as firm predictions.
Related pages: Track Records Overview · Sam Altman Predictions · Yann LeCun Predictions