The Sequences by Eliezer Yudkowsky

  • Type: Educational content / Foundational texts
  • Author: Eliezer Yudkowsky
  • Publication Period: 2006-2009 (original posts), 2015 (compiled book)
  • Format: Over 300 blog posts compiled as Rationality: From AI to Zombies
  • Primary Topics: Rationality, cognitive biases, epistemology, AI alignment
  • Community Influence: Foundational to LessWrong and the rationalist movement
  • Main Criticism: Philosophical inaccuracies, overconfidence, poor engagement with critics

The Sequences is a comprehensive collection of blog posts written by Eliezer Yudkowsky between 2006 and 2009, originally published on Overcoming Bias and LessWrong.12 The essays focus on the science and philosophy of human rationality, covering cognitive biases, Bayesian reasoning, epistemology, philosophy of mind, and AI risks. The collection was later compiled and edited by the Machine Intelligence Research Institute (MIRI) into the book Rationality: From AI to Zombies (also known as From AI to Zombies) in 2015.3

Yudkowsky’s stated goal was to create a comprehensive guide to rationality by developing techniques and mental models to overcome cognitive biases, refine decision-making, and update beliefs using Bayesian reasoning. The essays emphasize distinguishing mental models (“map”) from reality (“territory”) and aim to equip readers with tools for clearer thinking, more accurate beliefs, and reasoning about profound risks such as existential threats from artificial general intelligence.4 The work became foundational to the rationalist movement and significantly influenced effective altruism, particularly around Bayesian epistemology, prediction, and cognitive bias awareness.5

While The Sequences are primarily framed as a guide to rationality, they contain foundational epistemology that enables readers to develop better models for understanding AI alignment risks. In the later sections, essays related to AI alignment appear frequently, with entire sequence sections like The Machine in the Ghost and Mere Goodness having direct object-level relevance to alignment work.6

Eliezer Yudkowsky began writing The Sequences as daily blog posts starting in 2006, initially on Overcoming Bias (where Robin Hanson was a principal contributor) and later on LessWrong, which he founded in February 2009.78 The original collection consisted of approximately 300 blog posts developing a coherent set of theses, including core concepts like the map-territory distinction: the idea that beliefs are maps representing reality, not reality itself.9

About half of the original posts were organized into thematically linked “sequences,” distinguished by size into “major” and “minor” sequences. The core sequences included:10

  • Map and Territory - Bayesian rationality and epistemology
  • Mysterious Answers to Mysterious Questions - How to recognize and avoid false explanations
  • How to Actually Change Your Mind - Overcoming motivated reasoning and biases
  • Reductionism - Understanding complex phenomena through simpler components

Yudkowsky was an autodidact who did not attend high school or college, and had previously co-founded the Singularity Institute for Artificial Intelligence (which became MIRI in 2013).11

In 2015, MIRI collated, edited, and published the posts as the ebook Rationality: From AI to Zombies. This version omitted some original posts while adding uncollected essays from the same era.12 The compiled version organized the material into thematic “books”:

  • Book I: Map and Territory - Bayesian rationality and epistemology
  • Book II: How to Actually Change Your Mind - Overcoming motivated reasoning and biases like confirmation bias, availability heuristic, anchoring, and scope insensitivity
  • Book III: The Machine in the Ghost - Philosophy of mind, intelligence, goal systems, often linked to AI; includes thought experiments on consciousness and subjective experience versus physical processes (e.g., philosophical zombies)
  • Additional books on quantum physics, evolutionary psychology, and morality13

The original posts were preserved on LessWrong as “deprecated” for historical reference, while modern LessWrong sequences continued to draw from this material.14

The Sequences teach how to avoid typical failure modes of human reasoning and think in ways that lead to true and accurate beliefs.15 Core epistemological concepts include:

  • Map-Territory Distinction: Beliefs function as maps representing reality, not reality itself; confusing the two leads to systematic errors16
  • Bayesian Reasoning: Using probability theory to update beliefs in proportion to the strength of the evidence
  • Conservation of Expected Evidence: For every expectation of evidence there is an equal and opposite expectation of counter-evidence; averaged over the possible observations, the expected posterior equals the prior (see the numerical sketch after this list)
  • Absence of Evidence as Evidence of Absence: When you would expect to see evidence if something were true, not finding it counts against that hypothesis17
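
The probabilistic principles above lend themselves to a worked example. The following is a minimal numerical sketch, not drawn from The Sequences themselves, assuming a hypothetical coin that is either biased toward heads (80% heads) with prior probability 0.3 or otherwise fair. Bayes’ theorem gives the update in each direction; averaging the posteriors over the possible observations recovers the prior (conservation of expected evidence); and failing to observe the expected heads counts against the bias hypothesis (absence of evidence as evidence of absence).

```python
# Minimal sketch with assumed numbers: a coin is biased toward heads
# (P(heads | H) = 0.8) with prior probability 0.3, and fair otherwise.
p_h = 0.3                  # prior P(H): coin is biased toward heads
p_heads_given_h = 0.8      # P(heads | H)
p_heads_given_not_h = 0.5  # P(heads | not H): fair coin

# Marginal probability of seeing heads on the next flip.
p_heads = p_h * p_heads_given_h + (1 - p_h) * p_heads_given_not_h

# Bayes' theorem: posterior after observing heads, and after observing tails.
post_given_heads = p_h * p_heads_given_h / p_heads
post_given_tails = p_h * (1 - p_heads_given_h) / (1 - p_heads)

# Conservation of expected evidence: averaging the posterior over the
# possible observations recovers the prior exactly.
expected_posterior = p_heads * post_given_heads + (1 - p_heads) * post_given_tails

print(f"P(H)         = {p_h:.3f}")
print(f"P(H | heads) = {post_given_heads:.3f}")   # evidence for the bias hypothesis
print(f"P(H | tails) = {post_given_tails:.3f}")   # absence of expected heads counts against it
print(f"E[posterior] = {expected_posterior:.3f}") # equals the prior, 0.300
```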

The Sequences extensively catalog and explain cognitive biases that interfere with accurate thinking:

  • Confirmation bias - Seeking evidence that confirms existing beliefs
  • Availability heuristic - Overweighting easily recalled examples
  • Anchoring - Being influenced by initial numbers or suggestions
  • Scope insensitivity - Failing to properly scale emotional responses to magnitude
  • Motivated reasoning - Reasoning in service of desired conclusions rather than truth18

Yudkowsky developed Timeless Decision Theory (TDT) as an alternative to Causal and Evidential Decision Theory, addressing problems like Newcomb’s Problem and Pascal’s Mugging (a worked payoff comparison for Newcomb’s Problem appears after the list below).19 The Sequences also introduce concepts relevant to AI alignment, including:

  • Intelligence explosion and recursive self-improvement
  • Optimization power in vast search spaces
  • Instrumental convergence and goal preservation
  • The challenge of specifying human values20
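
As a rough illustration of the decision-theory disagreement mentioned above, the sketch below computes expected values for Newcomb’s Problem under evidential and causal reasoning; the payoffs and the 0.99 predictor accuracy are illustrative assumptions rather than figures from The Sequences. Evidential reasoning favors taking only the opaque box, causal reasoning favors taking both, and TDT was proposed in part to endorse one-boxing on a principled basis.

```python
# Minimal sketch of the expected-value disagreement in Newcomb's Problem.
# The payoffs and 0.99 predictor accuracy are illustrative assumptions.
SMALL = 1_000     # transparent box always contains $1,000
BIG = 1_000_000   # opaque box contains $1,000,000 iff one-boxing was predicted
ACCURACY = 0.99   # assumed probability the predictor foresaw your actual choice

# Evidential reasoning: treat your choice as evidence about the prediction.
edt_one_box = ACCURACY * BIG
edt_two_box = (1 - ACCURACY) * BIG + SMALL

# Causal reasoning: the boxes are already filled; your choice cannot change them.
# For any fixed probability p_full that the opaque box is full, two-boxing
# gains exactly SMALL more, so it dominates.
p_full = 0.5  # arbitrary; the comparison comes out the same for any value
cdt_one_box = p_full * BIG
cdt_two_box = p_full * BIG + SMALL

print(f"EDT: one-box ${edt_one_box:,.0f} vs two-box ${edt_two_box:,.0f}")  # favors one-boxing
print(f"CDT: one-box ${cdt_one_box:,.0f} vs two-box ${cdt_two_box:,.0f}")  # favors two-boxing
```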

The Sequences became foundational texts for LessWrong and shaped the rationalist community’s culture and discourse.21 The material is widely recommended as an entry point for newcomers to rationalist thinking and AI safety considerations. LessWrong’s 2024 survey showed The Sequences as a top recommended resource among respondents.22

The work significantly influenced effective altruism, particularly around Bayesian epistemology, prediction, cognitive biases, and thinking about AI risks.23 Community members have noted that familiarity with The Sequences, particularly essays like “Death Spirals,” helps create “a community I can trust” by promoting epistemic clarity and transparency about uncertainty.24

Yudkowsky’s work on intelligence explosions from the Sequences era influenced philosopher Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies.25 However, The Sequences face criticism for limited engagement with academic philosophy and for sometimes rediscovering existing concepts without proper credit; for example, Yudkowsky’s “Requiredism” essentially restates compatibilism about free will.26

The material overlaps with prior academic works like Thinking and Deciding by Jonathan Baron but is criticized for not fully crediting academia. Some view it as an original synthesis (30-60% new material) presented in an engaging “popular science” format that condenses psychology, philosophy, and AI ideas into memorable phrases.27

Readers report that The Sequences provide useful “tags” or terminology for discussing reasoning patterns, help internalize ideas that seem obvious in retrospect, and offer tools for avoiding belief weak points like motivated cognition.28 The essays are described as engaging popular science that makes concepts stick through catchy framing and thought experiments.

However, critics note limitations in measurable effectiveness. No empirical studies demonstrate improvements in decision-making or other quantifiable outcomes from reading The Sequences.29 The work’s impact appears primarily anecdotal and concentrated within specific communities rather than demonstrating broad practical effectiveness.

Critics argue that Yudkowsky dismisses philosophy while simultaneously reinventing concepts from the field without adequate credit or understanding. Specific criticisms include:3031

  • Misrepresenting the zombie argument: Yudkowsky confuses the philosophical zombie thought experiment with epiphenomenalism, leading philosopher David Chalmers to publicly correct his interpretation
  • Strawmanning critics: Failing to engage with the strongest versions of opposing arguments
  • Rediscovering existing ideas: Presenting concepts like compatibilism (“Requiredism”) as if novel
  • Weak decision theory: Timeless Decision Theory described as “wildly indeterminate,” hypersensitive, and inferior to evidential/causal alternatives

Multiple critics highlight concerns about Yudkowsky’s approach to disagreement and error correction:3233

  • Confidently asserting claims that contain “egregious errors”
  • Refusing to acknowledge mistakes and engaging only weakly with substantive criticisms
  • Responding arrogantly or calling opponents “stupid”
  • Ignoring stronger counter-arguments while focusing on weaker ones
  • Poor track record in predictions despite high confidence

These patterns are seen as harmful to Yudkowsky’s reputation and to efforts to promote rationalist ideas outside the existing community.

Readers note several problems with the writing itself:3435

  • Excessive repetition: “Beating a dead horse” on the same points
  • Length and accessibility: The approximately 1 million words make it a “difficult read”
  • Variable quality: Some sequences (e.g., on metaethics) described as skimmable or underwhelming
  • Overly speculative: Encourages treating one’s own mind as inherently inferior or opaque in ways that can lead to unnecessary pessimism

Critics argue The Sequences transmit a “packaged worldview” that carries its own risks, rather than pure rationality tools.36 The work’s framing around AI doom has become more prominent over time—one reader noted that on a second reading, they became “constantly aware that Yudkowsky believes…that our doom is virtually certain and he has no idea how to even begin formulate a solution.”37

This contrasts with the optimistic tone of the original writing period (2006-2009). By 2024, Yudkowsky’s public statements emphasized extreme urgency, stating humanity has “ONE YEAR, THIS YEAR, 2024” for a global response to AI extinction risks.38

The Sequences heavily drew on psychological findings from the early 2000s, many of which collapsed during the replication crisis that began shortly after Yudkowsky finished writing them.39 This undermines some of the empirical foundations for claims about cognitive biases and reasoning, though core epistemological points may remain valid.

The Sequences are sometimes associated with what critics describe as a “nerdy, rationalist religion” with unconventional beliefs (including polyamory and AI obsession), with Yudkowsky positioned as an unrespected “guru” outside his immediate circle.40 The fact that Yudkowsky’s other major work is Harry Potter and the Methods of Rationality (a fanfiction novel) reinforces this perception among skeptics.

Within the rationalist and EA communities, some members note that “the Sequences clearly failed to make anyone a rational superbeing, or even noticeably more successful,” as Scott Alexander pointed out as early as 2009.41

The Sequences remain available in multiple formats: as blog posts on LessWrong, as the compiled ebook Rationality: From AI to Zombies, and through curated “Sequence Highlights” featuring 50 key essays.42 The material continues to serve as a recommended starting point for understanding rationalist thinking and AI safety concerns.

Yudkowsky continued publishing related work, including the 2017 ebook Inadequate Equilibria (published by MIRI) on societal inefficiencies,43 and co-authored If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All with Nate Soares, which became a New York Times bestseller.44

A 2025 podcast episode on Books in Bytes explored ongoing themes from The Sequences relevant to rationalists and AI theorists, including the zombie argument, perception biases, and joy in reasoning.45 Manifold Markets tracked predictions about Yudkowsky’s views on AI doom probability (greater than 75% within 50 years by 2035), noting potential for downward adjustments only if machine learning plateaus, global AI development stalls, or alignment succeeds.46

Several important questions remain about The Sequences’ ultimate value and impact:

  1. How much original insight versus synthesis? - The balance between novel contributions and condensing existing academic work remains debated, with estimates ranging from 30-60% new material
  2. What is the measurable effectiveness? - No empirical studies have quantified improvements in decision-making, career outcomes, or other concrete benefits from reading The Sequences
  3. How much has the replication crisis undermined the empirical foundations? - Many psychological findings cited have failed to replicate, though the epistemic core may remain valid
  4. Is the pessimistic AI worldview justified? - The progression from optimism (2006-2009) to doom certainty (2020s) raises questions about whether the underlying reasoning changed or if motivated reasoning influenced later views
  5. What is the appropriate relationship with academic philosophy? - Whether The Sequences should be positioned as complementary to, independent from, or in tension with traditional philosophy remains contested
  1. The Sequences - LessWrong

  2. EA Forum: Rationality Book Club

  3. Rationality: From AI to Zombies - MIRI

  4. The Sequences Overview - LessWrong

  5. EA Forum: Rationalist Movement Discussion

  6. EA Forum: Sequences and AI Alignment

  7. Eliezer Yudkowsky Biography

  8. History of LessWrong

  9. Map and Territory Sequence

  10. The Sequences: Core Sequences

  11. Eliezer Yudkowsky - MIRI

  12. Rationality: From AI to Zombies Publication

  13. Book Structure - Rationality: From AI to Zombies

  14. Modern Sequences - LessWrong

  15. How to Actually Change Your Mind

  16. Map and Territory - Core Concept

  17. Bayesian Reasoning in The Sequences

  18. Cognitive Biases in The Sequences

  19. Timeless Decision Theory

  20. AI Alignment Topics in The Sequences

  21. LessWrong Foundational Texts

  22. LessWrong 2024 Survey Results

  23. EA Forum: Rationalist Influence

  24. EA Forum: Death Spirals Discussion

  25. Nick Bostrom and Intelligence Explosion

  26. Criticism: Philosophy Engagement

  27. EA Forum: Sequences Originality Debate

  28. Reader Reception - Goodreads

  29. EA Forum: Measurable Effectiveness Discussion

  30. Philosophical Errors in The Sequences

  31. David Chalmers Response

  32. Epistemic Conduct Criticism

  33. EA Forum: Yudkowsky Track Record

  34. Reader Reviews - Style Criticism

  35. EA Forum: Sequences Writing Quality

  36. Worldview Transmission Concerns

  37. Second Reading Experience

  38. 2024 Doom Update Podcast

  39. Replication Crisis Impact

  40. Cultural Perception Discussion

  41. Scott Alexander on Sequences Effectiveness

  42. Sequence Highlights - 50 Essays

  43. Inadequate Equilibria - MIRI

  44. If Anyone Builds It, Everyone Dies

  45. Books in Bytes Podcast 2025

  46. Manifold Markets: Yudkowsky Doom Predictions