The Sequences by Eliezer Yudkowsky
Quick Assessment
| Dimension | Assessment |
|---|---|
| Type | Educational content / Foundational texts |
| Author | Eliezer Yudkowsky |
| Publication Period | 2006-2009 (original posts), 2015 (compiled book) |
| Format | Over 300 blog posts compiled as Rationality: From AI to Zombies |
| Primary Topics | Rationality, cognitive biases, epistemology, AI alignment |
| Community Influence | Foundational to LessWrong and the rationalist movement |
| Main Criticism | Philosophical inaccuracies, overconfidence, poor engagement with critics |
Overview
The Sequences is a comprehensive collection of blog posts written by Eliezer Yudkowsky between 2006 and 2009, originally published on Overcoming Bias and LessWrong.12 The essays focus on the science and philosophy of human rationality, covering cognitive biases, Bayesian reasoning, epistemology, philosophy of mind, and AI risks. The collection was later compiled and edited by the Machine Intelligence Research Institute (MIRI) into the book Rationality: From AI to Zombies (also known as From AI to Zombies) in 2015.3
Yudkowsky’s stated goal was to create a comprehensive guide to rationality by developing techniques and mental models to overcome cognitive biases, refine decision-making, and update beliefs using Bayesian reasoning. The essays emphasize distinguishing mental models (“map”) from reality (“territory”) and aim to equip readers with tools for clearer thinking, accurate beliefs, and addressing profound risks such as existential threats from artificial general intelligence.4 The work became foundational to the rationalist movement and significantly influenced effective altruism, particularly around Bayesian epistemology, prediction, and cognitive bias awareness.5
While The Sequences are primarily framed as a guide to rationality, they contain foundational epistemology that enables readers to develop better models for understanding AI alignment risks. In the later sections, essays related to AI alignment appear frequently, with entire sequence sections like The Machine in the Ghost and Mere Goodness having direct object-level relevance to alignment work.6
History and Development
Original Publication (2006-2009)
Eliezer Yudkowsky began writing The Sequences as daily blog posts starting in 2006, initially on Overcoming Bias (where Robin Hanson was a principal contributor) and later on LessWrong, which he founded in February 2009.78 The original collection consisted of approximately 300 blog posts exploring connected themes, including core concepts like the map-territory distinction—the idea that beliefs are maps representing reality, not reality itself.9
About half of the original posts were organized into thematically linked “sequences,” distinguished by size into “major” and “minor” sequences. The core sequences included:10
- Map and Territory - Bayesian rationality and epistemology
- Mysterious Answers to Mysterious Questions - How to recognize and avoid false explanations
- How to Actually Change Your Mind - Overcoming motivated reasoning and biases
- Reductionism - Understanding complex phenomena through simpler components
Yudkowsky was an autodidact who did not attend high school or college, and had previously co-founded the Singularity Institute for Artificial Intelligence (which became MIRI in 2013).11
Book Compilation (2015)
In 2015, MIRI collated, edited, and published the posts as the ebook Rationality: From AI to Zombies. This version omitted some original posts while adding uncollected essays from the same era.12 The compiled version organized the material into thematic “books”:
- Book I: Map and Territory - Bayesian rationality and epistemology
- Book II: How to Actually Change Your Mind - Overcoming motivated reasoning and biases like confirmation bias, availability heuristic, anchoring, and scope insensitivity
- Book III: The Machine in the Ghost - Philosophy of mind, intelligence, goal systems, often linked to AI; includes thought experiments on consciousness and subjective experience versus physical processes (e.g., philosophical zombies)
- Additional books on quantum physics, evolutionary psychology, and morality13
The original posts were preserved on LessWrong as “deprecated” for historical reference, while modern LessWrong sequences continued to draw from this material.14
Content and Core Concepts
Rationality and Epistemology
The Sequences teach how to avoid typical failure modes of human reasoning and think in ways that lead to true and accurate beliefs.15 Core epistemological concepts include:
- Map-Territory Distinction: Beliefs function as maps representing reality, not reality itself; confusing the two leads to systematic errors16
- Bayesian Reasoning: Using probability theory to update beliefs based on evidence
- Conservation of Expected Evidence: On average you cannot expect evidence to shift your beliefs in a predetermined direction; the probability-weighted average of your possible posteriors must equal your prior (see the formulas after this list)
- Absence of Evidence as Evidence of Absence: When you would expect to see evidence if something were true, not finding it counts against that hypothesis17
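The last two principles are direct consequences of Bayes’ theorem. A compact statement in standard probability notation (my notation, not formulas quoted from The Sequences):

```latex
% Bayes' theorem: update belief in hypothesis H after observing evidence E
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}

% Conservation of expected evidence: the probability-weighted average of the
% possible posteriors equals the prior, so no net shift in belief can be expected
P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E) = P(H)

% Absence of evidence is evidence of absence: if observing E would raise P(H),
% then failing to observe E must lower it (for 0 < P(E) < 1)
P(H \mid E) > P(H) \;\Longrightarrow\; P(H \mid \neg E) < P(H)
```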
Cognitive Biases
The Sequences extensively catalog and explain cognitive biases that interfere with accurate thinking:
- Confirmation bias - Seeking evidence that confirms existing beliefs
- Availability heuristic - Overweighting easily recalled examples
- Anchoring - Being influenced by initial numbers or suggestions
- Scope insensitivity - Failing to properly scale emotional responses to magnitude
- Motivated reasoning - Reasoning in service of desired conclusions rather than truth18
Decision Theory and AI
Yudkowsky developed Timeless Decision Theory (TDT) as an alternative to Causal and Evidential Decision Theory, addressing problems like Newcomb’s Problem and Pascal’s Mugging (a payoff sketch illustrating how the standard theories diverge follows the list below).19 The Sequences also introduce concepts relevant to AI alignment, including:
- Intelligence explosion and recursive self-improvement
- Optimization power in vast search spaces
- Instrumental convergence and goal preservation
- The challenge of specifying human values20
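To make the decision-theory contrast concrete, here is a minimal, hypothetical sketch of Newcomb’s Problem (illustrative only; the numbers and code are not drawn from The Sequences). A highly accurate predictor fills an opaque box with $1,000,000 only if it predicts you will take just that box, while a transparent box always holds $1,000. Evidential reasoning conditions on what your choice suggests about the prediction; causal reasoning treats the box contents as already fixed.

```python
# Newcomb's Problem payoffs: an illustrative sketch, not an implementation of TDT.
ACCURACY = 0.99          # assumed probability the predictor guesses your choice correctly
BIG, SMALL = 1_000_000, 1_000

def evidential_value(choice: str) -> float:
    """Expected payoff conditioning on what the choice suggests about the prediction."""
    if choice == "one-box":
        # Predictor most likely foresaw one-boxing and filled the opaque box.
        return ACCURACY * BIG + (1 - ACCURACY) * 0
    # Two-boxing: predictor most likely foresaw it and left the opaque box empty.
    return (1 - ACCURACY) * (BIG + SMALL) + ACCURACY * SMALL

def causal_value(choice: str, p_box_already_full: float) -> float:
    """Expected payoff holding the (already fixed) box contents constant."""
    base = p_box_already_full * BIG
    return base if choice == "one-box" else base + SMALL

if __name__ == "__main__":
    for choice in ("one-box", "two-box"):
        print(f"{choice}: EDT={evidential_value(choice):,.0f}  "
              f"CDT (box full w.p. 0.5)={causal_value(choice, 0.5):,.0f}")
    # EDT favors one-boxing (~990,000 vs ~11,000); CDT favors two-boxing for any
    # fixed box contents, since taking the extra 1,000 cannot change what is inside.
```

Under these assumed numbers the two standard theories give opposite verdicts, which is the kind of divergence TDT was proposed to address.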
Influence and Impact
Foundational Role in Communities
The Sequences became foundational texts for LessWrong and shaped the rationalist community’s culture and discourse.21 The material is widely recommended as an entry point for newcomers to rationalist thinking and AI safety considerations. LessWrong’s 2024 survey showed The Sequences as a top recommended resource among respondents.22
The work significantly influenced effective altruism, particularly around Bayesian epistemology, prediction, cognitive biases, and thinking about AI risks.23 Community members have noted that familiarity with The Sequences, particularly essays like “Death Spirals,” helps create “a community I can trust” by promoting epistemic clarity and transparency about uncertainty.24
Academic and Intellectual Influence
Yudkowsky’s work on intelligence explosions from the Sequences era influenced philosopher Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies.25 However, The Sequences face criticism for limited engagement with academic philosophy and for sometimes rediscovering existing concepts without proper credit—for example, Yudkowsky’s “Requiredism” essentially restates compatibilism from the free-will debate.26
The material overlaps with prior academic works like Thinking and Deciding by Jonathan Baron but is criticized for not fully crediting academia. Some view it as an original synthesis (30-60% new material) presented in an engaging “popular science” format that condenses psychology, philosophy, and AI ideas into memorable phrases.27
Practical Reception
Readers report that The Sequences provide useful “tags” or terminology for discussing reasoning patterns, help internalize ideas that seem obvious in retrospect, and offer tools for avoiding failure modes like motivated cognition.28 The essays are described as engaging popular science that makes concepts stick through catchy framing and thought experiments.
However, critics note limitations in measurable effectiveness. No empirical studies demonstrate improvements in decision-making or other quantifiable outcomes from reading The Sequences.29 The work’s impact appears primarily anecdotal and concentrated within specific communities rather than demonstrating broad practical effectiveness.
Criticisms and Controversies
Philosophical Shortcomings
Critics argue that Yudkowsky dismisses philosophy while simultaneously reinventing concepts from the field without adequate credit or understanding. Specific criticisms include:3031
- Misrepresenting the zombie argument: Yudkowsky confuses the philosophical zombie thought experiment with epiphenomenalism, leading philosopher David Chalmers to publicly correct his interpretation
- Strawmanning critics: Failing to engage with the strongest versions of opposing arguments
- Rediscovering existing ideas: Presenting concepts like compatibilism (“Requiredism”) as if novel
- Weak decision theory: Timeless Decision Theory described as “wildly indeterminate,” hypersensitive, and inferior to evidential/causal alternatives
Epistemic Conduct
Multiple critics highlight concerns about Yudkowsky’s approach to disagreement and error correction:3233
- Confidently asserting claims that contain “egregious errors”
- Refusing to acknowledge mistakes and engaging only weakly with substantive criticisms
- Responding arrogantly or calling opponents “stupid”
- Ignoring stronger counter-arguments while focusing on weaker ones
- Poor track record in predictions despite high confidence
These patterns are seen as harmful to Yudkowsky’s reputation and to efforts to promote rationalist ideas outside the existing community.
Stylistic and Substantive Issues
Readers note several problems with the writing itself:3435
- Excessive repetition: “Beating a dead horse” on the same points
- Length and accessibility: The approximately 1 million words make it a “difficult read”
- Variable quality: Some sequences (e.g., on metaethics) described as skimmable or underwhelming
- Overly speculative: Encourages treating one’s own mind as inherently inferior or opaque in ways that can lead to unnecessary pessimism
Worldview Concerns
Critics argue The Sequences transmit a “packaged worldview” with potential dangers rather than pure rationality tools.36 The work’s framing around AI doom has become more prominent over time—one reader noted that on a second reading, they became “constantly aware that Yudkowsky believes…that our doom is virtually certain and he has no idea how to even begin formulate a solution.”37
This contrasts with the optimistic tone of the original writing period (2006-2009). By 2024, Yudkowsky’s public statements emphasized extreme urgency, stating humanity has “ONE YEAR, THIS YEAR, 2024” for a global response to AI extinction risks.38
Replication Crisis Impact
The Sequences heavily drew on psychological findings from the early 2000s, many of which collapsed during the replication crisis that began shortly after Yudkowsky finished writing them.39 This undermines some of the empirical foundations for claims about cognitive biases and reasoning, though core epistemological points may remain valid.
Community Perception
The Sequences are sometimes associated with what critics describe as a “nerdy, rationalist religion” with unconventional beliefs (including polyamory and AI obsession), with Yudkowsky positioned as an unrespected “guru” outside his immediate circle.40 The fact that Yudkowsky’s other major work is Harry Potter and the Methods of Rationality (a fanfiction novel) reinforces this perception among skeptics.
Within the rationalist and EA communities, some members note that “the Sequences clearly failed to make anyone a rational superbeing, or even noticeably more successful,” as Scott Alexander pointed out as early as 2009.41
Ongoing Relevance and Evolution
The Sequences remain available in multiple formats: as blog posts on LessWrong, as the compiled ebook Rationality: From AI to Zombies, and through curated “Sequence Highlights” featuring 50 key essays.42 The material continues to serve as a recommended starting point for understanding rationalist thinking and AI safety concerns.
Yudkowsky continued publishing related work, including the 2017 ebook Inadequate Equilibria (published by MIRI) on societal inefficiencies,43 and co-authored If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All with Nate Soares, which became a New York Times bestseller.44
A 2025 podcast episode on Books in Bytes explored ongoing themes from The Sequences relevant to rationalists and AI theorists, including the zombie argument, perception biases, and joy in reasoning.45 Manifold Markets tracked predictions about Yudkowsky’s views on AI doom probability (greater than 75% within 50 years by 2035), noting potential for downward adjustments only if machine learning plateaus, global AI development stalls, or alignment succeeds.46
Key Uncertainties
Several important questions remain about The Sequences’ ultimate value and impact:
- How much original insight versus synthesis? - The balance between novel contributions and condensing existing academic work remains debated, with estimates ranging from 30-60% new material
- What is the measurable effectiveness? - No empirical studies have quantified improvements in decision-making, career outcomes, or other concrete benefits from reading The Sequences
- How much has the replication crisis undermined the empirical foundations? - Many psychological findings cited have failed to replicate, though the epistemic core may remain valid
- Is the pessimistic AI worldview justified? - The progression from optimism (2006-2009) to doom certainty (2020s) raises questions about whether the underlying reasoning changed or if motivated reasoning influenced later views
- What is the appropriate relationship with academic philosophy? - Whether The Sequences should be positioned as complementary to, independent from, or in tension with traditional philosophy remains contested