Superintelligence

Concept: AI systems with cognitive abilities vastly exceeding human intelligence

Related concepts: Fast Takeoff, Capabilities, Self-Improvement and Recursive Enhancement

Superintelligence refers to any intellect that greatly exceeds human cognitive performance across virtually all domains of interest.1 The concept encompasses hypothetical AI systems that would surpass human-level intelligence not just in narrow tasks, but in general reasoning, creativity, social intelligence, and other cognitive capabilities.

Definition and Forms

Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies established the most widely used taxonomy, identifying three distinct forms:2

Speed superintelligence describes a system with cognitive capabilities similar to human minds but operating at significantly faster speeds. Such a system could accomplish in minutes what would take humans months or years. Speed superintelligence could arise from whole brain emulations running on faster hardware substrates.

Collective superintelligence consists of multiple intellects coordinating and communicating to achieve capabilities far exceeding any individual intelligence. This form excels at parallelizable tasks. Current examples include prediction markets and research organizations, though at levels far below what would constitute superintelligence.

Quality superintelligence refers to systems that are qualitatively smarter than humans—capable of intellectual tasks that humans cannot perform regardless of time allocation. This form would represent fundamentally different and superior cognitive architectures.

Historical Development

The core concept predates modern AI research. In 1965, mathematician I. J. Good published "Speculations Concerning the First Ultraintelligent Machine," defining an ultraintelligent machine as one "that can far surpass all the intellectual activities of any man however clever."3 Good noted that "the design of machines is one of these intellectual activities; therefore, an ultraintelligent machine could design even better machines," introducing the concept of recursive self-improvement.4

The term "superintelligence" gained broader attention following Bostrom's 2014 book, which systematically analyzed potential development paths, capabilities, and control challenges. The book received mixed reception—while influential in AI safety circles, some critics characterized it as "speculations built upon plausible conjecture."5 Other researchers noted that sophisticated machines remain "intelligent in only a limited sense" relative to human general intelligence.6

Paths to Development

Recursive Self-Improvement

Recursive self-improvement (RSI) describes a process where an AI system modifies its own code to enhance its capabilities, which then enables it to make further improvements, potentially leading to rapid capability gains.7 This mechanism forms the basis of intelligence explosion theories.

Current research includes Meta AI's work on self-modifying systems and Google DeepMind's AlphaEvolve, though these remain far from the recursive self-improvement envisioned in superintelligence scenarios.8 In December 2025, Anthropic co-founder Jared Kaplan described recursive self-improvement as the "ultimate risk" in AI development.9
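Whether recursive self-improvement produces runaway growth depends heavily on the assumed returns to each round of improvement. A toy simulation can make this intuition concrete; the `simulate_rsi` function, its parameters, and the growth exponents below are purely illustrative assumptions, not drawn from any cited source:

```python
def simulate_rsi(c0=1.0, gain=0.1, exponent=1.0, steps=50):
    """Toy model of recursive self-improvement.

    Each step, capability c improves by gain * c**exponent:
    exponent > 1 models accelerating returns to self-improvement,
    exponent < 1 models diminishing returns. All parameter values
    are illustrative assumptions, not empirical estimates.
    """
    c = c0
    trajectory = [c]
    for _ in range(steps):
        c += gain * c ** exponent
        trajectory.append(c)
    return trajectory

accelerating = simulate_rsi(exponent=1.1)  # super-linear returns: explosive growth
diminishing = simulate_rsi(exponent=0.5)   # sub-linear returns: far slower growth
print(f"after 50 steps: accelerating ~{accelerating[-1]:.0f}x, "
      f"diminishing ~{diminishing[-1]:.0f}x")
```

The point of the sketch is qualitative: a small change in the assumed returns regime separates an intelligence-explosion trajectory from one that stays within modest bounds, which is why debates about RSI hinge on which regime is realistic.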

Other Development Paths

Beyond recursive self-improvement, potential paths to superintelligence include:

  • AI development: Continued advances in machine learning architectures and training methods
  • Whole brain emulation: Detailed scanning and simulation of human brain structures
  • Biological cognitive enhancement: Genetic or pharmaceutical improvements to human intelligence
  • Brain-computer interfaces: Direct integration of human cognition with computational systems
  • Collective intelligence amplification: Improved coordination mechanisms for human organizations

Intelligence Explosion

The intelligence explosion hypothesis, articulated by Eliezer Yudkowsky and others, posits that "due to recursive self-improvement, an AI can potentially grow in capability on a timescale that seems fast relative to human experience."10 This relates directly to debates about takeoff speed.

Fast takeoff scenarios envision transitions from human-level to far-beyond-human capability occurring in hours, days, or weeks. Such rapid development would leave minimal time for human intervention or course correction.

Slow takeoff scenarios describe capability increases occurring over years or decades, allowing society to adapt gradually to changing AI capabilities. In 2021, Eliezer Yudkowsky and Paul Christiano debated the likelihood of these scenarios, with Yudkowsky arguing for discontinuous acceleration and Christiano favoring more gradual development.11

The 2025 AI Index Report documents rapid capability improvements in specific domains—AI systems increased from solving 4.4% of coding problems in 2023 to 71.7% in 2024 on SWE-bench.12 However, on newly designed challenging benchmarks like Humanity's Last Exam, top systems score just 8.80%, suggesting substantial distance from human-level general intelligence.13
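The practical stakes of the fast/slow distinction come down to growth rate. A back-of-the-envelope calculation illustrates how sharply the available response time differs; the doubling times and the 1000x target are hypothetical illustrations, not forecasts:

```python
import math

def days_to_multiply(doubling_time_days: float, factor: float) -> float:
    """Days for capability to grow by `factor`, assuming exponential
    growth with the given doubling time. Purely illustrative."""
    return doubling_time_days * math.log2(factor)

# Hypothetical scenarios: time to go from human-level to 1000x human-level
fast = days_to_multiply(doubling_time_days=1, factor=1000)
slow = days_to_multiply(doubling_time_days=365, factor=1000)
print(f"fast takeoff (1-day doubling): {fast:.0f} days")
print(f"slow takeoff (1-year doubling): {slow / 365:.1f} years")
```

Under these assumed parameters the same capability gap is crossed in about ten days versus about ten years, which is the difference between no realistic window for intervention and a decade of opportunity for adaptation.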

Theoretical Concepts

Orthogonality Thesis

The orthogonality thesis, formulated by Bostrom, states that intelligence and final goals are orthogonal axes along which artificial intellects can vary independently.14 A system could possess any level of intelligence combined with essentially any set of final goals. This challenges assumptions that sufficiently intelligent systems would necessarily converge on particular values or objectives.

Instrumental Convergence

The instrumental convergence thesis posits that agents with sufficient intelligence and diverse final goals will pursue similar intermediate strategies.15 These convergent instrumental goals potentially include:

  • Self-preservation (to continue pursuing final goals)
  • Goal-content integrity (maintaining original objectives)
  • Cognitive enhancement (improving decision-making capabilities)
  • Resource acquisition (obtaining means to achieve objectives)

These convergent goals raise control concerns, as they might motivate systems to resist shutdown or modification attempts regardless of their specified final objectives.

Control Problem

The control problem addresses the challenge of ensuring superintelligent systems remain aligned with human values and under human oversight. As OpenAI stated in their 2023 superalignment announcement, "we don't have a solution for steering or controlling potentially superintelligent AI" and "current alignment techniques won't scale to superintelligence because humans won't be able to reliably supervise systems much smarter than us."16

Three core challenges complicate superintelligence control:17

Value loading involves specifying complex human values in a form that AI systems can understand and optimize. Human values prove difficult to formalize comprehensively.

The interpretability gap describes how superintelligent systems' internal reasoning may become incomprehensible to human overseers, making it difficult to verify alignment.

Instrumental convergence creates incentives for even well-intentioned systems to resist control measures that might interfere with goal achievement.

Researcher Joe Carlsmith identifies the availability of superhuman strategies—approaches to achieving goals that humans could neither generate nor detect—as a key obstacle to maintaining control.18

Strategic Implications

Decisive Strategic Advantage

A "decisive strategic advantage" occurs when one project achieves sufficient capability superiority to overcome all opposition and achieve global dominance.19 Factors affecting this possibility include:

  • Takeoff speed: Faster capability gains provide less time for competitors to catch up
  • Technology diffusion rates: How quickly advances spread to other projects
  • Lead magnitude: The initial capability gap between leading and following projects

Singleton Scenarios

Bostrom defines a "singleton" as "a single global decision-making agency strong enough to solve all major global coordination problems."20 A superintelligent system with decisive strategic advantage might establish singleton control, though this depends on both capability gaps and whether such advantage would be used for global coordination.

Research suggests that even with gradual AI development (slow takeoff), decisive strategic advantage remains possible after intelligence explosion, as a superintelligent system could leverage qualitative cognitive advantages beyond simple speed increases.21

Expert Timelines and Forecasts

Epoch AI and AI Impacts have conducted multiple surveys of machine learning researchers regarding timelines for human-level AI:

2022 Survey: Surveyed machine learning researchers predicted a 50% probability of high-level machine intelligence (HLMI) by 2059—an aggregate forecast of 37 years from the survey date.22

2023 Survey: A survey of 2,778 AI researchers in October 2023 reexamined questions from previous surveys regarding timelines for HLMI and full automation of labor.23

A 2012 survey by Vincent Müller and Bostrom at the Future of Humanity Institute found that experts expected AI systems to progress from human-level intelligence to superintelligence within 30 years.24 Multiple earlier surveys through 2016 produced median 50% probability estimates for human-level AI ranging between 2035 and 2050.25

Current Capabilities Comparison

The 2025 AI Index Report documents areas where AI systems approach or exceed human performance:26

  • Reading comprehension benchmarks show performance near or exceeding human levels
  • Image classification matches or exceeds human accuracy in many domains
  • Competition-level mathematics problems are increasingly solvable by AI systems

However, performance varies significantly by task type and time constraints. On short time horizons (two-hour budgets), top AI systems score four times higher than human experts on certain tasks. As time increases to 32 hours, human performance surpasses AI by a ratio of two to one, suggesting current systems lack robust general reasoning capabilities.27

OpenAI's GDPval benchmark, measuring performance on real-world tasks, found that Claude Opus 4.1 produces outputs as good as or better than humans in just under half of tested tasks, and that GPT-5 performance more than tripled in one year compared to GPT-4o.28

Governance Proposals

OpenAI has called for governance frameworks for superintelligence development, suggesting that major governments could establish coordinated projects or collectively agree to limit the rate of capability growth at the frontier.29

In 2023, OpenAI cofounders proposed an "IAEA for superintelligence efforts" to govern high-capability systems.30 Carnegie Endowment research suggests that rather than a single institutional solution, governance will likely emerge as a regime complex with four functional categories:31

  1. Knowledge sharing among developers and governments
  2. Norms and standards for development practices
  3. Equitable access to AI benefits
  4. Collective security mechanisms

The Future of Life Institute organized a statement calling for a global moratorium on superintelligence development until broad scientific consensus exists that it can be developed safely. The statement gathered over 133,000 signatories.32 As of October 2025, the UK government was considering plans for an international moratorium on superintelligent AI development.33

Alternative Perspectives

Eric Drexler's 2018 presentation at EA Global proposed Comprehensive AI Services (CAIS) as an alternative framework to monolithic superintelligence scenarios. CAIS envisions AI capabilities developing as specialized services rather than unified agents.34

Some researchers argue that what is characterized as "superintelligence" actually describes "super-equipped intelligence"—systems with the same cognitive architecture as current AI but with greater resources and faster execution, rather than qualitatively superior intelligence.35

Multiple deep learning researchers, including Andrew Ng, have compared concerns about superintelligence to "worrying about overpopulation on Mars," suggesting that current evidence does not support near-term superintelligence scenarios.36 Critics note that most writing on AI existential risks comes from a small number of sources, primarily Bostrom's Superintelligence and essays by Yudkowsky, with limited substantive criticism published.37

Some colleagues of Bostrom have argued that nuclear war, nanotechnology, and biotechnology present more immediate and tractable threats than superintelligence.38

Footnotes

  1. Superintelligence - Wikipedia

  2. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press

  3. Good, I. J. (1965). Speculations Concerning the First Ultraintelligent Machine. Advances in Computers, 6, 31-88

  4. Good, I. J. (1965). Speculations Concerning the First Ultraintelligent Machine

  5. Superintelligence: Paths, Dangers, Strategies - Wikipedia

  6. Superintelligence: Paths, Dangers, Strategies - Wikipedia

  7. Recursive self-improvement - Wikipedia

  8. Recursive self-improvement - Wikipedia

  9. The Ultimate Risk: Recursive Self-Improvement. Control AI News, December 2025

  10. Yudkowsky, E. (2013). Intelligence Explosion Microeconomics

  11. Yudkowsky, E., & Christiano, P. (2021). Yudkowsky and Christiano discuss 'Takeoff Speeds'. Machine Intelligence Research Institute, November 22, 2021

  12. Stanford HAI. (2025). Technical Performance - The 2025 AI Index Report

  13. Citation rc-26a4 (data unavailable — rebuild with wiki-server access)

  14. Bostrom, N. (2012). The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents

  15. Bostrom, N. (2012). The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents

  16. OpenAI. (2023). Introducing Superalignment

  17. The Control Problem: Aligning Superintelligence. Human Sovereignty AI

  18. Citation rc-0d6f (data unavailable — rebuild with wiki-server access)

  19. Bostrom, N. (2014). Decisive strategic advantage - Superintelligence: Paths, Dangers, Strategies

  20. Superintelligence 7: Decisive strategic advantage. LessWrong

  21. Soft takeoff can still lead to decisive strategic advantage. Alignment Forum

  22. AI Impacts. (2022). 2022 Expert Survey on Progress in AI, June-August 2022

  23. AI Impacts. (2023). 2023 Expert Survey on Progress in AI, October 2023

  24. Müller, V. C., & Bostrom, N. Future Progress in Artificial Intelligence: A Survey of Expert Opinion

  25. AI Impacts. AI Timeline Surveys

  26. Nature. (2024). AI now beats humans at basic tasks — new benchmarks are needed

  27. Stanford HAI. (2025). Technical Performance - The 2025 AI Index Report

  28. OpenAI. Measuring the performance of our models on real-world tasks (GDPval)

  29. OpenAI. Governance of superintelligence

  30. Carnegie Endowment. (2024). Envisioning a Global Regime Complex to Govern Artificial Intelligence, March 2024

  31. Carnegie Endowment. (2024). Envisioning a Global Regime Complex to Govern Artificial Intelligence, March 2024

  32. House of Lords Library. (2025). Superintelligent AI: Should its development be stopped? October 2025

  33. House of Lords Library. (2025). Superintelligent AI: Should its development be stopped? October 2025

  34. Drexler, E. (2018). Reframing Superintelligence. EA Global 2018

  35. The 'Super' Is In The Equipment, Not The Intelligence. EA Forum

  36. How sure are we about this AI stuff? Effective Altruism, 2018

  37. How sure are we about this AI stuff? Effective Altruism, 2018

  38. Superintelligence: Paths, Dangers, Strategies - Wikipedia


Related Pages

Risks: Treacherous Turn, Bioweapons Risk, AI-Induced Irreversibility

Analysis: Carlsmith's Six-Premise Argument, Lock-in Mechanisms Model

Other: Eliezer Yudkowsky, Dario Amodei, Paul Christiano, Jan Leike

Organizations: Anthropic, Epoch AI, Future of Humanity Institute, Google DeepMind, Safe Superintelligence Inc.

Historical: The MIRI Era

Concepts: Transformative AI, AGI Timeline, Situational Awareness

Policy: MAIM (Mutually Assured AI Malfunction)

Key Debates: When Will AGI Arrive?