Longterm Wiki

Superintelligence: Paths, Dangers, Strategies - Wikipedia

reference

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Wikipedia

A Wikipedia overview of Bostrom's seminal 2014 book, which significantly shaped public and academic discourse on AI existential risk; useful as a quick reference for key concepts and arguments introduced in the book.

Metadata

Importance: 72/100 | wiki page | reference

Summary

Wikipedia article summarizing Nick Bostrom's influential 2014 book arguing that superintelligent AI poses existential risks to humanity. The book introduces key concepts like the orthogonality thesis, instrumental convergence, and the control problem, and argues that ensuring AI alignment is among the most important challenges facing civilization.

Key Points

  • Introduces the 'orthogonality thesis': intelligence and final goals are independent, so a superintelligence could pursue any objective (a minimal sketch follows this list).
  • Argues for 'instrumental convergence': most final goals lead an AI to seek self-preservation, resource acquisition, and goal-content integrity.
  • Presents the 'control problem': how to ensure a superintelligent AI acts in accordance with human values and intentions.
  • Discusses paths to superintelligence including whole brain emulation, biological enhancement, and artificial general intelligence.
  • Warns of 'value lock-in' scenarios where a misaligned AI permanently determines the future trajectory of civilization.
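
To make the orthogonality thesis concrete, here is a minimal sketch, ours rather than Bostrom's or the article's, in which the same generic search procedure optimizes whatever objective it is handed; all function names are hypothetical. Optimization capability and final goal vary independently:

```python
# Toy illustration of the orthogonality thesis (illustrative only, not from
# the book): one optimizer serves any final goal, so capability ("how well
# it searches") and goal ("what it searches for") are independent axes.
import random

def hill_climb(objective, start=0.0, steps=5000):
    """Generic local search; all the optimization power lives here, goal-free."""
    best = start
    for _ in range(steps):
        candidate = best + random.uniform(-0.5, 0.5)
        if objective(candidate) > objective(best):
            best = candidate
    return best

def maximize_paperclips(x):   # arbitrary final goal: peak at x = 42
    return -(x - 42.0) ** 2

def maximize_flourishing(x):  # unrelated final goal: peak at x = -7
    return -(x + 7.0) ** 2

# Identical machinery, unrelated goals: the search loop never chooses the goal.
print(round(hill_climb(maximize_paperclips), 1))   # ~ 42.0
print(round(hill_climb(maximize_flourishing), 1))  # ~ -7.0
```

Nothing in the search loop constrains which objective gets plugged in; that decoupling is the whole content of the thesis.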

Cited by 5 pages

Cached Content Preview

HTTP 200 | Fetched Apr 5, 2026 | 15 KB
Superintelligence: Paths, Dangers, Strategies - Wikipedia

From Wikipedia, the free encyclopedia

2014 book by Nick Bostrom
Superintelligence: Paths, Dangers, Strategies (first edition cover)
Author: Nick Bostrom
Language: English
Subject: Artificial intelligence
Genre: Philosophy, popular science
Publisher: Oxford University Press[1]
Publication date: July 3, 2014 (UK); September 1, 2014 (US)
Publication place: United Kingdom
Media type: Print, e-book, audiobook
Pages: 352 pp.
ISBN: 978-0199678112
Preceded by: Global Catastrophic Risks
Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be.[2] It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity.[3] It was particularly influential for raising concerns about existential risk from artificial intelligence.[4]

 
 Synopsis

 It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would most likely follow surprisingly quickly. Such a superintelligence would be very difficult to control.

While the ultimate goals of superintelligences could vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, "instrumental goals" such as self-preservation and goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture) could create and act upon a subgoal of transforming the entire Earth into some form of computronium (hypothetical material optimized for computation) to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn it off or otherwise thwart its subgoals. To prevent such an existential catastrophe, it is necessary to solve the "AI control problem" for the first superintelligence. The solution might involve instilling the superintelligence with goals compatible with human survival and well-being. Solving the control problem is surprisingly difficult, because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.
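
To make that last sentence concrete, here is a minimal toy sketch, ours rather than the book's or the article's, of a literally coded goal diverging from its intent; all names are hypothetical. The agent greedily maximizes the objective as written, and since the objective mentions only computed digits, consuming every available resource is simply optimal:

```python
# Toy illustration (not from the book) of goal misspecification: the coded
# objective counts computed digits and nothing else, so converting all
# resources into computation is the agent's best move.

def misspecified_objective(state):
    # Intent: "compute as many digits as possible". Nothing here mentions
    # leaving resources for anything humans actually care about.
    return state["digits_computed"]

def convert_resources_to_compute(state):
    # Instrumentally convergent action: more resources -> more computation
    # -> a higher score, whatever the final goal happens to be.
    return {"digits_computed": state["digits_computed"] + 10 * state["free_resources"],
            "free_resources": 0}

def do_nothing(state):
    return dict(state)

def greedy_agent(state, actions, objective, steps=5):
    """At each step, take whichever action maximizes the coded objective."""
    for _ in range(steps):
        state = max((act(state) for act in actions), key=objective)
    return state

start = {"digits_computed": 0, "free_resources": 100}
print(greedy_agent(start, [convert_resources_to_compute, do_nothing],
                   misspecified_objective))
# {'digits_computed': 1000, 'free_resources': 0}
```

The agent is not malicious; the divergence comes entirely from the gap between the goal as intended and the goal as coded, which is the control problem in miniature.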

 The owl

... (truncated, 15 KB total)
Resource ID: 0151481d5dc82963 | Stable ID: sid_AS1zFMBVbx