Longterm Wiki

Nick Bostrom's Homepage

Web: nickbostrom.com

Nick Bostrom is one of the most influential early thinkers in AI existential risk; his homepage aggregates decades of papers relevant to alignment, x-risk, and long-term AI governance. Note: FHI closed in 2024.

Metadata

Importance: 72/100 · Source type: homepage

Summary

Personal website of Nick Bostrom, philosopher and founding director of the Future of Humanity Institute at Oxford. He is known for foundational work on existential risk, superintelligence, simulation theory, and the ethics of emerging technologies. His book 'Superintelligence' significantly shaped mainstream discourse on AI safety.

Key Points

  • Hub for Bostrom's extensive academic publications on existential risk, AI, and transhumanism
  • Home to foundational papers on x-risk including the 'Vulnerable World Hypothesis' and 'Astronomical Waste'
  • Author of 'Superintelligence: Paths, Dangers, Strategies' (2014), a landmark text in AI safety
  • Introduced key concepts like the orthogonality thesis and instrumental convergence to AI safety discourse
  • Founding figure of the Future of Humanity Institute (FHI), a major AI safety research institution

Cited by 3 pages

Page | Type | Quality
Corrigibility Failure Pathways | Analysis | 62.0
Future of Humanity Institute | Organization | 51.0
AI Value Lock-in | Risk | 64.0

1 FactBase fact citing this source

Entity | Property | Value | As Of
Nick Bostrom | Employed By | sid_idAU5YPkSw |

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 35 KB
Nick Bostrom’s Home Page

 The world is quickening, and the birth of superintelligence presumably not very far off; yet most people are otherwise occupied.

 Currently working on a couple of things related to AGI governance.

 The Chinese translation of Deep Utopia is now out, along with the second print run in English and the audiobook (people say the voice actor is good). The book has received still more awards.

 You can sign up for the newsletter to receive very rare updates, but the most reliable method is to check this page.

 For more: New Yorker profile (old), Bio, CV, Contact, Press images.

 
 
 Recent additions

 
 Optimal Timing for Superintelligence, working paper

 Open Global Investment as a Governance Model for AGI, working paper

 Sandcastles, poem

 AI Creation and the Cosmic Host, working paper

 Propositions Concerning Digital Minds and Society, w/ Carl Shulman, in The Cambridge Journal of Law, Politics, and Art

 Base Camp for Mt. Ethics, working paper

 Sharing the World with Digital Minds, w/ Carl Shulman, in edited volume (Oxford University Press, 2021)

 Selected papers

 Ethics & Policy

 
 
 Propositions Concerning Digital Minds and Society

 AIs with moral status and political rights? We'll need a modus vivendi, and it's becoming urgent to figure out the parameters for that. This paper makes a load of specific claims that begin to stake out a position.

 AI Creation and the Cosmic Host

 There may well exist a normative structure, based on the preferences or concordats of a cosmic host, that has high relevance to the development of AI.

 Open Global Investment as a Governance Model for AGI

 There is something to be said for AI being developed by one or more market-traded corporations.

 Optimal Timing for Superintelligence

 Even for quite high values of P(doom), the life expectancy for us existing people seems higher if AGI is developed quite soon.

 Humans are relatively expensive but absolutely cheap.

 Strategic Implications of Openness in AI Development

 An analysis of the global desirability of different forms of openness (including source code, science, data, safety techniques, capabilities, and goals). 

 The Reversal Test: Eliminating Status Quo Bias in Applied Ethics 

 We present a heuristic for correcting for one kind of bias (status quo bias), which we suggest affects many of our judgments about the consequences of modifying human nature. We apply this heuristic to the case of cognitive enhancements, and argue that the consequentialist case for this is much stronger than commonly recognized.

 The Fable of the Dragon-Tyrant

 Recounts the Tale of a most vicious Dragon that ate thousands of people every day, and of the actions that the King, the People, and an assembly of Dragonologists took with respect thereto.

 Astronomical Waste: The Opportunity Cost of Delayed Technological Development 

 Suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly de

... (truncated, 35 KB total)
Resource ID: 9cf1412a293bfdbe | Stable ID: sid_ngRZoFTWo2