Longterm Wiki

Minsky on AI risk in the 80s and 90s

web

Written by Luke Muehlhauser (former MIRI executive director), this post is valuable as historical documentation showing that AI risk concerns from a founding figure of AI predate the modern safety movement, useful for intellectual history discussions.

Metadata

Importance: 42/100 | blog post | analysis

Summary

Luke Muehlhauser documents and analyzes Marvin Minsky's statements about AI risk during the 1980s and 1990s, examining what one of AI's founding figures thought about the dangers of advanced AI systems. The post serves as a historical record showing that concerns about AI risk predate the modern AI safety movement.

Key Points

  • Documents Marvin Minsky's views on AI risk expressed decades before the modern AI safety field emerged
  • Provides historical evidence that prominent AI researchers recognized potential dangers of advanced AI early on
  • Helps contextualize the intellectual history of AI risk concern, countering the narrative that it is a recent or fringe idea
  • Minsky's status as an AI pioneer lends credibility to early risk concerns and shows the field's founders were not universally dismissive of dangers
  • Useful for understanding how AI safety thinking evolved from early pioneers to the modern alignment research community

Cited by 1 page

Page: Early Warnings Era | Type: Historical | Quality: 31.0

Cached Content Preview

HTTP 200 | Fetched Apr 10, 2026 | 2 KB
Minsky on AI risk in the 80s and 90s 
Follow-up to: AI researchers on AI risk; Fredkin on AI risk in 1979.

Marvin Minsky is another AI scientist who has been thinking about AI risk for a long time, at least since the 1980s. Here he is in a 1983 afterword to Vinge's novel True Names:[1]

The ultimate risk comes when our greedy, lazy, masterminds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful… It will be tempting to do this, not just for the gain in power, but just to decrease our own human effort in the consideration and formulation of our own desires. If some genie offered you three wishes, would not your first one be, "Tell me, please, what is it that I want the most!" The problem is that, with such powerful machines, it would require but the slightest accident of careless design for them to place their goals ahead of ours, perhaps the well-meaning purpose of protecting us from ourselves, as in With Folded Hands, by Jack Williamson; or to protect us from an unsuspected enemy, as in Colossus by D.F. Jones…

 
And according to Eric Drexler (2015), Minsky was making the now-standard "dangerous-to-humans resource acquisition is a natural subgoal of almost any final goal" argument at least as early as 1990:

My concerns regarding AI risk, which center on the challenges of long-term AI governance, date from the inception of my studies of advanced molecular technologies, ca. 1977. I recall a later conversation with Marvin Minsky (then chairing my doctoral committee, ca. 1990) that sharpened my understanding of some of the crucial considerations: Regarding goal hierarchies, Marvin remarked that the high-level task of learning language is, for an infant, a subgoal of getting a drink of water, and that converting the resources of the universe into computers is a potential subgoal of a machine attempting to play perfect chess.

 
 

An online copy of the afterword is available here, though it has been slightly modified from the original. I am quoting from the original, which was written in 1983. [↩]
 
 
Resource ID: fafa8b89b5212902 | Stable ID: sid_u79uDZBg1J