
History

Overview

This section traces the development of AI safety as a field, from early theoretical concerns to the current mainstream recognition of AI risks. Understanding this history helps contextualize current debates and institutional structures.

Historical Eras

MIRI Era (2000-2015)

The field's founding period, dominated by the Machine Intelligence Research Institute:

  • Eliezer Yudkowsky's early writings on AI risk
  • Founding of SIAI (later MIRI) in 2000
  • Development of foundational concepts (orthogonality thesis, instrumental convergence)
  • Nick Bostrom's Superintelligence (2014) brings these ideas to academic attention

Deep Learning Era (2015-2022)

Deep learning breakthroughs reshape the landscape:

  • AlphaGo (2016) demonstrates superhuman capability at Go
  • GPT-2 (2019) shows language model potential
  • Anthropic founded (2021) by former OpenAI safety team
  • Growing recognition in ML community

FTX/EA Crisis (2022)

The collapse of FTX exposed major fissures in EA-funded AI safety:

  • FTX collapse and EA's public credibility — November 2022 bankruptcy and reputational fallout
  • EA epistemic failures in the FTX era — governance, donor vetting, and cultural critiques
  • EA institutions' response — community surveys, trust damage, funding gaps
  • FTX Future Fund — $132M in grants dissolved overnight
  • Longtermism's credibility after FTX — philosophical and reputational questions

Early Warnings (2022-2023)

AI safety enters public consciousness:

  • ChatGPT (Nov 2022) captures public attention
  • Future of Life Institute pause letter (March 2023) signed by prominent researchers
  • Geoffrey Hinton leaves Google (May 2023) to speak freely about AI risks
  • Congressional hearings on AI safety

Mainstream Era (2023-Present)

AI safety becomes a policy priority:

  • Biden Executive Order on AI (Oct 2023)
  • Bletchley Park AI Safety Summit (Nov 2023)
  • AI Safety Institutes established globally
  • Major labs adopt responsible scaling policies

Key Milestones

Year | Event | Significance
2000 | SIAI founded | First AI safety organization
2014 | Superintelligence published | Brought ideas to academia
2017 | Asilomar Principles | Early multi-stakeholder agreement
2022 | FTX collapse | $132M in EA/AI safety funding dissolved; major community reckoning
2022 | ChatGPT released | Public awareness breakthrough
2023 | UK AI Safety Summit | First major government summit
2024 | EU AI Act enacted | First comprehensive AI regulation

Related Pages


Organizations

  • OpenAI
  • FTX Future Fund
  • Machine Intelligence Research Institute

Policy

  • Responsible Scaling Policies
  • EU AI Act

Concepts

  • EA Institutions' Response to the FTX Collapse
  • Longtermism's Credibility After FTX

Historical

  • Deep Learning Revolution Era
  • Mainstream Era
  • The MIRI Era
  • Early Warnings Era

Other

  • Geoffrey Hinton
  • Eliezer Yudkowsky