# History
## Overview
This section traces the development of AI safety as a field, from early theoretical concerns to the current mainstream recognition of AI risks. Understanding this history helps contextualize current debates and institutional structures.
## Historical Eras
The field’s founding period (roughly 2000–2014), dominated by the Machine Intelligence Research Institute:
- Eliezer Yudkowsky’s early writings on AI risk
- Founding of the Singularity Institute for Artificial Intelligence (SIAI, later renamed MIRI) in 2000
- Development of foundational concepts (orthogonality thesis, instrumental convergence)
- Nick Bostrom’s *Superintelligence* (2014) brings these ideas to academic attention
Deep learning breakthroughs (roughly 2015–2021) reshape the landscape:
- AlphaGo (2016) demonstrates superhuman capability
- GPT-2 (2019) demonstrates the potential of large language models
- Anthropic founded (2021) by former OpenAI safety researchers
- Growing recognition of safety concerns within the ML community
AI safety enters public consciousness (2022–2023):
- ChatGPT (Nov 2022) captures public attention
- “Pause” open letter (March 2023), calling for a six-month halt to training systems more powerful than GPT-4, signed by prominent researchers
- Geoffrey Hinton leaves Google to speak freely about risks
- US congressional hearings on AI safety
AI safety becomes a policy priority (2023 onward):
- Biden Executive Order on AI (Oct 2023)
- Bletchley Park AI Safety Summit (Nov 2023)
- AI Safety Institutes established in the UK, US, and other countries
- Major labs adopt responsible scaling policies
## Key Milestones
| Year | Event | Significance |
|---|---|---|
| 2000 | SIAI founded | First AI safety organization |
| 2014 | Bostrom’s *Superintelligence* published | Brought AI risk ideas to academia |
| 2017 | Asilomar AI Principles | Early multi-stakeholder agreement |
| 2022 | ChatGPT released | Public awareness breakthrough |
| 2023 | UK AI Safety Summit | First major intergovernmental summit |
| 2024 | EU AI Act enacted | First comprehensive AI regulation |