MIRI's 2024 assessment
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: MIRI
Data Status
Not fetched
Cited by 4 pages
| Page | Type | Quality |
|---|---|---|
| Why Alignment Might Be Hard | Argument | 69.0 |
| Machine Intelligence Research Institute | Organization | 50.0 |
| Agent Foundations | Approach | 59.0 |
| Technical AI Safety Research | Crux | 66.0 |
Cached Content Preview
HTTP 200 | Fetched Feb 23, 2026 | 22 KB
# MIRI 2024 Mission and Strategy Update
- [January 4, 2024](https://intelligence.org/2024/01/04/)
- [Malo Bourgon](https://intelligence.org/author/malo/)
As we [announced](https://intelligence.org/2023/10/10/announcing-miris-new-ceo-and-leadership-team/) back in October, I have taken on the senior leadership role at MIRI as its CEO. It’s a big pair of shoes to fill, and an awesome responsibility that I’m honored to take on.
There have been several changes at MIRI since [our 2020 strategic update](https://intelligence.org/2020/12/21/2020-updates-and-strategy/), so let’s get into it. [1](https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/#fn1)
* * *
**The short version:**
We think it’s very unlikely that the AI alignment field will be able to make progress quickly enough to prevent human extinction and the loss of the future’s potential value, which we expect will result from loss of control to smarter-than-human AI systems.
However, developments this past year like the release of ChatGPT seem to have shifted the [Overton window](https://en.wikipedia.org/wiki/Overton_window) in a lot of groups. There’s been a lot more discussion of extinction risk from AI, including among policymakers, and the discussion quality seems greatly improved.
This provides a glimmer of hope. While we expect that more shifts in public opinion are necessary before the world takes actions that sufficiently change its course, it now appears more likely that governments could enact meaningful regulations to forestall the development of unaligned, smarter-than-human AI systems. It also seems more possible that humanity could take on a new megaproject squarely aimed at ending the acute risk period.
As such, in 2023, MIRI shifted its strategy to pursue three objectives:
1. **Policy:** Increase the probability that the major governments of the world end up coming to some international agreement to halt progress toward smarter-than-human AI, until humanity’s state of knowledge and justified confidence about its understanding of relevant phenomena has drastically changed; and until we are able to secure these systems such that they can’t fall into the hands of malicious or incautious actors. [2](https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/#fn2)
2. **Communications:** Share our models of the situation with a broad audience, especially in cases where talking about an important consideration could help normalize discussion of it.
3. **Research:** Continue to invest in a portfolio of research. This includes technical alignment research (though we’ve become more pessimistic that such work will have time to bear fruit if policy interventions fail to buy the research field more time), as well as research in support of our policy and communications goals. [3](https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-upd
... (truncated, 22 KB total)
Resource ID: 435b669c11e07d8f | Stable ID: Y2RjMjE1ZT