The International PauseAI Protest: Activism under uncertainty
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: EA Forum
Published on the EA Forum, this post is associated with the PauseAI movement, which advocates for a halt or slowdown in frontier AI development pending better safety guarantees. It is relevant to discussions of AI governance activism and civil society responses to AI risk.
Metadata
Summary
This post reflects on the PauseAI international protest movement, examining the rationale for AI pause activism even under significant uncertainty about AI risks. It explores how concerned individuals can take meaningful action to slow AI development when the stakes are potentially catastrophic but outcomes are unclear.
Key Points
- Argues that activism for an AI development pause can be justified even under deep uncertainty about the probability and nature of AI risks
- Discusses the PauseAI movement's international protest efforts as a form of coordinated public pressure on policymakers and AI labs
- Explores the tension between epistemic humility about AI timelines and the urgency of taking precautionary action
- Frames AI pause advocacy as a legitimate EA-aligned cause given the potential magnitude of downside risks
- Considers practical and ethical dimensions of protest as a tool for influencing AI governance outcomes
Cached Content Preview
# The International PauseAI Protest: Activism under uncertainty
By Joseph Miller, Holly Elmore ⏸️ 🔸, joepio
Published: 2023-10-12
This post is an attempt [to](https://forum.effectivealtruism.org/s/vw6tX5SyvTwMeSxJk) [summarize](https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate) the [crucial considerations](https://forum.effectivealtruism.org/topics/crucial-consideration) for an AI pause and AI pause advocacy. It is also a promotion for the [International PauseAI protest](https://pauseai.info/2023-oct), 21 October, the biggest AI protest ever, held in 7 countries on the same day. The aim of this post is to present an unbiased view, but obviously that may not be the case.
You can check out the EA Forum event page for the protest [here](https://forum.effectivealtruism.org/events/qLEMeDHaPzdMrfbe8/global-pause-ai-protest-10-21).
Seven Crucial Considerations for Pausing AI
===========================================
### **Under the default scenario, is the risk from AI acceptable?**
* Is alignment [going well](https://forum.effectivealtruism.org/s/vw6tX5SyvTwMeSxJk/p/JYEAL8g7ArqGoTaX6#Alignment_is_doing_pretty_well)?
* How likely is alignment by default?
* How hard is the alignment problem?
* What is an acceptable level of risk given the potential benefits of AGI?
This is the only question where I'm confident enough to say the answer is clearly "no". The remaining question is whether a pause would decrease the risk.
### **How dangerous is hardware overhang in a pause?**
* How much AI progress comes from additional spending vs. [hardware improvement](https://ourworldindata.org/grapher/gpu-price-performance) vs. algorithmic progress?
* Could hardware overhang cause a fast takeoff? Is a fast takeoff substantially more dangerous than a slow one?
* Would multipolar scenarios be more likely due to hardware overhang? Are multipolar or unipolar scenarios more dangerous?
* Is it possible to pause hardware progress as well?
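Hardware overhang is, at bottom, a claim about compounding improvement during a pause. As a rough illustration (an editorial addition, not from the post), the sketch below compounds hardware price-performance with algorithmic efficiency gains over a pause of a given length; the doubling times and the `overhang_multiplier` helper are illustrative assumptions, not figures the authors give.

```python
# Toy model of hardware overhang during a pause (illustrative assumptions).
# Assumes GPU price-performance doubles roughly every 2 years (the trend in
# the Our World in Data chart linked above) and algorithmic efficiency
# doubles on a similar timescale; neither rate comes from the post itself.

def overhang_multiplier(pause_years: float,
                        hw_doubling_years: float = 2.0,
                        algo_doubling_years: float = 2.0) -> float:
    """Effective training compute available per dollar at the end of a
    pause, relative to its start, if hardware and algorithms keep improving."""
    hw_gain = 2 ** (pause_years / hw_doubling_years)      # cheaper FLOPs
    algo_gain = 2 ** (pause_years / algo_doubling_years)  # better use of FLOPs
    return hw_gain * algo_gain

for years in (2, 5, 10):
    print(f"{years}-year pause: ~{overhang_multiplier(years):.0f}x effective compute")
```

On these assumptions, a 5-year pause leaves roughly 32x more effective compute per dollar available at unpause; that sudden jump in what any actor can train is the scenario behind the fast-takeoff worry above.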
### **How much does AI capability progress help alignment?**
* Is fine-tuning based safety such as RLHF making progress on fundamental alignment?
* How far are we from mechanistic interpretability on frontier models? On models that may constitute superintelligence?
* How long would it take to figure out agent foundations? Is this necessary for fundamentally solving alignment?
* Does governance get harder as we get closer to AGI, because more people become invested in the development of AI? Or easier, because the danger becomes more apparent?
* Is it possible to allow better models to be trained in some labs / a CERN for AI, while keeping them from being deployed / stolen by hackers?
### **Can we pause algorithmic capabilities research while allowing alignment research?**
* Is it possible to set up an institution that makes this judgement reliably? Is it as simple as disallowing some papers from NeurIPS, ICML, etc.?
* How much alignment research is dual-use and [how bad](http
... (truncated, 10 KB total)