Longterm Wiki

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: EA Forum

A community aggregator for EA-aligned AI safety discourse; useful for tracking emerging concerns and debates in the field, but not a primary research source. Best used as a discovery tool for specific posts.

Metadata

Importance: 45/100

Summary

The EA Forum's AI safety topic page aggregates community discussions, research posts, and quick takes on reducing existential risks from advanced AI. It serves as a living index of community thinking spanning technical safety, policy, capacity-building, and emerging concerns like superpersuasive AI and evaluation saturation.

Key Points

  • Aggregates thousands of posts (4664+) on AI safety topics including alignment, governance, policy, and community building
  • Quick takes highlight emerging concerns: superpersuasive AI eroding expert epistemic calibration and the 'eval singularity' where capability growth outpaces measurement
  • Features capacity-building discussions, fellowship announcements, and community-space fundraising, reflecting the field's organizational growth
  • Responsible Scaling Policy v3 and long-term ideological risk posts represent high-engagement policy and governance content
  • Functions as a real-time pulse of EA-adjacent AI safety community priorities rather than a single research contribution

Cited by 1 page

Page       Type          Quality
EA Global  Organization  38.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 12 KB
AI safety - EA Forum 
 
AI safety: Studying and reducing the existential risks posed by advanced artificial intelligence

New & upvoted (showing 10 of 4759 posts):

  • Help me launch Obsolete: a book aimed at building a new movement for AI reform (Garrison, 4h ago, 8 m read)
  • Video and transcript of talk on writing AI constitutions (Joe_Carlsmith, 6h ago, 56 m read)
  • What is the Expected Value of Working on AI Safety? I Ran the Numbers. (Hazem Hassan 🔶, 1d ago, 7 m read)
  • By Strong Default, ASI Will End Liberal Democracy (MichaelDickens, 3d ago, 3 m read)
  • Defense-favoured coordination design sketches (Forethought, Owen Cotton-Barratt, Oliver Sourbut, Lizka, rosehadshar, 3d ago, 29 m read)
  • Broad Timelines (Toby_Ord, 21d ago, curated 13h ago, 19 m read)
  • Enforcement without experience: Military AI and China | Responsible AI in Military Contexts: A Comparative Analysis; Part 2 of 5 (Slava Kold (Viacheslav Kolodiazhnyi), 11h ago, 18 m read)
  • When the Court Said "Orwellian": The Pentagon-Anthropic Ruling and What It Reveals About AI Governance Under Political Pressure (Slava Kold (Viacheslav Kolodiazhnyi), 1d ago, 11 m read)
  • Survey of AI safety leaders on x-risk, AGI timelines, and resource allocation (Feb 2026) (OllieRodriguez, Jemima, 15d ago, 8 m read)
  • The case for AI safety capacity-building work (abergal, 1mo ago, 27 m read)

Quick takes

  • Michaël Trazzi (21d ago): In two days (March 21st, 12-4pm), about 140 of us (event link) will be marching on Anthropic, OpenAI and xAI in SF asking the CEOs to make statements on whether they would stop developing new frontier models if every other major lab in the world credibly does the same. This comes after Anthropic removed its commitment to pause development from their RSP. We'll be starting at 500 Howard St, San Francisco (Anthropic's office; full schedule and more info here). This is shaping up to be the biggest US AI Safety protest to date, with a coalition including Nate Soares (MIRI), David Krueger (Evitable), Will Fithian (Berkeley Professor) and folks representing PauseAI, QuitGPT, and Humans First.

  • Ben_West🔸 (2mo ago): The AI Eval Singularity is Near

 * AI capabilities seem to be doubling every 4-7 months
 * Humanity's ability to measure capabilities is growing much more slowly
 * This implies an "eval singularity": a point at which capabilities grow faster than our ability to measure them
 * It seems like the singularity is ~here in cybersecurity, CBRN, and AI R&D (supporting quotes below)
 * It's possible that this is temporary, but the people involved seem 

... (truncated, 12 KB total)
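As a purely illustrative aside on the arithmetic in the quoted quick take: if capabilities double every 4-7 months while eval capacity doubles more slowly, the two exponential curves must eventually cross. The sketch below solves for that crossing point; the eval doubling time and head start are entirely hypothetical stand-ins, not figures from the post.

```python
# Illustrative sketch of the "eval singularity" crossing point.
# Capability doublings after m months: m / cap_doubling
# Eval-capacity doublings:            eval_head_start + m / eval_doubling
# They cross when  m/cap = head + m/eval  =>  m = head / (1/cap - 1/eval)

def crossing_month(cap_doubling: float = 5.0,     # capabilities: ~4-7 months, per the quick take
                   eval_doubling: float = 18.0,   # eval capacity: hypothetical, slower
                   eval_head_start: float = 2.0   # evals start 2 doublings ahead (hypothetical)
                   ) -> float:
    """Month at which cumulative capability doublings overtake eval doublings."""
    assert cap_doubling < eval_doubling, "capabilities must grow faster for a crossing to exist"
    return eval_head_start / (1.0 / cap_doubling - 1.0 / eval_doubling)

if __name__ == "__main__":
    print(f"Capabilities overtake eval coverage after ~{crossing_month():.0f} months")
```

With these made-up parameters the crossing lands around 14 months out; the qualitative point is only that any head start is exhausted in finite time once the faster doubling rate belongs to capabilities.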
Resource ID: 721b826caa2020b3 | Stable ID: sid_ahpTe2DI8L