# Community Building Organizations (Overview)

## Overview
The AI safety field is embedded within and draws heavily from the effective altruism (EA) and rationality communities. Several organizations provide community infrastructure—forums, conferences, training programs, and physical spaces—that facilitate the intellectual exchange and talent development essential to AI safety work.
## Key Organizations
| Organization | Type | Key Activities |
|---|---|---|
| Centre for Effective Altruism (CEA) | Community hub | EA Global conferences, community building grants, online forum |
| LessWrong | Online forum | Rationality and AI alignment discussion platform; hosts the Alignment Forum |
| Lighthaven | Event venue | Conference center in Berkeley hosting EA and rationality events |
| Manifest | Conference | Annual prediction market and forecasting conference |
| EA Global | Conference series | Flagship EA conference series held in multiple cities annually |
| Center for Applied Rationality (CFAR) | Training | Workshops teaching applied rationality and decision-making skills |
| Gratified | Platform | Community engagement and gratitude platform |
| The Sequences | Writing collection | Eliezer Yudkowsky's foundational essays on rationality and AI risk |
## Role in AI Safety
These organizations contribute to AI safety through several mechanisms:
- **Talent pipeline**: EA Global, LessWrong, and CFAR workshops introduce people to AI safety ideas and recruit talent into the field
- **Intellectual infrastructure**: LessWrong and the Alignment Forum host much of the public technical discussion of alignment research
- **Coordination**: Conferences and physical spaces such as Lighthaven enable face-to-face coordination among researchers and funders
- **Epistemic norms**: The rationality community emphasizes calibrated beliefs, intellectual honesty, and quantitative reasoning—norms that shape AI safety culture
## Key Dynamics
**Berkeley concentration**: Much of the community infrastructure is physically concentrated in the San Francisco Bay Area, particularly Berkeley (Lighthaven, CFAR, MIRI). This concentration benefits in-person collaboration but carries risks of insularity.

**EA–safety relationship**: The 2022 FTX collapse disrupted EA community funding and raised questions about community governance. The AI safety field has since become somewhat more independent of the broader EA movement, though institutional connections remain strong.