Community Building Organizations (Overview)
Summary

A brief overview of the community-building organizations (CEA, LessWrong, CFAR, etc.) that support the EA/rationality ecosystem underpinning AI safety, noting the concentration of infrastructure in Berkeley and the post-FTX shifts in the EA-safety relationship. Primarily a reference index with minimal original analysis or actionable guidance.

Overview

The AI safety field is embedded within and draws heavily from the effective altruism (EA) and rationality communities. Several organizations provide community infrastructure—forums, conferences, training programs, and physical spaces—that facilitate the intellectual exchange and talent development essential to AI safety work.

Key Organizations

| Organization | Type | Key Activities |
|---|---|---|
| Centre for Effective Altruism (CEA) | Community hub | EA Global conferences, community building grants, online forum |
| LessWrong | Online forum | Rationality and AI alignment discussion platform; hosts the Alignment Forum |
| Lighthaven | Event venue | Conference center in Berkeley hosting EA and rationality events |
| Manifest | Conference | Annual prediction market and forecasting conference |
| EA Global | Conference series | Flagship EA conference held in multiple cities annually |
| Center for Applied Rationality (CFAR) | Training | Workshops teaching applied rationality and decision-making skills |
| Gratified | Platform | Community engagement and gratitude platform |
| The Sequences | Writing collection | Eliezer Yudkowsky's foundational essays on rationality and AI risk |

Role in AI Safety

These organizations contribute to AI safety through several mechanisms:

  1. Talent pipeline: EA Global, LessWrong, and CFAR workshops expose people to AI safety ideas and recruit talent into the field
  2. Intellectual infrastructure: LessWrong and the Alignment Forum host much of the public technical discussion on alignment research
  3. Coordination: Conferences and physical spaces (Lighthaven) enable face-to-face coordination among researchers and funders
  4. Epistemic norms: The rationality community emphasizes calibrated beliefs, intellectual honesty, and quantitative reasoning—norms that influence AI safety culture

Key Dynamics

Berkeley concentration: Much of the community's infrastructure is physically concentrated in the San Francisco Bay Area, particularly Berkeley (Lighthaven, CFAR, MIRI). This concentration aids in-person collaboration but carries a risk of insularity.

EA-safety relationship: The 2022 FTX collapse disrupted EA community funding and raised questions about community governance. The AI safety field has since become somewhat more independent of the broader EA movement, though institutional connections remain strong.

Related Pages

Organizations

- EA Global
- Lighthaven (Event Venue)
- Gratified

Other

- Eliezer Yudkowsky