Longterm Wiki
Updated 2026-03-13

AI Safety Field Building and Community

Growing the AI safety research community through funding, training, and outreach

Category: Meta-level intervention
Time Horizon: 3-10+ years
Primary Mechanism: Human capital development
Key Metric: Researchers produced per year
Entry Barrier: Low to Medium

Related Organizations: Redwood Research, Anthropic

This page is a stub. Content needed.

References

1. MATS Spring 2024 Extension Retrospective · LessWrong · HenningB, Matthew Wearden, Cameron Holmes & Ryan Kidd · 2025 · Blog post
★★★☆☆
2. ARENA 4.0 Impact Report · LessWrong · Chloe Li, JamesH & James Fox · 2024 · Blog post
★★★☆☆
7. EA Forum analysis · EA Forum · Christopher Clay · 2025 · Blog post
8. AI Safety Fund · frontiermodelforum.org
9. 80,000 Hours · 80,000 Hours

80,000 Hours provides a comprehensive guide to technical AI safety research, emphasizing its importance in preventing potential catastrophic risks from advanced AI systems. The guide covers career paths, skills needed, and strategies for contributing to this emerging field.

★★★☆☆
10. AI Safety Field Growth Analysis 2025 (LessWrong) · LessWrong · Stephen McAleese · 2025 · Blog post
★★★☆☆

Open Philanthropy reviewed its philanthropic efforts in 2024, focusing on expanding partnerships, supporting AI safety research, and making strategic grants across multiple domains including global health and catastrophic risk reduction.

14. Overview of AI Safety Funding · EA Forum · Stephen McAleese · 2023 · Blog post
15. CAIS 2024 Impact Report · Center for AI Safety
★★★★☆
17. AI Safety Index Winter 2025 · Future of Life Institute

The Future of Life Institute assessed eight AI companies on 35 safety indicators, revealing substantial gaps in risk management and existential safety practices. Top performers such as Anthropic and OpenAI demonstrated only marginally better safety frameworks than the other companies.

★★★☆☆

The Center for Human-Compatible AI (CHAI) focuses on reorienting AI research towards developing systems that are fundamentally beneficial and aligned with human values through technical and conceptual innovations.

19. ARENA · arena.education
21. International AI Safety Report 2025 · internationalaisafetyreport.org

The International AI Safety Report 2025 provides a global scientific assessment of general-purpose AI capabilities, risks, and potential management techniques. It represents a collaborative effort by 96 experts from 30 countries to establish a shared understanding of AI safety challenges.

22. ARENA 5.0 · LessWrong · JScriven, JamesH & James Fox · 2025 · Blog post
★★★☆☆
23. MATS Research Program · matsprogram.org

MATS is an intensive training program that helps researchers transition into AI safety by providing mentorship, funding, and community support. Since 2021, more than 446 researchers have participated, producing over 150 research papers and joining leading AI organizations.

24. Catalyze's pilot program · EA Forum · Catalyze Impact, Alexandra Bos & Mick · 2025 · Blog post
26. AI Safety Field Growth Analysis 2025 · EA Forum · Stephen McAleese · 2025

A comprehensive study tracking the expansion of the technical and non-technical AI safety fields from 2010 to 2025, documenting growth from approximately 400 to 1,100 full-time-equivalent researchers across both domains.

★★★☆☆

Open Philanthropy provides grants across multiple domains including global health, catastrophic risks, and scientific progress. Their focus spans technological, humanitarian, and systemic challenges.

SPAR is a research program that pairs mentees with experienced professionals to work on AI safety, policy, and related research projects. The program offers structured research experience, mentorship, and potential publication opportunities.

The Future of Life Institute's AI Safety Index 2024 evaluates six leading AI companies across 42 safety indicators, highlighting major concerns about risk management and potential AI threats.

★★★☆☆
30. Widening AI Safety's Talent Pipeline · EA Forum · RubenCastaing, Nelson_GC & danwil · 2025 · Blog post

Related Pages

Approaches

AI Safety Training Programs

Analysis

Capabilities-to-Safety Pipeline Model
AI Safety Researcher Gap Model

Organizations

Center for AI Safety
Lightcone Infrastructure
AI Safety Support
Alignment Research Engineer Accelerator
AI Safety Camp
BlueDot Impact