AI Safety Field Building and Community
Growing the AI safety research community through funding, training, and outreach
This page is a stub. Content needed.
References
1. HenningB, Matthew Wearden, Cameron Holmes & Ryan Kidd. "MATS Spring 2024 Extension Retrospective." LessWrong, 2025. Blog post.
80,000 Hours provides a comprehensive guide to technical AI safety research careers, covering the field's importance in reducing potential catastrophic risks from advanced AI systems, the skills required, and strategies for contributing to this emerging field.
Open Philanthropy reviewed its philanthropic efforts in 2024, focusing on expanding partnerships, supporting AI safety research, and making strategic grants across multiple domains including global health and catastrophic risk reduction.
The Future of Life Institute assessed eight AI companies on 35 safety indicators, revealing substantial gaps in risk management and existential safety practices. Top performers such as Anthropic and OpenAI demonstrated only marginally better safety frameworks than the other companies evaluated.
The Center for Human-Compatible AI (CHAI) focuses on reorienting AI research towards developing systems that are fundamentally beneficial and aligned with human values through technical and conceptual innovations.
The International AI Safety Report 2025 provides a global scientific assessment of general-purpose AI capabilities, risks, and potential management techniques. It represents a collaborative effort by 96 experts from 30 countries to establish a shared understanding of AI safety challenges.
MATS is an intensive training program that helps researchers transition into AI safety, providing mentorship, funding, and community support. Since 2021, 446+ researchers have participated, producing 150+ research papers and going on to join leading AI organizations.
A comprehensive study tracks the expansion of the technical and non-technical AI safety fields from 2010 to 2025, documenting growth from approximately 400 to 1,100 full-time-equivalent researchers across both domains.
Open Philanthropy provides grants across multiple domains including global health, catastrophic risks, and scientific progress. Their focus spans technological, humanitarian, and systemic challenges.
SPAR is a research program that pairs mentees with experienced professionals to work on AI safety, policy, and related research projects. The program offers structured research experience, mentorship, and potential publication opportunities.
The Future of Life Institute's AI Safety Index 2024 evaluates six leading AI companies across 42 safety indicators, highlighting major concerns about risk management and potential AI threats.