Center for AI Risk Management & Alignment (CARMA)

Safety Organization

AI safety organization incubated by the Future of Life Foundation (FLF) and led by Richard Mallah, who served as FLI's Principal AI Safety Strategist beginning in 2014. Its focus areas include risk assessment, policy strategy, and technical safety. CARMA is based in Calabasas, CA, and is fiscally sponsored by Social & Environmental Entrepreneurs, Inc. Anthony Aguirre has described it as "the one thing FLF has sort of fully launched." The team numbers 12+ people, and advisors include Anthony Aguirre and Eric Drexler.

Related Wiki Pages

Safety Research

Anthropic Core Views
Prosaic Alignment

Approaches

AI Alignment
AI-Human Hybrid Systems

Analysis

Short AI Timeline Policy Implications
Alignment Robustness Trajectory Model

Historical

Anthropic-Pentagon Standoff (2026)

Other

David Sacks
Value Learning
RLHF

Key Debates

Why Alignment Might Be Hard
AI Alignment Research Agendas
Why Alignment Might Be Easy

Risks

Epistemic Sycophancy
Scheming

Organizations

US AI Safety Institute
Safe Superintelligence Inc.
Cambridge Boston Alignment Initiative

Concepts

Agentic AI
Self-Improvement and Recursive Enhancement