501(c)(3) nonprofit sister organization to the Future of Life Institute (FLI), focused on incubating new organizations that steer transformative technology toward benefiting life. Founded in 2022 with $25M initial capitalization from FLI (sourced from Vitalik Buterin's 2021 SHIB token donation). Operates as part grant-maker, part strategic research group, and part VC firm, with a goal of establishing 3-5 new organizations per year. Led by Anthony Aguirre (President) and Josh Jacobson (COO). FLF's four-step approach: researching gaps, strategic planning, recruiting founders, and providing operational support. Total funding received from FLI through 2024: approximately $58.8M ($25M in 2022, $15.7M in 2023, $18.1M in 2024). Key initiatives include the AI for Human Reasoning Fellowship (30 fellows, 2025), CARMA (Center for AI Risk Management & Alignment), and Wise Ancestors (conservation genomics). Institutional overhead is capped at 15% of direct grant costs.
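The funding figures above are internally consistent; a minimal sketch (variable names hypothetical, not from any FLF system) checks the per-year grants against the stated total and illustrates the 15% overhead cap:

```python
# Per-year FLI grants to FLF, in $M, as stated in the profile above.
fli_grants_musd = {2022: 25.0, 2023: 15.7, 2024: 18.1}

# Stated total through 2024: approximately $58.8M.
total_musd = sum(fli_grants_musd.values())

OVERHEAD_CAP = 0.15  # institutional overhead capped at 15% of direct grant costs


def max_overhead_musd(direct_grant_costs_musd: float) -> float:
    """Largest overhead spend permitted for a given direct-grant outlay ($M)."""
    return OVERHEAD_CAP * direct_grant_costs_musd


print(round(total_musd, 1))     # 58.8
print(max_overhead_musd(10.0))  # 1.5 -- $10M of grants allows up to $1.5M overhead
```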
- Anthropic Core Views (Safety Agenda): Anthropic allocates 15-25% of R&D (~$100-200M annually) to safety research, including the world's largest interpretability team (40-60 researchers), while maintaining $5B+ revenue by 2025. Their RSP... (Quality: 62/100)
Approaches
- AI-Human Hybrid Systems (Approach): Hybrid AI-human systems achieve 15-40% error reduction across domains through six design patterns, with evidence from Meta (23% false positive reduction), Stanford Healthcare (27% diagnostic improv... (Quality: 91/100)
Analysis
- Planning for Frontier Lab Scaling (Analysis): Strategic framework analyzing how non-lab actors could respond to frontier AI labs deploying $100-300B+ pre-TAI. For philanthropies: analysis of potential shifts from matching spend to maximizing l... (Quality: 55/100)
- Elon Musk (Funder) (Analysis): Elon Musk's philanthropy represents a massive gap between potential and actual impact. With ~$400B net worth and a 2012 Giving Pledge commitment, he has given only ~$250M annually through his found... (Quality: 45/100)
Other
- Anthony Aguirre (Person): Physicist and AI safety advocate serving as Executive Director of the Future of Life Institute and President of the Future of Life Foundation. Faggin Presidential Professor for Physics of Informati...
- Josh Jacobson (Person): Chief Operating Officer of the Future of Life Foundation since August 2023. Extensive background in AI safety and effective altruism organizations, with prior roles at ARC Evals, FAR AI, Anthropic, ...
- Oliver Sourbut (Person): Researcher and AI specialist at the Future of Life Foundation. Former UK AI Safety Institute researcher. Engaged with OECD, UK FCDO, and DSIT on AI governance. Researched agent oversight at Oxford....
- Elizabeth Garrett (Person): Headhunter and recruitment lead at the Future of Life Foundation. Former Center for Applied Rationality staff. Led nonprofit workshops on epistemics and AI safety. Prior roles at Aidgrade and Nexts...
- Richard Mallah (Person): Executive Director of CARMA (Center for AI Risk Management & Alignment), incubated by the Future of Life Foundation. Former Principal AI Safety Strategist at FLI since 2014. Focus areas include AI ...
- Kathleen Finlinson (Person): Program manager for the AI for Human Reasoning Fellowship at the Future of Life Foundation. AI safety researcher and strategist. Co-founder of Eleos AI Research (October 2024 - May 2025). Former Re...
Organizations
- Metaculus (Organization): Metaculus is a reputation-based forecasting platform with 1M+ predictions showing AGI probability at 25% by 2027 and 50% by 2031 (down from 50 years away in 2020). Analysis finds good short-term ca... (Quality: 50/100)
- Center for AI Risk Management & Alignment (CARMA) (Organization): AI safety organization incubated by the Future of Life Foundation, led by Richard Mallah (former FLI Principal AI Safety Strategist since 2014). Focus areas include risk assessment, policy strategy...
- Wise Ancestors (Organization): Conservation genomics nonprofit originally funded by FLI, later receiving approximately $800K from the Future of Life Foundation (April 2025). Co-founded by Anthony Aguirre. Operates a platform for...
- QURI (Quantified Uncertainty Research Institute) (Organization): QURI develops Squiggle (probabilistic programming language with native distribution types), SquiggleAI (Claude-powered model generation producing 100-500 line models), Metaforecast (aggregating 2,1... (Quality: 48/100)
Concepts
- EA Shareholder Diversification from Anthropic (Concept): The EA ecosystem faces extreme portfolio concentration risk, with $27-76B in risk-adjusted capital tied to Anthropic stock. This page analyzes diversification strategies across three time horizons: ... (Quality: 60/100)
- Agentic AI (Capability): Analysis of agentic AI capabilities and deployment challenges, documenting industry forecasts (40% of enterprise apps by 2026, $199B market by 2034) alongside implementation difficulties (40%+ proj... (Quality: 68/100)
- Self-Improvement and Recursive Enhancement (Capability): Comprehensive analysis of AI self-improvement, from current AutoML systems (23% training speedups via AlphaEvolve) to theoretical intelligence explosion scenarios, with expert consensus at ~50% prob... (Quality: 69/100)
Risks
- Scheming (Risk): Scheming (strategic AI deception during training) has transitioned from theoretical concern to observed behavior across all major frontier models (o1: 37% alignment faking, Claude: 14% harmful compli... (Quality: 74/100)
- Deceptive Alignment (Risk): Comprehensive analysis of deceptive alignment risk, where AI systems appear aligned during training but pursue different goals when deployed. Expert probability estimates range 5-90%, with key empir... (Quality: 75/100)