Future of Life Institute 2023 Grants
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Future of Life Institute
This page lists the Future of Life Institute's 2023 institutional grants, showing which AI safety organizations received funding and for what purposes, and offering insight into the philanthropic landscape supporting AI safety research and governance.
Metadata
Summary
This page catalogs FLI's 2023 grant allocations to AI safety organizations including ARC Evals, AI Impacts, BERI/CHAI, Center for AI Safety, and others. Grants range from $22,000 to over $1.4 million and cover technical alignment research, capability evaluations, policy work, and existential risk reduction. It reflects FLI's strategic priorities in funding the AI safety ecosystem.
Key Points
- Alignment Research Center received $1.4M to support ARC Evals, focused on capability and alignment evaluations for advanced ML models.
- Centre for Long-Term Resilience received $769K for general support of UK-based extreme risk resilience work.
- AI Objectives Institute received $500K for the Talk to the City and Moral Mirror projects on sociotechnical AI dynamics.
- Center for Humane Technology received $500K for AI-related policy work and messaging cohesion within the AI x-risk community.
- Grants span technical safety, policy, forecasting, and existential risk reduction, reflecting a broad portfolio approach.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Future of Life Institute | Organization | 46.0 |
Cached Content Preview
This page highlights some of the institutional grants that the Future of Life Institute awarded in 2023. Status: Funds allocated
Grants archive
An archive of all grants provided within this grant program:

Project title: AI Impacts
Amount recommended: $162,000.00
Project Summary
General support. AI Impacts performs research related to the future of AI, aiming to answer decision-relevant questions in the most neglected areas of AI strategy and forecasting. The intended audience includes researchers working on artificial intelligence, philanthropists funding AI-related research, and policy-makers whose decisions may be influenced by their expectations about artificial intelligence.
Project title: AI Objectives Institute
Amount recommended: $500,000.00
Project Summary
Support for the Talk to the City and Moral Mirror projects. AI Objectives Institute (AOI) is a non-profit research lab of leading builders and researchers. AOI brings together a network of volunteer researchers from top AI labs with product builders and experts from psychology, political theory, and economics. This sociotechnical-systems perspective on AI dynamics and post-singularity outcomes aims to improve the odds that human values thrive in a world of rapidly deployed, extremely capable AI systems evolving alongside existing institutions and incentives.
Project title: Alignment Research Center
Amount recommended: $1,401,000.00
Project Summary
Support for the Alignment Research Center (ARC) Evaluations (Evals) team. Evals is a new team at ARC building capability evaluations (and, in the future, alignment evaluations) for advanced ML models. The project's goals are to improve our understanding of what alignment danger will look like, gauge how far we are from dangerous AI, and create metrics that labs can make commitments around.
Project title: Berkeley Existential Risk Initiative
Amount recommended: $481,000.00
Project Summary
Support for the Berkeley Existential Risk Initiative's (BERI) collaboration with The Center for Human-Compatible Artificial Intelligence (CHAI). BERI's mission is to improve human civilization's long-term prospects for survival and flourishing; its main strategy is currently to collaborate with university research groups working to reduce existential risk, providing them with free services and support. CHAI's mission is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research toward provably beneficial systems.
Project title: Center for AI Safety, Inc.
Amount recommended: $22,000.00
Project Summary
General support. The Center for AI Safety (CAIS) exists to ensure the safe development and deployment of AI. AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarka
... (truncated, 10 KB total)