Longterm Wiki

Navigating Transformative AI Fund – Coefficient Giving

web

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Coefficient Giving

This is the landing page for Coefficient Giving's 'Navigating Transformative AI' fund, which finances technical AI safety research, AI governance/policy work, and capacity building to reduce catastrophic risks from advanced AI systems.

Metadata

Importance: 52/100 (homepage)

Summary

Coefficient Giving's Navigating Transformative AI fund supports grants across technical AI safety research, AI governance and policy, and capacity building for researchers and practitioners. The fund has made 480+ grants and is led by program directors covering short timelines projects, AI governance, technical safety, and global catastrophic risk capacity building. It accepts proposals across multiple open funding tracks.

Key Points

  • Fund has made 480+ grants supporting technical AI safety, AI governance/policy, and capacity building initiatives.
  • Open funding opportunities include AI governance RFPs, capacity building projects, and career development/transition funding.
  • Program leads include Claire Zabel (short timelines), Luke Muehlhauser (AI governance), Peter Favaloro (technical safety), and Eli Rose (GCR capacity building).
  • Partners with Good Ventures and seeks additional funding partners for the AI safety space.
  • Motivated by belief that transformative AI could be the most important technological development in history, with both enormous benefits and catastrophic risks.

Cited by 2 pages

Page                           Type          Quality
Coefficient Giving             Organization  55.0
Long-Term Future Fund (LTFF)   Organization  56.0

Cached Content Preview

HTTP 200 · Fetched Apr 11, 2026 · 5 KB
Navigating Transformative AI

 
Though advances in AI could benefit people enormously, we think they also pose serious risks from misuse, accidents, loss of control, and other problems.

480+ grants made

Contents

  • About the Fund
  • Funding Opportunities
  • Research & Updates
  • Featured Grants

About the Fund

Program Leads

  • Claire Zabel (Managing Director, Short Timelines Special Projects)
  • Luke Muehlhauser (Managing Director, AI Governance & Policy)
  • Peter Favaloro (Program Director, Technical AI Safety)
  • Eli Rose (Program Director, Global Catastrophic Risks Capacity Building)

Partners

  • Good Ventures

Interested in providing funding within this space? Reach out to partnerwithus@coefficientgiving.org.

In recent years, we've seen rapid progress in artificial intelligence. There's a strong possibility that AI systems will soon outperform humans in nearly all cognitive domains.

We think AI could be the most important technological development in human history. If handled well, it could accelerate scientific discovery, improve health outcomes, and create unprecedented prosperity. If handled poorly, it could lead to catastrophic consequences: many experts think that risks from AI-related misuse, loss of control, or drastic societal change could endanger human civilization.

 To reduce the risk of global catastrophe and help society prepare for major advances in AI, we support: 

 
  • Technical AI safety research aimed at making advanced AI systems more trustworthy, robust, controllable, and aligned
  • AI governance and policy work to develop frameworks for safe, secure, and responsibly managed AI development
  • Capacity building to grow and strengthen the field of researchers and practitioners working on these challenges
  • New projects that we expect to be particularly impactful if timelines to transformative AI are short

 Funding Opportunities

 
 
Request for Proposals: AI Governance

 
This program provides support for projects that aim to improve the odds of humanity successfully navigating the risks of transformative AI. We are primarily seeking expressions of interest in the following areas: technical AI governance, policy development, strategic analysis and threat modeling, frontier company policy, international AI governance, and law.

Learn more and apply
 

 
 
Request for Proposals: Funding for Work That Builds Capacity To Address Risks From Transformative AI
 
 This program funds projects that build society’s capacity to navigate the risks of transformative AI. We’re especially interested in funding projects that help new talent pivot into the field, support existing talent (e.g. v

... (truncated, 5 KB total)
Resource ID: 33c1d2aa2a92c23d