Longterm Wiki

Open Philanthropy: Potential Risks from Advanced Artificial Intelligence


Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Coefficient Giving

Open Philanthropy is a major philanthropic funder whose AI safety grantmaking strategy has significantly shaped the field; this page documents their rationale and scope for those seeking context on funding priorities and institutional perspectives.

Metadata

Importance: 62/100 · organizational report · homepage

Summary

Open Philanthropy's focus area page on potential risks from advanced AI outlines their strategic grantmaking approach to reducing catastrophic and existential risks from transformative AI systems. It explains their reasoning for prioritizing AI safety research, policy work, and field-building as among the most important philanthropic opportunities of our time.

Key Points

  • Open Philanthropy treats potential risks from advanced AI as one of its highest-priority cause areas due to the scale and severity of potential harms.
  • Funding is directed toward technical AI safety research, governance, policy, and efforts to build a robust AI safety field.
  • The page reflects a long-termist framework, emphasizing risks from AI systems that could be transformative within decades.
  • Open Philanthropy has become one of the largest funders of AI safety work, shaping research agendas at major labs and universities.
  • The focus area encompasses both near-term and speculative long-term risks, including misalignment and misuse scenarios.

Cited by 1 page

Page | Type | Quality
AI Risk Portfolio Analysis | Analysis | 64.0

Cached Content Preview

HTTP 200 · Fetched Apr 10, 2026 · 9 KB
Navigating Transformative AI | Coefficient Giving
 Navigating Transformative AI

 
Though advances in AI could benefit people enormously, we think they also pose serious risks from misuse, accidents, loss of control, and other problems.

480+ grants made
Contents

  • About the Fund
  • Funding Opportunities
  • Research & Updates
  • Featured Grants
 About the Fund

 
 
 

Program Leads

  • Claire Zabel, Managing Director, Short Timelines Special Projects
  • Luke Muehlhauser, Managing Director, AI Governance & Policy
  • Peter Favaloro, Program Director, Technical AI Safety
  • Eli Rose, Program Director, Global Catastrophic Risks Capacity Building

Partners

  • Good Ventures

Interested in providing funding within this space? Reach out to partnerwithus@coefficientgiving.org.
 
 In recent years, we’ve seen rapid progress in artificial intelligence. There’s a strong possibility that AI systems will soon outperform humans in nearly all cognitive domains. 

We think AI could be the most important technological development in human history. If handled well, it could accelerate scientific discovery, improve health outcomes, and create unprecedented prosperity. If handled poorly, it could lead to catastrophic consequences: many experts think that risks from AI-related misuse, loss of control, or drastic societal change could endanger human civilization.

 To reduce the risk of global catastrophe and help society prepare for major advances in AI, we support: 

 
  • Technical AI safety research aimed at making advanced AI systems more trustworthy, robust, controllable, and aligned
  • AI governance and policy work to develop frameworks for safe, secure, and responsibly managed AI development
  • Capacity building to grow and strengthen the field of researchers and practitioners working on these challenges
  • New projects that we expect to be particularly impactful if timelines to transformative AI are short
 Funding Opportunities

 
 
Request for Proposals: AI Governance
 This program provides support for projects that aim to improve the odds of humanity successfully navigating the risks of transformative AI. We are primarily seeking expressions of interest in the following areas: technical AI governance, policy development, strategic analysis and threat modeling, frontier company policy, international AI governance, and law.

 
 

 
Learn more and apply

Request for Proposals: Funding for Work That Builds Capacity To Address Risks From Transformative AI
 
 
 
 

 
 
 

 
 T

... (truncated, 9 KB total)
Resource ID: f8f6f3ee55c2babe | Stable ID: sid_BXlD6qTczC