Longterm Wiki

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Coefficient Giving

Published by Coefficient Giving, a philanthropic advisory organization; useful for funders and researchers interested in understanding the funding landscape and gaps in AI safety and security.

Metadata

Importance: 52/100 · organizational report · analysis

Summary

This research piece from Coefficient Giving argues that AI safety and security research is significantly underfunded relative to the risks involved, and makes the case for philanthropists and funders to increase financial support for the field. It examines funding gaps, highlights promising organizations and research areas, and encourages diversification of the funder base beyond a few major donors.

Key Points

  • The AI safety and security field is critically underfunded compared to the scale of potential risks from advanced AI systems.
  • Current funding is concentrated among a small number of major philanthropists, creating fragility and gaps in the ecosystem.
  • Diversifying the funder base would improve field resilience and allow more research directions to be explored.
  • There are numerous high-impact organizations and programs in AI safety that could absorb additional philanthropic capital effectively.
  • Coefficient Giving positions this as a high-leverage opportunity for donors seeking to reduce catastrophic and existential risks.

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 18 KB
AI Safety and Security Need More Funders | Coefficient Giving
 October 2, 2025 
 AI Safety and Security Need More Funders

 Editor’s note: This article was published under our former name, Open Philanthropy. 

 
Introduction

Leading AI systems outperform human experts in virology tasks relevant to creating novel pathogens, and they show signs of deceptive behavior. Many experts predict that these systems will become smarter than humans within the next decade. [1] But efforts to mitigate the risks remain profoundly underfunded. In this post, we argue that now is a uniquely high-impact moment for new philanthropic funders to enter the field of AI safety and security.

[1] Leaders of OpenAI, Anthropic, and DeepMind have suggested that AGI will arrive in the next 2-5 years. A 2024 survey of thousands of AI researchers found that 10% thought machines would outperform humans in every possible task by 2027. As of this writing, the Metaculus forecast for AGI is June 2033.
We cover:

  • Why more philanthropic funders are needed now: Additional funders can help build a more effective coalition behind AI safety and security; back areas and organizations that Good Ventures (our largest funding partner) is not well-positioned to support; and increase the total amount of funding in this space, which we think is still too low. Because of these factors, we are typically able to recommend giving opportunities to external funders that are 2-5x as cost-effective as Good Ventures’ marginal AI safety funding. (More)
  • Examples of previous philanthropic wins in AI safety and security: Our experience over 10 years of grantmaking in this space shows that well-targeted philanthropy can meaningfully reduce worst-case risks from advanced AI. We discuss several examples across our three investment pillars: visibility, safeguards, and capacity. (More)
  • How other funders can get involved: We help new funders reduce the time required to find high-impact philanthropic opportunities in the field by developing custom portfolios of grant recommendations that fit each donor’s interests and preferences. This includes connecting them with other leading experts and advisors, offering support to evaluate giving opportunities, and sourcing co-funding opportunities. (More)

This is the third in a three-part series on our approach to safety, progress, and AI. The first covered why we fund scientific and technological progress while also funding work to reduce risks from emerging technologies like AI. The second described our grantmaking approach for AI safety and security.

Why

... (truncated, 18 KB total)
Resource ID: 0b2d39c371e3abaa | Stable ID: sid_TsRGd6zOWF