Longterm Wiki

Open Philanthropy Request for Proposals: Technical AI Safety Research

web

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Coefficient Giving

This Open Philanthropy RFP is a key funding opportunity document that shaped the direction of technical AI safety research by publicly identifying priority areas; useful context for understanding how philanthropic funding influences the field.

Metadata

Importance: 62/100 · press release · reference

Summary

Open Philanthropy issued a request for proposals seeking technical AI safety research projects, signaling funding priorities and research directions the organization considers most valuable. The RFP outlines areas of interest including interpretability, scalable oversight, and related alignment challenges, aiming to grow the field by supporting researchers and organizations working on these problems.

Key Points

  • Open Philanthropy seeks proposals for technical AI safety research across multiple focus areas including interpretability and scalable oversight.
  • The RFP serves as a field-building mechanism by directing philanthropic funding toward high-priority alignment research gaps.
  • Reflects Open Philanthropy's broader strategy of proactively shaping the AI safety research landscape through targeted grants.
  • Eligibility and submission criteria are provided to help individual researchers and organizations apply for funding.
  • Signals which technical safety problem areas are considered most promising and underfunded by a major AI safety funder.

Cited by 4 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 32 KB
Request for Proposals: Technical AI Safety Research | Coefficient Giving
 A call for AI safety research

 
2025, like 2024, will see the release of the most capable AI system in history. In fact, we may see it happen multiple times, each a few weeks or months apart. This won't require any spectacular breakthroughs — just the same steady progress we've seen for the last few years. No one knows how long this trend will last, but many AI researchers and developers now expect we'll have human-level AI within a decade, and that it will be radically transformative.

 
At Open Philanthropy, we think the possibility of transformative AI is worth taking seriously and planning for right now. In particular, we should prepare for the risk that AI systems could be misaligned — that they might pursue goals that no one gave them and harm people in the process. We think that ML research today can help to clarify this failure mode and reduce its likelihood.

 
Since 2014, Open Philanthropy has put hundreds of millions of dollars toward scientific research. We've funded groundbreaking work on computational protein design, novel methods for malaria eradication, and cutting-edge strategies for pandemic prevention. With transformative AI on the horizon, we see another opportunity for our funding to accelerate highly impactful technical research. In consultation with our technical advisors, we've generated a list of research areas that we think offer high leverage for improving our understanding and control of AI.

 
We expect to spend roughly $40M on this RFP over the next 5 months, and we have funding available to spend substantially more depending on the quality of applications received. We're open to proposals for grants of many sizes and purposes, ranging from rapid funding for API credits all the way to seed funding for new research organizations.

 
Whether you're an expert on one of these research topics or you've barely thought about them, we encourage you to apply. Over the last few years, we've seen many researchers switch into safety research and produce impactful work, and we think there's still a lot of ground to cover.

 
Applications closed on April 15, 2025, at 11:59 PM PDT. However, the submission form will stay open until July 15 to accommodate applicants who received a "revise and resubmit" response to their EOIs. During this three-month grace period, we will also accept late submissions from other applicants but will have a high bar for considering them. We will also be slower than normal to respond to EOIs, since our staff have mostly moved on to other stages of this project.

 
 TAIS RFP Application 

 
 How to read this RFP 

 
 The RFP is organized in

... (truncated, 32 KB total)
Resource ID: 913cb820e5769c0b | Stable ID: sid_23SkxNzjdP