Longterm Wiki

Request for Proposals: Technical AI Safety Research - Coefficient Giving

web

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Coefficient Giving

This Open Philanthropy RFP was a significant funding call that helped shape technical AI safety research: by incentivizing alignment work grounded in empirical deep learning, it influenced which research agendas received resources during a formative period for the field.

Metadata

Importance: 55/100 · organizational report · primary source

Summary

Open Philanthropy issued a request for proposals seeking technical AI safety research projects that directly engage with modern deep learning systems. The RFP aimed to fund alignment research grounded in empirical work with neural networks, rather than purely theoretical approaches, reflecting a strategic shift toward more practically-oriented safety research.

Key Points

  • Open Philanthropy sought proposals for AI alignment research specifically targeting deep learning systems, signaling prioritization of empirical over purely theoretical safety work.
  • The RFP represented a major funding opportunity for technical AI safety researchers working on problems relevant to contemporary ML systems.
  • Reflects Open Philanthropy's broader strategy of directing philanthropic capital toward near-term tractable alignment problems with modern architectures.
  • Encouraged diverse research directions including interpretability, robustness, and alignment techniques applicable to neural networks.
  • Signals institutional recognition that safety research must keep pace with rapid advances in deep learning capabilities.

Cited by 1 page

Page                             Type      Quality
Model Organisms of Misalignment  Analysis  65.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 7 KB
Apply for Funding | Coefficient Giving
Apply for Funding
 We find most of our giving opportunities through proactive research. However, the following funds have open applications: 

 Abundance & Growth Fund 

 Living literature reviews

 Supports scholars to build and maintain “living literature reviews” — continuously updated collections of articles that synthesize research on a single topic. We’re especially interested in topics related to policymaking.

 Learn more and apply 

 Biosecurity & Pandemic Preparedness 

 Request for proposals: Biosecurity

 Supports work aimed at preventing engineered biological threats from emerging and improving our response to these threats should prevention fail.

 Learn more and apply 

 Effective Giving & Careers 

 Request for proposals: Effective Careers

 Supports organizations and programs providing mentorship, advice, and opportunities to help people pursue highly impactful careers.

Submissions are due by Apr 20, 2026.

 Learn more and apply 

 Farm Animal Welfare Fund

 Request for proposals: Humane Fish Slaughter Research/Prototypes

 Supports work to develop technologies and prototypes that materially improve the welfare of fish at capture and slaughter.

Submissions are due by July 1, 2026.

 Learn more and apply 

 Global Catastrophic Risks Opportunities Fund 

 Career development and transition funding

 Supports people at any career stage who want to pursue careers focused on reducing global catastrophic risks. Many different activities are covered, including graduate study, professional training, and self-study.

 Learn more and apply 

 

 Funding for programs and events

 Supports programs and events related to effective altruism, global catastrophic risks, biosecurity, and other areas.

 Learn more and apply 

 Navigating Transformative AI Fund

 Request for proposals: AI Governance

 Supports work across technical AI governance, policy development, frontier company policy, international AI governance, law, and strategic analysis and threat modeling.

 Learn more and apply 

 

 Funding for capacity-building on risks from transformative AI

 Supports work focused on addressing risks from transformative AI through “capacity-building” (e.g. supporting professional networks, helping new people find work in the field, or contributing to public discourse).

 Learn more and apply 

 Science and Global Health R&D Fund 

 Where to submit your proposal

 Most of the time, we proactively reach out to potential grantees to shape funding proposals together, but we also read unsolicited proposals. If you want to submit an idea to us, you can send an email to science@coefficientgiving.org  with a short description of an existing proposal you have submitted to another funder, or a new 1-2

... (truncated, 7 KB total)
Resource ID: 885a9fea564900fe | Stable ID: sid_0EDqXrk40f