Redwood Research: over $9.4 million from Open Philanthropy
Credibility Rating
4/5 (High)
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Coefficient Giving
This grant page documents Open Philanthropy's substantial funding of Redwood Research, useful for understanding the organizational and financial landscape of the AI safety field.
Metadata
Importance: 42/100
Tags: press release, reference
Summary
Open Philanthropy awarded over $9.4 million in general support funding to Redwood Research, an AI safety organization focused on applied alignment research. This grant reflects Open Philanthropy's commitment to supporting technical AI safety work, particularly Redwood's efforts in areas like adversarial training, interpretability, and reducing risks from advanced AI systems.
Key Points
- Open Philanthropy provided $9.4M+ in general support to Redwood Research, one of the major AI safety organizations.
- Redwood Research focuses on applied technical safety research, including adversarial robustness and interpretability.
- General support grants indicate high donor confidence in an organization's mission and operational capacity.
- This funding helps sustain a team dedicated to empirical alignment research and practical safety interventions.
- The grant reflects broader philanthropic investment in organizations working on near-term and long-term AI risk reduction.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Redwood Research | Organization | 78.0 |
| AI Alignment Research Agendas | Crux | 69.0 |
Cached Content Preview
HTTP 200 | Fetched Apr 9, 2026 | 9 KB
Navigating Transformative AI | Coefficient Giving
Navigating Transformative AI
Though advances in AI could benefit people enormously, we think they also pose serious risks from misuse, accidents, loss of control, and other problems.
480+ grants made
About the Fund
Program Leads
Claire Zabel - Managing Director, Short Timelines Special Projects
Luke Muehlhauser - Managing Director, AI Governance & Policy
Peter Favaloro - Program Director, Technical AI Safety
Eli Rose - Program Director, Global Catastrophic Risks Capacity Building
Partners
Good Ventures
Interested in providing funding within this space? Reach out to partnerwithus@coefficientgiving.org.
In recent years, we’ve seen rapid progress in artificial intelligence. There’s a strong possibility that AI systems will soon outperform humans in nearly all cognitive domains.
We think AI could be the most important technological development in human history. If handled well, it could accelerate scientific discovery, improve health outcomes, and create unprecedented prosperity. If handled poorly, it could lead to catastrophic consequences: many experts think that risks from AI-related misuse, loss of control, or drastic societal change could endanger human civilization.
To reduce the risk of global catastrophe and help society prepare for major advances in AI, we support:
- Technical AI safety research aimed at making advanced AI systems more trustworthy, robust, controllable, and aligned
- AI governance and policy work to develop frameworks for safe, secure, and responsibly managed AI development
- Capacity building to grow and strengthen the field of researchers and practitioners working on these challenges
- New projects that we expect to be particularly impactful if timelines to transformative AI are short
Funding Opportunities
Request for Proposals
AI Governance
This program provides support for projects that aim to improve the odds of humanity successfully navigating the risks of transformative AI. We are primarily seeking expressions of interest in the following areas: technical AI governance, policy development, strategic analysis and threat modeling, frontier company policy, international AI governance, and law.
Request for Proposals
Funding for Work That Builds Capacity To Address Risks From Transformative AI
... (truncated, 9 KB total)
Resource ID: 8c79e00bab007a63 | Stable ID: sid_rWf3DnEugW