Back
Manifund Regranting Program
manifund.org · manifund.org/about/regranting
Manifund's regranting model is relevant to AI safety funding infrastructure, offering an alternative funding pathway for researchers and projects outside traditional grant cycles.
Metadata
Importance: 42/100 · homepage reference
Summary
Manifund's regranting program allows vetted individuals (regrantors) to distribute funding to AI safety and other cause area projects on behalf of donors. Regrantors have discretion over how to allocate their grant budgets, enabling faster and more flexible funding decisions than traditional grant processes.
Key Points
- Regrantors receive a budget from donors and have discretion to fund projects they find promising without lengthy approval processes
- The model aims to leverage domain experts' knowledge to identify high-impact opportunities that centralized funders might miss
- Regrantors are publicly accountable, with their funding decisions and rationales visible on the platform
- The program is particularly relevant to AI safety funding, enabling faster deployment of capital to emerging research
- Manifund acts as a fiscal sponsor, handling legal and administrative overhead for grantees
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Coefficient Giving | Organization | 55.0 |
| Manifund | Organization | 50.0 |
1 FactBase fact citing this source
Cached Content Preview
HTTP 200 · Fetched Apr 7, 2026 · 8 KB
The Wayback Machine - http://web.archive.org/web/20260116153827/https://manifund.org/about/regranting
Manifund
AI Safety Regranting
We partner with regrantors: experts in the field of AI safety, each given an independent budget. Regrantors recommend grants based on their personal expertise; Manifund reviews these recommendations and distributes the funds.
Donate to AI Safety Regranting
Our regrantors
$350K
Neel Nanda
Lead of mech interp team at Google DeepMind
$200K
Joel Becker
Member of Technical Staff at METR; CEO at Qally's
$150K
Gavin Leech
Cofounder of Arb Research; fellow at Cosmos, LCFI, and Foresight.
$125K
Richard Ngo
AI safety and governance researcher
$125K
Ethan Josean Perez
I lead the adversarial robustness team at Anthropic, where I'm hoping to reduce existential risks from AI systems. I helped to develop Retrieval-Augmented Generation (RAG), a widely used approach for augmenting large language models with other sources of information. I also helped to demonstrate that state-of-the-art AI safety training techniques do not ensure safety against sleeper agents. I received a best paper award at ICML 2024 for my work showing that debating with more persuasive LLMs leads to more truthful answers. I received my PhD from NYU under the supervision of Kyunghyun Cho and Douwe Kiela, with funding from NSF and Open Philanthropy. Previously, I've spent time at DeepMind, Facebook AI Research, the Montreal Institute for Learning Algorithms, and Google. I was also named one of Forbes's 30 Under 30 in AI.
$100K
Marcus Abramovitch
Effective altruist, earning to give by running a crypto fund. Very concerned with animal welfare, longtermism, and their intersection. Ex-poker player and chemistry PhD student.
$100K
Ryan Kidd
Co-Executive Director at MATS
$100K
Lisa Thiergart
Director at SL5 Task Force; previously Research Lead on MIRI's Technical Governance Team
$100K
Lauren Mangla
Hi, I’m Lauren! I run AI safety programs at Constellation. All grant decisions are my own and independent of my role at Constellation
$100K
Thomas Larsen
I work at the AI Futures Project, most recently on AI 2027.
$100K
Tamay Besiroglu
I co-founded Mechanize and Epoch AI
$100K
Marius Hobbhahn
I'm the CEO of Apollo Research, an evaluations research organization focused on frontier AI safety risks like scheming. Previously, I did a PhD in ML and worked as a Fellow at Epoch.
$100K
Alexandra Bates
Program Manager @ Constellation
Why regranting?
Hidden opportunities: Regrantors can tap into their personal networks, giving to places that dono
... (truncated, 8 KB total)
Resource ID: kb-0c3cd3534fa36003 | Stable ID: sid_2pz7Yi9nle