Coefficient Giving
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Scale | Dominant | $4B+ total grants; ≈$46M AI safety in 2023 |
| Structure | 13 cause-specific funds | Multi-donor pooled funds since Nov 2025 rebrand |
| AI Safety Focus | Leading funder | $336M+ to AI safety since 2014; ≈60% of external AI safety funding |
| Application Model | Rolling RFPs + regranting | 300-word EOI, 2-week response; supports platforms like Manifund |
| Transparency | High | Public grants database, annual progress reports |
| Key Funders | Good Ventures (primary) | Dustin Moskovitz & Cari Tuna; expanding to multi-donor model |
Organization Details
| Attribute | Details |
|---|---|
| Full Name | Coefficient Giving (formerly Open Philanthropy) |
| Type | Philanthropic Advising and Funding Organization |
| Legal Structure | LLC (independent since 2017) |
| Founded | 2014 (as GiveWell outgrowth); 2017 (independent); 2025 (rebranded) |
| Total Grants | $4+ billion (as of June 2025) |
| AI Safety Grants | $336+ million (≈12% of total) |
| 2024 AI Safety Spend | ≈$50 million committed |
| Leadership | Alexander Berger (CEO), Holden Karnofsky (Board) |
| Location | San Francisco, California |
| Website | coefficientgiving.org |
| Grants Database | coefficientgiving.org/grants |
Overview
Coefficient Giving is a major philanthropic organization that has directed over $4 billion in grants since 2014 across global health, AI safety, pandemic preparedness, farm animal welfare, and other cause areas. In November 2025, the organization rebranded from Open Philanthropy to Coefficient Giving, signaling an expansion from serving primarily one anchor donor (Good Ventures, the foundation of Dustin Moskovitz and Cari Tuna) to operating 13 cause-specific funds open to multiple philanthropists. The name “Coefficient” reflects the organization’s goal of multiplying impact through research, grantmaking, and partnerships, with “co” nodding to collaboration and “efficient” reflecting its unusual focus on cost-effectiveness.
Coefficient Giving is widely considered the largest funder of AI safety work globally. Since 2014, approximately $336 million (12% of total grants) has gone to AI safety research and governance, with roughly $46 million deployed in 2023 alone—making it the dominant external funder in a field where most safety research happens inside frontier AI labs. The organization’s Navigating Transformative AI Fund supports technical AI safety research, AI governance and policy work, and capacity building, with a $40 million Technical AI Safety RFP launched in 2025 covering 21 research areas.
The organization distinguishes itself through its strategic cause selection methodology—identifying problems that are large, tractable, and neglected relative to their size. This approach, combined with a willingness to fund speculative research and support multiple funding mechanisms (direct grants, regranting programs, pooled funds), has made Coefficient Giving central to the effective altruism funding ecosystem. However, critics have noted concerns about funding concentration, the slow pace of spending relative to the scale of AI risks, and heavy focus on evaluations over alignment research in recent technical AI safety grants.
History and Evolution
Origins (2011-2017)
Coefficient Giving traces its origins to 2011, when GiveWell, the charity evaluator founded by Holden Karnofsky and Elie Hassenfeld, began advising Good Ventures on how to deploy Dustin Moskovitz’s philanthropic capital effectively. Good Ventures was established by Moskovitz (Facebook co-founder, net worth ≈$12 billion) and Cari Tuna in 2011. By 2014, this advising relationship had formalized into “Open Philanthropy,” a distinct project within GiveWell focused on identifying high-impact giving opportunities across a broader range of cause areas than GiveWell’s traditional global health work.
In 2017, Open Philanthropy spun off from GiveWell as an independent LLC, enabling it to pursue its own strategic priorities while GiveWell continued focusing on evidence-backed global health interventions. The separation reflected diverging methodologies: GiveWell prioritizes robust evidence of effectiveness, while Open Philanthropy embraced “hits-based giving”—funding speculative, high-variance projects where a few major successes could justify many failures.
Growth and AI Safety Focus (2015-2024)
Open Philanthropy began supporting AI safety work in 2015, when the field was nascent and institutional support was minimal. Early grants helped establish foundational organizations including the Machine Intelligence Research Institute (MIRI), the Center for Human-Compatible AI at UC Berkeley, and the Future of Humanity Institute at Oxford. By 2023, AI safety had become Open Philanthropy’s largest longtermist cause area, reflecting growing concern about advanced AI risks among the leadership team.
| Year | AI Safety Milestone |
|---|---|
| 2015 | First AI safety grants; field had ≈10 full-time researchers |
| 2017 | Independent organization; Holden Karnofsky publishes AI concerns |
| 2019 | AI safety spending exceeds $20M annually |
| 2022 | $150M Regranting Challenge launched (not AI-specific) |
| 2023 | ≈$46M AI safety spending; largest funder in the field |
| 2024 | ≈$50M committed; 68% to evaluations/benchmarking |
| 2025 | Rebrand to Coefficient Giving; $40M Technical AI Safety RFP |
The November 2025 Rebrand
On November 18, 2025, Open Philanthropy announced its rebranding to Coefficient Giving. The change reflected several strategic shifts:
Multi-Donor Expansion: The organization moved from primarily serving Good Ventures to operating pooled funds open to any philanthropist. In 2024, Coefficient directed over $100 million from donors besides Good Ventures; by 2025, non-Good Ventures funding had more than doubled.
Brand Clarity: The “Open Philanthropy” name created confusion: journalists mistook the organization for OpenAI, and potential grantees confused it with the Open Society Foundations. “Coefficient” provided a distinctive identity.
Structural Reorganization: The organization restructured from program areas to 13 distinct funds, each with dedicated leadership and transparent goals, allowing donors to support specific causes at scale.
Organizational Structure
The 13 Funds Model
Since the November 2025 rebrand, Coefficient Giving operates through 13 cause-specific funds, each pooling money from multiple donors:
| Fund | Focus | Key Activities |
|---|---|---|
| Navigating Transformative AI | AI safety & governance | Technical research, policy, capacity building |
| Biosecurity & Pandemic Preparedness | Catastrophic bio risks | Research, policy, infrastructure |
| Global Catastrophic Risks Opportunities | Cross-cutting x-risk work | Ecosystem support, foundational work |
| Science and Global Health R&D | Neglected disease research | TB, malaria, high-risk transformational science |
| Global Health Policy | Policy for health impact | Lead exposure, air pollution |
| Global Aid Policy | Development effectiveness | Evidence-based aid policy |
| Farm Animal Welfare | Factory farming reform | Welfare reforms, alternative proteins |
| Effective Giving and Careers | EA movement building | Giving What We Can, 80,000 Hours |
| Abundance & Growth | Economic prosperity | $120M launched 2025 for scientific progress |
| Criminal Justice Reform | US criminal justice | Bail reform, prosecutorial accountability |
| Land Use Reform | Housing and development | YIMBY policy, zoning reform |
| Immigration Policy | Immigration reform | Policy research and advocacy |
| Other Global Health | Remaining health causes | Malaria, deworming, direct cash transfers |
Navigating Transformative AI Fund
The Navigating Transformative AI Fund is Coefficient’s primary vehicle for AI-related grantmaking, supporting:
Technical AI Safety Research: Work aimed at making advanced AI systems more trustworthy, robust, controllable, and aligned. This includes interpretability research, robustness to adversarial inputs, scalable oversight methods, and understanding emergent capabilities.
AI Governance and Policy: Frameworks for safe, secure, and responsibly managed AI development, including export controls, compute governance, international coordination, and corporate governance mechanisms.
Capacity Building: Growing and strengthening the field of researchers and practitioners working on AI challenges, including training programs, career development, and institutional infrastructure.
Short-Timeline Projects: New projects expected to be particularly impactful if timelines to transformative AI are short, reflecting Coefficient’s view that advanced AI could emerge within the next 5-15 years.
Regrantor Model
| Component | Description |
|---|---|
| Selection | OP identifies trusted individuals with relevant expertise |
| Budget | Each regrantor receives $200K - $2M to distribute |
| Autonomy | Regrantors make independent decisions within guidelines |
| Reporting | Regrantors document grants, OP maintains oversight |
| Renewal | Strong performers may receive additional budgets |
Regrantor Criteria
| Criterion | Description |
|---|---|
| Domain Expertise | Deep knowledge in cause area |
| Community Connections | Know who does good work |
| Judgment | Track record of good decisions |
| Capacity | Time to evaluate and make grants |
| Values Alignment | Share EA/longtermist priorities |
AI Safety Grantmaking
Major AI Safety Grantees (2024)
Coefficient’s largest 2024 AI safety grants reflect priorities across evaluations, interpretability, and theoretical alignment work:
| Grantee | Amount | Focus | Notes |
|---|---|---|---|
| Center for AI Safety | $8.5M | Field building, research | Training programs, compute grants, advocacy |
| Redwood Research | $6.2M | Alignment research | Interpretability, control research; $21M+ total from OP |
| MIRI | $4.1M | Theoretical alignment | Agent foundations, deceptive alignment |
| Epoch AI | ≈$3M | AI forecasting | Compute trends, capability timelines |
| METR (formerly ARC Evals) | ≈$3M | Capability evaluations | Model evaluations used by labs and governments |
| AI Safety Camp | ≈$500K | Talent pipeline | Intensive research programs |
| Various Individuals | ≈$10M | Researchers, fellowships | PhDs, postdocs, independent researchers |
2024 Technical AI Safety Funding Breakdown
An analysis of Open Philanthropy’s Technical AI Safety funding revealed the following distribution of the $28M recorded in its grants database:
| Research Area | Percentage | Approx. Amount | Assessment |
|---|---|---|---|
| Evaluations/Benchmarking | 68% | $19M | Primary focus; critics note AI Safety Institutes already well-resourced |
| Interpretability | ≈10% | ≈$3M | Mechanistic interpretability, circuit analysis |
| Robustness | ≈5% | ≈$1.5M | Adversarial robustness, red-teaming |
| Value Alignment | ≈5% | ≈$1.5M | RLHF alternatives, preference learning |
| Field Building | ≈5% | ≈$1.5M | Training programs, community |
| Forecasting | ≈3% | ≈$1M | Timelines, capabilities |
| Other | ≈4% | ≈$1M | Governance research, miscellaneous |
Note: The $28M figure underestimates total 2024 spending, as some approved grants had not yet been posted to the database at the time of analysis. Coefficient acknowledged spending “roughly $50 million” on technical AI safety in 2024.
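As a quick sanity check, the approximate dollar amounts in the table can be reconstructed from the stated percentages of the ≈$28M recorded in the database. The following minimal Python sketch uses the rounded figures quoted above, not official Coefficient data:

```python
# Rough reconstruction of the 2024 technical AI safety breakdown from the
# stated shares of the ~$28M recorded in the grants database. Shares and
# totals are the approximations quoted above, not official Coefficient figures.
recorded_total_usd = 28_000_000
acknowledged_total_usd = 50_000_000  # "roughly $50 million" per Coefficient

shares = {
    "Evaluations/Benchmarking": 0.68,
    "Interpretability": 0.10,
    "Robustness": 0.05,
    "Value Alignment": 0.05,
    "Field Building": 0.05,
    "Forecasting": 0.03,
    "Other": 0.04,
}

assert abs(sum(shares.values()) - 1.0) < 1e-9  # table percentages sum to 100%

for area, share in shares.items():
    print(f"{area:26s} ~${share * recorded_total_usd / 1e6:4.1f}M")

# Difference between acknowledged 2024 spending and what the database recorded
# at the time of analysis (i.e., grants approved but not yet posted).
print(f"Not yet posted: ~${(acknowledged_total_usd - recorded_total_usd) / 1e6:.0f}M")
```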
Historical Major AI Safety Grants
| Grantee | Total (All Years) | Period | Notable Impact |
|---|---|---|---|
| MIRI | $14M+ | 2014-2024 | Agent foundations, embedded agency |
| Redwood Research | $21M+ | 2021-2024 | Interpretability methods, control research |
| Center for AI Safety | $15M+ | 2022-2024 | Compute cluster, training programs |
| Future of Humanity Institute | $10M+ | 2015-2024 | Strategic analysis (closed 2024) |
| Center for Human-Compatible AI | $8M+ | 2016-2024 | Stuart Russell’s CHAI lab |
| Anthropic | $0 directly | N/A | VC-funded; OP staff invested personally |
| Long-Term Future Fund | $3.15M | 2019-2024 | Regranting to LTFF for distribution |
2025 Technical AI Safety RFP
In early 2025, Coefficient launched a $40 million Request for Proposals across 21 research areas, with a stated willingness to fund substantially more depending on application quality. Key features:
Priority Research Areas (starred items are especially prioritized):
| Category | Research Areas |
|---|---|
| Alignment Foundations | Alternatives to adversarial training*, alignment faking*, scalable oversight* |
| Interpretability | Mechanistic interpretability*, representation engineering, probing |
| Evaluation | Dangerous capability evaluations*, propensity evaluations*, automated red-teaming |
| Robustness | Adversarial robustness, distribution shift, specification gaming |
| Governance-Adjacent | AI governance research, responsible scaling policies |
Grant Characteristics:
| Aspect | Details |
|---|---|
| Size Range | API credits ($1-10K) to seed funding for new orgs ($1M+) |
| Application | 300-word expression of interest (EOI) |
| Response Time | Within 2 weeks of EOI submission |
| Decision Timeline | 4-8 weeks for full proposals |
| Eligibility | Academic researchers, nonprofits, independent researchers, new orgs |
Regranting Ecosystem
Coefficient Giving supports multiple regranting platforms and mechanisms to achieve faster, more distributed funding decisions. This represents a deliberate strategy to complement slower direct grantmaking with nimble, expert-driven allocation.
Funding Flow Through Regranting
Long-Term Future Fund (LTFF)
The Long-Term Future Fund is a committee-based grantmaking fund that receives significant support from Coefficient. About half of LTFF funding has historically come from Open Philanthropy.
| Aspect | Details |
|---|---|
| Annual Volume | ≈$6.7M (2023) |
| AI Safety Portion | ≈$4.3M (≈65% of grants) |
| Grant Count | ≈200 grants per year |
| Median Grant | ≈$15-30K |
| Decision Model | Committee of fund managers |
| Transparency | High (public grant reports) |
LTFF grants tend toward smaller, faster decisions than direct Coefficient grants, serving researchers and projects that may not yet warrant Coefficient’s full evaluation process.
Manifund AI Safety Regranting
Manifund operates a distinct regranting model where individual experts receive budgets to make independent funding decisions. For 2025, Manifund raised $2.25 million and announced its first 10 regrantors.
Named 2025 Regrantors:
| Regrantor | Budget | Background | Focus |
|---|---|---|---|
| Evan Hubinger | $450K | Anthropic AGI Safety Researcher, former LTFF manager | Technical AI safety |
| Ryan Kidd | ≈$100K+ | Co-director of SERI MATS | Emerging talent |
| Marius Hobbhahn | ≈$100K+ | CEO of Apollo Research | Evaluations, scheming |
| Lisa Thiergart | ≈$100K+ | Director at SL5 Task Force, formerly at MIRI | Governance |
| Gavin Leech | ≈$100K+ | Cofounder Arb Research | Research reviews |
| Dan Hendrycks | ≈$100K+ | Director of CAIS | Safety research |
| Adam Gleave | ≈$100K+ | CEO of FAR AI | Adversarial robustness |
Manifund Regranting Characteristics:
| Feature | Details |
|---|---|
| Speed | Grant to bank account in under 1 week |
| Typical Grant Size | $5K-$50K |
| Decision Authority | Solo regrantor decisions |
| Oversight | Manifund reviews but doesn’t approve |
| Risk Tolerance | High (encourages speculative grants) |
Notable Manifund Grants:
| Project | Amount | Regrantors | Impact |
|---|---|---|---|
| Timaeus (DevInterp) | $143,200 | Evan Hubinger, Rachel Weinberg, Marcus Abramovitch, Ryan Kidd | First funding; accelerated research by months |
| ChinaTalk | $37,000 | Joel Becker, Evan Hubinger | Coverage of China/AI, including DeepSeek |
| Shallow Review 2024 | $9,000 | Neel Nanda, Ryan Kidd | Prompted a further $5K from Open Philanthropy |
Survival and Flourishing Fund (SFF)
The Survival and Flourishing Fund, funded primarily by Jaan Tallinn (Skype co-founder), uses a unique “S-process” algorithm for grant allocation. While Coefficient and SFF are independent, they share many grantees and strategic priorities.
| Aspect | Coefficient | SFF |
|---|---|---|
| 2024 Volume | ≈$650M total | ≈$24M |
| AI Safety % | ≈12% | ≈86% ($20M) |
| Decision Model | Staff + regrantors | S-process algorithm |
| Speed | Rolling | Twice yearly rounds |
| Overlap | High | High |
How to Apply for Funding
Direct Application to Coefficient
The most straightforward path for substantial funding requests:
| Step | Details | Timeline |
|---|---|---|
| 1. Check RFPs | Review active Requests for Proposals | Ongoing |
| 2. Submit EOI | 300-word expression of interest describing project | N/A |
| 3. Initial Response | Coefficient responds with interest level | 2 weeks |
| 4. Full Proposal | If invited, submit detailed proposal with budget | 2-4 weeks to prepare |
| 5. Due Diligence | Coefficient evaluates organization and proposal | 4-8 weeks |
| 6. Decision | Grant approval or rejection | Total: 2-4 months |
Tips for Applicants (from Coefficient’s guidance):
The bar is intentionally low for submitting expressions of interest. Key failure modes to avoid include not demonstrating understanding of prior work (read papers linked in relevant RFP sections) and not demonstrating that your team has prior experience with ML projects. Even uncertain proposals are worth submitting as the RFP is partly an experiment to understand funding demand.
Via Regranting Platforms
Faster and more accessible for smaller grants:
| Platform | Best For | How to Apply |
|---|---|---|
| Manifund | $5-50K projects, emerging researchers | Create project on manifund.org, contact regrantors directly |
| LTFF | $10-100K, established track record | Apply via EA Funds |
| SFF | $100K+, established organizations | Apply during S-process rounds |
Finding Regrantors
Many regrantors are reachable through:
- Direct outreach: Email or social media (many are publicly active on Twitter/X, LessWrong)
- EA communities: EA Forum, Alignment Forum, local EA groups
- Professional networks: AI safety conferences (NeurIPS safety track, ICML), SERI MATS alumni
- Manifund platform: Create project and regrantors may proactively reach out
Comparison with Other AI Safety Funders
| Aspect | Coefficient Giving | LTFF | SFF | Manifund |
|---|---|---|---|---|
| 2024 AI Safety Volume | ≈$50M | ≈$4.3M | ≈$20M | ≈$2M |
| Primary Funding Source | Good Ventures ($12B+) | Pool of donors | Jaan Tallinn | Donors |
| Decision Model | Staff + regrantors | Committee | S-process algorithm | Individual regrantors |
| Typical Grant Size | $100K-$5M | $15-100K | $100K-$2M | $5-50K |
| Speed (EOI to decision) | 2-4 months | 1-3 months | 6 months (rounds) | Under 2 weeks |
| Transparency | Medium (public database) | High (detailed reports) | High (S-process public) | Very high (live on platform) |
| Risk Tolerance | Medium | Medium-High | Medium | High |
| Best For | Major grants, established orgs | Growing researchers | Established orgs | Early-stage, speculative |
Funding Gap Analysis
According to an overview of AI safety funding, total external philanthropic AI safety funding (≈$100M annually) is dwarfed by:
| Comparison | Amount | Ratio to Safety Funding |
|---|---|---|
| Generative AI Investment (2023) | ≈$24B | 240:1 |
| Frontier Lab Safety Budgets | ≈$500M+ combined | 5:1 |
| US Government AI R&D | ≈$3B annually | 30:1 |
This funding gap is a persistent concern in the AI safety community, though Coefficient and other funders argue that talent constraints, not funding, are often the binding limitation.
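The ratios in the table follow directly from these rounded totals. A minimal sketch of the arithmetic, assuming the ≈$100M annual external-funding estimate cited above:

```python
# Ratio of each comparison figure to estimated annual external philanthropic
# AI safety funding (~$100M). Rounded figures from the table above; purely
# illustrative, not precise budget data.
external_safety_funding_usd = 100e6

comparisons = {
    "Generative AI investment (2023)": 24e9,
    "Frontier lab safety budgets (combined)": 500e6,
    "US government AI R&D (annual)": 3e9,
}

for name, amount in comparisons.items():
    print(f"{name:40s} ~{amount / external_safety_funding_usd:.0f}:1")
```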
Critical Assessment
Strengths
Scale and Stability: With Good Ventures’ multi-billion dollar backing, Coefficient can make commitments that smaller funders cannot. This enables multi-year organizational support, compute grants, and substantial research programs.
Strategic Sophistication: The organization’s cause selection methodology and research depth (public writeups, shallow investigations, deep dives) provide unusually transparent reasoning for grant decisions.
Ecosystem Building: By funding LTFF, Manifund, and other regranting mechanisms, Coefficient amplifies its reach while maintaining quality through trusted intermediaries.
Hits-Based Giving: Willingness to fund speculative research acknowledges that transformative progress often comes from unexpected directions, though this increases variance in outcomes.
Limitations and Criticisms
Funding Concentration: With Coefficient representing ~60% of external AI safety funding, the field is heavily dependent on one organization’s worldview and priorities. Critics note this could lead to “possible solutions being overlooked or assumptions no longer being questioned.”
Evaluation Focus: The heavy focus on evaluations/benchmarking (68% of 2024 technical grants) has drawn criticism. As one researcher noted, “This looks much worse than I thought it would, both in terms of funding underdeployment, and in terms of overfocusing on evals.” Critics argue AI Safety Institutes are already well-resourced for evaluation work.
Alignment Neglect: Some researchers express disappointment that “there’s so little emphasis in this RFP about alignment, i.e. research on how to build an AI system that is doing what its developer intended it to do.”
Slow Spending: Coefficient has acknowledged that “in retrospect, our rate of spending was too slow, and we should have been more aggressively expanding support for technical AI safety work earlier.” Key reasons cited include difficulty making qualified senior hires and disappointment with returns to past spending.
Grants Database Limitations: The public grants database “offers an increasingly inaccurate picture” of Coefficient’s work, as it generally excludes funding advised from non-Good Ventures donors. Coefficient is considering deprecating it.
Strategic Questions
| Question | Context |
|---|---|
| Funding deployment rate | Is $50M/year appropriate given AI development pace? |
| Evaluation vs alignment balance | Should more funding go to core alignment research? |
| Lab relationships | How to maintain independence while funding lab-adjacent work? |
| Multi-donor model | Will expanding beyond Good Ventures change priorities? |
| Talent vs funding constraint | Is the field truly talent-constrained, or is this justifying underspending? |
Sources and Citations
Primary Sources
- Coefficient Giving Official Website
- Open Philanthropy Is Now Coefficient Giving (Nov 2025)
- The Story Behind Our New Name
- Our Progress in 2024 and Plans for 2025
- Navigating Transformative AI Fund
- Technical AI Safety Research RFP
Analysis and Commentary
- An Overview of the AI Safety Funding Situation - LessWrong
- Brief Analysis of OP Technical AI Safety Funding - LessWrong
- Open Philanthropy Is Now Coefficient Giving - Inside Philanthropy
- How to Get a Grant from Coefficient Giving - Inside Philanthropy
- Coefficient Giving - Wikipedia
Regranting Programs
- Manifund AI Safety Regranting
- Manifund 2025 Regrants Announcement - EA Forum
- What Makes a Good Regrant? - Manifund Substack
- Long-Term Future Fund - EA Funds
- Survival and Flourishing Fund
Grantee Information
- Redwood Research Grant Page - Open Philanthropy
- MIRI AI Safety Retraining Program Grant - Open Philanthropy
- Center for AI Safety General Support - Open Philanthropy