
Coefficient Giving

| Dimension | Assessment | Evidence |
|---|---|---|
| Scale | Dominant | $4B+ total grants; ≈$46M AI safety in 2023 |
| Structure | 13 cause-specific funds | Multi-donor pooled funds since Nov 2025 rebrand |
| AI Safety Focus | Leading funder | $336M+ to AI safety since 2014; ≈60% of external AI safety funding |
| Application Model | Rolling RFPs + regranting | 300-word EOI, 2-week response; supports platforms like Manifund |
| Transparency | High | Public grants database, annual progress reports |
| Key Funders | Good Ventures (primary) | Dustin Moskovitz & Cari Tuna; expanding to multi-donor model |

| Attribute | Details |
|---|---|
| Full Name | Coefficient Giving (formerly Open Philanthropy) |
| Type | Philanthropic advising and funding organization |
| Legal Structure | LLC (independent since 2017) |
| Founded | 2014 (as GiveWell outgrowth); 2017 (independent); 2025 (rebranded) |
| Total Grants | $4+ billion (as of June 2025) |
| AI Safety Grants | $336+ million (≈12% of total) |
| 2024 AI Safety Spend | ≈$50 million committed |
| Leadership | Alexander Berger (CEO), Holden Karnofsky (Board) |
| Location | San Francisco, California |
| Website | coefficientgiving.org |
| Grants Database | coefficientgiving.org/grants |

Coefficient Giving is a major philanthropic organization that has directed over $4 billion in grants since 2014 across global health, AI safety, pandemic preparedness, farm animal welfare, and other cause areas. In November 2025, the organization rebranded from Open Philanthropy to Coefficient Giving, signaling an expansion from serving primarily one anchor donor (Good Ventures, the foundation of Dustin Moskovitz and Cari Tuna) to operating 13 cause-specific funds open to multiple philanthropists. The name “Coefficient” reflects the organization’s goal of multiplying impact through research, grantmaking, and partnerships—with “co” nodding to collaboration and “efficient” reflecting their unusual focus on cost-effectiveness.

Coefficient Giving is widely considered the largest funder of AI safety work globally. Since 2014, approximately $336 million (12% of total grants) has gone to AI safety research and governance, with roughly $46 million deployed in 2023 alone—making it the dominant external funder in a field where most safety research happens inside frontier AI labs. The organization’s Navigating Transformative AI Fund supports technical AI safety research, AI governance and policy work, and capacity building, with a $40 million Technical AI Safety RFP launched in 2025 covering 21 research areas.

The organization distinguishes itself through its strategic cause selection methodology—identifying problems that are large, tractable, and neglected relative to their size. This approach, combined with a willingness to fund speculative research and support multiple funding mechanisms (direct grants, regranting programs, pooled funds), has made Coefficient Giving central to the effective altruism funding ecosystem. However, critics have noted concerns about funding concentration, the slow pace of spending relative to the scale of AI risks, and heavy focus on evaluations over alignment research in recent technical AI safety grants.

Coefficient Giving traces its origins to 2011 when GiveWell, the charity evaluator founded by Holden Karnofsky and Elie Hassenfeld, began advising Good Ventures on how to deploy Dustin Moskovitz’s philanthropic capital effectively. Good Ventures was established by Moskovitz (Facebook co-founder, net worth ≈$12 billion) and Cari Tuna in 2011. By 2014, this advising relationship formalized into “Open Philanthropy” as a distinct project within GiveWell, focused on identifying high-impact giving opportunities across a broader range of cause areas than GiveWell’s traditional global health focus.

In 2017, Open Philanthropy spun off from GiveWell as an independent LLC, enabling it to pursue its own strategic priorities while GiveWell continued focusing on evidence-backed global health interventions. The separation reflected diverging methodologies: GiveWell prioritizes robust evidence of effectiveness, while Open Philanthropy embraced “hits-based giving”—funding speculative, high-variance projects where a few major successes could justify many failures.

Open Philanthropy began supporting AI safety work in 2015, when the field was nascent and institutional support was minimal. Early grants helped establish foundational organizations including the Machine Intelligence Research Institute (MIRI), the Center for Human-Compatible AI at UC Berkeley, and the Future of Humanity Institute at Oxford. By 2023, AI safety had become Open Philanthropy’s largest longtermist cause area, reflecting growing concern about advanced AI risks among the leadership team.

| Year | AI Safety Milestone |
|---|---|
| 2015 | First AI safety grants; field had ≈10 full-time researchers |
| 2017 | Independent organization; Holden Karnofsky publishes AI concerns |
| 2019 | AI safety spending exceeds $20M annually |
| 2022 | $150M Regranting Challenge launched (not AI-specific) |
| 2023 | ≈$46M AI safety spending; largest funder in the field |
| 2024 | ≈$50M committed; 68% to evaluations/benchmarking |
| 2025 | Rebrand to Coefficient Giving; $40M Technical AI Safety RFP |

On November 18, 2025, Open Philanthropy announced its rebranding to Coefficient Giving. The change reflected several strategic shifts:

Multi-Donor Expansion: The organization moved from primarily serving Good Ventures to operating pooled funds open to any philanthropist. In 2024, Coefficient directed over $100 million from donors besides Good Ventures; by 2025, non-Good Ventures funding had more than doubled.

Brand Clarity: The “Open Philanthropy” name created confusion: journalists mistook the organization for OpenAI, and potential grantees confused it with the Open Society Foundations. “Coefficient” provided a distinctive identity.

Structural Reorganization: The organization restructured from program areas to 13 distinct funds, each with dedicated leadership and transparent goals, allowing donors to support specific causes at scale.


Since the November 2025 rebrand, Coefficient Giving operates through 13 cause-specific funds, each pooling money from multiple donors:

| Fund | Focus | Key Activities |
|---|---|---|
| Navigating Transformative AI | AI safety & governance | Technical research, policy, capacity building |
| Biosecurity & Pandemic Preparedness | Catastrophic bio risks | Research, policy, infrastructure |
| Global Catastrophic Risks Opportunities | Cross-cutting x-risk work | Ecosystem support, foundational work |
| Science and Global Health R&D | Neglected disease research | TB, malaria, high-risk transformational science |
| Global Health Policy | Policy for health impact | Lead exposure, air pollution |
| Global Aid Policy | Development effectiveness | Evidence-based aid policy |
| Farm Animal Welfare | Factory farming reform | Welfare reforms, alternative proteins |
| Effective Giving and Careers | EA movement building | Giving What We Can, 80,000 Hours |
| Abundance & Growth | Economic prosperity | $120M launched 2025 for scientific progress |
| Criminal Justice Reform | US criminal justice | Bail reform, prosecutorial accountability |
| Land Use Reform | Housing and development | YIMBY policy, zoning reform |
| Immigration Policy | Immigration reform | Policy research and advocacy |
| Other Global Health | Remaining health causes | Malaria, deworming, direct cash transfers |

The Navigating Transformative AI Fund is Coefficient’s primary vehicle for AI-related grantmaking, supporting:

Technical AI Safety Research: Work aimed at making advanced AI systems more trustworthy, robust, controllable, and aligned. This includes interpretability research, robustness to adversarial inputs, scalable oversight methods, and understanding emergent capabilities.

AI Governance and Policy: Frameworks for safe, secure, and responsibly managed AI development, including export controls, compute governance, international coordination, and corporate governance mechanisms.

Capacity Building: Growing and strengthening the field of researchers and practitioners working on AI challenges, including training programs, career development, and institutional infrastructure.

Short-Timeline Projects: New projects expected to be particularly impactful if timelines to transformative AI are short, reflecting Coefficient’s view that advanced AI could emerge within the next 5-15 years.

Coefficient also delegates part of its grantmaking to regrantors: trusted individuals who receive budgets to distribute on its behalf. The model works as follows:

| Component | Description |
|---|---|
| Selection | OP identifies trusted individuals with relevant expertise |
| Budget | Each regrantor receives $200K-$2M to distribute |
| Autonomy | Regrantors make independent decisions within guidelines |
| Reporting | Regrantors document grants, OP maintains oversight |
| Renewal | Strong performers may receive additional budgets |

Regrantors are selected against criteria such as:

| Criterion | Description |
|---|---|
| Domain Expertise | Deep knowledge in cause area |
| Community Connections | Know who does good work |
| Judgment | Track record of good decisions |
| Capacity | Time to evaluate and make grants |
| Values Alignment | Share EA/longtermist priorities |

Coefficient’s largest 2024 AI safety grants reflect priorities across evaluations, interpretability, and theoretical alignment work:

| Grantee | Amount | Focus | Notes |
|---|---|---|---|
| Center for AI Safety | $8.5M | Field building, research | Training programs, compute grants, advocacy |
| Redwood Research | $6.2M | Alignment research | Interpretability, control research; $21M+ total from OP |
| MIRI | $4.1M | Theoretical alignment | Agent foundations, deceptive alignment |
| Epoch AI | ≈$3M | AI forecasting | Compute trends, capability timelines |
| METR (formerly ARC Evals) | ≈$3M | Capability evaluations | Model evaluations used by labs and governments |
| AI Safety Camp | ≈$500K | Talent pipeline | Intensive research programs |
| Various individuals | ≈$10M | Researchers, fellowships | PhDs, postdocs, independent researchers |

2024 Technical AI Safety Funding Breakdown


An analysis of Open Philanthropy’s Technical AI Safety funding revealed the following distribution of the $28M recorded in their database:

| Research Area | Percentage | Amount (~) | Assessment |
|---|---|---|---|
| Evaluations/Benchmarking | 68% | $19M | Primary focus; critics note AI Safety Institutes already well-resourced |
| Interpretability | ≈10% | ≈$3M | Mechanistic interpretability, circuit analysis |
| Robustness | ≈5% | ≈$1.5M | Adversarial robustness, red-teaming |
| Value Alignment | ≈5% | ≈$1.5M | RLHF alternatives, preference learning |
| Field Building | ≈5% | ≈$1.5M | Training programs, community |
| Forecasting | ≈3% | ≈$1M | Timelines, capabilities |
| Other | ≈4% | ≈$1M | Governance research, miscellaneous |

Note: The $28M figure underestimates total 2024 spending as some approved grants had not been posted to the database at time of analysis. Coefficient acknowledged spending “roughly $50 million” on technical AI safety in 2024.
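
As a quick illustration of the arithmetic behind the table, the sketch below recomputes the approximate dollar amounts from the reported shares of the $28M database total. The figures and category names are the approximate ones from the table above, not official data.

```python
# Approximate shares of the ~$28M of 2024 technical AI safety grants
# recorded in the public database (figures taken from the table above).
total_recorded = 28e6  # USD

shares = {
    "Evaluations/Benchmarking": 0.68,
    "Interpretability": 0.10,
    "Robustness": 0.05,
    "Value Alignment": 0.05,
    "Field Building": 0.05,
    "Forecasting": 0.03,
    "Other": 0.04,
}

# The shares should account for the whole recorded total.
assert abs(sum(shares.values()) - 1.0) < 1e-9

for area, share in shares.items():
    print(f"{area}: ~${share * total_recorded / 1e6:.1f}M")
# Evaluations/Benchmarking comes to ~$19.0M, matching the ~68% figure.
# Note the full-year 2024 total (~$50M) is larger than this database figure.
```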

| Grantee | Total (All Years) | Period | Notable Impact |
|---|---|---|---|
| MIRI | $14M+ | 2014-2024 | Agent foundations, embedded agency |
| Redwood Research | $21M+ | 2021-2024 | Interpretability methods, control research |
| Center for AI Safety | $15M+ | 2022-2024 | Compute cluster, training programs |
| Future of Humanity Institute | $10M+ | 2015-2024 | Strategic analysis (closed 2024) |
| Center for Human-Compatible AI | $8M+ | 2016-2024 | Stuart Russell’s CHAI lab |
| Anthropic | $0 directly | N/A | VC-funded; OP staff invested personally |
| Long-Term Future Fund | $3.15M | 2019-2024 | Regranting to LTFF for distribution |

In early 2025, Coefficient launched a $40 million Request for Proposals spanning 21 research areas, with room to fund substantially more depending on the quality of applications. Key features:

Priority Research Areas (starred items are especially prioritized):

| Category | Research Areas |
|---|---|
| Alignment Foundations | Alternatives to adversarial training*, alignment faking*, scalable oversight* |
| Interpretability | Mechanistic interpretability*, representation engineering, probing |
| Evaluation | Dangerous capability evaluations*, propensity evaluations*, automated red-teaming |
| Robustness | Adversarial robustness, distribution shift, specification gaming |
| Governance-Adjacent | AI governance research, responsible scaling policies |

Grant Characteristics:

| Aspect | Details |
|---|---|
| Size Range | API credits ($1-10K) to seed funding for new orgs ($1M+) |
| Application | 300-word expression of interest (EOI) |
| Response Time | Within 2 weeks of EOI submission |
| Decision Timeline | 4-8 weeks for full proposals |
| Eligibility | Academic researchers, nonprofits, independent researchers, new orgs |

Coefficient Giving supports multiple regranting platforms and mechanisms to achieve faster, more distributed funding decisions. This represents a deliberate strategy to complement slower direct grantmaking with nimble, expert-driven allocation.


The Long-Term Future Fund is a committee-based grantmaking fund that receives significant support from Coefficient. About half of LTFF funding historically comes from Open Philanthropy donations.

| Aspect | Details |
|---|---|
| Annual Volume | ≈$6.7M (2023) |
| AI Safety Portion | ≈$4.3M (≈65% of grants) |
| Grant Count | ≈200 grants per year |
| Median Grant | ≈$15-30K |
| Decision Model | Committee of fund managers |
| Transparency | High (public grant reports) |

LTFF grants tend toward smaller, faster decisions than direct Coefficient grants, serving researchers and projects that may not yet warrant Coefficient’s full evaluation process.

Manifund operates a distinct regranting model where individual experts receive budgets to make independent funding decisions. For 2025, Manifund raised $2.25 million and announced their first 10 regrantors.

Named 2025 Regrantors:

| Regrantor | Budget | Background | Focus |
|---|---|---|---|
| Evan Hubinger | $450K | Anthropic AGI safety researcher, former LTFF manager | Technical AI safety |
| Ryan Kidd | ≈$100K+ | Co-director of SERI MATS | Emerging talent |
| Marius Hobbhahn | ≈$100K+ | CEO of Apollo Research | Evaluations, scheming |
| Lisa Thiergart | ≈$100K+ | Director at SL5 Task Force, former MIRI | Governance |
| Gavin Leech | ≈$100K+ | Co-founder of Arb Research | Research reviews |
| Dan Hendrycks | ≈$100K+ | Director of CAIS | Safety research |
| Adam Gleave | ≈$100K+ | CEO of FAR AI | Adversarial robustness |

Manifund Regranting Characteristics:

| Feature | Details |
|---|---|
| Speed | Grant to bank account in under 1 week |
| Typical Grant Size | $5K-$50K |
| Decision Authority | Solo regrantor decisions |
| Oversight | Manifund reviews but doesn’t approve |
| Risk Tolerance | High (encourages speculative grants) |

Notable Manifund Grants:

| Project | Amount | Regrantors | Impact |
|---|---|---|---|
| Timaeus (DevInterp) | $143,200 | Evan Hubinger, Rachel Weinberg, Marcus Abramovitch, Ryan Kidd | First funding; accelerated research by months |
| ChinaTalk | $37,000 | Joel Becker, Evan Hubinger | Coverage of China/AI, including DeepSeek |
| Shallow Review 2024 | $9,000 | Neel Nanda, Ryan Kidd | Induced a further $5K from OpenPhil |

The Survival and Flourishing Fund uses a unique “S-process” algorithm for grant allocation, primarily funded by Jaan Tallinn (Skype co-founder). While Coefficient and SFF are independent, they share many grantees and strategic priorities.

| Aspect | Coefficient | SFF |
|---|---|---|
| 2024 Volume | ≈$650M total | ≈$24M |
| AI Safety % | ≈12% | ≈86% ($20M) |
| Decision Model | Staff + regrantors | S-process algorithm |
| Speed | Rolling | Twice-yearly rounds |
| Overlap | High | High |

Applying directly to Coefficient is the most straightforward path for substantial funding requests:

| Step | Details | Timeline |
|---|---|---|
| 1. Check RFPs | Review active Requests for Proposals | Ongoing |
| 2. Submit EOI | 300-word expression of interest describing project | N/A |
| 3. Initial Response | Coefficient responds with interest level | 2 weeks |
| 4. Full Proposal | If invited, submit detailed proposal with budget | 2-4 weeks to prepare |
| 5. Due Diligence | Coefficient evaluates organization and proposal | 4-8 weeks |
| 6. Decision | Grant approval or rejection | Total: 2-4 months |

Tips for Applicants (from Coefficient’s guidance):

The bar for submitting an expression of interest is intentionally low. The key failure modes to avoid are failing to demonstrate familiarity with prior work (read the papers linked in the relevant RFP sections) and failing to show that your team has prior experience with ML projects. Even uncertain proposals are worth submitting, as the RFP is partly an experiment to gauge funding demand.

Regranting platforms offer a faster and more accessible route for smaller grants:

| Platform | Best For | How to Apply |
|---|---|---|
| Manifund | $5-50K projects, emerging researchers | Create project on manifund.org, contact regrantors directly |
| LTFF | $10-100K, established track record | Apply via EA Funds |
| SFF | $100K+, established organizations | Apply during S-process rounds |

Many regrantors are reachable through:

  • Direct outreach: Email or social media (many are publicly active on Twitter/X, LessWrong)
  • EA communities: EA Forum, Alignment Forum, local EA groups
  • Professional networks: AI safety conferences (NeurIPS safety track, ICML), SERI MATS alumni
  • Manifund platform: Create project and regrantors may proactively reach out

Comparing the main AI safety funding mechanisms side by side:

| Aspect | Coefficient Giving | LTFF | SFF | Manifund |
|---|---|---|---|---|
| 2024 AI Safety Volume | ≈$50M | ≈$4.3M | ≈$20M | ≈$2M |
| Total Assets | Good Ventures ($12B+) | Pool of donors | Jaan Tallinn | Donors |
| Decision Model | Staff + regrantors | Committee | S-process algorithm | Individual regrantors |
| Typical Grant Size | $100K-$5M | $15-100K | $100K-$2M | $5-50K |
| Speed (EOI to decision) | 2-4 months | 1-3 months | 6 months (rounds) | Under 2 weeks |
| Transparency | Medium (public database) | High (detailed reports) | High (S-process public) | Very high (live on platform) |
| Risk Tolerance | Medium | Medium-High | Medium | High |
| Best For | Major grants, established orgs | Growing researchers | Established orgs | Early-stage, speculative |

According to an overview of AI safety funding, total external philanthropic AI safety funding (≈$100M annually) is dwarfed by:

| Comparison | Amount | Ratio to Safety Funding |
|---|---|---|
| Generative AI Investment (2023) | ≈$24B | 240:1 |
| Frontier Lab Safety Budgets | ≈$500M+ combined | 5:1 |
| US Government AI R&D | ≈$3B annually | 30:1 |

This funding gap is a persistent concern in the AI safety community, though Coefficient and other funders argue that talent constraints, not funding, are often the binding limitation.
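
The ratios in the table follow from dividing each comparison figure by the roughly $100 million per year of external AI safety philanthropy. A minimal sketch of that arithmetic, using the approximate figures cited above:

```python
# Approximate annual figures (USD) from the comparison table above.
external_safety_philanthropy = 100e6  # ~$100M/year of external AI safety funding

comparisons = {
    "Generative AI investment (2023)": 24e9,
    "Frontier lab safety budgets (combined)": 500e6,
    "US government AI R&D (annual)": 3e9,
}

for name, amount in comparisons.items():
    print(f"{name}: {amount / external_safety_philanthropy:.0f}:1")
# Prints 240:1, 5:1, and 30:1 respectively.
```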

Scale and Stability: With Good Ventures’ multi-billion dollar backing, Coefficient can make commitments that smaller funders cannot. This enables multi-year organizational support, compute grants, and substantial research programs.

Strategic Sophistication: The organization’s cause selection methodology and research depth (public writeups, shallow investigations, deep dives) provides unusually transparent reasoning for grant decisions.

Ecosystem Building: By funding LTFF, Manifund, and other regranting mechanisms, Coefficient amplifies its reach while maintaining quality through trusted intermediaries.

Hits-Based Giving: Willingness to fund speculative research acknowledges that transformative progress often comes from unexpected directions, though this increases variance in outcomes.

Funding Concentration: With Coefficient representing ~60% of external AI safety funding, the field is heavily dependent on one organization’s worldview and priorities. Critics note this could lead to “possible solutions being overlooked or assumptions no longer being questioned.”

Evaluation Focus: The heavy focus on evaluations/benchmarking (68% of 2024 technical grants) has drawn criticism. As one researcher noted, “This looks much worse than I thought it would, both in terms of funding underdeployment, and in terms of overfocusing on evals.” Critics argue AI Safety Institutes are already well-resourced for evaluation work.

Alignment Neglect: Some researchers express disappointment that “there’s so little emphasis in this RFP about alignment, i.e. research on how to build an AI system that is doing what its developer intended it to do.”

Slow Spending: Coefficient has acknowledged that “in retrospect, our rate of spending was too slow, and we should have been more aggressively expanding support for technical AI safety work earlier.” Key reasons cited include difficulty making qualified senior hires and disappointment with returns to past spending.

Grants Database Limitations: The public grants database “offers an increasingly inaccurate picture” of Coefficient’s work, as it generally excludes funding advised from non-Good Ventures donors. Coefficient is considering deprecating it.

Open questions about Coefficient’s approach include:

| Question | Context |
|---|---|
| Funding deployment rate | Is $50M/year appropriate given AI development pace? |
| Evaluation vs alignment balance | Should more funding go to core alignment research? |
| Lab relationships | How to maintain independence while funding lab-adjacent work? |
| Multi-donor model | Will expanding beyond Good Ventures change priorities? |
| Talent vs funding constraint | Is the field truly talent-constrained, or is this justifying underspending? |