Coefficient Giving
ID: coefficient-giving
Type: organization
Path: /knowledge-base/organizations/coefficient-giving/
Entity ID (EID): E521
Page Record: database.json — merged from MDX frontmatter + Entity YAML + computed metrics at build time
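The merged record for this page is reproduced below. As a rough illustration of how such a build-time merge might work, here is a minimal sketch; the file paths, loader libraries, and metric heuristics are assumptions, not the site's actual pipeline:

```ts
// Sketch only: file layout, loaders, and metric heuristics are assumptions.
import { readFileSync } from "node:fs";
import matter from "gray-matter";      // parses MDX frontmatter
import { parse as parseYaml } from "yaml";

interface PageRecord {
  id: string;
  path: string;
  title: string;
  metrics: Record<string, number | boolean>;
  [key: string]: unknown;
}

function buildPageRecord(mdxPath: string, entityYamlPath: string): PageRecord {
  // 1. MDX frontmatter: title, quality, summaries, ratings, ...
  const { data: frontmatter, content } = matter(readFileSync(mdxPath, "utf8"));

  // 2. Entity YAML: stable identity fields (id, entityType, EID, path)
  const entity = parseYaml(readFileSync(entityYamlPath, "utf8"));

  // 3. Computed metrics, derived from the MDX body at build time (crude heuristics)
  const metrics = {
    wordCount: content.split(/\s+/).filter(Boolean).length,
    // one separator row (|---|---|) per markdown table
    tableCount: (content.match(/^\|[\s:|-]+\|$/gm) ?? []).length,
    internalLinks: (content.match(/\]\(\/knowledge-base\//g) ?? []).length,
    hasOverview: /^##\s+Overview/m.test(content),
  };

  // Later sources win on key collisions; metrics are kept as a nested object.
  return { ...entity, ...frontmatter, metrics } as PageRecord;
}
```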
{
"id": "coefficient-giving",
"numericId": null,
"path": "/knowledge-base/organizations/coefficient-giving/",
"filePath": "knowledge-base/organizations/coefficient-giving.mdx",
"title": "Coefficient Giving",
"quality": 55,
"readerImportance": 35.5,
"researchImportance": 80,
"tacticalValue": null,
"contentFormat": "article",
"tractability": null,
"neglectedness": null,
"uncertainty": null,
"causalLevel": null,
"lastUpdated": "2026-03-12",
"dateCreated": "2026-02-15",
"llmSummary": "Coefficient Giving (formerly Open Philanthropy) has directed \\$4B+ in grants since 2014, including \\$336M to AI safety (~60% of external funding). The organization spent ~\\$50M on AI safety in 2024, with 68% going to evaluations/benchmarking, and launched a \\$40M Technical AI Safety RFP in 2025 covering 21 research areas with 2-week EOI response times.",
"description": "Coefficient Giving (formerly Open Philanthropy) is a major philanthropic organization that has directed over \\$4 billion in grants since 2014, including \\$336+ million to AI safety. In November 2025, Open Philanthropy rebranded to Coefficient Giving and restructured into 13 cause-specific funds open to multiple donors. The Navigating Transformative AI Fund supports technical safety research, AI governance, and capacity building, with a \\$40M Technical AI Safety RFP in 2025. Key grantees include Center for AI Safety (\\$8.5M in 2024), Redwood Research (\\$6.2M), and MIRI (\\$4.1M).",
"ratings": {
"novelty": 2.5,
"rigor": 5,
"actionability": 6.5,
"completeness": 6.5
},
"category": "organizations",
"subcategory": "funders",
"clusters": [
"community",
"ai-safety",
"governance"
],
"metrics": {
"wordCount": 3923,
"tableCount": 21,
"diagramCount": 2,
"internalLinks": 17,
"externalLinks": 50,
"footnoteCount": 0,
"bulletRatio": 0.09,
"sectionCount": 39,
"hasOverview": true,
"structuralScore": 15
},
"suggestedQuality": 100,
"updateFrequency": 45,
"evergreen": true,
"wordCount": 3923,
"unconvertedLinks": [
{
"text": "coefficientgiving.org",
"url": "https://coefficientgiving.org/",
"resourceId": "kb-360d9d206d186a79"
},
{
"text": "Coefficient Giving",
"url": "https://coefficientgiving.org/",
"resourceId": "kb-360d9d206d186a79"
},
{
"text": "\\$40 million Technical AI Safety RFP",
"url": "https://coefficientgiving.org/funds/navigating-transformative-ai/request-for-proposals-technical-ai-safety-research/",
"resourceId": "kb-fd5a7d6cfe6e7e1d"
},
{
"text": "Center for Human-Compatible AI",
"url": "https://humancompatible.ai/",
"resourceId": "9c4106b68045dbd6",
"resourceTitle": "Center for Human-Compatible AI"
},
{
"text": "Future of Humanity Institute",
"url": "https://www.fhi.ox.ac.uk/",
"resourceId": "1593095c92d34ed8",
"resourceTitle": "**Future of Humanity Institute**"
},
{
"text": "analysis of Coefficient Giving's Technical AI Safety funding",
"url": "https://www.lesswrong.com/posts/adzfKEW98TswZEA6T/brief-analysis-of-op-technical-ai-safety-funding",
"resourceId": "kb-fb66d73671ec9ced"
},
{
"text": "\\$40 million Request for Proposals",
"url": "https://coefficientgiving.org/funds/navigating-transformative-ai/request-for-proposals-technical-ai-safety-research/",
"resourceId": "kb-fd5a7d6cfe6e7e1d"
},
{
"text": "Long-Term Future Fund",
"url": "https://funds.effectivealtruism.org/funds/far-future",
"resourceId": "9baa7f54db71864d",
"resourceTitle": "Long-Term Future Fund"
},
{
"text": "Manifund",
"url": "https://manifund.org/about/regranting",
"resourceId": "kb-0c3cd3534fa36003"
},
{
"text": "Survival and Flourishing Fund",
"url": "https://survivalandflourishing.fund/",
"resourceId": "a01514f7c492ce4c",
"resourceTitle": "Survival and Flourishing Fund"
},
{
"text": "EA Funds",
"url": "https://funds.effectivealtruism.org/funds/far-future",
"resourceId": "9baa7f54db71864d",
"resourceTitle": "Long-Term Future Fund"
},
{
"text": "S-process rounds",
"url": "https://survivalandflourishing.fund/",
"resourceId": "a01514f7c492ce4c",
"resourceTitle": "Survival and Flourishing Fund"
},
{
"text": "overview of AI safety funding",
"url": "https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation",
"resourceId": "b1ab921f9cbae109",
"resourceTitle": "An Overview of the AI Safety Funding Situation (LessWrong)"
},
{
"text": "Coefficient Giving Official Website",
"url": "https://coefficientgiving.org/",
"resourceId": "kb-360d9d206d186a79"
},
{
"text": "Technical AI Safety Research RFP",
"url": "https://coefficientgiving.org/funds/navigating-transformative-ai/request-for-proposals-technical-ai-safety-research/",
"resourceId": "kb-fd5a7d6cfe6e7e1d"
},
{
"text": "An Overview of the AI Safety Funding Situation",
"url": "https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation",
"resourceId": "b1ab921f9cbae109",
"resourceTitle": "An Overview of the AI Safety Funding Situation (LessWrong)"
},
{
"text": "Brief Analysis of OP Technical AI Safety Funding",
"url": "https://www.lesswrong.com/posts/adzfKEW98TswZEA6T/brief-analysis-of-op-technical-ai-safety-funding",
"resourceId": "kb-fb66d73671ec9ced"
},
{
"text": "Manifund AI Safety Regranting",
"url": "https://manifund.org/about/regranting",
"resourceId": "kb-0c3cd3534fa36003"
},
{
"text": "Long-Term Future Fund",
"url": "https://funds.effectivealtruism.org/funds/far-future",
"resourceId": "9baa7f54db71864d",
"resourceTitle": "Long-Term Future Fund"
},
{
"text": "Survival and Flourishing Fund",
"url": "https://survivalandflourishing.fund/",
"resourceId": "a01514f7c492ce4c",
"resourceTitle": "Survival and Flourishing Fund"
},
{
"text": "Coefficient Giving Website",
"url": "https://coefficientgiving.org/",
"resourceId": "kb-360d9d206d186a79"
},
{
"text": "Long-Term Future Fund",
"url": "https://funds.effectivealtruism.org/funds/far-future",
"resourceId": "9baa7f54db71864d",
"resourceTitle": "Long-Term Future Fund"
},
{
"text": "Survival and Flourishing Fund",
"url": "https://survivalandflourishing.fund/",
"resourceId": "a01514f7c492ce4c",
"resourceTitle": "Survival and Flourishing Fund"
}
],
"unconvertedLinkCount": 23,
"convertedLinkCount": 0,
"backlinkCount": 129,
"hallucinationRisk": {
"level": "high",
"score": 75,
"factors": [
"biographical-claims",
"no-citations"
]
},
"entityType": "organization",
"redundancy": {
"maxSimilarity": 17,
"similarPages": [
{
"id": "ltff",
"title": "Long-Term Future Fund (LTFF)",
"path": "/knowledge-base/organizations/ltff/",
"similarity": 17
},
{
"id": "sff",
"title": "Survival and Flourishing Fund (SFF)",
"path": "/knowledge-base/organizations/sff/",
"similarity": 17
},
{
"id": "dustin-moskovitz",
"title": "Dustin Moskovitz (AI Safety Funder)",
"path": "/knowledge-base/people/dustin-moskovitz/",
"similarity": 17
},
{
"id": "rethink-priorities",
"title": "Rethink Priorities",
"path": "/knowledge-base/organizations/rethink-priorities/",
"similarity": 16
},
{
"id": "field-building-analysis",
"title": "AI Safety Field Building Analysis",
"path": "/knowledge-base/responses/field-building-analysis/",
"similarity": 16
}
]
},
"changeHistory": [
{
"date": "2026-02-18",
"branch": "claude/audit-webpage-errors-11sSF",
"title": "Fix factual errors found in wiki audit",
"summary": "Systematically audited ~35+ high-risk wiki pages for factual errors and hallucinations using parallel background agents plus direct reading. Fixed 13 confirmed errors across 11 files."
}
],
"coverage": {
"passing": 8,
"total": 13,
"targets": {
"tables": 16,
"diagrams": 2,
"internalLinks": 31,
"externalLinks": 20,
"footnotes": 12,
"references": 12
},
"actuals": {
"tables": 21,
"diagrams": 2,
"internalLinks": 17,
"externalLinks": 50,
"footnotes": 0,
"references": 9,
"quotesWithQuotes": 0,
"quotesTotal": 0,
"accuracyChecked": 0,
"accuracyTotal": 0
},
"items": {
"llmSummary": "green",
"schedule": "green",
"entity": "green",
"editHistory": "green",
"overview": "green",
"tables": "green",
"diagrams": "green",
"internalLinks": "amber",
"externalLinks": "green",
"footnotes": "red",
"references": "amber",
"quotes": "red",
"accuracy": "red"
},
"editHistoryCount": 1,
"ratingsString": "N:2.5 R:5 A:6.5 C:6.5"
},
"readerRank": 410,
"researchRank": 88,
"recommendedScore": 149.48
}
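The coverage block in the record above compares per-metric targets against actuals and rolls them up into a passing count and per-item statuses. A minimal sketch of how such statuses could be derived; the 80% amber threshold and the rollup rule are assumptions, and this page's own amber/red labels show the real scorer uses different cutoffs:

```ts
// Sketch only: thresholds and the passing rule are assumptions.
type Status = "green" | "amber" | "red";

function coverageStatus(actual: number, target: number): Status {
  if (actual >= target) return "green";        // met or exceeded the target
  if (actual >= 0.8 * target) return "amber";  // close: at least 80% of target
  return "red";                                // well short of the target
}

function coverageSummary(
  targets: Record<string, number>,
  actuals: Record<string, number>,
): { passing: number; total: number; items: Record<string, Status> } {
  const items: Record<string, Status> = {};
  for (const [key, target] of Object.entries(targets)) {
    items[key] = coverageStatus(actuals[key] ?? 0, target);
  }
  const passing = Object.values(items).filter((s) => s === "green").length;
  return { passing, total: Object.keys(items).length, items };
}

// Example using values from this page's record. Note: 17/31 internal links is
// below 80%, so this sketch would score internalLinks "red" even though the
// record marks it "amber" — the site's actual thresholds evidently differ.
const demo = coverageSummary(
  { tables: 16, diagrams: 2, internalLinks: 31, externalLinks: 20, footnotes: 12 },
  { tables: 21, diagrams: 2, internalLinks: 17, externalLinks: 50, footnotes: 0 },
);
console.log(demo.items.footnotes); // "red"
```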
External Links
No external links
Backlinks (129)
| id | title | type | relationship |
|---|---|---|---|
| longtermist-value-comparisons | Relative Longtermist Value Comparisons | analysis | — |
| the-foundation-layer | The Foundation Layer | organization | related |
| ajeya-cotra | Ajeya Cotra | person | — |
| dustin-moskovitz | Dustin Moskovitz (AI Safety Funder) | person | — |
| intervention-portfolio | AI Safety Intervention Portfolio | approach | — |
| training-programs | AI Safety Training Programs | approach | — |
| field-building-analysis | AI Safety Field Building Analysis | approach | — |
| case-for-xrisk | The Case FOR AI Existential Risk | argument | — |
| deep-learning-era | Deep Learning Revolution (2012-2020) | historical | — |
| ea-epistemic-failures-in-the-ftx-era | EA Epistemic Failures in the FTX Era | concept | — |
| ea-institutions-response-to-the-ftx-collapse | EA Institutions' Response to the FTX Collapse | concept | — |
| ea-longtermist-wins-losses | EA and Longtermist Wins and Losses | concept | — |
| earning-to-give | Earning to Give: The EA Strategy and Its Limits | concept | — |
| ftx-collapse-and-ea-public-credibility | FTX Collapse and EA's Public Credibility | concept | — |
| ftx-red-flags-pre-collapse-warning-signs-that-were-overlooked | FTX Red Flags: Pre-Collapse Warning Signs That Were Overlooked | concept | — |
| longtermism-credibility-after-ftx | Longtermism's Philosophical Credibility After FTX | concept | — |
| miri-era | The MIRI Era (2000-2015) | historical | — |
| ai-risk-portfolio-analysis | AI Risk Portfolio Analysis | analysis | — |
| ai-timelines | AI Timelines | concept | — |
| anthropic-pledge-enforcement | Anthropic Founder Pledges: Interventions to Increase Follow-Through | analysis | — |
| carlsmith-six-premises | Carlsmith's Six-Premise Argument | analysis | — |
| intervention-effectiveness-matrix | Intervention Effectiveness Matrix | analysis | — |
| model-organisms-of-misalignment | Model Organisms of Misalignment | analysis | — |
| planning-for-frontier-lab-scaling | Planning for Frontier Lab Scaling | analysis | — |
| safety-research-allocation | Safety Research Allocation Model | analysis | — |
| safety-research-value | Expected Value of AI Safety Research | analysis | — |
| safety-researcher-gap | AI Safety Talent Supply/Demand Gap Model | analysis | — |
| societal-response | Societal Response & Adaptation Model | analysis | — |
| 1day-sooner | 1Day Sooner | organization | — |
| 80000-hours | 80,000 Hours | organization | — |
| anthropic-investors | Anthropic (Funder) | analysis | — |
| anthropic-ipo | Anthropic IPO | analysis | — |
| anthropic-valuation | Anthropic Valuation Analysis | analysis | — |
| anthropic | Anthropic | organization | — |
| arb-research | Arb Research | organization | — |
| arc | ARC (Alignment Research Center) | organization | — |
| biosecurity-orgs-overview | Biosecurity Organizations (Overview) | concept | — |
| blueprint-biosecurity | Blueprint Biosecurity | organization | — |
| cais | CAIS (Center for AI Safety) | organization | — |
| cea | Centre for Effective Altruism | organization | — |
| center-for-applied-rationality | Center for Applied Rationality | organization | — |
| centre-for-long-term-resilience | Centre for Long-Term Resilience | organization | — |
| chan-zuckerberg-initiative | Chan Zuckerberg Initiative | organization | — |
| controlai | ControlAI | organization | — |
| cset | CSET (Center for Security and Emerging Technology) | organization | — |
| ea-funding-absorption-capacity | EA Funding Absorption Capacity | concept | — |
| ea-global | EA Global | organization | — |
| ea-shareholder-diversification-anthropic | EA Shareholder Diversification from Anthropic | concept | — |
| elicit | Elicit (AI Research Tool) | organization | — |
| epoch-ai | Epoch AI | organization | — |
| far-ai | FAR AI | organization | — |
| fhi | Future of Humanity Institute (FHI) | organization | — |
| fli | Future of Life Institute (FLI) | organization | — |
| fri | Forecasting Research Institute | organization | — |
| frontier-model-forum | Frontier Model Forum | organization | — |
| ftx-collapse-ea-funding-lessons | FTX Collapse: Lessons for EA Funding Resilience | concept | — |
| ftx-future-fund | FTX Future Fund | organization | — |
| ftx | FTX (cryptocurrency exchange) | organization | — |
| funders-overview | Longtermist Funders (Overview) | concept | — |
| futuresearch | FutureSearch | organization | — |
| giving-pledge | Giving Pledge | organization | — |
| giving-what-we-can | Giving What We Can | organization | — |
| good-judgment | Good Judgment (Forecasting) | organization | — |
| govai | GovAI | organization | — |
| hewlett-foundation | William and Flora Hewlett Foundation | organization | — |
| ibbis | IBBIS (International Biosecurity and Biosafety Initiative for Science) | organization | — |
| johns-hopkins-center-for-health-security | Johns Hopkins Center for Health Security | organization | — |
| lesswrong | LessWrong | organization | — |
| lionheart-ventures | Lionheart Ventures | organization | — |
| longview-philanthropy | Longview Philanthropy | organization | — |
| ltff | Long-Term Future Fund (LTFF) | organization | — |
| macarthur-foundation | MacArthur Foundation | organization | — |
| manifund | Manifund | organization | — |
| mats | MATS ML Alignment Theory Scholars program | organization | — |
| metaculus | Metaculus | organization | — |
| metr | METR | organization | — |
| miri | MIRI (Machine Intelligence Research Institute) | organization | — |
| nti-bio | NTI \| bio (Nuclear Threat Initiative - Biological Program) | organization | — |
| open-philanthropy | Open Philanthropy | organization | — |
| openai-foundation | OpenAI Foundation | organization | — |
| pause-ai | Pause AI | organization | — |
| quri | QURI (Quantified Uncertainty Research Institute) | organization | — |
| redwood-research | Redwood Research | organization | — |
| rethink-priorities | Rethink Priorities | organization | — |
| safety-orgs-overview | AI Safety Organizations (Overview) | concept | — |
| secure-ai-project | Secure AI Project | organization | — |
| securebio | SecureBio | organization | — |
| securedna | SecureDNA | organization | — |
| sentinel | Sentinel (Catastrophic Risk Foresight) | organization | — |
| sff | Survival and Flourishing Fund (SFF) | organization | — |
| swift-centre | Swift Centre | organization | — |
| vara | Value Aligned Research Advisors | organization | — |
| dan-hendrycks | Dan Hendrycks | person | — |
| david-sacks | David Sacks (White House AI Czar) | person | — |
| eli-lifland | Eli Lifland | person | — |
| eliezer-yudkowsky | Eliezer Yudkowsky | person | — |
| helen-toner | Helen Toner | person | — |
| holden-karnofsky | Holden Karnofsky | person | — |
| __index__/knowledge-base/people | People | concept | — |
| nick-beckstead | Nick Beckstead | person | — |
| nuno-sempere | Nuño Sempere | person | — |
| philip-tetlock | Philip Tetlock (Forecasting Pioneer) | person | — |
| robin-hanson | Robin Hanson | person | — |
| sam-bankman-fried | Sam Bankman-Fried | person | — |
| stuart-russell | Stuart Russell | person | — |
| toby-ord | Toby Ord | person | — |
| vipul-naik | Vipul Naik | person | — |
| ai-forecasting-benchmark | AI Forecasting Benchmark Tournament | project | — |
| ai-watch | AI Watch | project | — |
| biosecurity-overview | Biosecurity Interventions (Overview) | concept | — |
| cooperative-ai | Cooperative AI | approach | — |
| donations-list-website | Donations List Website | project | — |
| ea-biosecurity-scope | Is EA Biosecurity Work Limited to Restricting LLM Biological Use? | analysis | — |
| eliciting-latent-knowledge | Eliciting Latent Knowledge (ELK) | approach | — |
| evals | Evals & Red-teaming | safety-agenda | — |
| forecastbench | ForecastBench | project | — |
| multi-agent | Multi-Agent Safety | approach | — |
| org-watch | Org Watch | project | — |
| recoding-america | Recoding America | resource | — |
| research-agendas | AI Alignment Research Agenda Comparison | crux | — |
| scalable-oversight | Scalable Oversight | safety-agenda | — |
| state-capacity-ai-governance | State Capacity and AI Governance | concept | — |
| technical-research | Technical AI Safety Research | crux | — |
| xpt | XPT (Existential Risk Persuasion Tournament) | project | — |
| bioweapons | Bioweapons | risk | — |
| existential-risk | Existential Risk from AI | concept | — |
| architecture | System Architecture | concept | — |
| longtermwiki-value-proposition | LongtermWiki Value Proposition | concept | — |
| page-types | Page Type System | concept | — |