Longterm Wiki

Coefficient Giving

Slug: coefficient-giving · Type: organization · Path: /knowledge-base/organizations/coefficient-giving/
Entity ID (EID): E521
129 backlinks · Quality: 55 · Updated: 2026-03-12
Page Record (database.json) — merged from MDX frontmatter, Entity YAML, and computed metrics at build time; a sketch of this merge follows the record below.
{
  "id": "coefficient-giving",
  "numericId": null,
  "path": "/knowledge-base/organizations/coefficient-giving/",
  "filePath": "knowledge-base/organizations/coefficient-giving.mdx",
  "title": "Coefficient Giving",
  "quality": 55,
  "readerImportance": 35.5,
  "researchImportance": 80,
  "tacticalValue": null,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-03-12",
  "dateCreated": "2026-02-15",
  "llmSummary": "Coefficient Giving (formerly Open Philanthropy) has directed \\$4B+ in grants since 2014, including \\$336M to AI safety (~60% of external funding). The organization spent ~\\$50M on AI safety in 2024, with 68% going to evaluations/benchmarking, and launched a \\$40M Technical AI Safety RFP in 2025 covering 21 research areas with 2-week EOI response times.",
  "description": "Coefficient Giving (formerly Open Philanthropy) is a major philanthropic organization that has directed over \\$4 billion in grants since 2014, including \\$336+ million to AI safety. In November 2025, Open Philanthropy rebranded to Coefficient Giving and restructured into 13 cause-specific funds open to multiple donors. The Navigating Transformative AI Fund supports technical safety research, AI governance, and capacity building, with a \\$40M Technical AI Safety RFP in 2025. Key grantees include Center for AI Safety (\\$8.5M in 2024), Redwood Research (\\$6.2M), and MIRI (\\$4.1M).",
  "ratings": {
    "novelty": 2.5,
    "rigor": 5,
    "actionability": 6.5,
    "completeness": 6.5
  },
  "category": "organizations",
  "subcategory": "funders",
  "clusters": [
    "community",
    "ai-safety",
    "governance"
  ],
  "metrics": {
    "wordCount": 3923,
    "tableCount": 21,
    "diagramCount": 2,
    "internalLinks": 17,
    "externalLinks": 50,
    "footnoteCount": 0,
    "bulletRatio": 0.09,
    "sectionCount": 39,
    "hasOverview": true,
    "structuralScore": 15
  },
  "suggestedQuality": 100,
  "updateFrequency": 45,
  "evergreen": true,
  "wordCount": 3923,
  "unconvertedLinks": [
    {
      "text": "coefficientgiving.org",
      "url": "https://coefficientgiving.org/",
      "resourceId": "kb-360d9d206d186a79"
    },
    {
      "text": "Coefficient Giving",
      "url": "https://coefficientgiving.org/",
      "resourceId": "kb-360d9d206d186a79"
    },
    {
      "text": "\\$40 million Technical AI Safety RFP",
      "url": "https://coefficientgiving.org/funds/navigating-transformative-ai/request-for-proposals-technical-ai-safety-research/",
      "resourceId": "kb-fd5a7d6cfe6e7e1d"
    },
    {
      "text": "Center for Human-Compatible AI",
      "url": "https://humancompatible.ai/",
      "resourceId": "9c4106b68045dbd6",
      "resourceTitle": "Center for Human-Compatible AI"
    },
    {
      "text": "Future of Humanity Institute",
      "url": "https://www.fhi.ox.ac.uk/",
      "resourceId": "1593095c92d34ed8",
      "resourceTitle": "**Future of Humanity Institute**"
    },
    {
      "text": "analysis of Coefficient Giving's Technical AI Safety funding",
      "url": "https://www.lesswrong.com/posts/adzfKEW98TswZEA6T/brief-analysis-of-op-technical-ai-safety-funding",
      "resourceId": "kb-fb66d73671ec9ced"
    },
    {
      "text": "\\$40 million Request for Proposals",
      "url": "https://coefficientgiving.org/funds/navigating-transformative-ai/request-for-proposals-technical-ai-safety-research/",
      "resourceId": "kb-fd5a7d6cfe6e7e1d"
    },
    {
      "text": "Long-Term Future Fund",
      "url": "https://funds.effectivealtruism.org/funds/far-future",
      "resourceId": "9baa7f54db71864d",
      "resourceTitle": "Long-Term Future Fund"
    },
    {
      "text": "Manifund",
      "url": "https://manifund.org/about/regranting",
      "resourceId": "kb-0c3cd3534fa36003"
    },
    {
      "text": "Survival and Flourishing Fund",
      "url": "https://survivalandflourishing.fund/",
      "resourceId": "a01514f7c492ce4c",
      "resourceTitle": "Survival and Flourishing Fund"
    },
    {
      "text": "EA Funds",
      "url": "https://funds.effectivealtruism.org/funds/far-future",
      "resourceId": "9baa7f54db71864d",
      "resourceTitle": "Long-Term Future Fund"
    },
    {
      "text": "S-process rounds",
      "url": "https://survivalandflourishing.fund/",
      "resourceId": "a01514f7c492ce4c",
      "resourceTitle": "Survival and Flourishing Fund"
    },
    {
      "text": "overview of AI safety funding",
      "url": "https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation",
      "resourceId": "b1ab921f9cbae109",
      "resourceTitle": "An Overview of the AI Safety Funding Situation (LessWrong)"
    },
    {
      "text": "Coefficient Giving Official Website",
      "url": "https://coefficientgiving.org/",
      "resourceId": "kb-360d9d206d186a79"
    },
    {
      "text": "Technical AI Safety Research RFP",
      "url": "https://coefficientgiving.org/funds/navigating-transformative-ai/request-for-proposals-technical-ai-safety-research/",
      "resourceId": "kb-fd5a7d6cfe6e7e1d"
    },
    {
      "text": "An Overview of the AI Safety Funding Situation",
      "url": "https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation",
      "resourceId": "b1ab921f9cbae109",
      "resourceTitle": "An Overview of the AI Safety Funding Situation (LessWrong)"
    },
    {
      "text": "Brief Analysis of OP Technical AI Safety Funding",
      "url": "https://www.lesswrong.com/posts/adzfKEW98TswZEA6T/brief-analysis-of-op-technical-ai-safety-funding",
      "resourceId": "kb-fb66d73671ec9ced"
    },
    {
      "text": "Manifund AI Safety Regranting",
      "url": "https://manifund.org/about/regranting",
      "resourceId": "kb-0c3cd3534fa36003"
    },
    {
      "text": "Long-Term Future Fund",
      "url": "https://funds.effectivealtruism.org/funds/far-future",
      "resourceId": "9baa7f54db71864d",
      "resourceTitle": "Long-Term Future Fund"
    },
    {
      "text": "Survival and Flourishing Fund",
      "url": "https://survivalandflourishing.fund/",
      "resourceId": "a01514f7c492ce4c",
      "resourceTitle": "Survival and Flourishing Fund"
    },
    {
      "text": "Coefficient Giving Website",
      "url": "https://coefficientgiving.org/",
      "resourceId": "kb-360d9d206d186a79"
    },
    {
      "text": "Long-Term Future Fund",
      "url": "https://funds.effectivealtruism.org/funds/far-future",
      "resourceId": "9baa7f54db71864d",
      "resourceTitle": "Long-Term Future Fund"
    },
    {
      "text": "Survival and Flourishing Fund",
      "url": "https://survivalandflourishing.fund/",
      "resourceId": "a01514f7c492ce4c",
      "resourceTitle": "Survival and Flourishing Fund"
    }
  ],
  "unconvertedLinkCount": 23,
  "convertedLinkCount": 0,
  "backlinkCount": 129,
  "hallucinationRisk": {
    "level": "high",
    "score": 75,
    "factors": [
      "biographical-claims",
      "no-citations"
    ]
  },
  "entityType": "organization",
  "redundancy": {
    "maxSimilarity": 17,
    "similarPages": [
      {
        "id": "ltff",
        "title": "Long-Term Future Fund (LTFF)",
        "path": "/knowledge-base/organizations/ltff/",
        "similarity": 17
      },
      {
        "id": "sff",
        "title": "Survival and Flourishing Fund (SFF)",
        "path": "/knowledge-base/organizations/sff/",
        "similarity": 17
      },
      {
        "id": "dustin-moskovitz",
        "title": "Dustin Moskovitz (AI Safety Funder)",
        "path": "/knowledge-base/people/dustin-moskovitz/",
        "similarity": 17
      },
      {
        "id": "rethink-priorities",
        "title": "Rethink Priorities",
        "path": "/knowledge-base/organizations/rethink-priorities/",
        "similarity": 16
      },
      {
        "id": "field-building-analysis",
        "title": "AI Safety Field Building Analysis",
        "path": "/knowledge-base/responses/field-building-analysis/",
        "similarity": 16
      }
    ]
  },
  "changeHistory": [
    {
      "date": "2026-02-18",
      "branch": "claude/audit-webpage-errors-11sSF",
      "title": "Fix factual errors found in wiki audit",
      "summary": "Systematically audited ~35+ high-risk wiki pages for factual errors and hallucinations using parallel background agents plus direct reading. Fixed 13 confirmed errors across 11 files."
    }
  ],
  "coverage": {
    "passing": 8,
    "total": 13,
    "targets": {
      "tables": 16,
      "diagrams": 2,
      "internalLinks": 31,
      "externalLinks": 20,
      "footnotes": 12,
      "references": 12
    },
    "actuals": {
      "tables": 21,
      "diagrams": 2,
      "internalLinks": 17,
      "externalLinks": 50,
      "footnotes": 0,
      "references": 9,
      "quotesWithQuotes": 0,
      "quotesTotal": 0,
      "accuracyChecked": 0,
      "accuracyTotal": 0
    },
    "items": {
      "llmSummary": "green",
      "schedule": "green",
      "entity": "green",
      "editHistory": "green",
      "overview": "green",
      "tables": "green",
      "diagrams": "green",
      "internalLinks": "amber",
      "externalLinks": "green",
      "footnotes": "red",
      "references": "amber",
      "quotes": "red",
      "accuracy": "red"
    },
    "editHistoryCount": 1,
    "ratingsString": "N:2.5 R:5 A:6.5 C:6.5"
  },
  "readerRank": 410,
  "researchRank": 88,
  "recommendedScore": 149.48
}
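As a rough illustration of the build step described above, here is a minimal sketch of how a record like this could be assembled. The file layout, helper names, and merge order are assumptions for illustration (using the common gray-matter and js-yaml parsers), not the wiki's actual build code:

```typescript
// Hypothetical sketch: assembling a page record from its three sources.
// All names and the merge order are assumptions, not the wiki's real API.
import { readFileSync } from "node:fs";
import matter from "gray-matter";            // parses MDX frontmatter
import { load as parseYaml } from "js-yaml"; // parses the Entity YAML

type PageRecord = Record<string, unknown>;

function buildPageRecord(mdxPath: string, entityYamlPath: string): PageRecord {
  // 1. Authored fields (title, quality, llmSummary, ratings) from MDX frontmatter.
  const { data: frontmatter, content } = matter(readFileSync(mdxPath, "utf8"));

  // 2. Entity-level fields (entityType, category, clusters) from the Entity YAML.
  const entity = parseYaml(readFileSync(entityYamlPath, "utf8")) as PageRecord;

  // 3. Metrics derived from the MDX body at build time.
  const metrics = {
    wordCount: content.split(/\s+/).filter(Boolean).length,
    // Crude heuristic: counts pipe-prefixed lines (table rows), not whole tables.
    tableRowCount: (content.match(/^\|/gm) ?? []).length,
  };

  // Later sources win on key collisions; computed metrics are attached last.
  return { ...frontmatter, ...entity, metrics };
}
```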
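The coverage sub-object pairs per-feature targets with actuals and rolls each up to a green/amber/red status. The threshold below is a guess chosen to reproduce the statuses shown in the record (tables 21/16 → green, internalLinks 17/31 → amber, footnotes 0/12 → red), not a confirmed rule:

```typescript
// Assumed rollup from coverage targets/actuals to per-item statuses.
// The 50% amber cutoff is an inference from the data above, not documented.
type Status = "green" | "amber" | "red";

function coverageStatus(actual: number, target: number): Status {
  if (actual >= target) return "green";       // target met or exceeded
  if (actual >= target * 0.5) return "amber"; // at least halfway there
  return "red";                               // below half of target
}

console.log(coverageStatus(21, 16)); // "green"  (tables)
console.log(coverageStatus(17, 31)); // "amber"  (internalLinks, ~55%)
console.log(coverageStatus(0, 12));  // "red"    (footnotes)
```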
External Links

No external links

Backlinks (129)
| id | title | type | relationship |
| --- | --- | --- | --- |
| longtermist-value-comparisons | Relative Longtermist Value Comparisons | analysis | |
| the-foundation-layer | The Foundation Layer | organization | related |
| ajeya-cotra | Ajeya Cotra | person | |
| dustin-moskovitz | Dustin Moskovitz (AI Safety Funder) | person | |
| intervention-portfolio | AI Safety Intervention Portfolio | approach | |
| training-programs | AI Safety Training Programs | approach | |
| field-building-analysis | AI Safety Field Building Analysis | approach | |
| case-for-xrisk | The Case FOR AI Existential Risk | argument | |
| deep-learning-era | Deep Learning Revolution (2012-2020) | historical | |
| ea-epistemic-failures-in-the-ftx-era | EA Epistemic Failures in the FTX Era | concept | |
| ea-institutions-response-to-the-ftx-collapse | EA Institutions' Response to the FTX Collapse | concept | |
| ea-longtermist-wins-losses | EA and Longtermist Wins and Losses | concept | |
| earning-to-give | Earning to Give: The EA Strategy and Its Limits | concept | |
| ftx-collapse-and-ea-public-credibility | FTX Collapse and EA's Public Credibility | concept | |
| ftx-red-flags-pre-collapse-warning-signs-that-were-overlooked | FTX Red Flags: Pre-Collapse Warning Signs That Were Overlooked | concept | |
| longtermism-credibility-after-ftx | Longtermism's Philosophical Credibility After FTX | concept | |
| miri-era | The MIRI Era (2000-2015) | historical | |
| ai-risk-portfolio-analysis | AI Risk Portfolio Analysis | analysis | |
| ai-timelines | AI Timelines | concept | |
| anthropic-pledge-enforcement | Anthropic Founder Pledges: Interventions to Increase Follow-Through | analysis | |
| carlsmith-six-premises | Carlsmith's Six-Premise Argument | analysis | |
| intervention-effectiveness-matrix | Intervention Effectiveness Matrix | analysis | |
| model-organisms-of-misalignment | Model Organisms of Misalignment | analysis | |
| planning-for-frontier-lab-scaling | Planning for Frontier Lab Scaling | analysis | |
| safety-research-allocation | Safety Research Allocation Model | analysis | |
| safety-research-value | Expected Value of AI Safety Research | analysis | |
| safety-researcher-gap | AI Safety Talent Supply/Demand Gap Model | analysis | |
| societal-response | Societal Response & Adaptation Model | analysis | |
| 1day-sooner | 1Day Sooner | organization | |
| 80000-hours | 80,000 Hours | organization | |
| anthropic-investors | Anthropic (Funder) | analysis | |
| anthropic-ipo | Anthropic IPO | analysis | |
| anthropic-valuation | Anthropic Valuation Analysis | analysis | |
| anthropic | Anthropic | organization | |
| arb-research | Arb Research | organization | |
| arc | ARC (Alignment Research Center) | organization | |
| biosecurity-orgs-overview | Biosecurity Organizations (Overview) | concept | |
| blueprint-biosecurity | Blueprint Biosecurity | organization | |
| cais | CAIS (Center for AI Safety) | organization | |
| cea | Centre for Effective Altruism | organization | |
| center-for-applied-rationality | Center for Applied Rationality | organization | |
| centre-for-long-term-resilience | Centre for Long-Term Resilience | organization | |
| chan-zuckerberg-initiative | Chan Zuckerberg Initiative | organization | |
| controlai | ControlAI | organization | |
| cset | CSET (Center for Security and Emerging Technology) | organization | |
| ea-funding-absorption-capacity | EA Funding Absorption Capacity | concept | |
| ea-global | EA Global | organization | |
| ea-shareholder-diversification-anthropic | EA Shareholder Diversification from Anthropic | concept | |
| elicit | Elicit (AI Research Tool) | organization | |
| epoch-ai | Epoch AI | organization | |
| far-ai | FAR AI | organization | |
| fhi | Future of Humanity Institute (FHI) | organization | |
| fli | Future of Life Institute (FLI) | organization | |
| fri | Forecasting Research Institute | organization | |
| frontier-model-forum | Frontier Model Forum | organization | |
| ftx-collapse-ea-funding-lessons | FTX Collapse: Lessons for EA Funding Resilience | concept | |
| ftx-future-fund | FTX Future Fund | organization | |
| ftx | FTX (cryptocurrency exchange) | organization | |
| funders-overview | Longtermist Funders (Overview) | concept | |
| futuresearch | FutureSearch | organization | |
| giving-pledge | Giving Pledge | organization | |
| giving-what-we-can | Giving What We Can | organization | |
| good-judgment | Good Judgment (Forecasting) | organization | |
| govai | GovAI | organization | |
| hewlett-foundation | William and Flora Hewlett Foundation | organization | |
| ibbis | IBBIS (International Biosecurity and Biosafety Initiative for Science) | organization | |
| johns-hopkins-center-for-health-security | Johns Hopkins Center for Health Security | organization | |
| lesswrong | LessWrong | organization | |
| lionheart-ventures | Lionheart Ventures | organization | |
| longview-philanthropy | Longview Philanthropy | organization | |
| ltff | Long-Term Future Fund (LTFF) | organization | |
| macarthur-foundation | MacArthur Foundation | organization | |
| manifund | Manifund | organization | |
| mats | MATS ML Alignment Theory Scholars program | organization | |
| metaculus | Metaculus | organization | |
| metr | METR | organization | |
| miri | MIRI (Machine Intelligence Research Institute) | organization | |
| nti-bio | NTI \| bio (Nuclear Threat Initiative - Biological Program) | organization | |
| open-philanthropy | Open Philanthropy | organization | |
| openai-foundation | OpenAI Foundation | organization | |
| pause-ai | Pause AI | organization | |
| quri | QURI (Quantified Uncertainty Research Institute) | organization | |
| redwood-research | Redwood Research | organization | |
| rethink-priorities | Rethink Priorities | organization | |
| safety-orgs-overview | AI Safety Organizations (Overview) | concept | |
| secure-ai-project | Secure AI Project | organization | |
| securebio | SecureBio | organization | |
| securedna | SecureDNA | organization | |
| sentinel | Sentinel (Catastrophic Risk Foresight) | organization | |
| sff | Survival and Flourishing Fund (SFF) | organization | |
| swift-centre | Swift Centre | organization | |
| vara | Value Aligned Research Advisors | organization | |
| dan-hendrycks | Dan Hendrycks | person | |
| david-sacks | David Sacks (White House AI Czar) | person | |
| eli-lifland | Eli Lifland | person | |
| eliezer-yudkowsky | Eliezer Yudkowsky | person | |
| helen-toner | Helen Toner | person | |
| holden-karnofsky | Holden Karnofsky | person | |
| __index__/knowledge-base/people | People | concept | |
| nick-beckstead | Nick Beckstead | person | |
| nuno-sempere | Nuño Sempere | person | |
| philip-tetlock | Philip Tetlock (Forecasting Pioneer) | person | |
| robin-hanson | Robin Hanson | person | |
| sam-bankman-fried | Sam Bankman-Fried | person | |
| stuart-russell | Stuart Russell | person | |
| toby-ord | Toby Ord | person | |
| vipul-naik | Vipul Naik | person | |
| ai-forecasting-benchmark | AI Forecasting Benchmark Tournament | project | |
| ai-watch | AI Watch | project | |
| biosecurity-overview | Biosecurity Interventions (Overview) | concept | |
| cooperative-ai | Cooperative AI | approach | |
| donations-list-website | Donations List Website | project | |
| ea-biosecurity-scope | Is EA Biosecurity Work Limited to Restricting LLM Biological Use? | analysis | |
| eliciting-latent-knowledge | Eliciting Latent Knowledge (ELK) | approach | |
| evals | Evals & Red-teaming | safety-agenda | |
| forecastbench | ForecastBench | project | |
| multi-agent | Multi-Agent Safety | approach | |
| org-watch | Org Watch | project | |
| recoding-america | Recoding America | resource | |
| research-agendas | AI Alignment Research Agenda Comparison | crux | |
| scalable-oversight | Scalable Oversight | safety-agenda | |
| state-capacity-ai-governance | State Capacity and AI Governance | concept | |
| technical-research | Technical AI Safety Research | crux | |
| xpt | XPT (Existential Risk Persuasion Tournament) | project | |
| bioweapons | Bioweapons | risk | |
| existential-risk | Existential Risk from AI | concept | |
| architecture | System Architecture | concept | |
| longtermwiki-value-proposition | LongtermWiki Value Proposition | concept | |
| page-types | Page Type System | concept | |