Longterm Wiki
Updated 2026-03-12
Summary

Relative value framework comparing longtermist funding vehicles to a GiveWell reference. Key findings: (1) Coefficient Navigating TAI Fund is ~50x–500,000x per GEU vs global health under longtermist priors; (2) LTFF is 0.3x–10x per dollar relative to Coefficient TAI, higher variance; (3) Anthropic stock (secondary market) has near-zero to slightly negative expected value vs direct safety funding; (4) 1 year before TAI is worth ~$1B–$100T in AI safety research equivalents; (5) all comparisons are dominated by P(x-risk) and moral weight of future people as key cruxes.

Relative Longtermist Value Comparisons

Related
Organizations
Coefficient Giving · Long-Term Future Fund (LTFF)

Overview

When comparing longtermist allocations — $1 to Coefficient Giving's Navigating Transformative AI Fund vs. $1 to GiveWell global health, or $1 in Anthropic equity vs. $1 to the Long-Term Future Fund — standard cost-effectiveness analysis breaks down. The estimates span six or more orders of magnitude, some distributions straddle zero (meaning an allocation could be net-negative), and the dominant variables are philosophical rather than empirical.

This page develops a relative value framework using GiveWell-Equivalent Units (GEUs) as the common reference anchor. Rather than converting everything to DALYs or welfare units, it expresses each allocation as a ratio relative to $1 to GiveWell top charities. This makes assumptions explicit, enables structured comparison across worldviews, and is directly actionable for allocation decisions.

Key conclusions:

Allocation | GEU per dollar (90% CI) | Sign certain?
GiveWell global health | 1 (reference) | Yes
Pandemic preparedness | 3 – 300 | Yes
Farm animal welfare | 0.001 – 100 | Yes
Coefficient TAI Fund | 50 – 500,000 | Yes
Long-Term Future Fund | 20 – 2,000,000 | Yes
Anthropic stock (secondary) | −150,000 – 50,000 | No
1 year before TAI | equivalent to $1B – $100T in AI safety | Yes

The Anthropic secondary equity row is unique: under many plausible assumptions it is net-negative relative to direct safety funding, because secondary-market purchases provide no capital to Anthropic but may sustain capability-accelerating valuations.


Part I — The Reference Unit

GiveWell-Equivalent Units (GEUs)

1 GEU = $1 deployed to GiveWell top-recommended charities (currently Against Malaria Foundation or Malaria Consortium).

This translates to approximately:

  • 0.02–0.05 DALYs averted per dollar (at $20–50/DALY from GiveWell's central estimates)
  • Roughly 1 statistical life per $3,000–8,000 (factoring child mortality weighting)
  • Roughly 10× more effective than unconditional cash transfers at face value

GEUs are useful as a reference because:

  1. GiveWell's methodology is public, debated, and stable enough to serve as a comparison baseline
  2. Many EA donors have direct intuitions about what $1 to AMF "does"
  3. Unlike DALYs, GEUs don't require committing to a specific moral weight for each condition

Important caveat: Using GEUs as a reference does not imply that global health is the correct moral baseline. It is simply the most widely-understood calibration point. Depending on one's ethical framework, 1 GEU could be worth dramatically more or less than any of the longtermist allocations below.

Calibrating the Unit

To make this concrete: $1 to the Against Malaria Foundation in 2025 buys approximately 0.35 insecticide-treated nets, which over five years prevents roughly 0.03 DALYs of child malaria. At scale, $1 million saves roughly 200–500 statistical lives by GiveWell's methodology.
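These conversions are simple reciprocals of the cost figures; a quick sketch (the inputs are the page's rough central estimates, not authoritative GiveWell numbers):

```python
# GEU calibration arithmetic (illustrative; inputs are the rough ranges above).
cost_per_daly = (20, 50)         # $ per DALY averted
cost_per_life = (3_000, 8_000)   # $ per statistical life

# DALYs averted per dollar: reciprocal of cost per DALY (note the endpoint swap).
dalys_per_dollar = (1 / cost_per_daly[1], 1 / cost_per_daly[0])
print(dalys_per_dollar)                              # (0.02, 0.05)

# Statistical lives per $1M at the stated cost-per-life range.
lives_per_million = (1e6 / cost_per_life[1], 1e6 / cost_per_life[0])
print(tuple(round(x) for x in lives_per_million))    # (125, 333)
```

The implied 125–333 lives per $1M is somewhat below the 200–500 quoted above; the gap reflects the child-mortality weighting mentioned in the bullet list.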


Part II — Variable Analysis

2.1 Value of 1% X-Risk Reduction

This is the master variable that governs all longtermist comparisons. Its calculation follows:

V(1% x-risk reduction) = P(AI x-risk) × N_future × w_future × V_person (in GEUs)

Where:

  • P(AI x-risk): probability that AI causes existential catastrophe (Carlsmith: >10%; superforecasters: ≈0.4%; Toby Ord: 10%; AI experts median: ≈5%)
  • N_future: expected number of future people if catastrophe is avoided (~10²² to 10²⁸, depending on space settlement and timescales)
  • w_future: moral weight of future generations relative to present (0 under pure time discount, 1 under person-affecting views that extend far forward)
  • V_person in GEUs: GEU value of one statistical life (≈$4,000–10,000 / person, so 0.5–3 GEUs per person using GiveWell's ≈$3,000–8,000/life)

Working estimate: Under moderate longtermist assumptions (P(x-risk) = 15%, N_future = 10²³, w_future = 0.5, V_person = 1 GEU), the value of 1% x-risk reduction is approximately 10²² GEUs — a number so large it only becomes actionable through tractability: how much does $1 actually move x-risk?
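The working estimate can be reproduced directly (a sketch of the page's own arithmetic under the stated moderate assumptions):

```python
import math

# Moderate longtermist assumptions from the working estimate above.
p_xrisk  = 0.15    # P(AI x-risk)
n_future = 1e23    # expected future people if catastrophe is avoided
w_future = 0.5     # moral weight of future generations
v_person = 1.0     # GEU value per person

value_geu = p_xrisk * n_future * w_future * v_person
print(f"{value_geu:.2e} GEUs")        # 7.50e+21
print(round(math.log10(value_geu)))   # 22, i.e. "approximately 10^22 GEUs"
```

The absolute number only matters once paired with tractability, i.e. how many percentage points of reduction $1 actually buys.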

Key cruxes:

  • Moral weight of future generations is the single most contested variable. Strong time discounting (e.g., 1–3%/year) reduces the value of far-future people to near zero. Longtermism requires assuming this discount is not ethically justified.
  • Person-affecting vs. total-utilitarian views: If we only count people who will exist regardless of our choices (person-affecting), the value of x-risk reduction shrinks dramatically.
  • Astronomical waste: Nick Bostrom's "astronomical waste" framing argues that the long-run future dwarfs the present so overwhelmingly that almost any improvement in x-risk dominates all other considerations.

2.2 Value of 1 Year Before TAI

The AI Acceleration Tradeoff Model estimates that each additional year of preparation time before transformative AI reduces existential risk by approximately 1–4 percentage points (conditional on AI being developed), accounting for:

  • Additional alignment research capacity deployed
  • Governance infrastructure built
  • Safety culture established at frontier labs
  • International coordination progress

Conversion to GEUs: If 1 year reduces x-risk by 2.5 pp on average, and the value of 1% x-risk reduction is V, then:

V(1 year before TAI) = 2.5 × V(1% x-risk reduction)

Dollar equivalent: At current tractability rates ($50M/year of Coefficient TAI funding buys perhaps 0.001–0.01% x-risk reduction per year), one additional year of preparation is equivalent to approximately $1B–$100T in direct AI safety funding. This is why interventions that slow AI timelines or accelerate readiness — compute governance, international treaties, recruitment of safety-focused researchers — are potentially among the most impactful in existence, even when they appear bureaucratic or indirect.

Important caveat: This calculation assumes that safety preparation during a longer timeline is used productively. If a longer timeline primarily benefits capabilities researchers, the x-risk reduction per year may be negative. The sign of the tractability estimate itself is contested.
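A sketch of the dollar-equivalent conversion above (the tractability and per-year figures are the page's, and both are highly uncertain):

```python
annual_funding = 50e6           # Coefficient TAI funding, $/year
pp_per_year    = (0.001, 0.01)  # x-risk pp bought per year of that funding
year_value_pp  = (1.0, 4.0)     # pp of x-risk reduction per extra year before TAI

# Dollars of safety funding equivalent to one extra year of preparation:
low  = year_value_pp[0] / pp_per_year[1] * annual_funding   # 1 pp at high tractability
high = year_value_pp[1] / pp_per_year[0] * annual_funding   # 4 pp at low tractability
print(f"${low:.0e} to ${high:.0e}")   # $5e+09 to $2e+11
```

This point calculation gives roughly $5B to $200B; the page's wider $1B–$100T band presumably allows for more extreme priors on both inputs.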


2.3 $1 to Coefficient Giving — Navigating Transformative AI Fund

Coefficient Giving (formerly Open Philanthropy) is the dominant external funder of AI safety work, spending approximately $50M/year through its Navigating Transformative AI Fund. Key 2024 grants include Center for AI Safety ($8.5M), Redwood Research ($6.2M), and MIRI ($4.1M).

Counterfactual impact chain:

Step | Estimate | Notes
Total AI safety funding | $120–150M/year external | Coefficient ≈ 55% of this
Coefficient TAI Fund share | $50M/year | 2024 committed
% going to direct alignment | ≈32% ($16M) | 68% to evaluations/benchmarking
Marginal x-risk reduction per $M | ≈0.00001–0.001% | Extremely uncertain
Counterfactual (would SFF/LTFF fund this?) | 30–60% would find alternative funding | Reduces counterfactual impact

Estimate: $1 to Coefficient TAI Fund ≈ 50–500,000 GEUs, with the wide range driven almost entirely by one's prior on P(x-risk) and tractability. Under skeptical assumptions (P(x-risk) ≈ 1%, low tractability), AI safety funding is only 10–100x GiveWell. Under concerned assumptions (P(x-risk) ≈ 30%, high tractability), it reaches 10⁶× or beyond.
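The counterfactual chain multiplies through as follows (a sketch using the table's own figures; conversion to GEUs is left out because it requires the Part II value-of-a-percentage-point estimate):

```python
# Counterfactual impact chain for the Coefficient TAI Fund (figures from the
# table above; all are rough estimates, not audited numbers).
fund_per_year  = 50e6            # Coefficient TAI Fund commitments, $/year
direct_share   = 0.32            # fraction going to direct alignment work
counterfactual = (0.40, 0.70)    # fraction NOT replaceable by SFF/LTFF funding
pp_per_million = (1e-5, 1e-3)    # marginal x-risk pp reduction per $1M

direct_dollars = fund_per_year * direct_share            # ≈ $16M/year
effective = (direct_dollars * counterfactual[0],
             direct_dollars * counterfactual[1])         # counterfactually attributable $

# End-to-end x-risk percentage points per year of fund spending:
pp_per_year = (effective[0] / 1e6 * pp_per_million[0],
               effective[1] / 1e6 * pp_per_million[1])
print(f"{pp_per_year[0]:.1e} to {pp_per_year[1]:.1e} pp/year")
```

This yields roughly 6×10⁻⁵ to 10⁻² percentage points per year; multiplying by the GEU value of a percentage point is where the six-orders-of-magnitude spread enters.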

Known limitations:

  • Evaluation-heavy allocation: 68% of Coefficient's 2024 AI safety spending went to evaluations and benchmarking. Critics argue this systematically underfunds direct alignment research — the harder, more speculative work.
  • Fungibility with internal labs: Coefficient funds work that often complements commercial safety teams at Anthropic and DeepMind. The counterfactual impact depends on whether these orgs would do the work anyway.
  • Concentration risk: One organization controlling ≈55% of external AI safety funding creates single points of failure in strategic decisions.

2.4 $1 to the Long-Term Future Fund

The Long-Term Future Fund (LTFF) operates at roughly $8–12M/year, functioning as a regrantor within the EA ecosystem. It specializes in:

  • Individual researchers not affiliated with established orgs
  • Speculative or early-stage projects Coefficient wouldn't fund without a track record
  • Fast turnaround for time-sensitive opportunities (weeks vs. months)
  • Riskier bets at smaller funding sizes ($10K–$200K per grant)

Comparison to Coefficient TAI Fund:

Dimension | Coefficient TAI | LTFF
Size | $50M/year | $8–12M/year
Grant size | $500K–$10M | $10K–$200K
Application process | Months | Weeks
Counterfactual grantees | Many would get Coefficient funding | Most would not get Coefficient funding
Risk tolerance | Low-medium | Medium-high
Variance in outcomes | Lower | Higher

Estimate: $1 to LTFF ≈ 0.3x–10x Coefficient TAI per dollar, distributed log-normally around parity with higher variance. The LTFF is not systematically better or worse than Coefficient per dollar — it fills a different niche. For the marginal longtermist funder:

  • If the work you care about would receive Coefficient funding: donate to Coefficient (more infrastructure)
  • If the work requires fast turnaround or is too speculative for Coefficient: prefer LTFF
  • If you are uncertain: LTFF has better expected value per dollar on the margin because it is smaller and more funding-constrained
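The "0.3x–10x, distributed log-normally around parity" claim can be made concrete by fitting a lognormal to those interval endpoints (an illustrative parameterization, not the page's actual model):

```python
import math

lo, hi = 0.3, 10.0    # 90% CI for the LTFF / Coefficient-TAI per-dollar ratio
z90 = 1.645           # z-score of the 5th/95th percentiles

# Lognormal whose 5th and 95th percentiles land on the interval endpoints.
mu    = (math.log(lo) + math.log(hi)) / 2
sigma = (math.log(hi) - math.log(lo)) / (2 * z90)

median = math.exp(mu)
print(f"median = {median:.2f}, sigma = {sigma:.2f}")   # median = 1.73, sigma = 1.07
```

The median, √(0.3 × 10) ≈ 1.7, sits slightly above parity, consistent with the page's claim of rough parity with much higher variance.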

2.5 $1 in Anthropic Equity (Secondary Market)

This is the most analytically interesting comparison because the sign of the expected value is genuinely uncertain: it cannot be expressed as a simple "N× GiveWell" multiplier, only as a distribution that spans negative values.

What secondary-market purchase does and doesn't do:

Effect | Direction | Magnitude
Provides capital to Anthropic | None | Secondary market; money goes to the selling shareholder
Influences Anthropic's safety priorities | Slightly positive | Extremely small per dollar; voting power negligible
Sustains high Anthropic valuation | Mixed | High valuation → more capital raised, more capabilities work
Accelerates AI capabilities timeline | Slightly negative | Via enabling Anthropic's compute purchases and talent hiring
Aligns EA community incentives with Anthropic success | Mixed | Could create conflicts of interest
EA influence via large ownership bloc | Positive if EA owns >1% | Current EA ownership is well below the threshold for meaningful pressure

The acceleration problem: Anthropic's capabilities research is directly entangled with safety research — the company trains frontier models to study alignment. $1 of secondary-market Anthropic stock primarily benefits the previous shareholder, but sustaining Anthropic's valuation indirectly supports its ability to raise future primary-round capital. This chains into capabilities deployment with uncertain safety implications.

Contrast with primary-round investing: In a primary round (e.g., Series C), Anthropic receives the capital directly and deploys it for compute, talent, and research. The value analysis is more complex — this provides direct resources that fund safety work and capabilities work simultaneously.

Estimate: $1 of Anthropic secondary stock ≈ −150,000 to +50,000 GEUs, with the distribution skewed negative relative to Coefficient TAI Fund. The expected value is likely slightly negative relative to direct AI safety funding — not because Anthropic is bad, but because secondary purchases provide essentially zero direct safety benefit while potentially sustaining valuations that accelerate capabilities deployment.

Important context: This comparison is against direct AI safety funding, not against GiveWell. Relative to doing nothing, Anthropic secondary stock is probably mildly positive (Anthropic does meaningful safety work). The point is that for an EA donor choosing between vehicles, secondary Anthropic equity compares poorly to LTFF or Coefficient TAI.
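To see why the sign is genuinely uncertain, treat the −150,000 to +50,000 range as the 90% interval of a normal distribution (a crude symmetric assumption; the page suggests the real distribution is skewed):

```python
from statistics import NormalDist

lo, hi = -150_000, 50_000   # 90% CI, GEUs per $ vs direct safety funding
z90 = 1.645

mu    = (lo + hi) / 2          # -50,000
sigma = (hi - lo) / (2 * z90)  # ~60,800

# Probability the allocation is net-positive under this approximation.
p_positive = 1 - NormalDist(mu, sigma).cdf(0)
print(f"P(net positive) = {p_positive:.2f}")
```

Under this rough model there is only about a one-in-five chance the allocation beats zero relative to direct safety funding, which is why the row cannot be summarized as a single multiplier.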


2.6 Coefficient Giving's 13 Cause Clusters

In November 2025, Coefficient Giving (formerly Open Philanthropy) rebranded and restructured into 13 cause-specific funds. From a longtermist perspective, the relative value of $1 across these funds varies enormously:

Fund | GEU per dollar (longtermist prior) | GEU per dollar (neartermist prior) | Key crux
Navigating Transformative AI | 50 – 500,000 | 0.5 – 5 | P(AI x-risk), tractability
Pandemic Preparedness | 3 – 300 | 1 – 20 | P(bio x-risk), neglectedness
Global Health & Wellbeing | 1 (reference) | 1 (reference) | —
Farm Animal Welfare | 0.001 – 100 | 5 – 500 | Moral weight of animals
Scientific Research | 0.2 – 10 | 0.3 – 5 | Research compounding rate
Criminal Justice Reform | 0.05 – 2 | 0.1 – 3 | Bounded US-focused impact
Nuclear Weapons Policy | 1 – 500 | 0.5 – 50 | P(nuclear x-risk)
Biosecurity (non-pandemic) | 2 – 200 | 1 – 30 | Near-x-risk importance
Climate & Energy | 0.2 – 20 | 0.5 – 10 | Long-run importance, neglectedness

The starkest divergence is between Navigating Transformative AI and Farm Animal Welfare: longtermists find AI safety dramatically superior, while those focused on present-day animal welfare find the reverse. The comparison is almost entirely a crux about moral circle expansion vs. existential risk priority rather than a tractability or neglectedness argument.

The table above omits the remaining Coefficient funds (in areas like immigration, housing, and democracy) because their longtermist value is sufficiently unclear that producing a confidence interval would require substantially more analysis than this page provides.


Part III — Cross-Comparison Matrix

The table below expresses all pairwise comparisons as ratios. Read across a row to find "how many dollars of column Y equals $1 of row X."

Row ↓ \ Column → | GiveWell | Pandemic Prep | Farm Animal | Coefficient TAI | LTFF | Anthropic Secondary
GiveWell | 1 | 0.003 – 0.3 | 0.01 – 1000 | 0.000002 – 0.02 | 0.0000005 – 0.05 | ambiguous
Pandemic Prep | 3 – 300 | 1 | 0.003 – 200 | 0.00003 – 6 | ambiguous | ambiguous
Coefficient TAI | 50 – 500K | 0.2 – 200K | 500 – 5×10⁸ | 1 | 0.3 – 10 | ≈ −50 to +3
LTFF | 20 – 2M | 0.1 – 1M | 200 – 2×10⁹ | 0.1 – 3 | 1 | ≈ −100 to +5

Ranges represent 90% credible intervals given current knowledge. Ratios marked "ambiguous" span negative values and cannot be expressed as simple multipliers.
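One sanity check the matrix should pass: each pair of mirrored cells is reciprocal, with endpoints swapped. Two spot checks against the printed values:

```python
import math

# Coefficient TAI row, GiveWell column: $1 Coefficient ~ 50 - 500,000 GEUs.
coeff_to_givewell = (50, 500_000)
# GiveWell row, Coefficient TAI column, as printed in the matrix.
givewell_to_coeff = (0.000002, 0.02)

# Inverting one range (and swapping its endpoints) recovers the other.
inverted = (1 / coeff_to_givewell[1], 1 / coeff_to_givewell[0])
assert all(math.isclose(a, b) for a, b in zip(inverted, givewell_to_coeff))

# Same check for LTFF: 20 - 2M GEUs vs 0.0000005 - 0.05.
assert math.isclose(1 / 2_000_000, 0.0000005)
assert math.isclose(1 / 20, 0.05)
print("mirrored cells are reciprocal-consistent")
```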


Squiggle Model

This model encodes the core value chain. All values are expressed in GEUs ($1 to GiveWell = 1). The output dictionary shows the estimated GEU-per-dollar across all six comparison vehicles.

Relative Longtermist Value Comparisons (GEUs per dollar)

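A rough Monte Carlo sketch of the same value chain in Python (the two lognormal intervals are illustrative stand-ins chosen to land near the page's headline range, not the actual Squiggle definitions):

```python
import math
import random

random.seed(0)

def lognormal_ci(lo, hi):
    """Sample from a lognormal whose 90% CI is (lo, hi)."""
    mu = (math.log(lo) + math.log(hi)) / 2
    sigma = (math.log(hi) - math.log(lo)) / (2 * 1.645)
    return random.lognormvariate(mu, sigma)

def coefficient_tai_geus_per_dollar():
    pp_per_dollar = lognormal_ci(1e-11, 1e-9)  # tractability: x-risk pp per $ (stand-in)
    geus_per_pp   = lognormal_ci(1e12, 1e15)   # GEU value of one pp reduced (stand-in)
    return pp_per_dollar * geus_per_pp

samples = sorted(coefficient_tai_geus_per_dollar() for _ in range(20_000))
p5, p50, p95 = samples[1_000], samples[10_000], samples[19_000]
print(f"GEUs per $ (5th / 50th / 95th pct): {p5:,.0f} / {p50:,.0f} / {p95:,.0f}")
```

With these stand-ins the 5th–95th band falls near the 50–500,000 GEU range quoted above; the substantive point is that the output is a distribution spanning several orders of magnitude, not a point estimate.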

Interactive Calculator

Adjust the key parameters — P(x-risk), moral weight, tractability — to see how your assumptions change the relative values. The model is intentionally transparent: the ratio of Coefficient TAI Fund to GiveWell is almost entirely determined by three variables.

Longtermist Value Calculator — adjust parameters to explore

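A deterministic sketch of that three-variable dependence (hypothetical functional form; the point is the linear sensitivity, not the constants):

```python
def tai_vs_givewell_ratio(p_xrisk, moral_weight, tractability):
    """GEUs per $ to the TAI fund relative to GiveWell's 1 GEU per $.

    The ratio is linear in each crux, so a disagreement about any one
    variable translates directly into the final multiplier.
    """
    return p_xrisk * moral_weight * tractability

low  = tai_vs_givewell_ratio(0.01, 0.5, 1e5)  # skeptical P(x-risk)
high = tai_vs_givewell_ratio(0.40, 0.5, 1e5)  # concerned P(x-risk)
print(round(high / low))   # 40, the ~40x swing noted under Key Cruxes
```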

Key Cruxes

Key Questions

  • What probability do you assign to AI causing existential catastrophe? This single parameter moves the Coefficient TAI / GiveWell ratio by ~40x as it goes from 1% to 40%.
  • Do you apply a moral discount rate to future generations? At a 1%/year discount, the value of far-future people collapses to near zero, making longtermist allocations indistinguishable from global health.
  • How tractable is AI safety research at the margin? The tractability estimate (how much does $1M of research reduce x-risk?) is the most uncertain variable and dominates all conclusions.
  • Does secondary-market Anthropic equity meaningfully influence Anthropic's safety priorities? If yes, the estimate for Anthropic stock shifts positive; if no, it remains near-zero or slightly negative.
  • How much of the Long-Term Future Fund's portfolio is counterfactually funded by Coefficient Giving? If LTFF mostly funds things Coefficient would eventually fund anyway, LTFF's counterfactual impact falls substantially.

  • Carlsmith's Six-Premise Argument — the probabilistic decomposition underlying P(x-risk) estimates used here
  • AI Acceleration Tradeoff Model — source for the 1–4 pp x-risk reduction per year of preparation estimate
  • Coefficient Giving — full profile of the dominant AI safety funder
  • Long-Term Future Fund — LTFF profile and grant history
  • Existential Risk from AI — background on the risk landscape informing these estimates
  • Longtermist Funders — funding landscape overview with current spending figures

Related Pages


Analysis

AI Acceleration Tradeoff Model · Anthropic (Funder)

Organizations

Machine Intelligence Research Institute · Centre for Long-Term Resilience

Other

Nick Bostrom · Dustin Moskovitz (AI Safety Funder) · Holden Karnofsky

Concepts

Funders Overview · EA Shareholder Diversification from Anthropic

Historical

The MIRI Era