
LongtermWiki Impact Model



LongtermWiki is an AI-assisted knowledge base covering AI safety, longtermist prioritization, and related topics. This model attempts to estimate its potential value creation using base rates from comparable interventions rather than inside-view reasoning.

Core Question: How much value does LongtermWiki create, and how does this compare to alternative uses of resources?

Key Finding: Naive estimates that assume “better information → better decisions” dramatically overstate impact. Base rates from GiveWell, 80k Hours, and think tank research suggest information interventions change behavior far less than expected.

| Dimension | Assessment | Evidence |
|---|---|---|
| Central Value Estimate | $100-500K/yr | Base-rate-grounded Fermi model |
| Primary Pathway | Researcher onboarding | Clearest counterfactual impact |
| Secondary Pathway | Funder information improvement | Low confidence in behavioral change |
| High-Variance Pathway | “Inspiration” for epistemic infrastructure | Too speculative to quantify |
| Cost-Effectiveness | $0.25-1.25 per $ vs typical interventions | Uncertain but plausibly positive |

Base Rates: What We Know About Information Interventions


GiveWell provides the clearest data on whether effectiveness research changes funder behavior:

| Metric | Value | Source |
|---|---|---|
| Donors who “choose based on effectiveness research” | 3% | GiveWell surveys |
| Donors aware of GiveWell recommendations | 10.1% | GiveWell awareness studies |
| Conversion rate: awareness → action | ≈30% | Implied |
| Total giving influenced by GiveWell | ≈$500M/yr | GiveWell reports |
| GiveWell operating budget | ≈$30M/yr | 990 filings |

Key insight: Even the most successful effectiveness research organization achieves only 3% behavioral change among its target audience, and this after 15+ years of operation with significant resources.

Application to LongtermWiki: If LongtermWiki achieved GiveWell-level penetration in the AI safety funding space (≈$300M/yr), it might influence 3% × $300M = $9M/yr in decisions. But LongtermWiki does not have GiveWell-level quality or reach, so realistic penetration is likely 0.1-1%, suggesting $300K-3M/yr in influenced decisions (not improved decisions).
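
As a sanity check, the arithmetic as a minimal sketch; the $300M/yr field size and the 0.1-1% penetration range are the assumptions stated above, not measured quantities:

```python
# GiveWell-anchored penetration estimate. All inputs are the
# assumptions from the paragraph above.
FIELD_FUNDING = 300e6          # AI safety funding per year (field estimate)
GIVEWELL_BEHAVIOR_RATE = 0.03  # 3% of donors act on effectiveness research

# Upper bound: GiveWell-level penetration of the field
print(f"GiveWell-level: ${FIELD_FUNDING * GIVEWELL_BEHAVIOR_RATE / 1e6:.0f}M/yr influenced")

# Realistic penetration for a newer, smaller resource
for penetration in (0.001, 0.01):  # 0.1% and 1%
    print(f"{penetration:.1%} penetration: "
          f"${FIELD_FUNDING * penetration / 1e6:.1f}M/yr influenced (not improved)")
```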

80,000 Hours tracks “significant plan changes” attributable to their advice:

| Metric | Value | Source |
|---|---|---|
| Significant plan changes/year | ≈107 | 80k Hours impact reports |
| Definition of “significant” | ≥20% career shift attributable to 80k | Self-reported |
| Users engaging with content | ≈100,000/yr | Traffic estimates |
| Conversion rate: engagement → plan change | ≈0.1% | Implied |
| Operating budget | ≈$4M/yr | 990 filings |
| Cost per plan change | ≈$40K | Budget ÷ changes |

Key insight: Even excellent career advice changes behavior in only 0.1% of readers. Self-reported attribution is likely inflated.

Application to LongtermWiki: If LongtermWiki had 10,000 engaged users/year and achieved 80k-level conversion, that’s ~10 “significant” decision changes. At $50K average impact per decision change, that’s $500K/yr. But LongtermWiki likely has fewer users and lower conversion.
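
The same arithmetic, with the paragraph's assumptions made explicit (engaged users, conversion rate, and value per decision change are all illustrative):

```python
# 80k-anchored conversion estimate; all inputs are assumptions
# from the paragraph above.
engaged_users = 10_000      # assumed engaged LongtermWiki users per year
conversion_rate = 0.001     # ~0.1%, implied by 80k Hours (107 / 100,000)
value_per_change = 50_000   # assumed average impact per decision change

decision_changes = engaged_users * conversion_rate
print(f"~{decision_changes:.0f} decision changes/yr "
      f"-> ${decision_changes * value_per_change:,.0f}/yr")
# ~10 decision changes/yr -> $500,000/yr
```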

Research on think tank policy influence is sobering:

| Finding | Source |
|---|---|
| 77% of think tanks claim policy influence, but causal evidence is weak | McGann (2019) |
| Policy changes attributed to specific research are rare and hard to verify | Rich (2004) |
| Think tank influence is mediated by relationships, not publications | Abelson (2009) |
| Congressional staff report using think tank research for “ammunition,” not “education” | Weiss (1991) |

Key insight: Think tanks provide legitimacy and talking points, not decision-driving analysis. Policymakers who cite research were usually already inclined toward that position.

Application to LongtermWiki: Policy influence pathway should be discounted heavily. LongtermWiki might provide “ammunition” for already-aligned actors but is unlikely to change minds.

Open Philanthropy, the largest AI safety funder, describes its decision-making process:

| Aspect | Reality |
|---|---|
| Primary input | Program officer judgment and relationships |
| Role of external research | Supplementary, not determinative |
| Decision style | “Hits-based” with heavy reliance on internal worldviews |
| Response to external analysis | May inform but rarely drives decisions |

Key insight: Major funders rely on internal expertise and relationships, not external knowledge bases. Even excellent external analysis is filtered through existing worldviews.

Application to LongtermWiki: Direct influence on major funder decisions is likely small. Value more likely comes from indirect channels: improving researcher quality, providing shared vocabulary, etc.

Pathway 1: Researcher Onboarding (Primary)


This is likely LongtermWiki’s clearest counterfactual impact.

| Parameter | Estimate | Reasoning |
|---|---|---|
| New AI safety researchers/year | ≈200 | Field growth estimates |
| Current onboarding time | 6-12 months to productivity | Researcher interviews |
| LongtermWiki’s reach | 20-40% of new researchers | Optimistic given competition with AI Safety Fundamentals, Alignment Forum |
| Time reduction if used | 1-2 months | Modest improvement, not transformation |
| Researchers actually affected | 40-80 | 200 × 20-40% reach |
| Time saved per researcher | 1.5 months | Midpoint of 1-2 months |
| Value of researcher time | $8K/month | Junior researcher cost |
| Total time value | $480K-960K/yr | 40-80 × 1.5 × $8K |
| Counterfactual adjustment | 50% | Would partially upskill anyway |
| Net value | $240K-480K/yr | After counterfactual adjustment |

Confidence: Medium. This pathway has clearer counterfactual than others.
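
A minimal sketch of the table's arithmetic, using its own estimates:

```python
# Pathway 1: researcher onboarding. All inputs are the table's estimates.
new_researchers = 200
months_saved = 1.5              # midpoint of the 1-2 month reduction
value_per_month = 8_000         # junior researcher cost, $/month
counterfactual_discount = 0.5   # would partially upskill anyway

for reach in (0.20, 0.40):      # share of new researchers actually affected
    affected = new_researchers * reach
    gross = affected * months_saved * value_per_month
    print(f"reach {reach:.0%}: {affected:.0f} researchers, "
          f"gross ${gross:,.0f}, net ${gross * counterfactual_discount:,.0f}/yr")
# reach 20%: 40 researchers, gross $480,000, net $240,000/yr
# reach 40%: 80 researchers, gross $960,000, net $480,000/yr
```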

Pathway 2: Funder Information Improvement (Secondary)

| Parameter | Estimate | Reasoning |
|---|---|---|
| AI safety funding/year | $300M | Field estimates |
| Funders who might use LongtermWiki | 10-20% | Optimistic |
| Funding “influenced” | $30-60M/yr | $300M × 10-20% |
| Base rate: information → behavior change | 3% | GiveWell data |
| Decisions actually changed | $1-2M/yr | $30-60M × 3% |
| Quality of change | 20% improvement | Modest, not dramatic |
| Net value | $200K-400K/yr | Decisions changed × improvement |

Confidence: Low. The “information → behavior” chain is weak based on base rates.
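
The same chain in code; note the double discount (behavior-change base rate, then quality of change). The table rounds the resulting $180-360K to $200-400K:

```python
# Pathway 2: funder information improvement. Inputs are the table's estimates.
field_funding = 300e6
behavior_change_rate = 0.03   # GiveWell base rate: information -> behavior
improvement = 0.20            # assumed quality gain on changed decisions

for reach in (0.10, 0.20):    # share of funders who might use LongtermWiki
    influenced = field_funding * reach
    changed = influenced * behavior_change_rate
    net = changed * improvement
    print(f"reach {reach:.0%}: ${influenced/1e6:.0f}M influenced, "
          f"${changed/1e6:.1f}M changed, ~${net/1e3:.0f}K/yr net")
# reach 10%: $30M influenced, $0.9M changed, ~$180K/yr net
# reach 20%: $60M influenced, $1.8M changed, ~$360K/yr net
```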

Pathway 3: Field Coordination & Vocabulary

| Parameter | Estimate | Reasoning |
|---|---|---|
| Value of shared vocabulary | Hard to quantify | Enables better disagreement |
| Reduced duplication | $50-100K/yr equivalent | Some analyst time saved |
| Better gap identification | $50-100K/yr equivalent | Maybe one project better targeted |
| Net value | $100-200K/yr | Highly uncertain |

Confidence: Very low. Real but hard to measure.

Pathway 4: Policy Influence

| Parameter | Estimate | Reasoning |
|---|---|---|
| P(LongtermWiki cited in policy) | 5-15% | Low penetration expected |
| P(citation influences decision) | 5-10% | Base rates suggest low |
| P(influenced decision is good) | 60-70% | Some net improvement if any |
| Expected policy value | $0-100K/yr | Very low base rates |
| Net value | $0-100K/yr | Essentially speculative |

Confidence: Very low. Think tank research suggests minimal causal impact.
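
The chain multiplied out; the $10M conditional value of one improved policy decision is a hypothetical placeholder, since the table gives only the final $0-100K range:

```python
# Pathway 4: policy influence as a conjunctive chain. The conditional
# value is a hypothetical placeholder for illustration.
conditional_value = 10e6  # hypothetical value of one improved policy decision

low = 0.05 * 0.05 * 0.60 * conditional_value   # pessimistic ends of the ranges
high = 0.15 * 0.10 * 0.70 * conditional_value  # optimistic ends
print(f"expected policy value: ${low/1e3:.0f}K - ${high/1e3:.0f}K/yr")
# expected policy value: $15K - $105K/yr, consistent with the $0-100K range above
```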

Pathway 5: “Inspiration” for Epistemic Infrastructure


This is the highest-variance pathway and the hardest to estimate.

| Parameter | Estimate | Reasoning |
|---|---|---|
| P(relevant person sees LongtermWiki) | 20-40% | If we actively promote |
| P(they find it compelling) | 10-30% | Quality-dependent |
| P(it influences their decisions) | 5-15% | Idea may already exist independently |
| P(resulting action is valuable) | 50-70% | Uncertain what “inspired” action looks like |
| Conditional value if chain completes | $5-50M+ | Wide range |
| Expected value | Highly uncertain | Too many conjunctive probabilities |

Confidence: Cannot reliably estimate. Include in sensitivity analysis but not central estimate.
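
One way to see why: a Monte Carlo sketch that samples each link uniformly over the table's ranges. The uniform distributions and independence between links are assumptions made for illustration:

```python
# Pathway 5: Monte Carlo over the table's ranges. Uniform distributions
# and independence between links are assumptions, not model claims.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
p_chain = (rng.uniform(0.20, 0.40, n)    # P(relevant person sees it)
           * rng.uniform(0.10, 0.30, n)  # P(they find it compelling)
           * rng.uniform(0.05, 0.15, n)  # P(it influences their decisions)
           * rng.uniform(0.50, 0.70, n)) # P(resulting action is valuable)
value = rng.uniform(5e6, 50e6, n)        # conditional value if chain completes

print(f"P(chain completes): ~{p_chain.mean():.2%}")              # ~0.36%
print(f"mean expected value: ~${(p_chain * value).mean()/1e3:.0f}K/yr")  # ~$100K/yr
```

The mean lands near $100K/yr, but it is entirely an artifact of the assumed distributions; that is why this pathway stays out of the central estimate.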

Summary: Value by Pathway

| Pathway | Low | Central | High | Confidence |
|---|---|---|---|---|
| Researcher onboarding | $100K | $300K | $600K | Medium |
| Funder information | $50K | $200K | $500K | Low |
| Field coordination | $25K | $100K | $300K | Very Low |
| Policy influence | $0 | $25K | $100K | Very Low |
| Subtotal (quantifiable) | $175K | $625K | $1.5M | Low-Medium |
| Inspiration pathway | ??? | ??? | ??? | Unquantifiable |

Central estimate: $100-500K/yr after accounting for uncertainty and optimism bias.
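
The subtotal is a straight sum over the quantifiable rows; the $100-500K/yr central estimate then discounts this further for optimism bias:

```python
# Summing the quantifiable pathways (low, central, high) from the table.
pathways = {
    "Researcher onboarding": (100e3, 300e3, 600e3),
    "Funder information":    (50e3, 200e3, 500e3),
    "Field coordination":    (25e3, 100e3, 300e3),
    "Policy influence":      (0.0, 25e3, 100e3),
}
low, central, high = (sum(v[i] for v in pathways.values()) for i in range(3))
print(f"subtotal: ${low/1e3:.0f}K / ${central/1e3:.0f}K / ${high/1e6:.1f}M")
# subtotal: $175K / $625K / $1.5M
```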


Why Is This Estimate So Much Lower?

The value proposition document suggested a $15-40M/yr central estimate. Why is this Fermi model 50-100x lower?

Error 1: Assuming Information Changes Behavior

| Naive assumption | Reality (base rates) |
|---|---|
| “If funders have better info, they’ll make better decisions” | 3% of donors change behavior based on effectiveness research (GiveWell) |
| “Policy staff will use our analysis” | Think tanks rarely demonstrate causal policy impact |
| “Researchers will use our onboarding materials” | 0.1% conversion rate for career advice (80k Hours) |

Error 2: Ignoring Counterfactuals

| Claim | Counterfactual question |
|---|---|
| “LongtermWiki saves researcher time” | Would they not learn this from other sources? |
| “LongtermWiki enables better funder decisions” | Are funders actually information-constrained? |
| “LongtermWiki creates shared vocabulary” | Does the Alignment Forum already serve this function? |

Most benefits are partially counterfactual—the impact would occur through other channels without LongtermWiki.

Error 3: Multiplying Conjunctive Probabilities

| Pathway | Probability chain |
|---|---|
| Anthropic “inspiration” | P(sees) × P(compelled) × P(acts) × P(valuable) ≈ 0.05-1.3% |
| Policy influence | P(reaches policy) × P(influences decision) × P(good decision) ≈ 0.2-1% |

Multiplying many uncertain probabilities yields very low expected values, even with high conditional values.
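
Checking both chains against the table's ranges:

```python
# Conjunctive chains collapse quickly: multiply each link's low and high ends.
from math import prod

chains = {
    "Anthropic 'inspiration'": ((0.20, 0.10, 0.05, 0.50), (0.40, 0.30, 0.15, 0.70)),
    "Policy influence":        ((0.05, 0.05, 0.60), (0.15, 0.10, 0.70)),
}
for name, (low, high) in chains.items():
    print(f"{name}: {prod(low):.2%} - {prod(high):.2%}")
# Anthropic 'inspiration': 0.05% - 1.26%
# Policy influence: 0.15% - 1.05%
```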

Error 4: Conflating “Influenced” with “Improved”


Even if LongtermWiki influences $30M in decisions:

  • Most “influenced” decisions were already trending that direction
  • Influence doesn’t mean improvement (could be neutral or negative)
  • Measurement is confounded (users seek confirmation, not education)
Cost-Effectiveness Comparison

| Intervention | Annual Cost | E[Impact] | Impact/$ |
|---|---|---|---|
| LongtermWiki (2 FTE) | $400K | $100-500K | $0.25-1.25 |
| GiveWell operations | $30M | $500M influenced | ≈$17 |
| 80,000 Hours | $4M | ≈$5M value of plan changes | ≈$1.25 |
| Direct AI safety research (per researcher) | $200K | $0.5-2M | $2.50-10 |
| Grantmaking (per $ moved) | $0.05-0.10 | $1 moved | $10-20 leverage |

Interpretation: LongtermWiki’s cost-effectiveness is uncertain but plausibly competitive with other information interventions. It is likely less cost-effective than direct research or grantmaking if those options are available.

Key Cruxes

| Crux | If True → Impact | If False → Impact | Current Belief |
|---|---|---|---|
| Information changes funder behavior | $500K+/yr from funder pathway | $100K/yr (mostly onboarding) | 20% true |
| LongtermWiki is a unique resource | Higher counterfactual value | Lower (other sources substitute) | 40% true |
| “Inspiration” pathway is real | Could be $1M+ | Negligible | 10-20% real |
| Quality can be maintained | Sustained value | Value decays over 2-3 years | 50% maintainable |
| AI safety is information-constrained | Information interventions valuable | Resources better spent elsewhere | 30% constrained |
Limitations

  1. Self-assessment bias: This model is produced by LongtermWiki, creating an incentive to underestimate (for credibility) or overestimate (for motivation)

  2. Base rate generalization: GiveWell/80k Hours may not transfer to AI safety funding context

  3. Unmeasurable pathways: “Inspiration” and “coordination” benefits are real but hard to quantify

  4. Temporal dynamics: Value may be front-loaded (early field benefits most) or back-loaded (compounding effects)

  5. Reference class selection: Different reference classes (encyclopedia, think tank, community wiki) yield different estimates

Evidence That Would Increase the Estimate

| Evidence | Implication |
|---|---|
| Funders report using LongtermWiki in actual decisions | Direct behavioral change |
| Significant user growth beyond the AI safety community | Broader reach |
| Demonstrated policy citations | Policy pathway becomes real |
| Anthropic or a similar org builds on the concept | “Inspiration” pathway is validated |

Evidence That Would Decrease the Estimate

| Evidence | Implication |
|---|---|
| User research shows researchers prefer other resources | Onboarding pathway weakens |
| Funder interviews show no behavior change | Funder pathway is essentially zero |
| Content quality degrades | All pathways weaken |
| Better alternatives emerge | Counterfactual value drops |

Recommendations

Given this analysis:

  1. Primary focus should be researcher onboarding — this has clearest counterfactual impact

  2. Funder influence claims should be modest — base rates suggest limited behavioral change

  3. Policy pathway should be de-prioritized — unless strong relationships exist

  4. “Inspiration” pathway is worth trying — but shouldn’t drive resource allocation

  5. Track actual behavior change, not just usage — pageviews don’t equal impact

Related

  • LongtermWiki Value Proposition — Strategic analysis and pathways
  • Anthropic Impact Assessment — Similar impact modeling approach
  • AI Transition Model — Broader framework