
LongtermWiki Value Proposition

Status: Working document, not polished
Purpose: Explore ambitious pathways through which LongtermWiki could create substantial value
Last Updated: 2026-02-04


LongtermWiki has the potential to be far more than a reference wiki. At its most ambitious, it could:

  1. Transform longtermist prioritization by giving funders a shared, structured framework for comparing interventions under different worldviews
  2. Unlock new capital by providing the institutional credibility that attracts billionaires and governments who won’t engage with blog posts and forum threads
  3. Demonstrate epistemic infrastructure to Anthropic, potentially catalyzing significant investment in similar tools for civilization-critical decisions
  4. Coordinate a fragmented field by mapping disagreements explicitly and generating actionable research agendas
  5. Accelerate talent development by serving as the “living textbook” that doesn’t exist, reducing researcher onboarding time by months

This document explores each pathway in detail, maps the causal mechanisms, and identifies the key uncertainties that would determine success.


Causal Diagram: How LongtermWiki Creates Value

[Diagram: causal pathways from LongtermWiki to value creation]

Pathway A: Improved Longtermist Prioritization


The core thesis: The AI safety field suffers from poor prioritization legibility. Funders make decisions based on incomplete information, network effects, and intuitions that are hard to communicate or examine. LongtermWiki could make the reasoning transparent and improve marginal allocation decisions.

Coefficient is building infrastructure for better grantmaking. LongtermWiki could integrate with this in several ways:

  • Pre-populated grant evaluation templates: When evaluating a grant for, say, interpretability research, pull relevant context from LongtermWiki’s interpretability pages, including key uncertainties and how different worldviews value the area
  • Intervention comparison views: Side-by-side comparison of interventions under different crux assumptions
  • Track record integration: Connect researcher pages to Coefficient’s track record assessments

What this requires:

  • API access to LongtermWiki data
  • Collaboration with Coefficient team
  • Consistent data structure across intervention pages
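
To make the “consistent data structure” requirement concrete, here is a minimal sketch of what a machine-readable intervention page might look like. The schema, field names, and example values are illustrative assumptions, not an existing LongtermWiki or Coefficient API.

```python
from dataclasses import dataclass

# Hypothetical schema for a machine-readable intervention page. Field names
# are illustrative assumptions, not an existing LongtermWiki data model.
@dataclass
class InterventionPage:
    slug: str                              # e.g. "interpretability"
    summary: str                           # short overview for grant-evaluation templates
    key_uncertainties: list[str]           # open questions funders should weigh
    worldview_valuations: dict[str, str]   # worldview -> qualitative value ("high", "medium", ...)
    related_researcher_pages: list[str]    # hooks for track-record integration
    last_reviewed: str                     # ISO date, useful for staleness tracking

# The kind of record a pre-populated grant evaluation template could pull:
interpretability = InterventionPage(
    slug="interpretability",
    summary="Research aimed at understanding the internal computations of neural networks.",
    key_uncertainties=[
        "Does mechanistic interpretability scale to frontier models?",
        "How much does partial transparency reduce risk from deceptive alignment?",
    ],
    worldview_valuations={"short-timelines": "high", "governance-first": "medium"},
    related_researcher_pages=[],
    last_reviewed="2026-02-04",
)
```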

Expected value if successful: High. Could directly improve allocation of tens of millions of dollars annually.

Currently, Open Phil, SFF, Founders Pledge, EA Funds, and smaller funders each do their own prioritization analysis. This is duplicated work and leads to coordination failures (some areas over-funded, others neglected).

LongtermWiki could serve as a shared knowledge base that:

  • Reduces duplicated analysis work by 50%+
  • Makes it visible which areas are “crowded” vs. “neglected”
  • Enables explicit coordination: “We’ll cover X if you cover Y” (see also: racing dynamics in the funding landscape)

What this requires:

  • Active engagement from 2-3 major funders
  • Trust that the resource is high quality and kept current
  • Some mechanism for tracking funder activity

Expected value if successful: Very high. Could be one of the highest-leverage coordination mechanisms in EA.

Node size/shading indicates annual AI safety spending (darker = more). Edge thickness indicates potential LongtermWiki integration value.

[Diagram: AI safety funding landscape and potential LongtermWiki integration points]

Legend:

  • 🔵 Dark blue/green: Major funders (highest leverage targets)
  • 🟠 Orange: LongtermWiki integration points
  • 🔴 Red: New money opportunities
  • ⚫ Gray: Supporting funders
  • 🟢 Green: Value outcomes

Key insight: Open Phil and SFF control ~90% of longtermist AI safety funding. Direct integration with their workflows would have outsized impact. Coefficient is the natural API integration partner. New billionaire donors represent the highest-variance opportunity.

A3: Worldview-to-Priority Mapping


The most ambitious version: build an interactive tool where funders can input their credences on key cruxes (timelines, deceptive alignment risk, governance tractability, etc.) and get a personalized priority ranking.

This makes implicit reasoning explicit and:

  • Helps funders understand why they disagree with others
  • Surfaces which cruxes matter most for their decisions
  • Enables “what if” analysis: “If I changed my view on X, how would my priorities shift?”
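
Below is a minimal sketch of how such a tool could turn credences into a ranking, assuming a simple linear sensitivity model. The crux names, weights, and interventions are placeholders rather than validated numbers.

```python
# Hypothetical sensitivities: how much each intervention's value depends on each
# crux (positive = more valuable if the crux is likely). Placeholder numbers only.
SENSITIVITIES = {
    "interpretability":   {"short_timelines": 0.6, "deceptive_alignment": 0.8, "governance_tractable": -0.1},
    "compute_governance": {"short_timelines": 0.4, "deceptive_alignment": 0.2, "governance_tractable": 0.9},
    "evals_and_audits":   {"short_timelines": 0.7, "deceptive_alignment": 0.5, "governance_tractable": 0.5},
}

def rank_interventions(credences: dict[str, float]) -> list[tuple[str, float]]:
    """Score each intervention as a credence-weighted sum and return a sorted ranking."""
    scores = {
        name: sum(credences.get(crux, 0.5) * weight for crux, weight in weights.items())
        for name, weights in SENSITIVITIES.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# "What if" analysis: shift one credence and compare the resulting priority order.
baseline = {"short_timelines": 0.4, "deceptive_alignment": 0.6, "governance_tractable": 0.3}
print(rank_interventions(baseline))
print(rank_interventions({**baseline, "governance_tractable": 0.8}))
```

Even a toy model like this makes the “which cruxes matter most” question inspectable: the sensitivity matrix is exactly the object experts would need to debate and validate.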

What this requires:

  • Well-defined crux taxonomy
  • Quantitative models linking cruxes to intervention value
  • User-friendly interface
  • Validation that the tool matches expert intuitions

Expected value if successful: Transformative for the field. But high execution risk—may be too complex to build well.


Pathway B: Unlocking New Capital

The core thesis: There is substantially more capital that could flow to AI safety but doesn’t because the field lacks institutional credibility. Billionaires and governments need to see “real” infrastructure, not blog posts and Twitter threads.

Consider the pitch to a billionaire considering AI safety:

  • Current state: “Here, read these LessWrong posts and 80,000 Hours articles”
  • With LongtermWiki: “Here’s a comprehensive strategic intelligence platform with 400+ pages, causal models, and explicit uncertainty tracking. Spend 4 hours with this before we talk.”

LongtermWiki signals:

  • Institutional seriousness
  • Intellectual rigor
  • Transparency about limitations
  • A field that can build real infrastructure

Potential targets:

  • Tech billionaires not yet engaged (or partially engaged)
  • Family offices looking for cause areas
  • HNWIs who’ve given to climate but not AI safety

What this requires:

  • High-quality landing pages for newcomers
  • Clear “executive summary” content
  • Referrals from trusted intermediaries (e.g., Giving What We Can, advisors)

Expected value if successful: Could unlock $10M-$100M+ over time. Even one additional major donor would be transformative.

Governments are increasingly interested in AI safety but need credible reference materials:

  • EU AI Office needs rapid briefings on technical concepts
  • UK AI Safety Institute needs shared vocabulary with US counterparts
  • Congressional staffers need accessible yet rigorous resources

LongtermWiki could become:

  • A cited reference in government reports (like IPCC citations in climate)
  • A shared baseline for international coordination discussions
  • The “official” starting point for policy staff

Pathway to adoption:

  1. Partner with established policy orgs (CSET, GovAI successors, Gladstone)
  2. Get cited in 2-3 influential reports
  3. Build credibility feedback loop

What this requires:

  • Policy-accessible versions of technical content
  • Relationships with policy organizations
  • Possibly government contracts for maintenance

Expected value if successful: Huge. Government funding for AI safety infrastructure could be in the tens of millions. Plus influence on actual policy.

Policy Credibility Pathway: Concrete Organizations


Node shading indicates influence/importance. Arrows show credibility flow.

[Diagram: credibility flow from LongtermWiki through think tanks to government AI safety bodies]

Legend:

  • 🔵 Dark blue: Key government AI safety bodies (highest priority)
  • 🟠 Orange: LongtermWiki inputs
  • 🔵 Medium blue: High-credibility think tanks (entry points)
  • 🟢 Green: Desired outcomes

Key insight: The path to government adoption runs through established think tanks (CSET, RAND, Gladstone). Getting cited in their reports creates credibility that flows to government bodies. UK AISI and US AISI are the most AI-safety-focused government entities.

There’s something specific about an encyclopedic wiki that signals credibility:

  • Comprehensiveness: “They’ve thought about everything”
  • Structure: “This isn’t just opinions, it’s organized knowledge”
  • Transparency: Explicit about limitations and biases
  • Persistence: Not ephemeral like social media
  • Neutrality-adjacent: Even if opinionated, wiki format suggests fairness

This is distinct from:

  • Blog posts (personal opinions)
  • Academic papers (narrow scope, inaccessible)
  • Think tank reports (often advocacy)

Pathway C: Anthropic Epistemic Infrastructure


The core thesis: Anthropic is building AGI. They should also be building the epistemic infrastructure humanity needs to navigate AGI. LongtermWiki demonstrates what’s possible.

LongtermWiki demonstrates that Claude can:

  • Build sophisticated, interconnected knowledge bases
  • Synthesize across hundreds of sources
  • Maintain consistency and quality at scale
  • Create transparent, well-structured analysis

This is valuable to Anthropic because (see also our Anthropic impact model):

  • It’s a compelling use case for Claude
  • It shows a path to “AI-assisted strategic intelligence”
  • It addresses the critique: “You’re building powerful AI but not the tools to use it wisely”

Narrative:

“You’re building AGI. The most important question isn’t ‘can we build it?’ but ‘can we navigate the transition wisely?’ LongtermWiki is a proof-of-concept for the epistemic infrastructure humanity needs:

  • Structured knowledge bases for civilization-critical decisions
  • Explicit uncertainty and disagreement mapping
  • Worldview-sensitive analysis
  • Living documents that stay current

If this approach is valuable for AI safety, it’s valuable for biosecurity, nuclear risk, climate, pandemic preparedness, and governance writ large.

Anthropic should:

  1. Fund expansion of LongtermWiki as a flagship demo
  2. Build LongtermWiki-style tools into Claude’s capabilities
  3. Create ‘epistemic infrastructure’ products for major institutions”

Tier 1: Funding support (see Anthropic investors and valuation context)

  • Grant to maintain and expand LongtermWiki
  • $500K-$2M would be transformative for the project

Tier 2: Technical integration

  • LongtermWiki as grounding data for Claude’s AI safety reasoning
  • Custom features in Claude for wiki-style knowledge management
  • Partnership on keeping content current

Tier 3: Strategic priority

  • Epistemic infrastructure as a core Anthropic mission
  • LongtermWiki-style tools for governments and institutions
  • Major investment in “AI for collective intelligence”

Expected value if Tier 3: Potentially the single highest-impact pathway. Could reshape how civilization handles complex decisions.

Node shading indicates estimated importance (darker = higher). Edge thickness indicates connection strength.

[Diagram: Anthropic stakeholder map and LongtermWiki assets]

Legend:

  • 🟠 Orange nodes: LongtermWiki assets (what we control)
  • 🔵 Blue nodes: High-importance Anthropic stakeholders/interests
  • 🟢 Green nodes: Outcomes (darker = higher value)
  • ⚫ Gray nodes: Supporting elements

Key insight: The path to Tier 3 likely runs through demonstrating mission credibility to Dario, which requires both high-quality AI safety content and a compelling demo. Jan Leike (alignment lead) is the most natural entry point for Tier 1/2.


Pathway D: Field Coordination

The core thesis: The AI safety field has many smart people who disagree with each other. Much of this disagreement is unproductive—people talking past each other, unclear about cruxes, repeating debates. LongtermWiki could structure this productively.

The Crux Graph could become the canonical map of “where AI safety researchers actually disagree.”

Current state: Disagreements are scattered across papers, Twitter threads, conference conversations. It’s hard to know what the key debates are or what would resolve them.

With LongtermWiki:

  • “We’ve identified 30 key cruxes”
  • Each has explicit operationalization
  • Each has “what would change my mind”
  • Evidence is catalogued for each side
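
One way a single crux entry could be structured, with an explicit operationalization, evidence on each side, and resolution criteria, is sketched below. The example crux, fields, and bounty are illustrative, not actual LongtermWiki content.

```python
# Illustrative shape of one node in a crux graph. The statement, criteria, and
# bounty are placeholders, not actual LongtermWiki content.
example_crux = {
    "id": "crux-example",
    "statement": "Deceptive alignment is likely to arise by default in highly capable models.",
    "operationalization": (
        "By 2030, at least one frontier model is shown to have pursued a goal it "
        "systematically concealed during training-time evaluations."
    ),
    "evidence": {
        "for": [],        # citations supporting the statement
        "against": [],    # citations cutting against it
    },
    "would_change_my_mind": [
        "Interpretability audits of several frontier models find no evidence of goal concealment.",
    ],
    "bounty_usd": 50_000,  # optional: a crux bounty for research that resolves it
}
```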

Potential innovation: Crux bounties. “$50K for research that resolves Crux #7.”

Integrate with forecasting platforms to:

  • Track researcher predictions on key parameters
  • Surface where predictions diverge most
  • Update as evidence comes in

This creates a “collective intelligence” layer on top of the wiki.

The wiki’s structure naturally reveals gaps:

  • Topics with low-quality pages
  • Cruxes with little evidence either way
  • Relationships with high uncertainty

Automated research agenda: “Here are the 20 most important questions where we have the least information.”
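
A sketch of how such an agenda could fall out of the wiki’s own metadata, assuming pages carry rough importance, quality, and evidence scores; the records and scoring rule below are placeholders.

```python
# Sketch of deriving an automated research agenda from wiki metadata.
# Page records and the scoring rule are illustrative placeholders.
pages = [
    {"topic": "deceptive alignment",  "importance": 0.9, "page_quality": 0.3, "evidence_strength": 0.2},
    {"topic": "compute governance",   "importance": 0.7, "page_quality": 0.6, "evidence_strength": 0.5},
    {"topic": "multi-agent dynamics", "importance": 0.6, "page_quality": 0.2, "evidence_strength": 0.1},
]

def gap_score(page: dict) -> float:
    """Important topics with weak pages and thin evidence score as the biggest gaps."""
    return page["importance"] * (1 - page["page_quality"]) * (1 - page["evidence_strength"])

# "The 20 most important questions where we have the least information":
agenda = sorted(pages, key=gap_score, reverse=True)[:20]
for page in agenda:
    print(f"{page['topic']}: gap score {gap_score(page):.2f}")
```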

Expected value if successful: Could significantly accelerate the field’s epistemic progress. High uncertainty on tractability.


Pathway E: Accelerated Talent Development

The core thesis: Getting up to speed in AI safety takes too long. New researchers spend months reading scattered sources. A good onboarding resource could multiply the field’s effective capacity.

Currently, new AI safety researchers need to:

  • Read 50+ papers and blog posts
  • Understand which sources are authoritative
  • Build mental models of the field’s structure
  • Learn which debates matter

This takes 6-18 months to do well.

LongtermWiki could cut this to 2-4 months by providing:

  • Curated paths through the material
  • Clear explanations of what’s contested vs. settled
  • Connections between concepts
  • “If you believe X, read Y next”

Expected value: If AI safety is talent-constrained (it is), accelerating onboarding is high-leverage.

LongtermWiki could be training data for AI systems that need to reason about AI safety:

  • Well-structured, cited, transparent
  • Clear separation of claims and evidence
  • Explicit uncertainty

Potential Anthropic interest: Use LongtermWiki to improve Claude’s AI safety reasoning.

There’s no good AI safety textbook. LongtermWiki could become this:

  • More comprehensive than any book
  • Always current
  • Hyperlinked and explorable
  • Free

Pathway: Partner with AI Safety Fundamentals and other training programs to recommend LongtermWiki as supplementary reading.


Pathway F: Lab and Policy Decision Support

The core thesis: AI labs and policymakers need decision support. LongtermWiki could serve this directly.

Labs face constant prioritization decisions:

  • Should we invest more in interpretability or control?
  • How much red-teaming is enough?
  • Which capabilities should we not release?

LongtermWiki-style analysis could help:

  • “Given your threat model, here’s what the evidence suggests about intervention priorities”
  • Neutral analysis that labs can use internally

What this requires:

  • Trust from labs (hard)
  • Customized views for different threat models
  • Possibly confidential versions

Beyond lending credibility, LongtermWiki could partner with policy bodies directly:

  • Contracted to maintain policy-relevant sections
  • Produce briefings on request
  • Train policy staff

Models to emulate: the standing relationships RAND, Brookings, and CSIS maintain with government.

Partner with established policy organizations:

  • CSET, GovAI successors, Gladstone, RAND
  • LongtermWiki provides research infrastructure
  • They provide credibility and distribution

Mutual benefit: they get a better knowledge base; we get legitimacy and reach.



| Pathway | Expected Impact | Tractability | Dependencies | Recommended Priority |
|---|---|---|---|---|
| A: Prioritization | Very High | Medium | Quality content, funder buy-in | High |
| B: New Capital | Very High | Medium-Low | Credibility, relationships | Medium |
| C: Anthropic | Potentially Transformative | Low-Medium | Anthropic interest | High (asymmetric upside) |
| D: Field Coordination | High | Medium | Expert engagement | Medium |
| E: Onboarding | Medium-High | High | Good content exists | High (low-hanging fruit) |
| F: Lab/Policy | High | Low | Trust, relationships | Low (long-term) |

Recommended focus:

  1. Short-term: Pathway E (onboarding) — most tractable, builds foundation for others
  2. Medium-term: Pathway A (prioritization) — highest certain-impact pathway
  3. Long-shot: Pathway C (Anthropic) — pursue actively given asymmetric upside

Crux 1: Will funders actually use structured analysis?

Optimistic view: Funders are drowning in information and would love structured analysis.

Pessimistic view: Funders make decisions based on relationships and gut feel; structured tools don’t fit their workflow.

How to resolve: Talk to 10 funders. Ask about their actual decision process.

Crux 2: Can we achieve sufficient quality?


Optimistic view: AI-assisted writing plus strong editorial oversight can match or exceed existing resources.

Pessimistic view: Without deep domain expertise, we’ll produce shallow content that experts dismiss.

How to resolve: Expert review of sample pages. Are they useful to people who know the topic?

Crux 3: Is Anthropic interested in epistemic infrastructure?


Optimistic view: This aligns with Anthropic’s mission and Claude’s capabilities.

Pessimistic view: Anthropic has many priorities; this is too niche.

How to resolve: Direct conversation with relevant Anthropic stakeholders.

Crux 4: Can the content be kept current?

Optimistic view: Good structure + AI assistance + limited scope = manageable.

Pessimistic view: Content rot is inevitable; wiki maintenance is a losing battle.

How to resolve: Build staleness tracking from day one. Plan for finite lifespan if needed.

Crux 5: Will people actually use it?

Optimistic view: Build it well, and usage follows. Good content is rare.

Pessimistic view: Information abundance means even good resources get ignored.

How to resolve: User research. Who would use this weekly? Can we name 20 specific people?


Next steps:

  1. User research: Interview 10-15 potential users (funders, researchers, policy people)

    • What information do you wish you had?
    • Would you use X if it existed?
    • How do you make prioritization decisions today?
  2. Funder outreach: Talk to Coefficient, Open Phil program officers, SFF

    • Is there interest in integration?
    • What would make this useful for them?
  3. Anthropic exploration: Reach out to relevant contacts

    • Is there interest in epistemic infrastructure?
    • What would a partnership look like?
  4. Quality validation: Get expert review of 5 high-priority pages

    • Are these useful to people who know the topic?
    • What’s missing?

Potential products to build:

  1. Prioritization tool MVP: Build a minimal worldview-to-priority mapping
  2. Onboarding pathway: Curated reading path for new researchers
  3. Funder dashboard: Custom views for grantmaker needs
  4. Policy brief template: Accessible versions of key topics

Success metrics:

  • 3+ funders actively using LongtermWiki in decision-making
  • 5+ expert endorsements/testimonials
  • Referenced in 2+ policy documents or reports
  • 1,000+ monthly active users
  • Active conversation with Anthropic about partnership

If AI safety spending is currently suboptimal (likely true), then improving allocation is high-leverage. The field spends roughly $100-300M annually. If LongtermWiki improves allocation by even 5%, that’s $5-15M of additional effective spending per year—likely more valuable than most direct work.

If LongtermWiki unlocks new capital (billionaires, governments), the impact compounds. $100M of new funding, even if less efficiently allocated, could double the field’s resources.

If Anthropic (or similar actors) build epistemic infrastructure as a priority, the impact extends far beyond AI safety. Better tools for civilization-critical decisions could matter for every existential risk.

The expected value case for investing in LongtermWiki is strong, even under significant uncertainty about which pathway succeeds.


Assuming LongtermWiki is well executed, how much value could it create for the world? The focus here is on counterfactual impact, not value captured by the project.

Theory of Change: Epistemic Infrastructure → Better Outcomes

| Stage | Mechanism | Example |
|---|---|---|
| 1. Knowledge synthesis | Scattered info → structured, queryable | “What’s the state of interpretability research?” answerable in minutes, not hours |
| 2. Uncertainty mapping | Implicit disagreements → explicit cruxes | “We disagree because you think X and I think Y” becomes visible |
| 3. Prioritization clarity | Gut feelings → transparent reasoning | Funders can see why different worldviews lead to different priorities |
| 4. Coordination enabling | Siloed analysis → shared knowledge base | Reduces duplication, enables “I’ll cover X if you cover Y” |
| 5. Legitimacy building | Blog posts → institutional infrastructure | Opens doors to governments, new donors, skeptical observers |

Impact Category 1: Better Allocation of Existing AI Safety Funding


Baseline: ≈$300M/yr flows to AI safety. Current allocation is based on networks, intuitions, and fragmented analysis.

| Improvement Mechanism | How LW Helps | Estimated % Improvement | Value (on $300M) |
|---|---|---|---|
| Reduced analytical duplication | Shared reference eliminates redundant research | 20-40% analyst time saved | $1-3M/yr equivalent |
| Gap identification | Visible “crowded” vs “neglected” areas | 2-5% funds moved to better uses | $6-15M/yr effective |
| Explicit worldview-priority mapping | “If you believe X, fund Y” becomes tractable | 1-3% better targeting | $3-9M/yr effective |
| Faster grant evaluation | Pre-existing context for each topic | 30-50% time reduction | $0.5-1M/yr equivalent |
| Cross-funder coordination | “We’ll cover interpretability, you cover governance” | 1-2% reduced overlap | $3-6M/yr effective |

Total for existing funders: If LongtermWiki improves allocation quality by 3-10%, that’s $9-30M/yr in “effective dollars” (money achieving more impact).

Impact Category 2: New Capital for AI Safety


Baseline: Many potential funders are not engaged because the field lacks legible, credible infrastructure.

| Source | Barrier LW Addresses | Potential New $/yr | P(Engagement) | E[Value] |
|---|---|---|---|---|
| Tech billionaires (new) | “Show me something serious, not blog posts” | $10-50M | 5-15% | $0.5-7.5M |
| Family offices | Need institutional-grade analysis | $2-10M | 10-20% | $0.2-2M |
| Government AI safety budgets | Need citable, credible references | $10-50M | 10-20% | $1-10M |
| Corporate giving (beyond labs) | Need clear landscape view | $2-10M | 5-10% | $0.1-1M |

Total new capital: $2-20M/yr in new funding entering the field, with high uncertainty.
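
The E[Value] column above appears to be the potential new funding multiplied by the probability of engagement, taken at both ends of each range; a quick check of that arithmetic (the helper below is just an illustration, not part of any model in the document):

```python
# Expected value of new capital: funding range x engagement-probability range,
# which reproduces the E[Value] column in the table above.
def expected_value_range(funding_musd: tuple[float, float], p_engage: tuple[float, float]) -> tuple[float, float]:
    return (funding_musd[0] * p_engage[0], funding_musd[1] * p_engage[1])

print(expected_value_range((10, 50), (0.05, 0.15)))  # tech billionaires -> (0.5, 7.5), i.e. $0.5-7.5M
print(expected_value_range((10, 50), (0.10, 0.20)))  # government budgets -> (1.0, 10.0), i.e. $1-10M
```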

Impact Category 3: Research Acceleration

| Mechanism | Current Problem | LW Solution | Impact Estimate |
|---|---|---|---|
| Onboarding time | 6-18 months to get productive | Curated paths, 2-4 months | 50-100 researcher-months/yr saved |
| Literature navigation | Hours finding relevant work | Cross-linked, comprehensive | 20-30% time saved on lit review |
| Avoiding duplication | Unknowingly repeat others’ work | Visible prior work | 5-10 projects/yr not wasted |
| Gap identification | Hard to see what’s missing | Explicit gap analysis | Better research targeting |
| Disagreement clarity | Talking past each other | Explicit cruxes | More productive debates |

Researcher-equivalent value: 50-100 months saved × $20K/month = $1-2M/yr equivalent

Research quality: If better information leads to even 1-2% faster alignment progress, this could be worth $10-100M+ depending on timeline beliefs.

Impact Category 4: Policy & Governance Improvement

| Mechanism | Current Problem | LW Solution | Impact Estimate |
|---|---|---|---|
| Rapid briefings | Staffers piece together from scattered sources | Comprehensive reference | Hours → minutes per topic |
| Shared vocabulary | Terms differ across jurisdictions | Common definitions | Better international coordination |
| Technical accuracy | Often rely on industry-biased sources | Independent analysis | More informed regulation |
| Anticipatory policy | Hard to see emerging issues | Forward-looking models | Better timed interventions |

Policy value: Highly uncertain. If LongtermWiki contributes to one significantly better policy decision (regulation, international agreement, lab practice), value could be $10M-$1B+. Assign $1-10M E[V] given low probability of direct influence.

Impact Category 5: Inspiring Broader Epistemic Infrastructure


The Anthropic Angle: The most ambitious value isn’t Anthropic using LongtermWiki directly—it’s LongtermWiki demonstrating what’s possible and inspiring Anthropic (or similar actors) to build epistemic infrastructure at scale.

What LongtermWiki demonstrates:

  • AI can build sophisticated, interconnected knowledge bases
  • Uncertainty and disagreement can be mapped explicitly
  • “Living documents” can stay current and improve over time
  • Transparent methodology builds trust

If this inspires Anthropic to build epistemic tools:

| Potential Anthropic Action | Value if Happens | P(Happens) | E[Value] |
|---|---|---|---|
| Builds “epistemic Claude” features (uncertainty tracking, structured knowledge) | $50-200M product value + societal benefit | 5-15% | $2.5-30M |
| Creates epistemic infrastructure for governments | Better decisions on $T of spending | 2-5% | $5-50M+ |
| Makes this a standard Claude capability | Civilizational upgrade in decision-making | 1-3% | Unbounded |
| Other labs copy the approach | Multiplied across AI ecosystem | 5-10% | $5-20M |

Total “inspiration” value: Very uncertain, but potentially $5-50M+ E[V] if LongtermWiki successfully demonstrates the concept.

What would make this work:

  • LongtermWiki needs to be genuinely impressive—a clear demonstration, not a mediocre wiki
  • Someone at Anthropic needs to see it and think “we should build something like this”
  • The timing needs to align with Anthropic’s product/mission priorities

| Impact Category | Conservative | Central | Optimistic | Confidence |
|---|---|---|---|---|
| Better allocation (existing $300M) | $9M/yr | $15M/yr | $30M/yr | Medium |
| New capital attracted | $2M/yr | $8M/yr | $20M/yr | Low |
| Research acceleration | $1M/yr | $5M/yr | $15M/yr | Medium |
| Policy improvement | $1M/yr | $3M/yr | $10M/yr | Very Low |
| Inspiring epistemic infrastructure | $2M/yr | $10M/yr | $50M+/yr | Very Low |
| Total | $15M/yr | $41M/yr | $125M+/yr | Low-Medium |

Note: Categories are not fully independent—success in one often enables others.

| Factor | If Weak | If Strong | Importance |
|---|---|---|---|
| Content quality | $5M/yr | $50M/yr | Critical |
| Funder adoption | $3M/yr | $30M/yr | Critical |
| Anthropic/lab interest in concept | $10M/yr | $100M+/yr | High variance |
| Policy adoption | $10M/yr | $50M+/yr | High variance |
| Maintenance/freshness | $5M/yr (decaying) | $30M/yr | High |
| Research community adoption | $5M/yr | $15M/yr | Medium |

Key insight: The critical drivers are content quality and funder adoption. Without high-quality content, nothing else works. The high-variance drivers (Anthropic inspiration, policy) could multiply impact 10x but are harder to predict or control.

| Approach | Annual Cost | E[Impact/yr] | Impact per $1 Spent |
|---|---|---|---|
| LongtermWiki (2 FTE) | $400K | $15-40M | $37-100 |
| Typical AI safety research org | $2M | $5-20M | $2.5-10 |
| Typical grantmaking overhead | $1M | Enables $20M grants | $20 leverage |
| Direct AI safety research | $200K/researcher | $0.5-5M | $2.5-25 |

If these estimates are roughly right, epistemic infrastructure is highly cost-effective. The main risk is that the estimates are wrong—specifically, that better information doesn’t actually lead to better decisions.

| Pathway | Primary Impact | Value Created | Confidence | Key Dependency |
|---|---|---|---|---|
| A: Funder prioritization | Better $ allocation | $10-30M/yr effective | Medium | Funder engagement |
| B: New capital | More total resources | $2-20M/yr new $ | Low | Credibility demonstration |
| C: Anthropic inspiration | Epistemic tools at scale | $5-50M+ potential | Very Low | Someone sees it, acts |
| D: Field coordination | Less duplication, clearer debates | $2-10M/yr | Medium | Expert buy-in |
| E: Research acceleration | Faster progress | $1-15M/yr equivalent | Medium | Content quality |
| F: Policy influence | Better decisions | $1-50M+ | Very Low | Adoption pathway |
Key uncertainties:

  1. Does better information actually change behavior? Funders might continue making decisions based on relationships regardless of available analysis.

  2. Can quality be maintained at scale? Initial quality is achievable; sustaining it is the real test.

  3. Is the “inspiration” pathway real? Anthropic might never notice, or might notice but not act.

  4. How counterfactual is this? Would similar resources emerge anyway through other means?

  5. What’s the timeline? Value accrues over years; might not see results for 2-5 years.

Central estimate: A well-executed LongtermWiki creates $15-40M/yr in value for AI safety, primarily through better allocation of existing funding and research acceleration.

Upside scenario: If it successfully demonstrates the value of epistemic infrastructure and inspires broader adoption (Anthropic, governments, other fields), value could be $100M+/yr or more.

Downside scenario: If content quality degrades, funders don’t engage, or the “better information → better decisions” theory of change is wrong, value could be $2-5M/yr—still positive but not transformative.

The bet: Epistemic infrastructure is undersupplied because it’s a public good. If we can demonstrate it works, others will copy and scale it. The value isn’t just LongtermWiki—it’s proving the concept.