
Similar Projects to LongtermWiki: Research Report

| Finding | Key Data | Implication for LongtermWiki |
| --- | --- | --- |
| Arbital failed despite innovation | Discontinued 2017, content migrated to LessWrong | Novel features alone don’t ensure adoption |
| Stampy succeeds with narrow focus | FAQ format, semantic search, paid fellowship ($2,500/mo) | Clear use case + paid contributors > ambitious scope |
| MIT Risk Repository is authoritative | 1,600+ risks, 65 frameworks, academic backing | Comprehensive databases need institutional support |
| EA Forum Wiki integrated with platform | Tags = Wiki pages, visible in post context | Integration beats standalone wikis |
| BlueDot trained 7,000+ people | 75% completion rate, structured curriculum | Educational scaffolding works at scale |
| Knowledge management fails without ownership | “When everyone owns it, no one owns it” | Dedicated maintainer role is essential |

LongtermWiki aims to be a strategic intelligence platform for AI safety prioritization. Before building, we should understand what similar projects have attempted, what worked, and what failed.

This report analyzes 12+ projects across four categories: wikis/knowledge bases, educational resources, prioritization tools, and data repositories.


Arbital — Ambitious Failure

What it was: An ambitious “Wikipedia successor” for explanatory content, focused heavily on AI alignment and mathematics. Founded by Eliezer Yudkowsky and others.

Innovative features:

  • “Lenses” for different reading levels
  • Custom summaries per audience
  • Redlinks for content that should exist
  • Requisites and dependencies between concepts

What happened:

  • Discontinued in 2017
  • New account registration had been disabled by the end of its life
  • Content eventually migrated to LessWrong
  • Yudkowsky alone wrote ~250,000 words

Lessons:

  1. Innovative features don’t save an unclear value proposition
  2. Heavy dependence on a few prolific authors is fragile
  3. Content organization matters as much as content quality
  4. Migration path to LessWrong preserved value — plan for graceful failure

Source: “Arbital has been imported to LessWrong”


LessWrong Wiki/Tags — Successful Integration


What it is: A combined tagging and wiki system where tag pages serve as concept explanations, and posts tagged with concepts appear on the wiki page.

Key design choices:

  • Wiki pages are not standalone — they’re integrated with the discussion platform
  • Clicking a tag shows both the concept explanation AND all relevant posts (a data-model sketch follows this list)
  • Anyone can tag posts, but quality control exists
  • “The Sequences” provide canonical content that the wiki summarizes
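To make the tag-as-wiki-page design concrete, here is a minimal data-model sketch; the type and field names are hypothetical, since LessWrong’s actual schema is not described in the sources cited here.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    title: str
    url: str
    tags: list[str] = field(default_factory=list)     # tag slugs applied by readers

@dataclass
class TagPage:
    slug: str
    concept_summary: str                               # the wiki-style explanation
    posts: list[Post] = field(default_factory=list)    # every post carrying this tag

def render_tag_page(tag: TagPage) -> str:
    """A tag page shows the concept explanation AND the tagged posts together."""
    lines = [tag.slug, tag.concept_summary, "", "Tagged posts:"]
    lines += [f"- {p.title} ({p.url})" for p in tag.posts]
    return "\n".join(lines)
```

The key property is that the concept explanation and the tagged posts render on the same page, so wiki content is always seen in the context of live discussion.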

Why it works:

  • Wiki provides context for active discussion, not just reference
  • Content stays fresh because it’s tied to ongoing posts
  • Clear purpose: “summarize concepts and link to blog posts”
  • Eliezer’s original vision: “bounce back and forth between blog and wiki”

Implication for LongtermWiki: Integration with active discourse may matter more than comprehensive standalone content.

Source: Wiki-Tag FAQ


EA Forum Wiki — Paid Creation + Platform Integration

What it is: Wiki pages integrated with the EA Forum, similar to LessWrong’s approach.

History:

  • Multiple previous attempts failed (including “EA Concepts”)
  • Current version succeeded because Pablo Stafforini received an EA Infrastructure Fund grant to create initial articles
  • Tag pages require relevance to at least 3 existing posts by different authors

Key insight: Volunteer-only approaches failed repeatedly. Paid initial creation + platform integration succeeded.

Source: Our plans for hosting an EA wiki on the Forum


Stampy / AISafety.info — Narrow Focus Success


What it is: An interactive FAQ about existential risk from AI, started by Rob Miles.

Model:

  • FAQ format with semantic search (avoids the “too long/too short” trade-off; a generic sketch follows this list)
  • Hundreds of questions with expandable answers
  • Related questions appear as you explore
  • Automated distiller chatbot for long-tail questions
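For illustration, below is a generic sketch of semantic search over FAQ entries; it is not Stampy’s actual implementation, the example questions and answers are invented, and it assumes the sentence-transformers package with an off-the-shelf embedding model.

```python
# Generic sketch of semantic search over FAQ entries (not Stampy's actual code).
from sentence_transformers import SentenceTransformer, util

faq = {
    "Why would an AI want to harm humans?": "Most risk arguments don't require malice...",
    "Can't we just turn it off?": "Shutdown is harder than it sounds because...",
}

model = SentenceTransformer("all-MiniLM-L6-v2")        # illustrative model choice
questions = list(faq.keys())
question_vecs = model.encode(questions, convert_to_tensor=True)

def answer(user_question: str, top_k: int = 3):
    """Return the FAQ entries closest in meaning to the user's question."""
    query_vec = model.encode(user_question, convert_to_tensor=True)
    hits = util.semantic_search(query_vec, question_vecs, top_k=top_k)[0]
    return [(questions[h["corpus_id"]], faq[questions[h["corpus_id"]]]) for h in hits]
```

Because retrieval is by meaning rather than exact wording, a free-text question surfaces the closest curated answers instead of forcing readers to scan one long page, which is how the FAQ format sidesteps the “too long/too short” trade-off.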

Team structure:

  • Rob Miles as quality control manager
  • Paid Distillation Fellowship: $2,500/month for 3 months, up to 5 fellows
  • Global volunteer team for ongoing contributions

Key differentiators:

  1. Clear user need: “I have a question about AI risk”
  2. Novel interface: semantic search + progressive disclosure
  3. Paid fellowship creates quality content pipeline
  4. Single owner (Rob Miles) with clear editorial vision

Implication for LongtermWiki: FAQ format + paid contributors + clear owner = viable model.

Sources:


BlueDot Impact / AI Safety Fundamentals — Scale Success


What it is: Free courses on AI alignment and governance with cohort-based discussion groups.

Scale:

  • 7,000+ people trained since 2022
  • 75% completion rate (far above typical online courses)
  • Alumni at Anthropic, DeepMind, UK AI Safety Institute

Curriculum structure (Alignment Course):

  1. AI and the years ahead
  2. What is AI alignment?
  3. RLHF
  4. Scalable oversight
  5. Robustness, unlearning and control
  6. Mechanistic interpretability
  7. Technical governance approaches
  8. Contributing to AI safety

Why it works:

  • Structured cohorts with facilitators create accountability
  • 2-3 hours of reading plus a 2-hour discussion per week is sustainable
  • Clear goal: “prepare to work in the field”
  • Visible success stories (alumni placements)

Implication for LongtermWiki: Educational framing with cohort structure has proven adoption. A “LongtermWiki Study Group” format could work.

Source: BlueDot Impact


80,000 Hours Problem Profiles — Deep Analysis Model


What it is: Long-form analysis of cause areas, updated periodically.

Approach:

  • Deep dives on specific problems (10,000+ word profiles)
  • Explicit framework: scale, neglectedness, tractability (a toy scoring illustration follows this list)
  • Regular updates when understanding changes
  • Clear recommendations tied to career paths
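As a toy illustration of how a scale/neglectedness/tractability comparison can work (the scores below are invented and this is not 80,000 Hours’ actual rubric), factors rated on rough logarithmic scales can simply be summed to rank problems:

```python
# Toy comparison using invented scores; not 80,000 Hours' actual assessments.
# Each factor is scored on a rough 0-10 logarithmic scale, so scores add.
problems = {
    "Problem A": {"scale": 8, "neglectedness": 4, "tractability": 5},
    "Problem B": {"scale": 6, "neglectedness": 8, "tractability": 6},
}

for name, scores in problems.items():
    total = sum(scores.values())
    print(f"{name}: total = {total} ({scores})")
```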

AI Safety coverage:

  • “80,000 Hours has considered risks from AI to be the world’s most pressing problem since 2016”
  • Profile breaks argument into 5 explicit claims
  • Each claim gets its own evidence section

Implication for LongtermWiki: Deep profiles with explicit argument structure are a proven format. But 80K’s scope is careers, not field prioritization.

Source: Risks from power-seeking AI systems


MIT AI Risk Repository — Institutional Authority


What it is: Comprehensive database of 1,600+ AI risks extracted from 65+ frameworks.

Structure:

  • Causal Taxonomy: Entity (Human/AI) × Intentionality × Timing (an illustrative encoding follows this list)
  • Domain Taxonomy: 7 domains, 23 subdomains
  • Updated April 2025 with 22 new frameworks
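Below is an illustrative sketch of how a single entry under such a causal taxonomy could be encoded; the field names and example values are assumptions for clarity, not the repository’s actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Entity(Enum):
    HUMAN = "human"
    AI = "ai"
    OTHER = "other"

class Intentionality(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"
    OTHER = "other"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"
    OTHER = "other"

@dataclass
class RiskEntry:
    description: str
    entity: Entity                 # who/what causes the risk
    intentionality: Intentionality
    timing: Timing
    domain: str                    # one of the 7 domains
    subdomain: str                 # one of the 23 subdomains

example = RiskEntry(
    description="Deployed model generates misleading content",   # illustrative
    entity=Entity.AI,
    intentionality=Intentionality.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain="Misinformation",                                      # illustrative
    subdomain="False or misleading information",                  # illustrative
)
```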

Key findings from their data:

  • 51% of risks attributed to AI systems vs 34% to humans
  • 65% of risks are post-deployment
  • 35% intentional vs 37% unintentional risks

Why it works:

  • MIT institutional backing provides credibility
  • Clear methodology (meta-review of existing frameworks)
  • Quantitative focus suits academic users
  • Regular updates with new frameworks

Implication for LongtermWiki: Academic backing + systematic methodology + regular updates = authoritative resource. But this required significant institutional investment.

Sources:


Epoch AI — Empirical Data Authority

What it is: A database of 3,200+ ML models tracking compute, parameters, and capabilities from 1950 to the present.

Key metrics tracked:

  • Training compute (doubling every 6 months since 2010; a back-of-envelope check follows this list)
  • Cost trends (2-3x per year growth)
  • Capability benchmarks
  • Hardware specifications
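A quick back-of-envelope check of what those rates imply; only the stated doubling time and cost growth figures come from the list above, and the 5-year horizon is arbitrary.

```python
# Implications of a 6-month compute doubling time and 2-3x/year cost growth.
doubling_time_years = 0.5
compute_growth_per_year = 2 ** (1 / doubling_time_years)   # 4x per year

years = 5
print(f"Compute: {compute_growth_per_year ** years:,.0f}x over {years} years")     # ~1,024x

for cost_growth in (2, 3):
    print(f"Cost at {cost_growth}x/year: {cost_growth ** years}x over {years} years")  # 32x and 243x
```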

Value proposition:

  • Empirical grounding for AI progress discussions
  • Free data under Creative Commons
  • Regular blog posts interpreting trends
  • Cited widely in policy discussions

Implication for LongtermWiki: Pure data plays can be highly valuable, but LongtermWiki’s focus is more qualitative/strategic.

Source: Epoch AI


Field Maps and Surveys — Static Snapshots

Multiple attempts to map the field:

  • “AI Alignment: A Comprehensive Survey” (2023): Introduces RICE framework (Robustness, Interpretability, Controllability, Ethicality)
  • Neel Nanda’s Bird’s Eye View: Focuses on threat models and research agendas
  • Victoria Krakovna’s resource list: Regularly updated links to key papers

Common pattern: These are snapshots, not living documents. They become outdated within 1-2 years.

Implication for LongtermWiki: Static mappings have limited shelf life. Living updates are essential but expensive.

Sources:


Center for AI Safety (CAIS) — Multiple Touchpoints

What they provide:

  • Compute cluster for researchers (free access)
  • Textbook: “AI Safety, Ethics and Society”
  • Research publications (e.g., “Overview of Catastrophic AI Risks”)
  • The 2023 extinction risk statement (signed by hundreds of AI leaders)

Success pattern: CAIS combines resources (compute), education (textbook), research (papers), and advocacy (statement). Multiple touchpoints to the field.

Source: safe.ai


Common Failure Modes

| Failure Mode | Description | Mitigation |
| --- | --- | --- |
| No clear owner | “When everyone owns it, no one owns it” | Appoint single accountable maintainer |
| Volunteer dependency | “Relies on initiative and goodwill” | Paid contributors for core content |
| Scope creep | Trying to cover everything | Ruthlessly narrow initial scope |
| Staleness spiral | Content rots faster than updates | Visible freshness dates, automated alerts |
| No integration | Standalone wiki nobody visits | Integrate with active community platform |
| Novel features over fit | Building innovation before validation | Prove value with simple version first |
Success Factors

| Success Factor | Examples | Application to LongtermWiki |
| --- | --- | --- |
| Clear, narrow purpose | Stampy (FAQ), Epoch (data), 80K (careers) | Pick ONE thing LongtermWiki does best |
| Paid initial content | EA Forum Wiki grant, Stampy fellowship | Budget for content creation, not just platform |
| Platform integration | LW/EA Forum wikis | Consider building on existing platform |
| Institutional backing | MIT Risk Repository, BlueDot | Partner with established org |
| Single editorial owner | Rob Miles for Stampy | Hire/designate chief editor |
| Regular update cadence | Epoch blog, 80K profile updates | Commit to quarterly review cycle (see the sketch below) |
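A minimal sketch of the “visible freshness dates, automated alerts” mitigation combined with a quarterly review cadence; the page metadata format and the 90-day threshold are assumptions, not a description of any existing tool.

```python
# Flag pages whose last review predates the quarterly cadence.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)   # quarterly cadence

pages = [
    {"title": "Scalable oversight", "last_reviewed": date(2025, 1, 15)},
    {"title": "Compute governance", "last_reviewed": date(2024, 6, 1)},
]

def stale_pages(pages, today=None):
    """Return pages whose last review is older than the review interval."""
    today = today or date.today()
    return [p for p in pages if today - p["last_reviewed"] > REVIEW_INTERVAL]

for page in stale_pages(pages):
    print(f"ALERT: '{page['title']}' last reviewed {page['last_reviewed']}, needs review")
```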

| Factor | Direction | Type | Evidence | Confidence |
| --- | --- | --- | --- | --- |
| Clear owner/editor | ↑ Success | cause | EA Wiki failed without, succeeded with grant | High |
| Narrow initial scope | ↑ Success | cause | Stampy FAQ vs Arbital everything | High |
| Paid contributors | ↑ Quality | cause | Stampy fellowship, EA Wiki grant | High |
| Platform integration | ↑ Adoption | cause | LW/EA Forum wikis get used | Medium |
| Visible freshness dates | ↑ Trust | intermediate | Staleness is major failure mode | Medium |
| Institutional backing | ↑ Credibility | cause | MIT Risk Repository cited widely | Medium |
| Novel features | Mixed | intermediate | Arbital had them, still failed | Medium |
| Comprehensive coverage | Weak ↑ | cause | Narrow often beats broad | Low |
| Community contribution | Mixed | intermediate | Works for Wikipedia, not most projects | Low |

Open Questions

| Question | Why It Matters | Current State |
| --- | --- | --- |
| Would funders actually use a prioritization tool? | Core value prop for LongtermWiki | Unvalidated; need interviews |
| Is crux-mapping valuable beyond intellectual interest? | Unique LongtermWiki differentiator | No clear success examples |
| What’s the minimum viable LongtermWiki? | Determines initial scope | Options: FAQ, profiles, database |
| Should LongtermWiki be standalone or integrated? | Platform strategy | LW/EAF integration worked for wikis |
| What’s the maintenance budget long-term? | Sustainability | Most projects underestimate this |

Recommendations

Based on this research, LongtermWiki should:

  1. Interview 10+ potential users before building — especially funders
  2. Start with the narrowest possible scope — probably 10-20 deep pages, not 200 shallow ones
  3. Budget for paid content creation — volunteer-only approaches have repeatedly failed
  4. Appoint a single editorial owner — not a committee
  5. Integrate with an existing platform (LessWrong, EA Forum) rather than building standalone
  6. Use an FAQ format for discoverable content (Stampy model)
  7. Commit to an update cadence (quarterly review, visible dates)

It should also avoid three pitfalls:

  1. Don’t build novel features before proving basic value
  2. Don’t aim for comprehensiveness initially
  3. Don’t rely on community contributions for core content