
# About This Wiki

The Longterm Wiki is a strategic intelligence platform for AI safety prioritization. It serves as a decision-support tool for funders, researchers, and policymakers asking: “Where should the next marginal dollar or researcher-hour go?”

This page documents the technical side—how the wiki is built, how content is organized, and how to contribute.

For strategic vision and goals, see the Project section.


| Section | Purpose | Examples |
|---|---|---|
| Knowledge Base | Core content on risks, interventions, organizations, people | Deceptive Alignment, AI Safety Institutes |
| AI Transition Model | Comprehensive factor network with outcomes and scenarios | Factors, scenarios, quantitative estimates |
| Models | Analytical frameworks for understanding dynamics | Risk models, cascade models, governance models |
| Project | Public-facing documentation about LongtermWiki itself | Vision, strategy, similar projects |
| Internal | Contributor documentation, style guides, technical reference | This page, style guides, automation tools |

The wiki contains approximately:

- 550 MDX pages across all sections
- 100 structured data entities (experts, organizations, cruxes, estimates)
- 80 analytical model pages with causal diagrams

The wiki uses a two-level classification system: each page has a type, which determines whether it is quality-scored, and a template, which determines its expected structure and style guide.

| Type | Quality Scored? | Use Case |
|---|---|---|
| `content` | Yes | All substantive knowledge base pages (default) |
| `stub` | No | Redirects, brief profiles, placeholders |
| `documentation` | No | Style guides, internal docs (like this page) |
| `overview` | No (auto) | Index pages for navigation |
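In a page's MDX frontmatter, the classification might look like the sketch below. The field names are assumptions for illustration; the authoritative schema lives in `src/content.config.ts`.

```yaml
---
title: Deceptive Alignment
pageType: content              # assumed field name; see Page Type System
template: knowledge-base-risk  # selects the applicable style guide
---
```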

Templates determine expected structure and applicable style guide:

| Template | Style Guide |
|---|---|
| `knowledge-base-risk` | Knowledge Base Style Guide |
| `knowledge-base-response` | Knowledge Base Style Guide |
| `knowledge-base-model` | Model Style Guide |
| `ai-transition-model-factor` | ATM Style Guide |

For complete details, see Page Type System.


Content pages are scored on six dimensions (0-10 scale; grading is harsh, so 7+ is exceptional):

| Dimension | What It Measures |
|---|---|
| Focus | Does it answer the title's promise? |
| Novelty | Value beyond obvious sources |
| Rigor | Evidence quality and precision |
| Completeness | Thorough coverage of claimed topic |
| Concreteness | Specific vs. abstract recommendations |
| Actionability | Can readers make different decisions? |

These combine into an overall quality score (0-100).
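As a rough illustration, one simple roll-up is a scaled average of the six dimension scores. The actual pipeline's weighting is not documented on this page, so treat this sketch as an assumption:

```typescript
// Hypothetical roll-up: average the six 0-10 dimension scores and scale
// to 0-100. The real grading pipeline may weight dimensions differently.
type DimensionScores = {
  focus: number;
  novelty: number;
  rigor: number;
  completeness: number;
  concreteness: number;
  actionability: number;
};

function overallScore(d: DimensionScores): number {
  const values = Object.values(d);
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  return Math.round(mean * 10);
}

// Under harsh grading, a page of 6s and 7s is already strong.
console.log(
  overallScore({
    focus: 7,
    novelty: 6,
    rigor: 7,
    completeness: 6,
    concreteness: 7,
    actionability: 6,
  }),
); // → 65
```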

Quality scores must be set only through the grading pipeline, never manually:

```sh
# Grade a specific page
npm run crux -- content grade --page scheming --apply

# Grade all ungraded pages
node scripts/content/grade-content.mjs --skip-graded --apply
```

For grading criteria and workflows, see Content Quality.


The wiki maintains YAML databases in src/data/:

| File | Contents |
|---|---|
| `experts.yaml` | AI safety researchers and their positions on cruxes |
| `organizations.yaml` | Labs, research orgs, funders |
| `cruxes.yaml` | Key uncertainties with expert positions |
| `estimates.yaml` | Probability distributions for key variables |
| `publications.yaml` | Research papers and reports |
| `external-links.yaml` | Curated resource database |

Running `npm run build:data` generates:

| Output | Purpose |
|---|---|
| `database.json` | All entities merged for browser use |
| `pathRegistry.json` | Entity ID → URL path mapping |
| `backlinks.json` | Reverse reference indices |
| `tagIndex.json` | Searchable tag index |
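As a sketch of how a consumer might use one of these artifacts, the snippet below resolves an entity ID to its URL path. The flat ID → path shape (and the example entry) are assumptions for illustration, not the documented schema:

```typescript
// Assumed shape of pathRegistry.json: a flat map from entity ID to URL path.
type PathRegistry = Record<string, string>;

function resolveEntityUrl(registry: PathRegistry, id: string): string {
  const path = registry[id];
  if (path === undefined) {
    throw new Error(`Unknown entity id: ${id}`);
  }
  return path;
}

// In the app this object would come from parsing src/data/pathRegistry.json;
// the entry below is a made-up example.
const registry: PathRegistry = {
  scheming: "/knowledge-base/risks/scheming/",
};

console.log(resolveEntityUrl(registry, "scheming"));
```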

Components pull from YAML databases to display structured information. For example, EntityLink provides stable cross-references, while DataInfoBox displays expert or organization profiles from YAML.

For database details and component usage, see Content Database.


The wiki uses stable ID-based linking that survives path reorganization:

```mdx
import {EntityLink} from '@components/wiki';

The <EntityLink id="scheming">scheming</EntityLink> risk relates to
<EntityLink id="deceptive-alignment">deceptive alignment</EntityLink>.
```

Benefits:

- Automatic title lookup from database
- Entity type icons
- Backlink tracking
- Link validity checked during CI

Every page can display incoming links:

```mdx
import {Backlinks} from '@components/wiki';

<Backlinks entityId="deceptive-alignment" />
```

After creating or editing a page, verify cross-linking:

```sh
npm run crux -- analyze entity-links <entity-id>
```

This shows inbound links, missing inbound links (pages that mention but don’t link), and outbound links.


Mermaid provides flowcharts, sequence diagrams, and graphs for static illustrations:

```mermaid
flowchart LR
  A[Training] --> B[Alignment Techniques]
  B --> C{Robust?}
  C -->|Yes| D[Safe AI]
  C -->|No| E[Misalignment Risk]
```

See Mermaid Diagrams for guidelines.

Interactive causal diagrams using ReactFlow for complex causal models:

```mdx
import {CauseEffectGraph} from '@components/CauseEffectGraph';

<CauseEffectGraph
  initialNodes={graphNodes}
  initialEdges={graphEdges}
  selectedNodeId="current-factor"
/>
```

Features: zoom, pan, minimap, node highlighting, path tracing, entity linking.

See Cause-Effect Diagrams for schema and examples.


All tools are accessible via the `crux` CLI:

```sh
npm run crux -- --help      # Show all domains
npm run crux -- validate    # Run all validators
npm run crux -- analyze     # Analysis and reporting
npm run crux -- fix         # Auto-fix common issues
npm run crux -- content     # Page management
npm run crux -- generate    # Content generation
npm run crux -- resources   # External resource management
```
Common tasks:

| Task | Command |
|---|---|
| Validate before commit | `npm run precommit` |
| Full validation suite | `npm run validate` |
| Rebuild entity database | `npm run build:data` |
| Grade a specific page | `npm run crux -- content grade --page <id>` |
| Find unlinked mentions | `npm run crux -- analyze mentions` |
| Fix escaping issues | `npm run crux -- fix escaping` |

The validation suite includes 20+ rules:

| Validator | What It Checks |
|---|---|
| `compile` | MDX syntax and compilation |
| `frontmatter-schema` | YAML frontmatter validity |
| `dollar-signs` | LaTeX escaping (`\$100`, not `$100`) |
| `comparison-operators` | JSX escaping (`\<100ms`, not `<100ms`) |
| `entitylink-ids` | All EntityLink references exist |
| `quality-source` | Quality set by pipeline, not manually |
| `mermaid` | Diagram syntax validation |
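To make the escaping rules concrete, here is a minimal sketch of the kind of check a validator like `dollar-signs` might perform. This is illustrative only; the actual validator's logic lives in `scripts/validate/` and may differ:

```typescript
// Flag "$" characters that are not backslash-escaped and are followed by a
// digit -- MDX/KaTeX would otherwise treat "$100" as opening math mode.
function findUnescapedDollars(line: string): number[] {
  const positions: number[] = [];
  const pattern = /(?<!\\)\$(?=\d)/g;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(line)) !== null) {
    positions.push(match.index);
  }
  return positions;
}

// "\$50" is escaped and passes; "$100" is flagged at its character index.
console.log(findUnescapedDollars("Costs rose from \\$50 to $100"));
```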

For complete tool reference, see Automation Tools.


| Layer | Technology |
|---|---|
| Framework | Astro 5 with Starlight theme |
| Components | React 19 |
| Styling | Tailwind CSS 4 |
| Type Safety | TypeScript + Zod schemas |
| Graphs | ReactFlow (XYFlow) + Dagre/ELK layout |
| Diagrams | Mermaid 11 |
| Math | KaTeX |
| Data | YAML sources → JSON build artifacts |
Repository layout:

```
apps/longterm/
├── src/
│   ├── content/docs/            # ~550 MDX pages
│   │   ├── knowledge-base/      # Main content (risks, responses, orgs, people)
│   │   ├── ai-transition-model/ # Comprehensive factor network
│   │   ├── project/             # Public project documentation
│   │   └── internal/            # Contributor docs and style guides
│   ├── components/
│   │   ├── wiki/                # 50+ content components
│   │   ├── CauseEffectGraph/    # Interactive graph system
│   │   └── ui/                  # shadcn components
│   ├── data/
│   │   ├── *.yaml               # Source data files
│   │   └── *.json               # Generated (build artifacts)
│   └── pages/                   # Astro dynamic routes
├── scripts/
│   ├── crux.mjs                 # CLI entry point
│   ├── build-data.mjs           # Data compilation pipeline
│   ├── commands/                # CLI domain handlers
│   └── validate/                # 23 validators
└── astro.config.mjs             # Sidebar and site config
```
Key files:

| File | Purpose |
|---|---|
| `astro.config.mjs` | Sidebar structure, Starlight setup |
| `src/content.config.ts` | MDX frontmatter schema |
| `src/data/schema.ts` | Entity type definitions (Zod) |
| `package.json` | Dependencies and npm scripts |

```sh
# Install dependencies
npm install

# Start development server (auto-runs build:data)
npm run dev

# Build for production
npm run build
```
```sh
# After editing YAML data files
npm run build:data

# After editing any content
npm run precommit   # Quick validation
npm run validate    # Full validation
```
1. New pages: Follow the appropriate style guide
2. New entities: Add to relevant YAML in `src/data/`, run `npm run build:data`
3. New components: Add to `src/components/wiki/`, use path aliases

For content generation workflows, see Research Reports.