LongtermWiki

Encyclopedia for AI safety prioritization

LongtermWiki is an encyclopedic resource for AI safety. It helps funders, researchers, and policymakers understand the landscape of AI risks and interventions.

The site contains 400+ pages covering risks, technical approaches, governance proposals, key debates, and the organizations and researchers working on these problems.


Knowledge Base

Encyclopedic coverage of AI safety topics:

  • Risks — Accident, misuse, structural, epistemic
  • Responses — Technical and governance approaches
  • Models — Analytical frameworks

Key Debates

Structured arguments on contested questions:

  • The Case for AI X-Risk
  • Why Alignment Might Be Hard
  • Why Alignment Might Be Easy

Field Reference

Who’s working on what:

  • Organizations — Labs, research orgs, government
  • People — Key researchers and their positions

Featured Pages

High-quality pages worth reading:

Topic      | Page                | Description
-----------|---------------------|-------------------------------------------------------------
Risk       | Deceptive Alignment | AI systems that appear aligned during training but pursue different goals when deployed
Risk       | Racing Dynamics     | How competition between labs may compromise safety
Response   | AI Control          | Using untrusted AI safely through monitoring and restrictions
Response   | Export Controls     | Restricting AI chip exports as a governance lever
Capability | Language Models     | Current capabilities and safety implications of LLMs

Browse all content →


This resource reflects an AI safety community perspective. It takes seriously the possibility of existential risk from AI and maps the arguments, organizations, and research from that viewpoint.

What it does well:

  • Steelmanning the case for AI existential risk
  • Mapping the landscape of AI safety research and governance
  • Presenting the range of views within the AI safety community

What it does less well:

  • Representing perspectives that reject the x-risk framing entirely
  • Engaging deeply with AI ethics/fairness concerns
  • Covering non-Western perspectives on AI development

Alternative viewpoints: Gary Marcus’s Substack (AI skepticism) • Yann LeCun’s posts (AGI skepticism) • Timnit Gebru et al. (AI ethics)

Read full transparency statement →