Longterm Wiki

Projects

AI safety tools, platforms, forecasting systems, and research projects.

17 projects are listed below; 10 have a website and 5 are attributed to an organization.

AI Forecasting Benchmark Tournament (active)
Quarterly competition run by Metaculus comparing human Pro Forecasters against AI forecasting bots (see the forecast-scoring sketch after this list).

AI Watch (active)
A tracking database by Issa Rice that monitors AI safety organizations, people, funding, and publications as part of his broader knowledge infrastructure ecosystem. The article provides useful context about Rice's systematic approach to documentation but lacks concrete details about AI Watch itself.

Donations List Website (active)
Comprehensive documentation of an open-source database tracking $72.8B in philanthropic donations (1969-2023) across 75+ donors, with particular coverage of EA/AI safety funding. The page thoroughly describes the tool's features, data coverage, and limitations, but is purely descriptive reference material.

ForecastBench (Forecasting Research Institute, FRI; active)
Dynamic, contamination-free benchmark for evaluating LLM forecasting capabilities, published at ICLR 2025.

Grokipedia (active)
xAI's AI-generated encyclopedia launched October 2025, growing to 6M+ articles, with documented quality concerns including political bias and scientific inaccuracies.

Longterm Wiki (active)
A self-referential documentation page describing the Longterm Wiki platform itself: a strategic intelligence tool with ~550 pages, crux mapping of ~50 uncertainties, and quality scoring across 6 dimensions. Features include entity cross-linking, interactive causal diagrams, and structured YAML databases.

Metaforecast (QURI, Quantified Uncertainty Research Institute; maintained)
Forecast aggregation platform combining predictions from 10+ sources into a unified search interface.

MIT AI Risk Repository (active)
Catalogs 1,700+ AI risks from 65+ frameworks into a searchable database with dual taxonomies (causal and domain-based). Updated quarterly since August 2024, it provides the first comprehensive public catalog of AI risks but is limited by its framework extraction methodology.

Org Watch (active)
A tracking website by Issa Rice that monitors EA and AI safety organizations, but the article lacks concrete information about its actual features, scope, or current status. The piece reads more like speculative analysis of what the tool might do than documentation of an established tool.

RoastMyPost (active)
An LLM tool (Claude Sonnet 4.5 + Perplexity) that evaluates written content through multiple specialized AI agents: fact-checking, logical fallacy detection, math verification, and more. Aimed at improving the epistemic quality of research posts, particularly in EA/rationalist communities.

Squiggle (QURI; active)
Domain-specific programming language for probabilistic estimation with native distribution types and Monte Carlo sampling.

SquiggleAI (QURI; active)
LLM-powered tool for generating probabilistic models in Squiggle from natural language descriptions.

Stampy / AISafety.info (active)
A volunteer-maintained wiki with 280+ answers on AI existential risk, complemented by Stampy, an LLM chatbot that searches 10K-100K alignment documents via RAG. Features include a Discord bot bridging YouTube comments and PageRank-style karma voting for answer quality control.

Timelines Wiki (active)
A specialized MediaWiki project documenting chronological histories of AI safety and EA organizations, created by Issa Rice with funding from Vipul Naik in 2017. While useful as a historical reference source, it primarily serves as documentation infrastructure.

Wikipedia Views (active)
A comprehensive overview of Wikipedia pageview analytics tools and their declining traffic as AI summaries reduce direct visits. While well documented, the article is primarily about web analytics infrastructure rather than core AI safety concerns.

X Community Notes (active)
Community Notes uses a bridging algorithm that requires cross-partisan consensus before a fact-check is displayed, reducing retweets by 25-50% when a note appears. However, only 8.3% of notes achieve visibility, and doing so takes a median of 7 hours (mean 38.5 hours), by which time 96.7% of a post's spread has already occurred, limiting aggregate effectiveness (see the bridging sketch after this list).

XPT (Existential Risk Persuasion Tournament; FRI; active)
Four-month structured forecasting tournament bringing together superforecasters and domain experts through adversarial collaboration.
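
The forecasting projects above (the AI Forecasting Benchmark Tournament, ForecastBench, and XPT) all come down to scoring probabilistic predictions against how questions actually resolve. The sketch below shows Brier scoring, one standard accuracy metric for binary questions; the probabilities, resolutions, and the two forecaster labels are made-up illustrations, not data from any of these projects, which may use other scoring rules.

```python
def brier(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes; lower is better.
    An uninformed 50% forecast on every question scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities for four binary questions and how they resolved.
human_pros = [0.90, 0.20, 0.70, 0.40]
ai_bot     = [0.80, 0.35, 0.60, 0.50]
resolved   = [1, 0, 1, 0]

print(f"human Brier: {brier(human_pros, resolved):.3f}")  # 0.075
print(f"bot Brier:   {brier(ai_bot, resolved):.3f}")      # 0.143
```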
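
On X Community Notes, the publicly described scoring approach is a matrix factorization: each rating is modeled as a user intercept plus a note intercept plus a product of user and note "viewpoint" factors, and a note is shown only if its intercept, its helpfulness after the factors have absorbed partisan agreement, clears a threshold. The toy sketch below illustrates that bridging idea only; it is not X's production algorithm, and the ratings, learning rate, regularization, and training loop are illustrative assumptions.

```python
import numpy as np

# Toy ratings (user_id, note_id, rating): 1 = "helpful", 0 = "not helpful".
# Note 0 is rated helpful by all four users; note 1 only by one "side" (users 0-1).
ratings = [
    (0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),
    (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0),
]

n_users, n_notes = 4, 2
rng = np.random.default_rng(0)
user_i = np.zeros(n_users)             # user intercepts (rater generosity)
note_i = np.zeros(n_notes)             # note intercepts (bridged "helpfulness")
user_f = rng.normal(0, 0.1, n_users)   # 1-D user viewpoint factors
note_f = rng.normal(0, 0.1, n_notes)   # 1-D note viewpoint factors
lr, reg = 0.05, 0.03

for _ in range(2000):                  # plain SGD on regularized squared error
    for u, n, r in ratings:
        pred = user_i[u] + note_i[n] + user_f[u] * note_f[n]
        err = r - pred
        user_i[u] += lr * (err - reg * user_i[u])
        note_i[n] += lr * (err - reg * note_i[n])
        new_uf = user_f[u] + lr * (err * note_f[n] - reg * user_f[u])
        new_nf = note_f[n] + lr * (err * user_f[u] - reg * note_f[n])
        user_f[u], note_f[n] = new_uf, new_nf

# A note "bridges" only if its intercept stays high once the viewpoint factors
# explain the partisan split: note 0 should come out well above note 1 here.
# In the real system, a fixed cutoff on this intercept (plus other checks)
# decides whether the note is displayed.
for n in range(n_notes):
    print(f"note {n}: bridged helpfulness = {note_i[n]:.2f}")
```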