Longterm Wiki
Updated 2026-03-13 · Edited today

Pages

Admin overview of 676 wiki pages. Use the preset buttons to switch between views (overview, coverage, quality, citations, updates) or toggle individual columns. Hover over a column header for its description.

607 pages are rated (average quality 54); 159 are flagged as high hallucination risk.
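The summary line above can be derived directly from the page table. A minimal sketch, with hypothetical row dicts and field names (not the wiki's real schema):

```python
# Hypothetical sketch: deriving "N rated (avg quality Q), M high hallucination risk"
# from page-table rows. Row data and field names are illustrative only.
rows = [
    {"title": "AI Timelines", "quality": 95, "risk": "medium"},
    {"title": "US AI Safety Institute", "quality": 91, "risk": "high"},
    {"title": "Unrated Draft", "quality": None, "risk": "medium"},  # not yet rated
]

# Average quality is taken over rated pages only; unrated pages are skipped.
rated = [r["quality"] for r in rows if r["quality"] is not None]
avg_quality = round(sum(rated) / len(rated))
high_risk = sum(1 for r in rows if r["risk"] == "high")

summary = f"{len(rated)} rated (avg quality {avg_quality}), {high_risk} high hallucination risk."
print(summary)  # → 2 rated (avg quality 93), 1 high hallucination risk.
```

This explains why the rated count (607) is lower than the page count (676): unrated pages are excluded from the average but still listed in the table.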

676 pages. Columns:

- Page title
- Quality score (0–100)
- Reader importance (0–100)
- Coverage: passing checklist items out of 13 (5 boolean + 8 numeric)
- Hallucination risk level (low / medium / high)
- Days since last update
- Word count
- Entity type (person, organization, risk, etc.)
- Page category
| Page title | Quality | Importance | Coverage | Risk | Updated | Words | Type | Category |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AI Timelines | 95 | 93 | 6/13 | medium | 0d | 6.5k | concept | models |
| Superintelligence | 92 | 95 | 3/13 | medium | 1d | 1.6k | concept | risks |
| Existential Risk from AI | 92 | 95 | 4/13 | medium | 1d | 1.2k | concept | risks |
| AI Scaling Laws | 92 | 93 | 6/13 | medium | 1d | 2.5k | concept | models |
| US AI Safety Institute | 91 | 32 | 4/13 | high | 1d | 4.8k | organization | organizations |
| Voluntary Industry Commitments | 91 | 50 | 5/13 | medium | 1d | 4.6k | policy | responses |
| Multipolar Trap (AI Development) | 91 | 84 | 5/13 | medium | 1d | 3.9k | risk | risks |
| International Coordination Mechanisms | 91 | 24 | 6/13 | medium | 1d | 4.1k | policy | responses |
| AI Distributional Shift | 91 | 17 | 6/13 | medium | 0d | 3.6k | risk | risks |
| Reward Hacking | 91 | 16 | 6/13 | medium | 0d | 4.0k | risk | risks |
| Long-Timelines Technical Worldview | 91 | 15 | 6/13 | medium | 1d | 4.7k | concept | worldviews |
| Deepfake Detection | 91 | 22 | 7/13 | low | 0d | 2.9k | approach | responses |
| Eliciting Latent Knowledge (ELK) | 91 | 24 | 7/13 | low | 1d | 2.5k | approach | responses |
| Pause Advocacy | 91 | 52 | 7/13 | medium | 1d | 5.3k | approach | responses |
| AI Safety Cases | 91 | 51 | 7/13 | low | 0d | 4.1k | approach | responses |
| Sandboxing / Containment | 91 | 58 | 7/13 | low | 0d | 4.3k | approach | responses |
| Sparse Autoencoders (SAEs) | 91 | 20 | 7/13 | low | 0d | 3.2k | approach | responses |
| Structured Access / API-Only | 91 | 79 | 7/13 | low | 0d | 3.5k | approach | responses |
| Compute Thresholds | 91 | 56 | 7/13 | medium | 0d | 4.0k | policy | responses |
| US Executive Order on Safe, Secure, and Trustworthy AI | 91 | 57 | 7/13 | medium | 1d | 4.5k | policy | responses |
| Weak-to-Strong Generalization | 91 | 20 | 7/13 | medium | 0d | 2.9k | approach | responses |
| Cyberweapons | 91 | 83 | 7/13 | medium | 0d | 4.2k | risk | risks |
| Optimistic Alignment Worldview | 91 | 83 | 7/13 | medium | 1d | 4.4k | concept | worldviews |
| Capability Elicitation | 91 | 50 | 8/13 | low | 0d | 3.5k | approach | responses |
| Scheming & Deception Detection | 91 | 58 | 8/13 | low | 0d | 3.3k | approach | responses |
| Tool-Use Restrictions | 91 | 58 | 8/13 | medium | 0d | 3.9k | approach | responses |
| Authoritarian Tools | 91 | 18 | 8/13 | medium | 0d | 2.9k | risk | risks |
| Bioweapons | 91 | 63 | 8/13 | medium | 1d | 10.8k | risk | risks |
| AI Governance Coordination Technologies | 91 | 70 | 9/13 | low | 0d | 2.9k | approach | responses |
| AI-Human Hybrid Systems | 91 | 63 | 9/13 | medium | 0d | 2.4k | approach | responses |
| AI-Induced Enfeeblement | 91 | 77 | 9/13 | medium | 1d | 2.4k | risk | risks |
| Erosion of Human Agency | 91 | 19 | 9/13 | medium | 0d | 1.8k | risk | risks |
| Scientific Knowledge Corruption | 91 | 38 | 9/13 | medium | 0d | 1.9k | risk | risks |
| AI Model Steganography | 91 | 70 | 9/13 | medium | 1d | 2.4k | risk | risks |
| AI Alignment | 91 | 95 | 10/13 | medium | 0d | 5.7k | approach | responses |
| AI Safety Intervention Portfolio | 91 | 61 | 10/13 | low | 1d | 2.8k | approach | responses |
| AI-Enabled Untraceable Misuse | 88 | 48 | 5/13 | medium | 0d | 2.8k | risk | risks |
| OpenAI Foundation | 87 | 87 | 7/13 | medium | 1d | 9.0k | organization | organizations |
| EA Epistemic Failures in the FTX Era | 84 | 62 | 5/13 | medium | 1d | 4.9k | - | history |
| AI Compute Scaling Metrics | 78 | 82 | 5/13 | medium | 0d | 3.5k | analysis | models |
| Centre for Effective Altruism | 78 | 42 | 5/13 | high | 1d | 2.0k | organization | organizations |
| FTX Collapse: Lessons for EA Funding Resilience | 78 | 65 | 6/13 | high | 1d | 5.7k | concept | organizations |
| Sleeper Agents: Training Deceptive LLMs | 78 | 17 | 6/13 | medium | 0d | 1.8k | risk | risks |
| Redwood Research | 78 | 32 | 7/13 | medium | 1d | 1.5k | organization | organizations |
| FAR AI | 76 | 85 | 8/13 | high | 0d | 3.3k | organization | organizations |
| State Capacity and AI Governance | 75 | 72 | 5/13 | medium | 1d | 2.2k | concept | responses |
| OpenAI Foundation Governance Paradox | 75 | 40 | 6/13 | medium | 1d | 2.6k | analysis | organizations |
| AI Control | 75 | 69 | 8/13 | low | 0d | 3.1k | safety-agenda | responses |
| Deceptive Alignment | 75 | 19 | 9/13 | medium | 1d | 2.0k | risk | risks |
| OpenClaw Matplotlib Incident (2026) | 74 | 52 | 4/13 | medium | 0d | 3.5k | - | incidents |
| Scheming | 74 | 71 | 4/13 | medium | 1d | 5.1k | risk | risks |
| Relative Longtermist Value Comparisons | 74 | 68 | 6/13 | medium | 1d | 2.4k | analysis | models |
| Anthropic | 74 | 52 | 8/13 | high | 1d | 5.1k | organization | organizations |
| FTX (cryptocurrency exchange) | 74 | 62 | 8/13 | high | 0d | 3.1k | organization | organizations |
| Philip Tetlock (Forecasting Pioneer) | 73 | 61 | 4/13 | medium | 0d | 2.7k | person | people |
| AI-Driven Institutional Decision Capture | 73 | 39 | 5/13 | medium | 0d | 7.7k | risk | risks |
| California SB 53 | 73 | 72 | 6/13 | medium | 0d | 2.5k | policy | responses |
| New York RAISE Act | 73 | 38 | 6/13 | medium | 0d | 2.7k | policy | responses |
| AI Chip Export Controls | 73 | 88 | 7/13 | medium | 0d | 4.1k | policy | responses |
| Capabilities-to-Safety Pipeline Model | 73 | 46 | 8/13 | medium | 0d | 1.3k | analysis | models |
| Leading the Future super PAC | 73 | 80 | 8/13 | medium | 1d | 2.3k | organization | organizations |
| Intervention Effectiveness Matrix | 73 | 90 | 9/13 | medium | 0d | 4.2k | analysis | models |
| Projecting Compute Spending | 72 | 72 | 7/13 | medium | 0d | 6.0k | analysis | models |
| Representation Engineering | 72 | 62 | 7/13 | medium | 0d | 1.8k | approach | responses |
| Capability Threshold Model | 72 | 47 | 8/13 | medium | 0d | 1.3k | analysis | models |
| Evals & Red-teaming | 72 | 26 | 8/13 | medium | 0d | 2.7k | safety-agenda | responses |
| AI Evaluation | 72 | 79 | 8/13 | medium | 0d | 1.7k | approach | responses |
| Pause / Moratorium | 72 | 79 | 8/13 | medium | 1d | 2.0k | policy | responses |
| AI Development Racing Dynamics | 72 | 20 | 8/13 | medium | 1d | 2.7k | risk | risks |
| Intervention Timing Windows | 72 | 90 | 9/13 | medium | 0d | 4.4k | analysis | models |
| Anthropic Valuation Analysis | 72 | 34 | 9/13 | medium | 0d | 1.5k | analysis | organizations |
| Reward Hacking Taxonomy and Severity Model | 71 | 45 | 5/13 | medium | 0d | 6.6k | analysis | models |
| AI Safety Solution Cruxes | 71 | 94 | 7/13 | medium | 0d | 6.1k | crux | cruxes |
| AI Risk Critical Uncertainties Model | 71 | 93 | 8/13 | medium | 0d | 2.5k | crux | models |
| Citation Architecture: Current State & Unified Proposal | 70 | 85 | 2/13 | medium | 0d | 2.2k | internal | internal |
| AI Uplift Assessment Model | 70 | 76 | 4/13 | medium | 0d | 4.4k | analysis | models |
| Epistemic & Forecasting Organizations (Overview) | 70 | 87 | 5/13 | low | 0d | 217 | - | organizations |
| Anthropic-Pentagon Standoff (2026) | 70 | 78 | 6/13 | low | 1d | 3.3k | event | incidents |
| Musk v. OpenAI Lawsuit | 70 | 29 | 6/13 | medium | 1d | 1.9k | analysis | organizations |
| Long-Term Benefit Trust (Anthropic) | 70 | 78 | 7/13 | medium | 1d | 2.4k | analysis | organizations |
| AI Safety via Debate | 70 | 71 | 7/13 | medium | 0d | 1.7k | approach | responses |
| Compute Concentration | 70 | 58 | 7/13 | medium | 1d | 2.3k | risk | risks |
| Warning Signs Model | 70 | 43 | 8/13 | medium | 0d | 3.4k | analysis | models |
| Hardware-Enabled Governance | 70 | 23 | 8/13 | medium | 0d | 3.4k | policy | responses |
| US State AI Legislation | 70 | 38 | 8/13 | medium | 0d | 5.1k | policy | responses |
| Constitutional AI | 70 | 24 | 9/13 | medium | 0d | 1.5k | approach | responses |
| AI Safety Training Programs | 70 | 56 | 9/13 | medium | 0d | 2.2k | approach | responses |
| Compute Monitoring | 69 | 63 | 5/13 | medium | 0d | 4.4k | policy | responses |
| AI Safety Institutes | 69 | 64 | 6/13 | medium | 1d | 4.2k | policy | responses |
| AI Alignment Research Agenda Comparison | 69 | 58 | 6/13 | medium | 1d | 4.3k | crux | responses |
| AI-Powered Fraud | 69 | 58 | 6/13 | medium | 0d | 4.5k | risk | risks |
| Self-Improvement and Recursive Enhancement | 69 | 47 | 7/13 | medium | 1d | 5.0k | capability | capabilities |
| AI Standards Bodies | 69 | 83 | 7/13 | medium | 0d | 3.5k | policy | responses |
| Bioweapons Attack Chain Model | 69 | 72 | 8/13 | medium | 0d | 2.0k | analysis | models |
| Defense in Depth Model | 69 | 61 | 8/13 | medium | 1d | 1.6k | analysis | models |
| Sharp Left Turn | 69 | 57 | 8/13 | medium | 1d | 4.3k | risk | risks |
| Agentic AI | 68 | 73 | 5/13 | high | 0d | 8.8k | capability | capabilities |
| Giving Pledge | 68 | 37 | 5/13 | high | 1d | 2.3k | organization | organizations |
| Scalable Oversight | 68 | 52 | 5/13 | medium | 1d | 5.7k | safety-agenda | responses |
| Scientific Research Capabilities | 68 | 72 | 6/13 | medium | 0d | 5.8k | capability | capabilities |
| Evaluation Awareness | 68 | 42 | 6/13 | low | 0d | 3.5k | approach | responses |
| Reducing Hallucinations in AI-Generated Wiki Content | 68 | 55 | 6/13 | low | 0d | 4.2k | approach | responses |
| Model Registries | 68 | 21 | 7/13 | medium | 0d | 1.7k | policy | responses |
| Multi-Agent Safety | 68 | 21 | 7/13 | low | 0d | 3.6k | approach | responses |
| Corporate AI Safety Responses | 68 | 70 | 8/13 | medium | 0d | 1.3k | approach | responses |
| Goodfire | 68 | 86 | 9/13 | medium | 0d | 2.4k | organization | organizations |
| International Compute Regimes | 67 | 63 | 4/13 | medium | 1d | 5.4k | policy | responses |
| AI Capability Sandbagging | 67 | 39 | 6/13 | medium | 0d | 2.7k | risk | risks |
| Governance-Focused Worldview | 67 | 67 | 6/13 | medium | 1d | 3.9k | concept | worldviews |
| AI Safety Talent Supply/Demand Gap Model | 67 | 44 | 7/13 | medium | 0d | 2.6k | analysis | models |
| Treacherous Turn | 67 | 17 | 7/13 | medium | 1d | 4.0k | risk | risks |
| Situational Awareness | 67 | 92 | 8/13 | medium | 0d | 3.6k | capability | capabilities |
| Tool Use and Computer Use | 67 | 92 | 8/13 | medium | 0d | 3.8k | capability | capabilities |
| Risk Cascade Pathways | 67 | 59 | 8/13 | medium | 0d | 1.8k | analysis | models |
| Power-Seeking AI | 67 | 39 | 8/13 | medium | 0d | 3.0k | risk | risks |
| AI Accident Risk Cruxes | 67 | 94 | 9/13 | medium | 1d | 4.1k | crux | cruxes |
| Elon Musk: Track Record | 66 | 26 | 4/13 | medium | 0d | 2.8k | - | people |
| The Case FOR AI Existential Risk | 66 | 53 | 5/13 | medium | 1d | 6.7k | argument | debates |
| Bridgewater AIA Labs | 66 | 46 | 5/13 | high | 0d | 4.0k | organization | organizations |
| California SB 1047 | 66 | 23 | 5/13 | medium | 1d | 3.9k | policy | responses |
| AI Structural Risk Cruxes | 66 | 87 | 7/13 | medium | 0d | 2.0k | crux | cruxes |
| Risk Activation Timeline Model | 66 | 54 | 7/13 | medium | 0d | 2.0k | analysis | models |
| METR | 66 | 84 | 7/13 | high | 1d | 4.4k | organization | organizations |
| Corporate Influence on AI Policy | 66 | 23 | 7/13 | medium | 1d | 3.3k | crux | responses |
| AI Governance and Policy | 66 | 65 | 7/13 | medium | 1d | 3.1k | crux | responses |
| Technical AI Safety Research | 66 | 86 | 7/13 | medium | 0d | 3.8k | crux | responses |
| Mechanistic Interpretability | 66 | 41 | 8/13 | low | 0d | 3.7k | safety-agenda | responses |
| Sleeper Agent Detection | 66 | 51 | 8/13 | low | 1d | 4.3k | approach | responses |
| Evals-Based Deployment Gates | 66 | 42 | 9/13 | medium | 0d | 4.1k | policy | responses |
| Content Verification Tiers | 65 | 70 | 2/13 | medium | 0d | 1.9k | internal | internal |
| SecureBio | 65 | 29 | 4/13 | high | 0d | 1.3k | organization | organizations |
| The Sequences by Eliezer Yudkowsky | 65 | 31 | 4/13 | high | 1d | 1.9k | organization | organizations |
| David Sacks (White House AI Czar) | 65 | 26 | 4/13 | medium | 1d | 2.3k | person | people |
| Council of Europe Framework Convention on Artificial Intelligence | 65 | 72 | 4/13 | medium | 0d | 2.6k | policy | responses |
| Page Type System | 65 | 11 | 4/13 | medium | 0d | 1.5k | internal | internal |
| Eval Saturation & The Evals Gap | 65 | 23 | 5/13 | low | 1d | 4.6k | approach | responses |
| Carlsmith's Six-Premise Argument | 65 | 38 | 6/13 | medium | 0d | 2.2k | analysis | models |
| Electoral Impact Assessment Model | 65 | 50 | 6/13 | medium | 0d | 3.5k | analysis | models |
| Anthropic (Funder) | 65 | 33 | 6/13 | medium | 1d | 7.1k | analysis | organizations |
| Scalable Eval Approaches | 65 | 40 | 6/13 | low | 0d | 3.5k | approach | responses |
| Model Organisms of Misalignment | 65 | 73 | 7/13 | medium | 1d | 2.2k | analysis | models |
| Safety Culture Equilibrium | 65 | 88 | 7/13 | medium | 0d | 2.1k | analysis | models |
| Safety Research Allocation Model | 65 | 89 | 7/13 | medium | 0d | 1.4k | analysis | models |
| MacArthur Foundation | 65 | 29 | 7/13 | high | 0d | 3.5k | organization | organizations |
| Cooperative IRL (CIRL) | 65 | 25 | 7/13 | medium | 1d | 1.9k | approach | responses |
| AI Safety Field Building Analysis | 65 | 41 | 7/13 | medium | 1d | 3.6k | approach | responses |
| Formal Verification (AI Safety) | 65 | 43 | 7/13 | medium | 0d | 2.1k | approach | responses |
| Process Supervision | 65 | 49 | 7/13 | medium | 0d | 1.7k | approach | responses |
| Provably Safe AI (davidad agenda) | 65 | 50 | 7/13 | medium | 0d | 2.2k | approach | responses |
| Reasoning and Planning | 65 | 92 | 8/13 | medium | 0d | 4.9k | capability | capabilities |
| AI Proliferation Risk Model | 65 | 85 | 8/13 | medium | 0d | 1.9k | analysis | models |
| Anthropic IPO | 65 | 33 | 8/13 | medium | 1d | 3.8k | analysis | organizations |
| Palisade Research | 65 | 88 | 8/13 | high | 1d | 2.0k | organization | organizations |
| Alignment Evaluations | 65 | 65 | 8/13 | medium | 0d | 3.8k | approach | responses |
| Capability Unlearning / Removal | 65 | 66 | 8/13 | medium | 0d | 1.7k | approach | responses |
| AI-Driven Concentration of Power | 65 | 39 | 8/13 | medium | 0d | 1.2k | risk | risks |
| AI-Induced Expertise Atrophy | 65 | 91 | 8/13 | high | 0d | 915 | risk | risks |
| Long-Horizon Autonomous Tasks | 65 | 55 | 9/13 | medium | 0d | 2.7k | capability | capabilities |
| AI Misuse Risk Cruxes | 65 | 82 | 9/13 | medium | 0d | 2.1k | crux | cruxes |
| Risk Interaction Matrix Model | 65 | 85 | 9/13 | medium | 0d | 2.6k | analysis | models |
| Red Teaming | 65 | 39 | 9/13 | medium | 0d | 1.4k | approach | responses |
| Sycophancy | 65 | 15 | 9/13 | medium | 0d | 766 | risk | risks |
| US Government Authority Over Commercial AI Infrastructure | 64 | 62 | 4/13 | medium | 0d | 2.1k | policy | responses |
| Concentrated Compute as a Cybersecurity Risk | 64 | 63 | 4/13 | medium | 0d | 2.0k | risk | risks |
| Similar Projects to LongtermWiki: Research Report | 64 | 9 | 4/13 | medium | 1d | 2.1k | - | project |
| AI Epistemic Cruxes | 64 | 82 | 5/13 | medium | 0d | 1.3k | crux | cruxes |
| Responsible Scaling Policies | 64 | 63 | 5/13 | medium | 0d | 4.5k | policy | responses |
| Mass Surveillance | 64 | 17 | 5/13 | medium | 0d | 4.4k | risk | risks |
| Safety-Capability Tradeoff Model | 64 | 86 | 6/13 | medium | 1d | 5.8k | analysis | models |
| AI Flash Dynamics | 64 | 68 | 6/13 | medium | 0d | 3.3k | risk | risks |
| AI-Induced Irreversibility | 64 | 77 | 6/13 | medium | 1d | 3.5k | risk | risks |
| Provable / Guaranteed Safe AI | 64 | 89 | 7/13 | low | 1d | 2.5k | concept | intelligence-paradigms |
| AI Surveillance and Regime Durability Model | 64 | 43 | 7/13 | medium | 0d | 3.3k | analysis | models |
| Circuit Breakers / Inference Interventions | 64 | 43 | 7/13 | low | 0d | 3.2k | approach | responses |
| AI-Powered Consensus Manufacturing | 64 | 16 | 7/13 | medium | 0d | 3.4k | risk | risks |
| AI Value Lock-in | 64 | 16 | 7/13 | medium | 1d | 3.5k | risk | risks |
| Alignment Robustness Trajectory | 64 | 87 | 8/13 | medium | 0d | 3.2k | analysis | models |
| Risk Interaction Network | 64 | 44 | 8/13 | medium | 0d | 1.9k | analysis | models |
| Dangerous Capability Evaluations | 64 | 71 | 8/13 | low | 1d | 3.6k | approach | responses |
| Policy Effectiveness Assessment | 64 | 24 | 8/13 | medium | 0d | 3.6k | analysis | responses |
| Third-Party Model Auditing | 64 | 77 | 8/13 | low | 1d | 3.8k | approach | responses |
| Instrumental Convergence | 64 | 64 | 8/13 | medium | 1d | 5.0k | risk | risks |
| AI Risk Portfolio Analysis | 64 | 47 | 9/13 | medium | 1d | 2.2k | analysis | models |
| Peter Thiel (Funder) | 63 | 45 | 4/13 | medium | 1d | 3.3k | organization | organizations |
| Financial Stability Risks from AI Capital Expenditure | 63 | 58 | 4/13 | medium | 0d | 2.8k | risk | risks |
| Centre for Long-Term Resilience | 63 | 71 | 5/13 | medium | 0d | 2.7k | organization | organizations |
| Elicit (AI Research Tool) | 63 | 83 | 5/13 | high | 0d | 3.1k | organization | organizations |
| Johns Hopkins Center for Health Security | 63 | 33 | 5/13 | medium | 0d | 1.9k | organization | organizations |
| Max Tegmark | 63 | 82 | 5/13 | medium | 1d | 2.6k | person | people |
| International AI Safety Summits | 63 | 67 | 5/13 | medium | 0d | 4.7k | policy | responses |
| AI Welfare and Digital Minds | 63 | 62 | 5/13 | medium | 1d | 2.8k | concept | risks |
| Earning to Give: The EA Strategy and Its Limits | 63 | 52 | 6/13 | medium | 1d | 2.4k | - | history |
| Claude Code Espionage Incident (2025) | 63 | 46 | 6/13 | medium | 0d | 3.2k | - | incidents |
| Vipul Naik | 63 | 24 | 6/13 | high | 1d | 3.0k | person | people |
| AI-Assisted Deliberation Platforms | 63 | 22 | 6/13 | medium | 0d | 3.5k | approach | responses |
| Mesa-Optimization | 63 | 19 | 6/13 | medium | 1d | 4.3k | risk | risks |
| Autonomous Cyber Attack Timeline | 63 | 72 | 7/13 | medium | 0d | 1.7k | analysis | models |
| Power-Seeking Emergence Conditions Model | 63 | 73 | 7/13 | medium | 1d | 2.2k | analysis | models |
| ControlAI | 63 | 42 | 7/13 | high | 1d | 2.2k | organization | organizations |
| NIST and AI Safety | 63 | 77 | 7/13 | high | 1d | 2.8k | organization | organizations |
| AI-Era Epistemic Security | 63 | 67 | 7/13 | medium | 0d | 3.4k | approach | responses |
| AI Output Filtering | 63 | 63 | 7/13 | low | 0d | 2.6k | approach | responses |
| Refusal Training | 63 | 21 | 7/13 | low | 0d | 2.8k | approach | responses |
| Persuasion and Social Manipulation | 63 | 53 | 8/13 | medium | 0d | 2.8k | capability | capabilities |
| Longterm Wiki | 63 | 21 | 8/13 | medium | 0d | 2.2k | project | responses |
| AI Whistleblower Protections | 63 | 48 | 8/13 | medium | 1d | 2.6k | policy | responses |
| Goal Misgeneralization | 63 | 84 | 8/13 | medium | 0d | 3.5k | risk | risks |
| Autonomous Coding | 63 | 53 | 9/13 | medium | 0d | 2.5k | capability | capabilities |
| AI-Assisted Alignment | 63 | 25 | 9/13 | medium | 1d | 1.9k | approach | responses |
| Failed and Stalled AI Policy Proposals | 63 | 41 | 9/13 | medium | 0d | 4.5k | policy | responses |
| RLHF / Constitutional AI | 63 | 23 | 9/13 | medium | 0d | 3.0k | capability | responses |
| Authoritarian Tools Diffusion Model | 62 | 38 | 4/13 | medium | 0d | 7.0k | analysis | models |
| Short Timeline Policy Implications | 62 | 80 | 5/13 | medium | 0d | 1.9k | analysis | models |
| Center for Applied Rationality | 62 | 85 | 6/13 | high | 1d | 3.4k | organization | organizations |
| AI Lab Safety Culture | 62 | 42 | 6/13 | medium | 1d | 4.0k | approach | responses |
| Corrigibility Failure | 62 | 17 | 6/13 | medium | 1d | 3.9k | risk | risks |
| Technical Pathway Decomposition | 62 | 54 | 7/13 | medium | 0d | 2.3k | analysis | models |
| Anthropic Core Views | 62 | 53 | 7/13 | medium | 1d | 3.1k | safety-agenda | responses |
| Preference Optimization Methods | 62 | 49 | 7/13 | medium | 0d | 2.8k | approach | responses |
| Autonomous Weapons Escalation Model | 62 | 45 | 8/13 | medium | 0d | 2.6k | analysis | models |
| Corrigibility Failure Pathways | 62 | 73 | 8/13 | medium | 0d | 1.9k | analysis | models |
| Giving What We Can | 62 | 23 | 8/13 | high | 1d | 1.7k | organization | organizations |
| Open Source AI Safety | 62 | 49 | 8/13 | medium | 0d | 2.0k | approach | responses |
| Responsible Scaling Policies | 62 | 51 | 8/13 | medium | 1d | 3.4k | policy | responses |
| Large Language Models | 62 | 90 | 9/13 | medium | 0d | 3.7k | concept | capabilities |
| Capability-Alignment Race Model | 62 | 76 | 9/13 | medium | 0d | 1.8k | analysis | models |
| Deceptive Alignment Decomposition Model | 62 | 85 | 9/13 | medium | 0d | 2.1k | analysis | models |
| Worldview-Intervention Mapping | 62 | 52 | 9/13 | medium | 0d | 2.2k | analysis | models |
| OpenAI | 62 | 72 | 10/13 | high | 1d | 3.8k | organization | organizations |
| Eliezer Yudkowsky: Track Record | 61 | 26 | 4/13 | medium | 1d | 4.2k | - | people |
| Leopold Aschenbrenner | 61 | 27 | 4/13 | high | 1d | 2.6k | person | people |
| Why Alignment Might Be Hard | 61 | 95 | 6/13 | high | 0d | 7.5k | argument | debates |
| Racing Dynamics Impact Model | 61 | 79 | 7/13 | medium | 0d | 1.6k | analysis | models |
| Samotsvety | 61 | 46 | 7/13 | high | 1d | 2.3k | organization | organizations |
| Goal Misgeneralization Probability Model | 61 | 87 | 8/13 | medium | 0d | 1.7k | analysis | models |
| Mesa-Optimization Risk Analysis | 61 | 54 | 8/13 | medium | 0d | 1.6k | analysis | models |
| Multipolar Trap Dynamics Model | 61 | 59 | 8/13 | medium | 0d | 1.4k | analysis | models |
| Scheming Likelihood Assessment | 61 | 80 | 8/13 | medium | 0d | 1.5k | analysis | models |
| AI-Enabled Authoritarian Takeover | 61 | 78 | 8/13 | medium | 0d | 4.0k | risk | risks |
| Emergent Capabilities | 61 | 58 | 8/13 | medium | 1d | 3.0k | risk | risks |
| Epic Page Conventions | 60 | 50 | 1/13 | medium | 0d | 439 | internal | internal |
| LAWS Proliferation Model | 60 | 73 | 4/13 | medium | 0d | 5.3k | analysis | models |
| 1Day Sooner | 60 | 33 | 4/13 | high | 0d | 1.7k | organization | organizations |
| NTI \| bio (Nuclear Threat Initiative - Biological Program) | 60 | 66 | 4/13 | high | 0d | 1.8k | organization | organizations |
| Schmidt Futures | 60 | 46 | 4/13 | medium | 1d | 3.0k | organization | organizations |
| Yann LeCun: Track Record | 60 | 24 | 4/13 | medium | 1d | 2.8k | - | people |
| Jeffrey Epstein's Connections to AI Researchers | 60 | 42 | 5/13 | medium | 0d | 2.9k | - | history |
| Blueprint Biosecurity | 60 | 33 | 5/13 | high | 0d | 1.1k | organization | organizations |
| Nick Beckstead | 60 | 58 | 5/13 | medium | 1d | 1.9k | person | people |
| Sam Altman: Track Record | 60 | 64 | 5/13 | medium | 0d | 1.9k | - | people |
| NIST AI Risk Management Framework | 60 | 40 | 5/13 | medium | 0d | 4.7k | policy | responses |
| Recoding America | 60 | 62 | 5/13 | medium | 1d | 1.8k | resource | responses |
| IBBIS (International Biosecurity and Biosafety Initiative for Science) | 60 | 75 | 6/13 | high | 0d | 1.8k | organization | organizations |
| Bletchley Declaration | 60 | 53 | 6/13 | medium | 1d | 2.0k | policy | responses |
| Epistemic Sycophancy | 60 | 68 | 6/13 | medium | 0d | 3.5k | risk | risks |
| Open vs Closed Source AI | 60 | 52 | 7/13 | medium | 1d | 2.2k | crux | debates |
| Instrumental Convergence Framework | 60 | 54 | 7/13 | medium | 0d | 2.4k | analysis | models |
| Rethink Priorities | 60 | 88 | 7/13 | high | 0d | 3.7k | organization | organizations |
| SecureDNA | 60 | 29 | 7/13 | high | 0d | 1.1k | organization | organizations |
| Will MacAskill | 60 | 33 | 7/13 | high | 1d | 2.1k | person | people |
| Seoul AI Safety Summit Declaration | 60 | 57 | 7/13 | medium | 1d | 2.8k | policy | responses |
| Anthropic Stakeholders | 60 | 85 | 7/12 | medium | 1d | 952 | table | organizations |
| Compounding Risks Analysis | 60 | 78 | 8/13 | medium | 0d | 1.8k | analysis | models |
| Expected Value of AI Safety Research | 60 | 54 | 8/13 | high | 0d | 1.4k | analysis | models |
| FTX Future Fund | 60 | 72 | 8/13 | medium | 1d | 2.3k | organization | organizations |
| MATS ML Alignment Theory Scholars program | 60 | 32 | 8/13 | high | 1d | 2.5k | organization | organizations |
| Proliferation | 60 | 57 | 8/13 | medium | 0d | 2.4k | risk | risks |
| Large Language Models | 60 | 94 | 9/13 | high | 0d | 6.1k | capability | capabilities |
| EA Shareholder Diversification from Anthropic | 60 | 64 | 10/13 | medium | 1d | 2.2k | concept | organizations |
| Authentication Collapse Timeline Model | 59 | 75 | 4/13 | medium | 0d | 6.3k | analysis | models |
| Situational Awareness LP | 59 | 30 | 5/13 | high | 1d | 2.2k | organization | organizations |
| Flash Dynamics Threshold Model | 59 | 73 | 6/13 | medium | 0d | 2.9k | analysis | models |
| Institutional Adaptation Speed Model | 59 | 78 | 6/13 | medium | 0d | 3.2k | analysis | models |
| Trust Erosion Dynamics Model | 59 | 57 | 6/13 | medium | 0d | 2.5k | analysis | models |
| Pause AI | 59 | 83 | 6/13 | high | 0d | 2.2k | organization | organizations |
| Feedback Loop & Cascade Model | 59 | 36 | 7/13 | medium | 0d | 2.2k | analysis | models |
| AI-Era Epistemic Infrastructure | 59 | 70 | 7/13 | medium | 0d | 2.7k | approach | responses |
| Mechanistic Interpretability | 59 | 40 | 7/13 | medium | 1d | 3.6k | approach | responses |
| International AI Coordination Game | 59 | 36 | 8/13 | medium | 0d | 1.9k | analysis | models |
| Multi-Actor Strategic Landscape | 59 | 36 | 8/13 | medium | 0d | 1.9k | analysis | models |
| Survival and Flourishing Fund (SFF) | 59 | 29 | 8/13 | high | 1d | 4.8k | organization | organizations |
| Agent Foundations | 59 | 26 | 8/13 | medium | 0d | 2.2k | approach | responses |
| Corrigibility Research | 59 | 24 | 9/13 | medium | 1d | 2.4k | safety-agenda | responses |
| AGI Timeline | 59 | 56 | 10/13 | medium | 1d | 2.0k | concept | forecasting |
| Anthropic Pre-IPO DAF Transfers | 58 | 32 | 4/13 | medium | 0d | 3.1k | analysis | organizations |
| Trust Cascade Failure Model | 58 | 76 | 5/13 | medium | 0d | 4.4k | analysis | models |
| CSER (Centre for the Study of Existential Risk) | 58 | 36 | 5/13 | high | 1d | 2.1k | organization | organizations |
| AI-Bioweapons Timeline Model | 58 | 44 | 6/13 | medium | 0d | 2.6k | analysis | models |
| Microsoft AI | 58 | 43 | 6/13 | high | 1d | 8.0k | organization | organizations |
| Marc Andreessen (AI Investor) | 58 | 30 | 6/13 | high | 1d | 3.2k | person | people |
| Compute Governance: AI Chips Export Controls Policy | 58 | 64 | 6/13 | high | 0d | 2.2k | policy | responses |
| Adversarial Training | 58 | 26 | 7/13 | medium | 0d | 1.8k | approach | responses |
| AI Content Authentication | 58 | 22 | 7/13 | medium | 0d | 2.4k | approach | responses |
| Goal Misgeneralization Research | 58 | 43 | 7/13 | medium | 0d | 2.0k | approach | responses |
| The Case AGAINST AI Existential Risk | 58 | 90 | 8/13 | medium | 1d | 1.7k | argument | debates |
| Dense Transformers | 58 | 80 | 8/13 | medium | 1d | 3.4k | concept | intelligence-paradigms |
| Eli Lifland | 58 | 27 | 8/13 | high | 1d | 1.1k | person | people |
| Apollo Research | 58 | 41 | 9/13 | high | 0d | 2.9k | organization | organizations |
| Frontier Model Forum | 58 | 84 | 9/13 | medium | 1d | 2.9k | organization | organizations |
| Expertise Atrophy Cascade Model | 57 | 73 | 5/13 | medium | 0d | 4.2k | analysis | models |
| Irreversibility Threshold Model | 57 | 54 | 5/13 | medium | 0d | 3.1k | analysis | models |
| Winner-Take-All Concentration Model | 57 | 35 | 5/13 | medium | 0d | 3.1k | analysis | models |
| Cyber Offense-Defense Balance Model | 57 | 59 | 6/13 | medium | 0d | 2.7k | analysis | models |
| Societal Response & Adaptation Model | 57 | 78 | 7/13 | medium | 0d | 1.9k | analysis | models |
| ARC (Alignment Research Center) | 57 | 39 | 7/13 | medium | 1d | 3.7k | organization | organizations |
| China AI Regulations | 57 | 72 | 7/13 | medium | 0d | 3.3k | policy | responses |
| Heavy Scaffolding / Agentic Systems | 57 | 37 | 8/13 | medium | 0d | 2.8k | concept | intelligence-paradigms |
| Authentication Collapse | 57 | 57 | 8/13 | medium | 0d | 1.9k | risk | risks |
| Critical Insights | 56 | 12 | 3/13 | medium | 0d | 1.2k | - | project |
| Whistleblower Dynamics Model | 56 | 71 | 4/13 | medium | 0d | 6.4k | analysis | models |
| Automation Bias (AI Systems) | 56 | 16 | 5/13 | medium | 0d | 2.9k | risk | risks |
| Regulatory Capacity Threshold Model | 56 | 58 | 6/13 | medium | 0d | 1.4k | analysis | models |
| Wikipedia and AI Content | 56 | 43 | 6/13 | medium | 0d | 1.8k | concept | responses |
| Collective Intelligence / Coordination | 56 | 80 | 7/13 | medium | 0d | 2.7k | capability | intelligence-paradigms |
| Long-Term Future Fund (LTFF) | 56 | 31 | 7/13 | high | 1d | 4.8k | organization | organizations |
| Prediction Markets (AI Forecasting) | 56 | 21 | 7/13 | medium | 0d | 1.4k | approach | responses |
| Autonomous Weapons | 56 | 17 | 7/13 | medium | 0d | 2.9k | risk | risks |
| Fact System Strategy | 55 | 10 | 1/13 | medium | 0d | 2.3k | internal | internal |
| Red Queen Bio | 55 | 35 | 3/13 | high | 0d | 1.5k | organization | organizations |
| Wiki Generation Architecture: Multi-Agent Multi-Pass Design | 55 | 75 | 3/13 | medium | 0d | 5.0k | internal | internal |
| Biosecurity Organizations (Overview) | 55 | 66 | 4/13 | medium | 0d | 1.1k | - | organizations |
| AI Trust Cascade Failure | 55 | 18 | 4/13 | high | 0d | 3.2k | risk | risks |
| AI Surveillance and US Democratic Erosion | 55 | 85 | 4/13 | medium | 0d | 2.6k | risk | risks |
| Controlled Vocabulary for Longtermist Analysis | 55 | 13 | 4/13 | medium | 0d | 1.1k | - | reports |
| AI Revenue Sources | 55 | 67 | 5/13 | high | 0d | 3.1k | organization | organizations |
| Ajeya Cotra | 55 | 55 | 5/13 | high | 1d | 1.9k | person | people |
| Neuromorphic Hardware | 55 | 37 | 6/13 | medium | 0d | 4.5k | capability | intelligence-paradigms |
| Disinformation Detection Arms Race Model | 55 | 89 | 6/13 | medium | 0d | 2.7k | analysis | models |
| LongtermWiki Impact Model | 55 | 34 | 6/13 | medium | 0d | 2.1k | analysis | models |
| Forecasting Research Institute | 55 | 36 | 6/13 | high | 1d | 3.9k | organization | organizations |
| Turion | 55 | 30 | 6/13 | high | 0d | 509 | organization | organizations |
| Probing / Linear Probes | 55 | 21 | 6/13 | medium | 0d | 2.7k | approach | responses |
| Texas TRAIGA Responsible AI Governance Act | 55 | 18 | 6/13 | medium | 0d | 2.2k | policy | responses |
| Rogue AI Scenarios | 55 | 39 | 6/13 | high | 0d | 4.0k | risk | risks |
| About This Wiki | 55 | 12 | 6/13 | medium | 0d | 1.1k | internal | internal |
| Neuro-Symbolic Hybrid Systems | 55 | 74 | 7/13 | medium | 0d | 2.9k | capability | intelligence-paradigms |
| Sparse / MoE Transformers | 55 | 39 | 7/13 | medium | 0d | 2.7k | capability | intelligence-paradigms |
| Anthropic Impact Assessment Model | 55 | 50 | 7/13 | medium | 0d | 1.7k | analysis | models |
| Sam Bankman-Fried | 55 | 68 | 7/13 | medium | 1d | 3.2k | person | people |
| EU AI Act | 55 | 42 | 7/13 | medium | 0d | 3.5k | policy | responses |
| MAIM (Mutually Assured AI Malfunction) | 55 | 48 | 7/13 | medium | 0d | 1.4k | policy | responses |
| Reward Modeling | 55 | 20 | 7/13 | medium | 0d | 1.9k | approach | responses |
| Planning for Frontier Lab Scaling | 55 | 6 | 8/13 | medium | 0d | 3.3k | analysis | models |
| Safety Spending at Scale | 55 | 6 | 8/13 | medium | 0d | 2.6k | analysis | models |
| Coefficient Giving | 55 | 36 | 8/13 | high | 1d | 3.9k | organization | organizations |
| William and Flora Hewlett Foundation | 55 | 81 | 8/13 | medium | 0d | 3.0k | organization | organizations |
| AI for Human Reasoning Fellowship | 55 | 25 | 8/13 | medium | 1d | 2.2k | approach | responses |
| Cooperative AI | 55 | 81 | 8/13 | medium | 0d | 2.0k | approach | responses |
| Is EA Biosecurity Work Limited to Restricting LLM Biological Use? | 55 | 40 | 8/13 | medium | 1d | 2.0k | analysis | responses |
| AI-Driven Trust Decline | 55 | 62 | 8/13 | medium | 0d | 1.5k | risk | risks |
| Pre-TAI Capital Deployment: $100B-$300B+ Spending Analysis | 55 | 6 | 9/13 | medium | 0d | 3.1k | analysis | models |
| AI Preference Manipulation | 55 | 91 | 9/13 | medium | 0d | 969 | risk | risks |
| Cross-Link Automation Proposal | 54 | 10 | 3/13 | medium | 0d | 634 | - | reports |
| Surveillance Chilling Effects Model | 54 | 75 | 4/13 | medium | 0d | 2.3k | analysis | models |
| AI-Augmented Forecasting | 54 | 66 | 5/13 | medium | 0d | 2.5k | approach | responses |
| X Community Notes | 54 | 22 | 6/13 | medium | 0d | 1.8k | project | responses |
| XPT (Existential Risk Persuasion Tournament) | 54 | 56 | 6/13 | medium | 0d | 2.0k | project | responses |
| Disinformation | 54 | 66 | 6/13 | medium | 0d | 3.0k | risk | risks |
| Biological / Organoid Computing | 54 | 81 | 7/13 | medium | 0d | 2.6k | capability | intelligence-paradigms |
| State-Space Models / Mamba | 54 | 34 | 7/13 | medium | 0d | 3.5k | capability | intelligence-paradigms |
| AI Winner-Take-All Dynamics | 54 | 77 | 7/13 | medium | 0d | 1.5k | risk | risks |
| Government Regulation vs Industry Self-Governance | 54 | 75 | 8/13 | medium | 0d | 1.7k | crux | debates |
| World Models + Planning | 54 | 75 | 8/13 | medium | 1d | 2.2k | capability | intelligence-paradigms |
| Deepfakes Authentication Crisis Model | 53 | 64 | 4/13 | medium | 0d | 4.7k | analysis | models |
| Media-Policy Feedback Loop Model | 53 | 84 | 4/13 | medium | 0d | 2.8k | analysis | models |
| Robin Hanson | 53 | 25 | 4/13 | high | 0d | 2.9k | person | people |
| EA Institutions' Response to the FTX Collapse | 53 | 62 | 5/13 | medium | 1d | 4.2k | - | history |
| Sycophancy Feedback Loop Model | 53 | 76 | 5/13 | medium | 0d | 3.2k | analysis | models |
| Coalition for Epidemic Preparedness Innovations | 53 | 37 | 5/13 | high | 0d | 2.2k | organization | organizations |
| Why Alignment Might Be Easy | 53 | 52 | 6/13 | medium | 1d | 4.1k | argument | debates |
| EA and Longtermist Wins and Losses | 53 | 50 | 6/13 | high | 1d | 7.7k | - | history |
| FTX Red Flags: Pre-Collapse Warning Signs That Were Overlooked | 53 | 52 | 7/13 | medium | 0d | 3.7k | - | history |
| Light Scaffolding | 53 | 75 | 7/13 | medium | 0d | 2.0k | capability | intelligence-paradigms |
| ForecastBench | 53 | 20 | 7/13 | medium | 0d | 1.9k | project | responses |
| Epistemic Learned Helplessness | 53 | 62 | 7/13 | medium | 0d | 1.5k | risk | risks |
| Risk Pages Style Guide | 53 | 13 | 7/13 | medium | 0d | 425 | internal | internal |
| Novel / Unknown Approaches | 53 | 44 | 8/13 | medium | 0d | 3.3k | capability | intelligence-paradigms |
| AI Impacts | 53 | 89 | 8/13 | high | 0d | 1.5k | organization | organizations |
| Colorado AI Act (SB 205) | 53 | 23 | 8/13 | medium | 0d | 3.5k | policy | responses |
| Frontier Lab Cost Structure | 53 | 6 | 9/13 | medium | 0d | 3.1k | analysis | models |
| Jaan Tallinn | 53 | 28 | 10/13 | high | 1d | 1.1k | person | people |
| Expertise Atrophy Progression Model | 52 | 76 | 4/13 | medium | 0d | 2.5k | analysis | models |
| Donations List Website | 52 | 61 | 4/13 | medium | 1d | 3.5k | project | responses |
| Automation Bias Cascade Model | 52 | 52 | 5/13 | medium | 0d | 3.7k | analysis | models |
| Frontier AI Company Comparison (2026) | 52 | 67 | 5/13 | medium | 1d | 3.7k | - | organizations |
| Post-Incident Recovery Model | 52 | 43 | 6/13 | medium | 0d | 1.9k | analysis | models |
| Gwern Branwen | 52 | 27 | 6/13 | high | 1d | 2.7k | person | people |
| AI Megaproject Infrastructure | 52 | 7 | 7/13 | medium | 0d | 2.7k | analysis | models |
| UK AI Safety Institute | 52 | 32 | 7/13 | high | 1d | 3.6k | organization | organizations |
| AGI Development | 52 | 50 | 8/13 | medium | 1d | 2.3k | - | forecasting |
| Minimal Scaffolding | 52 | 79 | 8/13 | medium | 0d | 2.5k | capability | intelligence-paradigms |
| AI Talent Market Dynamics | 52 | 6 | 8/13 | medium | 0d | 3.7k | analysis | models |
| LongtermWiki Vision Document | 51 | 9 | 4/13 | medium | 0d | 1.2k | - | project |
| Fraud Sophistication Curve Model | 51 | 43 | 5/13 | medium | 0d | 3.5k | analysis | models |
| Parameter Interaction Network | 51 | 73 | 6/13 | medium | 0d | 1.3k | analysis | models |
| Genetic Enhancement / Selection | 51 | 79 | 7/13 | medium | 1d | 3.6k | capability | intelligence-paradigms |
| Epoch AI | 51 | 88 | 7/13 | high | 1d | 4.6k | organization | organizations |
| Future of Humanity Institute (FHI) | 51 | 51 | 8/13 | high | 1d | 4.2k | organization | organizations |
| Meta AI (FAIR) | 51 | 30 | 9/13 | high | 1d | 3.0k | organization | organizations |
| AI Risk Public Education | 51 | 62 | 9/13 | medium | 0d | 2.0k | approach | responses |
| Content Pipeline Architecture: Faster Page Creation | 50 | 80 | 1/13 | medium | 0d | 2.4k | internal | internal |
| Reasoning Traces: Making Every Claim's Derivation Auditable | 50 | 90 | 1/13 | medium | 0d | 2.4k | internal | internal |
| Knowledge Base Architecture | 50 | 10 | 2/13 | medium | 0d | 1.6k | internal | internal |
| Canonical Facts & Calc Usage Guide | 50 | 12 | 3/13 | medium | 0d | 1.0k | internal | internal |
| Cooperate-Bot | 50 | 45 | 4/13 | medium | 0d | 1.5k | concept | models |
| Value Aligned Research Advisors | 50 | 30 | 4/13 | high | 1d | 1.7k | organization | organizations |
| FTX Collapse and EA's Public Credibility | 50 | 62 | 5/13 | medium | 0d | 2.2k | - | history |
| Longtermism's Philosophical Credibility After FTX | 50 | 58 | 5/13 | medium | 1d | 3.7k | - | history |
| Autonomous Cooperative Agents | 50 | 55 | 5/13 | medium | 0d | 1.3k | concept | models |
| AI Futures Project | 50 | 83 | 5/13 | high | 0d | 2.4k | organization | organizations |
| Chan Zuckerberg Initiative | 50 | 33 | 5/13 | high | 1d | 4.8k | organization | organizations |
| Founders Fund | 50 | 48 | 5/13 | high | 0d | 3.0k | organization | organizations |
| Global Partnership on Artificial Intelligence (GPAI) | 50 | 49 | 5/13 | high | 0d | 2.5k | organization | organizations |
| Biosecurity Interventions (Overview) | 50 | 47 | 5/13 | medium | 0d | 627 | - | responses |
| AI Acceleration Tradeoff Model | 50 | 73 | 6/13 | medium | 0d | 3.5k | analysis | models |
| Epistemic Collapse Threshold Model | 50 | 44 | 6/13 | medium | 0d | 1.4k | analysis | models |
| Swift Centre | 50 | 46 | 6/13 | high | 0d | 2.3k | organization | organizations |
| Grokipedia | 50 | 29 | 6/13 | medium | 1d | 1.2k | project | responses |
| Arb Research | 50 | 41 | 7/13 | high | 0d | 1.7k | organization | organizations |
| FutureSearch | 50 | 74 | 7/13 | high | 0d | 1.7k | organization | organizations |
| Good Judgment (Forecasting) | 50 | 65 | 7/13 | high | 1d | 3.7k | organization | organizations |
| Manifund | 50 | 31 | 7/13 | high | 1d | 3.8k | organization | organizations |
| Metaculus | 50 | 31 | 7/13 | high | 0d | 4.6k | organization | organizations |
| MIRI (Machine Intelligence Research Institute) | 50 | 32 | 7/13 | high | 1d | 1.9k | organization | organizations |
| Nuño Sempere | 50 | 83 | 7/13 | high | 1d | 2.6k | person | people |
| AI Model Specifications | 50 | 40 | 7/13 | medium | 0d | 2.7k | policy | responses |
| Deepfakes | 50 | 16 | 7/13 | medium | 0d | 1.5k | risk | risks |
| Lionheart Ventures | 50 | 65 | 8/13 | high | 0d | 2.2k | organization | organizations |
| Manifest (Forecasting Conference) | 50 | 75 | 8/13 | high | 0d | 991 | organization | organizations |
| AI Knowledge Monopoly | 50 | 15 | 8/13 | medium | 0d | 1.9k | risk | risks |
| Research-First Page Creation Pipeline | 49 | 10 | 3/13 | medium | 0d | 1.1k | - | reports |
| Tools & Platforms (Overview) | 49 | 39 | 4/13 | medium | 1d | 842 | - | responses |
| Epistemic Collapse | 49 | 86 | 5/13 | medium | 0d | 779 | risk | risks |
| Is Interpretability Sufficient for Safety? | 49 | 50 | 6/13 | medium | 1d | 2.0k | crux | debates |
| Brain-Computer Interfaces | 49 | 37 | 7/13 | medium | 0d | 3.0k | capability | intelligence-paradigms |
| Dustin Moskovitz (AI Safety Funder) | 49 | 28 | 7/13 | high | 1d | 4.8k | person | people |
| LongtermWiki Strategy Brainstorm | 48 | 9 | 2/13 | medium | 0d | 2.1k | - | project |
| AI Safety Organizations (Overview) | 48 | 52 | 4/13 | medium | 0d | 952 | - | organizations |
| xAI | 48 | 30 | 5/13 | high | 1d | 2.1k | organization | organizations |
| Design Sketches for Collective Epistemics | 48 | 70 | 5/13 | medium | 0d | 1.4k | approach | responses |
| Public Opinion Evolution Model | 48 | 71 | 6/13 | medium | 0d | 2.8k | analysis | models |
| QURI (Quantified Uncertainty Research Institute) | 48 | 37 | 6/13 | high | 1d | 4.4k | organization | organizations |
| Rating System | 48 | 11 | 6/13 | medium | 0d | 874 | internal | internal |
| Whole Brain Emulation | 48 | 47 | 7/13 | medium | 1d | 3.5k | capability | intelligence-paradigms |
| AI-Assisted Knowledge Management | 48 | 29 | 7/13 | medium | 0d | 2.2k | concept | responses |
| Secure AI Project | 47 | 82 | 6/13 | high | 1d | 1.6k | organization | organizations |
| Should We Pause AI Development? | 47 | 46 | 8/13 | medium | 1d | 1.2k | crux | debates |
| AI-Assisted Research Workflows: Best Practices | 46 | 11 | 4/13 | medium | 0d | 2.2k | - | reports |
| Causal Diagram Visualization: Tools & Best Practices | 46 | 12 | 4/13 | medium | 0d | 1.7k | - | reports |
| Future of Life Institute (FLI) | 46 | 76 | 7/13 | high | 1d | 6.1k | organization | organizations |
| Canada AIDA | 46 | 70 | 8/13 | medium | 0d | 3.3k | policy | responses |
| Knowledge Graph Ontology: Design & Implementation Status | 45 | 85 | 1/13 | medium | 0d | 3.7k | internal | internal |
| Anthropic Founder Pledges: Interventions to Increase Follow-Through | 45 | 36 | 4/13 | medium | 1d | 4.0k | analysis | models |
| Safe Superintelligence Inc (SSI) | 45 | 32 | 4/13 | high | 1d | 2.5k | organization | organizations |
| Community Notes for Everything | 45 | 41 | 4/13 | medium | 0d | 1.6k | approach | responses |
| AI Content Provenance Tracing | 45 | 49 | 4/13 | medium | 0d | 2.7k | approach | responses |
| Key Near-Term AI Risks | 45 | 80 | 4/13 | medium | 0d | 2.9k | risk | risks |
| Cooperative Funding Mechanisms | 45 | 40 | 5/13 | medium | 0d | 1.6k | concept | models |
| Seldon Lab | 45 | 86 | 5/13 | high | 1d | 2.8k | organization | organizations |
| AI-Assisted Rhetoric Highlighting | 45 | 18 | 5/13 | medium | 0d | 2.4k | approach | responses |
| Timelines Wiki | 45 | 78 | 5/13 | medium | 1d | 1.3k | project | responses |
| AI System Reliability Tracking | 45 | 62 | 6/13 | medium | 1d | 2.6k | approach | responses |
| Singapore Consensus on AI Safety Research Priorities | 45 | 5 | 6/13 | medium | 0d | 1.2k | policy | responses |
| Stampy / AISafety.info | 45 | 19 | 6/13 | medium | 0d | 1.3k | project | responses |
| 80,000 Hours | 45 | 51 | 7/13 | high | 1d | 3.8k | organization | organizations |
| Elon Musk (Funder) | 45 | 33 | 7/13 | medium | 1d | 1.6k | analysis | organizations |
| Longview Philanthropy | 45 | 46 | 7/13 | high | 0d | 3.5k | organization | organizations |
| Vitalik Buterin (Funder) | 45 | 29 | 7/13 | high | 0d | 1.3k | organization | organizations |
| Issa Rice | 45 | 26 | 7/13 | high | 1d | 1.9k | person | people |
| Epistemic Virtue Evals | 45 | 22 | 7/13 | medium | 0d | 1.5k | approach | responses |
| Demis Hassabis | 45 | 29 | 9/13 | high | 1d | 3.2k | person | people |
| AI Governance & Policy (Overview) | 44 | 72 | 4/13 | high | 0d | 519 | - | responses |
| Cause-Effect Diagram Style Guide | 44 | 10 | 4/13 | medium | 0d | 867 | internal | internal |
| Content Database System | 44 | 11 | 5/13 | medium | 0d | 746 | internal | internal |
| Deep Learning Revolution (2012-2020) | 44 | 91 | 6/13 | high | 1d | 9.1k | historical | history |
| LessWrong | 44 | 33 | 6/13 | high | 1d | 1.9k | organization | organizations |
| Helen Toner | 43 | 27 | 4/13 | high | 1d | 5.5k | person | people |
| Evan Hubinger | 43 | 76 | 5/13 | high | 1d | 4.4k | person | people |
| GovAI | 43 | 51 | 6/13 | high | 0d | 1.7k | organization | organizations |
| Manifold (Prediction Market) | 43 | 65 | 6/13 | high | 1d | 4.1k | organization | organizations |
| AI-Driven Legal Evidence Crisis | 43 | 70 | 6/13 | medium | 0d | 1.1k | risk | risks |
| Historical Revisionism | 43 | 15 | 7/13 | medium | 0d | 1.3k | risk | risks |
| CSET (Center for Security and Emerging Technology) | 43 | 34 | 8/13 | high | 1d | 3.8k | organization | organizations |
| Accident Risks (Overview) | 42 | 73 | 4/13 | high | 0d | 452 | - | risks |
| Cause-Effect Graph Demo | 42 | 74 | 4/13 | low | 0d | 275 | - | guides |
| Mainstream Era (2020-Present) | 42 | 47 | 6/13 | high | 1d | 4.3k | historical | history |
| CAIS (Center for AI Safety) | 42 | 89 | 7/13 | high | 1d | 2.9k | organization | organizations |
| Is Scaling All You Need? | 42 | 55 | 8/13 | medium | 0d | 1.0k | crux | debates |
| Geoffrey Hinton | 42 | 28 | 8/13 | high | 1d | 2.0k | person | people |
| AI-Driven Economic Disruption | 42 | 57 | 9/13 | medium | 0d | 1.7k | risk | risks |
| Automation Tools | 41 | 10 | 4/13 | medium | 0d | 1.3k | internal | internal |
| Government AI Safety Organizations (Overview) | 41 | 52 | 5/13 | medium | 0d | 333 | - | organizations |
| Yann LeCun | 41 | 62 | 5/13 | high | 1d | 4.4k | person | people |
| AI Forecasting Benchmark Tournament | 41 | 23 | 6/13 | medium | 0d | 1.7k | project | responses |
| Squiggle | 41 | 16 | 6/13 | medium | 0d | 1.9k | project | responses |
| Toby Ord | 41 | 26 | 7/13 | high | 1d | 2.5k | person | people |
| Dario Amodei | 41 | 31 | 8/13 | high | 0d | 2.6k | person | people |
| Page Coverage Guide | 40 | 10 | 2/13 | medium | 0d | 1.1k | internal | internal |
| AI-Powered Investigation | 40 | 6 | 3/13 | medium | 0d | 2.8k | capability | capabilities |
| AI for Accountability and Anti-Corruption | 40 | 7 | 3/13 | medium | 0d | 2.0k | approach | responses |
| AI-Powered Deanonymization | 40 | 6 | 3/13 | medium | 0d | 1.9k | risk | risks |
| Importance Ranking System | 40 | 7 | 3/13 | medium | 0d | 573 | internal | internal |
| Misuse Risks (Overview) | 40 | 59 | 4/13 | high | 0d | 366 | - | risks |
| Lighthaven (Event Venue) | 40 | 31 | 7/13 | high | 0d | 2.5k | organization | organizations |
| Sam Altman | 40 | 27 | 7/13 | high | 1d | 6.7k | person | people |
| MIT AI Risk Repository | 40 | 53 | 7/13 | medium | 0d | 1.1k | project | responses |
| AI-Powered Investigation Risks | 40 | 7 | 7/13 | medium | 0d | 2.3k | risk | risks |
| Holden Karnofsky | 40 | 30 | 8/13 | high | 1d | 1.8k | person | people |
| Paul Christiano | 39 | 28 | 7/13 | high | 1d | 1.1k | person | people |
| Yoshua Bengio | 39 | 27 | 8/13 | high | 1d | 1.8k | person | people |
| Sentinel (Catastrophic Risk Foresight) | 39 | 29 | 10/13 | high | 1d | 2.1k | organization | organizations |
| Wikipedia Views | 38 | 14 | 4/13 | medium | 1d | 3.9k | project | responses |
| EA Global | 38 | 78 | 5/13 | high | 1d | 3.4k | organization | organizations |
| Elon Musk (AI Industry) | 38 | 28 | 6/13 | high | 1d | 4.8k | person | people |
| AI Doomer Worldview | 38 | 21 | 6/13 | high | 1d | 2.2k | concept | worldviews |
| Models Style Guide | 38 | 45 | 6/13 | high | 0d | 1.0k | internal | internal |
| Council on Strategic Risks | 38 | 42 | 7/13 | high | 0d | 1.9k | organization | organizations |
| Lightning Rod Labs | 38 | 65 | 7/13 | high | 0d | 1.9k | organization | organizations |
| Vidur Kapur | 38 | 25 | 7/13 | high | 1d | 1.3k | person | people |
| Epistemic Risks (Overview) | 37 | 58 | 4/13 | high | 0d | 409 | - | risks |
| Structural Risks (Overview) | 37 | 58 | 4/13 | high | 0d | 432 | - | risks |
| SquiggleAI | 37 | 15 | 5/13 | medium | 0d | 1.6k | project | responses |
| AI-Induced Cyber Psychosis | 37 | 79 | 5/13 | high | 0d | 935 | risk | risks |
| Mermaid Diagram Style Guide | 37 | 12 | 5/13 | medium | 0d | 422 | internal | internal |
| CHAI (Center for Human-Compatible AI) | 37 | 69 | 7/13 | high | 1d | 1.2k | organization | organizations |
| Conjecture | 37 | 36 | 7/13 | high | 1d | 1.6k | organization | organizations |
| Google DeepMind | 37 | 35 | 8/13 | high | 1d | 2.7k | organization | organizations |
| Frontier AI Labs (Overview) | 36 | 52 | 4/13 | high | 0d | 398 | - | organizations |
| Community Building Organizations (Overview) | 35 | 29 | 5/13 | high | 1d | 326 | - | organizations |
AI Labor Transition & Economic Resilience35386/13medium0d1.6kapproachresponses
Metaforecast35486/13medium0d1.6kprojectresponses
Eliezer Yudkowsky35828/13high1d3.2kpersonpeople
RoastMyPost35179/13medium0d677projectresponses
Knowledge Base Style Guide34125/13high0d596internalinternal
Response Pages Style Guide3496/13medium0d274internalinternal
Polymarket33283/13high0d2.8korganizationorganizations
When Will AGI Arrive?33924/13medium0d1.0kcruxdebates
Evaluation & Detection (Overview)32633/13medium0d99-responses
Track Records (Overview)32334/13medium0d182-people
Model Style Guide32126/13high0d2.6kinternalinternal
Factor Diagram Naming: Research Report31114/13high0d1.6k-reports
Early Warnings (1950s-2000)31805/13high0d5.7khistoricalhistory
The MIRI Era (2000-2015)31866/13high0d5.2khistoricalhistory
Approaches (Overview)30393/13medium0d129-responses
Stuart Russell30275/13high1d4.1kpersonpeople
Astralis Foundation3068/13high0d971organizationorganizations
Project Roadmap29143/13high0d520internalinternal
Gap Analysis28142/13medium0d181-insight-hunting
Quantitative Claims28132/13high0d304-insight-hunting
AI-Accelerated Reality Fragmentation28155/13high0d750riskrisks
Training Methods (Overview)27623/13medium0d88-responses
Chris Olah27796/13high1d3.4kpersonpeople
Jan Leike27826/13high1d2.6kpersonpeople
Ilya Sutskever26345/13high1d3.3kpersonpeople
Neel Nanda26856/13high1d644personpeople
Gratified25145/13high0d1.3korganizationorganizations
Nick Bostrom25825/13high1d1.2kpersonpeople
Kalshi (Prediction Market)25377/13high0d3.5korganizationorganizations
AI Watch23235/13high1d1.7kprojectresponses
Org Watch23206/13high1d1.1kprojectresponses
Theoretical Foundations (Overview)22423/13medium0d93-responses
Deployment & Control (Overview)21423/13medium0d62-responses
Interpretability (Overview)21523/13medium0d62-responses
Policy & Governance (Overview)21423/13medium0d45-responses
Daniela Amodei21286/13high1d2.8kpersonpeople
Architecture Scenarios Table20433/12low0d0-knowledge-base
Deployment Architectures Table20353/12low0d0-knowledge-base
Evaluation Types Table20353/12low0d0-models
Safety Approaches Table20193/12low0d0-responses
Safety Generalizability Table20193/12low0d0-responses
Accident Risks Table20183/12low0d0-risks
Research Report Style Guide20134/13high0d846internalinternal
X.com Platform Epistemics20186/13medium0d2.2kapproachresponses
Stub Pages Style Guide19143/13medium0d172internalinternal
Connor Leahy19484/13high1d2.9kpersonpeople
Dan Hendrycks19875/13high1d2.7kpersonpeople
Entity Relationship Graph18492/12low0d223-dashboard
Insights Index14133/12low0d130-insight-hunting
Is AI Existential Risk Real?12944/13medium0d32cruxdebates
Browse by Tag10741/13medium0d25-tools
Interactive Views & Tables8564/13medium0d156-guides
External Resources4761/13medium0d27-tools
LongtermWiki Strategy Brainstorm4484/13high0d2.1kinternalinternal
LongtermWiki Value Proposition4124/13high1d4.5kinternalinternal
Venture Capital (Overview)3864/13medium0d178-organizations
The Foundation Layer355/13high0d1.2korganizationorganizations
Longtermist Funders (Overview)3896/13high0d1.3k-organizations
Parameters Strategy3396/13high0d1.4kinternalinternal
EA Funding Absorption Capacity3417/13medium0d2.0kconceptorganizations
LongtermWiki Vision2595/13high0d938internalinternal
Concepts Directory-640/13low0d25-knowledge-base
Incidents--0/13low0d202-incidents
Active Agents--0/12low0d0internalinternal
Agent Sessions--0/12low0d0internalinternal
Auto-Update News--0/12low0d0internalinternal
Auto-Update Runs--0/12low0d0internalinternal
Citation Accuracy--0/12low0d0internalinternal
Citation Content--0/12low0d0internalinternal
Divisions Dashboard--0/12low0d0internalinternal
Entities--0/12low0d0internalinternal
Canonical Facts Dashboard--0/12low0d0internalinternal
Funding Programs Dashboard--0/12low0d0internalinternal
Grants Dashboard--0/12low0d0internalinternal
Groundskeeper Runs--0/12low0d0internalinternal
Hallucination Evals--0/12low0d0internalinternal
Hallucination Risk--0/12low0d0internalinternal
Improve Runs--0/12low0d0internalinternal
KB Fact Verifications--0/12low0d0internalinternal
Page Changes--0/12low0d0internalinternal
Pages--0/12low0d0internalinternal
People Coverage--0/12low0d0internalinternal
Website Consistency Audit (February 2026)--0/13medium0d2.1k-reports
Session Insights--0/12low0d0internalinternal
Suggested Pages--0/12low0d0internalinternal
System Health--0/12low0d0internalinternal
Update Schedule--0/12low0d0internalinternal
Content Quality Dashboard--0/13low0d282-dashboard
LongtermWiki Project--0/13low0d190-project
Entity Coverage--0/12low0d0-kb
Facts Explorer--0/12low0d0-kb
KB Data Overview--0/12low0d0-kb
Properties Explorer--0/12low0d0-kb
Records Explorer--0/12low0d0-kb
Publications--0/12low0d0-sources
Resources--0/12low0d0-sources
Sources--0/12low0d0-sources
Future Projections--1/13low0d114-future-projections
Metrics & Indicators--1/13low0d65-metrics
Fast Takeoff0811/13low0d7conceptmodels
Adversarial Robustness0671/13low0d7conceptresponses
AI Executive Order0661/13medium0d7policyresponses
AI Safety Summit0661/13medium0d7historicalresponses
Benchmarking0881/13low0d7conceptresponses
AI Content Moderation0661/13low0d7conceptresponses
Natural Abstractions0841/13low0d7conceptresponses
Prosaic Alignment0681/13low0d7safety-agendaresponses
AI Value Learning071/13low0d7safety-agendaresponses
Autonomous Replication0921/13medium0d7riskrisks
Bio Risk0161/13medium0d7riskrisks
Cyber Offense0161/13medium0d7riskrisks
Dual-Use AI Technology0771/13low0d7conceptrisks
Anthropic Pages Refactor Notes-91/13medium0d466internalinternal
Internal--1/13low0d118internalinternal
Research: Adaptive Page Length & Summary Systems-101/13medium0d1.9kinternalinternal
Internal Reports--1/13low0d171-reports
Server Communication Investigation-151/13medium0d3.5kinternalinternal
Wiki-Server Environment Architecture--1/13medium0d1.6kinternalinternal
PR Dashboard--1/12low0d0internalinternal
AI Capabilities--2/13low0d196-capabilities
Key Cruxes--2/13low0d223-cruxes
Key Debates--2/13low0d173-debates
Knowledge Base--2/13medium0d349-knowledge-base
Intelligence Paradigms--2/13low0d191-intelligence-paradigms
Analytical Models--2/13low0d205-models
Transformative AI0932/13low0d7conceptmodels
Organizations--2/13low0d144-organizations
Open Philanthropy-312/13medium1d76organizationorganizations
People--2/13low0d175-people
AI Safety Field Building0672/13low0d7cruxresponses
Safety Responses--2/13low0d246-responses
AI Risks--2/13low0d211-risks
Data System Authority Rules--2/13medium0d736internalinternal
Wiki Gap Analysis — February 2026-132/13medium0d1.1kinternalinternal
Insight Hunting--2/13medium0d326-insight-hunting
Database Schema Overview--2/12medium0d1.0kinternalinternal
Forecasting--3/13low0d229-forecasting
History--3/13medium0d320-history
Worldviews--3/13low0d286-worldviews
System Architecture-103/13medium0d2.5kinternalinternal
Common Writing Principles0743/13high0d1.6kinternalinternal
GitHub Integrations for Multi-Agent Coordination-153/13medium0d2.8k-reports
Entity Type Reference-93/13medium0d2.4k-schema
Schema Documentation--3/13medium0d376-schema
Table Candidates093/13medium0d47-insight-hunting
Documentation Maintenance-104/13medium0d636internalinternal
Schema Diagrams-114/13medium0d667-schema