Pages
Admin overview of 676 wiki pages. Use preset buttons to switch between views (overview, coverage, quality, citations, updates) or toggle individual columns. Hover column headers for descriptions.
607 of 676 pages rated (average quality 54); 159 flagged as high hallucination risk.
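The summary line above aggregates three per-page fields: quality score (unset for unrated pages), hallucination risk level, and coverage. A minimal sketch of how such a rollup could be computed is below; the `Page` class and field names are illustrative assumptions, not this wiki's actual data model.

```python
# Hypothetical sketch of the dashboard's summary-line rollup.
# The Page dataclass and its fields are illustrative, not the wiki's schema.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Page:
    title: str
    quality: Optional[int]   # 0-100, None if the page is unrated
    risk: str                # "low" | "medium" | "high"
    coverage_passing: int    # items passing, out of 13 (5 boolean + 8 numeric)


def summarize(pages: list) -> str:
    """Build a summary line like the one shown at the top of this view."""
    rated = [p for p in pages if p.quality is not None]
    avg_quality = round(sum(p.quality for p in rated) / len(rated)) if rated else 0
    high_risk = sum(1 for p in pages if p.risk == "high")
    return f"{len(rated)} rated (avg quality {avg_quality}), {high_risk} high hallucination risk."


pages = [
    Page("AI Timelines", 95, "medium", 6),
    Page("US AI Safety Institute", 91, "high", 4),
    Page("Unrated draft", None, "high", 0),
]
print(summarize(pages))  # -> "2 rated (avg quality 93), 2 high hallucination risk."
```

The unrated draft is excluded from the quality average but still counts toward the high-risk total, matching how the header line can report more high-risk pages than the rated subset alone would imply.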
| Page title | Quality score (0–100) | Reader importance (0–100) | Coverage: passing items out of 13 (5 bool + 8 numeric) | Hallucination risk level | Time since last update | Word count | Entity type (person, org, risk, etc.) | Page category |
|---|---|---|---|---|---|---|---|---|
| AI Timelines | 95 | 93 | 6/13 | medium | 0d | 6.5k | concept | models |
| Superintelligence | 92 | 95 | 3/13 | medium | 1d | 1.6k | concept | risks |
| Existential Risk from AI | 92 | 95 | 4/13 | medium | 1d | 1.2k | concept | risks |
| AI Scaling Laws | 92 | 93 | 6/13 | medium | 1d | 2.5k | concept | models |
| US AI Safety Institute | 91 | 32 | 4/13 | high | 1d | 4.8k | organization | organizations |
| Voluntary Industry Commitments | 91 | 50 | 5/13 | medium | 1d | 4.6k | policy | responses |
| Multipolar Trap (AI Development) | 91 | 84 | 5/13 | medium | 1d | 3.9k | risk | risks |
| International Coordination Mechanisms | 91 | 24 | 6/13 | medium | 1d | 4.1k | policy | responses |
| AI Distributional Shift | 91 | 17 | 6/13 | medium | 0d | 3.6k | risk | risks |
| Reward Hacking | 91 | 16 | 6/13 | medium | 0d | 4.0k | risk | risks |
| Long-Timelines Technical Worldview | 91 | 15 | 6/13 | medium | 1d | 4.7k | concept | worldviews |
| Deepfake Detection | 91 | 22 | 7/13 | low | 0d | 2.9k | approach | responses |
| Eliciting Latent Knowledge (ELK) | 91 | 24 | 7/13 | low | 1d | 2.5k | approach | responses |
| Pause Advocacy | 91 | 52 | 7/13 | medium | 1d | 5.3k | approach | responses |
| AI Safety Cases | 91 | 51 | 7/13 | low | 0d | 4.1k | approach | responses |
| Sandboxing / Containment | 91 | 58 | 7/13 | low | 0d | 4.3k | approach | responses |
| Sparse Autoencoders (SAEs) | 91 | 20 | 7/13 | low | 0d | 3.2k | approach | responses |
| Structured Access / API-Only | 91 | 79 | 7/13 | low | 0d | 3.5k | approach | responses |
| Compute Thresholds | 91 | 56 | 7/13 | medium | 0d | 4.0k | policy | responses |
| US Executive Order on Safe, Secure, and Trustworthy AI | 91 | 57 | 7/13 | medium | 1d | 4.5k | policy | responses |
| Weak-to-Strong Generalization | 91 | 20 | 7/13 | medium | 0d | 2.9k | approach | responses |
| Cyberweapons | 91 | 83 | 7/13 | medium | 0d | 4.2k | risk | risks |
| Optimistic Alignment Worldview | 91 | 83 | 7/13 | medium | 1d | 4.4k | concept | worldviews |
| Capability Elicitation | 91 | 50 | 8/13 | low | 0d | 3.5k | approach | responses |
| Scheming & Deception Detection | 91 | 58 | 8/13 | low | 0d | 3.3k | approach | responses |
| Tool-Use Restrictions | 91 | 58 | 8/13 | medium | 0d | 3.9k | approach | responses |
| Authoritarian Tools | 91 | 18 | 8/13 | medium | 0d | 2.9k | risk | risks |
| Bioweapons | 91 | 63 | 8/13 | medium | 1d | 10.8k | risk | risks |
| AI Governance Coordination Technologies | 91 | 70 | 9/13 | low | 0d | 2.9k | approach | responses |
| AI-Human Hybrid Systems | 91 | 63 | 9/13 | medium | 0d | 2.4k | approach | responses |
| AI-Induced Enfeeblement | 91 | 77 | 9/13 | medium | 1d | 2.4k | risk | risks |
| Erosion of Human Agency | 91 | 19 | 9/13 | medium | 0d | 1.8k | risk | risks |
| Scientific Knowledge Corruption | 91 | 38 | 9/13 | medium | 0d | 1.9k | risk | risks |
| AI Model Steganography | 91 | 70 | 9/13 | medium | 1d | 2.4k | risk | risks |
| AI Alignment | 91 | 95 | 10/13 | medium | 0d | 5.7k | approach | responses |
| AI Safety Intervention Portfolio | 91 | 61 | 10/13 | low | 1d | 2.8k | approach | responses |
| AI-Enabled Untraceable Misuse | 88 | 48 | 5/13 | medium | 0d | 2.8k | risk | risks |
| OpenAI Foundation | 87 | 87 | 7/13 | medium | 1d | 9.0k | organization | organizations |
| EA Epistemic Failures in the FTX Era | 84 | 62 | 5/13 | medium | 1d | 4.9k | - | history |
| AI Compute Scaling Metrics | 78 | 82 | 5/13 | medium | 0d | 3.5k | analysis | models |
| Centre for Effective Altruism | 78 | 42 | 5/13 | high | 1d | 2.0k | organization | organizations |
| FTX Collapse: Lessons for EA Funding Resilience | 78 | 65 | 6/13 | high | 1d | 5.7k | concept | organizations |
| Sleeper Agents: Training Deceptive LLMs | 78 | 17 | 6/13 | medium | 0d | 1.8k | risk | risks |
| Redwood Research | 78 | 32 | 7/13 | medium | 1d | 1.5k | organization | organizations |
| FAR AI | 76 | 85 | 8/13 | high | 0d | 3.3k | organization | organizations |
| State Capacity and AI Governance | 75 | 72 | 5/13 | medium | 1d | 2.2k | concept | responses |
| OpenAI Foundation Governance Paradox | 75 | 40 | 6/13 | medium | 1d | 2.6k | analysis | organizations |
| AI Control | 75 | 69 | 8/13 | low | 0d | 3.1k | safety-agenda | responses |
| Deceptive Alignment | 75 | 19 | 9/13 | medium | 1d | 2.0k | risk | risks |
| OpenClaw Matplotlib Incident (2026) | 74 | 52 | 4/13 | medium | 0d | 3.5k | - | incidents |
| Scheming | 74 | 71 | 4/13 | medium | 1d | 5.1k | risk | risks |
| Relative Longtermist Value Comparisons | 74 | 68 | 6/13 | medium | 1d | 2.4k | analysis | models |
| Anthropic | 74 | 52 | 8/13 | high | 1d | 5.1k | organization | organizations |
| FTX (cryptocurrency exchange) | 74 | 62 | 8/13 | high | 0d | 3.1k | organization | organizations |
| Philip Tetlock (Forecasting Pioneer) | 73 | 61 | 4/13 | medium | 0d | 2.7k | person | people |
| AI-Driven Institutional Decision Capture | 73 | 39 | 5/13 | medium | 0d | 7.7k | risk | risks |
| California SB 53 | 73 | 72 | 6/13 | medium | 0d | 2.5k | policy | responses |
| New York RAISE Act | 73 | 38 | 6/13 | medium | 0d | 2.7k | policy | responses |
| AI Chip Export Controls | 73 | 88 | 7/13 | medium | 0d | 4.1k | policy | responses |
| Capabilities-to-Safety Pipeline Model | 73 | 46 | 8/13 | medium | 0d | 1.3k | analysis | models |
| Leading the Future super PAC | 73 | 80 | 8/13 | medium | 1d | 2.3k | organization | organizations |
| Intervention Effectiveness Matrix | 73 | 90 | 9/13 | medium | 0d | 4.2k | analysis | models |
| Projecting Compute Spending | 72 | 72 | 7/13 | medium | 0d | 6.0k | analysis | models |
| Representation Engineering | 72 | 62 | 7/13 | medium | 0d | 1.8k | approach | responses |
| Capability Threshold Model | 72 | 47 | 8/13 | medium | 0d | 1.3k | analysis | models |
| Evals & Red-teaming | 72 | 26 | 8/13 | medium | 0d | 2.7k | safety-agenda | responses |
| AI Evaluation | 72 | 79 | 8/13 | medium | 0d | 1.7k | approach | responses |
| Pause / Moratorium | 72 | 79 | 8/13 | medium | 1d | 2.0k | policy | responses |
| AI Development Racing Dynamics | 72 | 20 | 8/13 | medium | 1d | 2.7k | risk | risks |
| Intervention Timing Windows | 72 | 90 | 9/13 | medium | 0d | 4.4k | analysis | models |
| Anthropic Valuation Analysis | 72 | 34 | 9/13 | medium | 0d | 1.5k | analysis | organizations |
| Reward Hacking Taxonomy and Severity Model | 71 | 45 | 5/13 | medium | 0d | 6.6k | analysis | models |
| AI Safety Solution Cruxes | 71 | 94 | 7/13 | medium | 0d | 6.1k | crux | cruxes |
| AI Risk Critical Uncertainties Model | 71 | 93 | 8/13 | medium | 0d | 2.5k | crux | models |
| Citation Architecture: Current State & Unified Proposal | 70 | 85 | 2/13 | medium | 0d | 2.2k | internal | internal |
| AI Uplift Assessment Model | 70 | 76 | 4/13 | medium | 0d | 4.4k | analysis | models |
| Epistemic & Forecasting Organizations (Overview) | 70 | 87 | 5/13 | low | 0d | 217 | - | organizations |
| Anthropic-Pentagon Standoff (2026) | 70 | 78 | 6/13 | low | 1d | 3.3k | event | incidents |
| Musk v. OpenAI Lawsuit | 70 | 29 | 6/13 | medium | 1d | 1.9k | analysis | organizations |
| Long-Term Benefit Trust (Anthropic) | 70 | 78 | 7/13 | medium | 1d | 2.4k | analysis | organizations |
| AI Safety via Debate | 70 | 71 | 7/13 | medium | 0d | 1.7k | approach | responses |
| Compute Concentration | 70 | 58 | 7/13 | medium | 1d | 2.3k | risk | risks |
| Warning Signs Model | 70 | 43 | 8/13 | medium | 0d | 3.4k | analysis | models |
| Hardware-Enabled Governance | 70 | 23 | 8/13 | medium | 0d | 3.4k | policy | responses |
| US State AI Legislation | 70 | 38 | 8/13 | medium | 0d | 5.1k | policy | responses |
| Constitutional AI | 70 | 24 | 9/13 | medium | 0d | 1.5k | approach | responses |
| AI Safety Training Programs | 70 | 56 | 9/13 | medium | 0d | 2.2k | approach | responses |
| Compute Monitoring | 69 | 63 | 5/13 | medium | 0d | 4.4k | policy | responses |
| AI Safety Institutes | 69 | 64 | 6/13 | medium | 1d | 4.2k | policy | responses |
| AI Alignment Research Agenda Comparison | 69 | 58 | 6/13 | medium | 1d | 4.3k | crux | responses |
| AI-Powered Fraud | 69 | 58 | 6/13 | medium | 0d | 4.5k | risk | risks |
| Self-Improvement and Recursive Enhancement | 69 | 47 | 7/13 | medium | 1d | 5.0k | capability | capabilities |
| AI Standards Bodies | 69 | 83 | 7/13 | medium | 0d | 3.5k | policy | responses |
| Bioweapons Attack Chain Model | 69 | 72 | 8/13 | medium | 0d | 2.0k | analysis | models |
| Defense in Depth Model | 69 | 61 | 8/13 | medium | 1d | 1.6k | analysis | models |
| Sharp Left Turn | 69 | 57 | 8/13 | medium | 1d | 4.3k | risk | risks |
| Agentic AI | 68 | 73 | 5/13 | high | 0d | 8.8k | capability | capabilities |
| Giving Pledge | 68 | 37 | 5/13 | high | 1d | 2.3k | organization | organizations |
| Scalable Oversight | 68 | 52 | 5/13 | medium | 1d | 5.7k | safety-agenda | responses |
| Scientific Research Capabilities | 68 | 72 | 6/13 | medium | 0d | 5.8k | capability | capabilities |
| Evaluation Awareness | 68 | 42 | 6/13 | low | 0d | 3.5k | approach | responses |
| Reducing Hallucinations in AI-Generated Wiki Content | 68 | 55 | 6/13 | low | 0d | 4.2k | approach | responses |
| Model Registries | 68 | 21 | 7/13 | medium | 0d | 1.7k | policy | responses |
| Multi-Agent Safety | 68 | 21 | 7/13 | low | 0d | 3.6k | approach | responses |
| Corporate AI Safety Responses | 68 | 70 | 8/13 | medium | 0d | 1.3k | approach | responses |
| Goodfire | 68 | 86 | 9/13 | medium | 0d | 2.4k | organization | organizations |
| International Compute Regimes | 67 | 63 | 4/13 | medium | 1d | 5.4k | policy | responses |
| AI Capability Sandbagging | 67 | 39 | 6/13 | medium | 0d | 2.7k | risk | risks |
| Governance-Focused Worldview | 67 | 67 | 6/13 | medium | 1d | 3.9k | concept | worldviews |
| AI Safety Talent Supply/Demand Gap Model | 67 | 44 | 7/13 | medium | 0d | 2.6k | analysis | models |
| Treacherous Turn | 67 | 17 | 7/13 | medium | 1d | 4.0k | risk | risks |
| Situational Awareness | 67 | 92 | 8/13 | medium | 0d | 3.6k | capability | capabilities |
| Tool Use and Computer Use | 67 | 92 | 8/13 | medium | 0d | 3.8k | capability | capabilities |
| Risk Cascade Pathways | 67 | 59 | 8/13 | medium | 0d | 1.8k | analysis | models |
| Power-Seeking AI | 67 | 39 | 8/13 | medium | 0d | 3.0k | risk | risks |
| AI Accident Risk Cruxes | 67 | 94 | 9/13 | medium | 1d | 4.1k | crux | cruxes |
| Elon Musk: Track Record | 66 | 26 | 4/13 | medium | 0d | 2.8k | - | people |
| The Case FOR AI Existential Risk | 66 | 53 | 5/13 | medium | 1d | 6.7k | argument | debates |
| Bridgewater AIA Labs | 66 | 46 | 5/13 | high | 0d | 4.0k | organization | organizations |
| California SB 1047 | 66 | 23 | 5/13 | medium | 1d | 3.9k | policy | responses |
| AI Structural Risk Cruxes | 66 | 87 | 7/13 | medium | 0d | 2.0k | crux | cruxes |
| Risk Activation Timeline Model | 66 | 54 | 7/13 | medium | 0d | 2.0k | analysis | models |
| METR | 66 | 84 | 7/13 | high | 1d | 4.4k | organization | organizations |
| Corporate Influence on AI Policy | 66 | 23 | 7/13 | medium | 1d | 3.3k | crux | responses |
| AI Governance and Policy | 66 | 65 | 7/13 | medium | 1d | 3.1k | crux | responses |
| Technical AI Safety Research | 66 | 86 | 7/13 | medium | 0d | 3.8k | crux | responses |
| Mechanistic Interpretability | 66 | 41 | 8/13 | low | 0d | 3.7k | safety-agenda | responses |
| Sleeper Agent Detection | 66 | 51 | 8/13 | low | 1d | 4.3k | approach | responses |
| Evals-Based Deployment Gates | 66 | 42 | 9/13 | medium | 0d | 4.1k | policy | responses |
| Content Verification Tiers | 65 | 70 | 2/13 | medium | 0d | 1.9k | internal | internal |
| SecureBio | 65 | 29 | 4/13 | high | 0d | 1.3k | organization | organizations |
| The Sequences by Eliezer Yudkowsky | 65 | 31 | 4/13 | high | 1d | 1.9k | organization | organizations |
| David Sacks (White House AI Czar) | 65 | 26 | 4/13 | medium | 1d | 2.3k | person | people |
| Council of Europe Framework Convention on Artificial Intelligence | 65 | 72 | 4/13 | medium | 0d | 2.6k | policy | responses |
| Page Type System | 65 | 11 | 4/13 | medium | 0d | 1.5k | internal | internal |
| Eval Saturation & The Evals Gap | 65 | 23 | 5/13 | low | 1d | 4.6k | approach | responses |
| Carlsmith's Six-Premise Argument | 65 | 38 | 6/13 | medium | 0d | 2.2k | analysis | models |
| Electoral Impact Assessment Model | 65 | 50 | 6/13 | medium | 0d | 3.5k | analysis | models |
| Anthropic (Funder) | 65 | 33 | 6/13 | medium | 1d | 7.1k | analysis | organizations |
| Scalable Eval Approaches | 65 | 40 | 6/13 | low | 0d | 3.5k | approach | responses |
| Model Organisms of Misalignment | 65 | 73 | 7/13 | medium | 1d | 2.2k | analysis | models |
| Safety Culture Equilibrium | 65 | 88 | 7/13 | medium | 0d | 2.1k | analysis | models |
| Safety Research Allocation Model | 65 | 89 | 7/13 | medium | 0d | 1.4k | analysis | models |
| MacArthur Foundation | 65 | 29 | 7/13 | high | 0d | 3.5k | organization | organizations |
| Cooperative IRL (CIRL) | 65 | 25 | 7/13 | medium | 1d | 1.9k | approach | responses |
| AI Safety Field Building Analysis | 65 | 41 | 7/13 | medium | 1d | 3.6k | approach | responses |
| Formal Verification (AI Safety) | 65 | 43 | 7/13 | medium | 0d | 2.1k | approach | responses |
| Process Supervision | 65 | 49 | 7/13 | medium | 0d | 1.7k | approach | responses |
| Provably Safe AI (davidad agenda) | 65 | 50 | 7/13 | medium | 0d | 2.2k | approach | responses |
| Reasoning and Planning | 65 | 92 | 8/13 | medium | 0d | 4.9k | capability | capabilities |
| AI Proliferation Risk Model | 65 | 85 | 8/13 | medium | 0d | 1.9k | analysis | models |
| Anthropic IPO | 65 | 33 | 8/13 | medium | 1d | 3.8k | analysis | organizations |
| Palisade Research | 65 | 88 | 8/13 | high | 1d | 2.0k | organization | organizations |
| Alignment Evaluations | 65 | 65 | 8/13 | medium | 0d | 3.8k | approach | responses |
| Capability Unlearning / Removal | 65 | 66 | 8/13 | medium | 0d | 1.7k | approach | responses |
| AI-Driven Concentration of Power | 65 | 39 | 8/13 | medium | 0d | 1.2k | risk | risks |
| AI-Induced Expertise Atrophy | 65 | 91 | 8/13 | high | 0d | 915 | risk | risks |
| Long-Horizon Autonomous Tasks | 65 | 55 | 9/13 | medium | 0d | 2.7k | capability | capabilities |
| AI Misuse Risk Cruxes | 65 | 82 | 9/13 | medium | 0d | 2.1k | crux | cruxes |
| Risk Interaction Matrix Model | 65 | 85 | 9/13 | medium | 0d | 2.6k | analysis | models |
| Red Teaming | 65 | 39 | 9/13 | medium | 0d | 1.4k | approach | responses |
| Sycophancy | 65 | 15 | 9/13 | medium | 0d | 766 | risk | risks |
| US Government Authority Over Commercial AI Infrastructure | 64 | 62 | 4/13 | medium | 0d | 2.1k | policy | responses |
| Concentrated Compute as a Cybersecurity Risk | 64 | 63 | 4/13 | medium | 0d | 2.0k | risk | risks |
| Similar Projects to LongtermWiki: Research Report | 64 | 9 | 4/13 | medium | 1d | 2.1k | - | project |
| AI Epistemic Cruxes | 64 | 82 | 5/13 | medium | 0d | 1.3k | crux | cruxes |
| Responsible Scaling Policies | 64 | 63 | 5/13 | medium | 0d | 4.5k | policy | responses |
| Mass Surveillance | 64 | 17 | 5/13 | medium | 0d | 4.4k | risk | risks |
| Safety-Capability Tradeoff Model | 64 | 86 | 6/13 | medium | 1d | 5.8k | analysis | models |
| AI Flash Dynamics | 64 | 68 | 6/13 | medium | 0d | 3.3k | risk | risks |
| AI-Induced Irreversibility | 64 | 77 | 6/13 | medium | 1d | 3.5k | risk | risks |
| Provable / Guaranteed Safe AI | 64 | 89 | 7/13 | low | 1d | 2.5k | concept | intelligence-paradigms |
| AI Surveillance and Regime Durability Model | 64 | 43 | 7/13 | medium | 0d | 3.3k | analysis | models |
| Circuit Breakers / Inference Interventions | 64 | 43 | 7/13 | low | 0d | 3.2k | approach | responses |
| AI-Powered Consensus Manufacturing | 64 | 16 | 7/13 | medium | 0d | 3.4k | risk | risks |
| AI Value Lock-in | 64 | 16 | 7/13 | medium | 1d | 3.5k | risk | risks |
| Alignment Robustness Trajectory | 64 | 87 | 8/13 | medium | 0d | 3.2k | analysis | models |
| Risk Interaction Network | 64 | 44 | 8/13 | medium | 0d | 1.9k | analysis | models |
| Dangerous Capability Evaluations | 64 | 71 | 8/13 | low | 1d | 3.6k | approach | responses |
| Policy Effectiveness Assessment | 64 | 24 | 8/13 | medium | 0d | 3.6k | analysis | responses |
| Third-Party Model Auditing | 64 | 77 | 8/13 | low | 1d | 3.8k | approach | responses |
| Instrumental Convergence | 64 | 64 | 8/13 | medium | 1d | 5.0k | risk | risks |
| AI Risk Portfolio Analysis | 64 | 47 | 9/13 | medium | 1d | 2.2k | analysis | models |
| Peter Thiel (Funder) | 63 | 45 | 4/13 | medium | 1d | 3.3k | organization | organizations |
| Financial Stability Risks from AI Capital Expenditure | 63 | 58 | 4/13 | medium | 0d | 2.8k | risk | risks |
| Centre for Long-Term Resilience | 63 | 71 | 5/13 | medium | 0d | 2.7k | organization | organizations |
| Elicit (AI Research Tool) | 63 | 83 | 5/13 | high | 0d | 3.1k | organization | organizations |
| Johns Hopkins Center for Health Security | 63 | 33 | 5/13 | medium | 0d | 1.9k | organization | organizations |
| Max Tegmark | 63 | 82 | 5/13 | medium | 1d | 2.6k | person | people |
| International AI Safety Summits | 63 | 67 | 5/13 | medium | 0d | 4.7k | policy | responses |
| AI Welfare and Digital Minds | 63 | 62 | 5/13 | medium | 1d | 2.8k | concept | risks |
| Earning to Give: The EA Strategy and Its Limits | 63 | 52 | 6/13 | medium | 1d | 2.4k | - | history |
| Claude Code Espionage Incident (2025) | 63 | 46 | 6/13 | medium | 0d | 3.2k | - | incidents |
| Vipul Naik | 63 | 24 | 6/13 | high | 1d | 3.0k | person | people |
| AI-Assisted Deliberation Platforms | 63 | 22 | 6/13 | medium | 0d | 3.5k | approach | responses |
| Mesa-Optimization | 63 | 19 | 6/13 | medium | 1d | 4.3k | risk | risks |
| Autonomous Cyber Attack Timeline | 63 | 72 | 7/13 | medium | 0d | 1.7k | analysis | models |
| Power-Seeking Emergence Conditions Model | 63 | 73 | 7/13 | medium | 1d | 2.2k | analysis | models |
| ControlAI | 63 | 42 | 7/13 | high | 1d | 2.2k | organization | organizations |
| NIST and AI Safety | 63 | 77 | 7/13 | high | 1d | 2.8k | organization | organizations |
| AI-Era Epistemic Security | 63 | 67 | 7/13 | medium | 0d | 3.4k | approach | responses |
| AI Output Filtering | 63 | 63 | 7/13 | low | 0d | 2.6k | approach | responses |
| Refusal Training | 63 | 21 | 7/13 | low | 0d | 2.8k | approach | responses |
| Persuasion and Social Manipulation | 63 | 53 | 8/13 | medium | 0d | 2.8k | capability | capabilities |
| Longterm Wiki | 63 | 21 | 8/13 | medium | 0d | 2.2k | project | responses |
| AI Whistleblower Protections | 63 | 48 | 8/13 | medium | 1d | 2.6k | policy | responses |
| Goal Misgeneralization | 63 | 84 | 8/13 | medium | 0d | 3.5k | risk | risks |
| Autonomous Coding | 63 | 53 | 9/13 | medium | 0d | 2.5k | capability | capabilities |
| AI-Assisted Alignment | 63 | 25 | 9/13 | medium | 1d | 1.9k | approach | responses |
| Failed and Stalled AI Policy Proposals | 63 | 41 | 9/13 | medium | 0d | 4.5k | policy | responses |
| RLHF / Constitutional AI | 63 | 23 | 9/13 | medium | 0d | 3.0k | capability | responses |
| Authoritarian Tools Diffusion Model | 62 | 38 | 4/13 | medium | 0d | 7.0k | analysis | models |
| Short Timeline Policy Implications | 62 | 80 | 5/13 | medium | 0d | 1.9k | analysis | models |
| Center for Applied Rationality | 62 | 85 | 6/13 | high | 1d | 3.4k | organization | organizations |
| AI Lab Safety Culture | 62 | 42 | 6/13 | medium | 1d | 4.0k | approach | responses |
| Corrigibility Failure | 62 | 17 | 6/13 | medium | 1d | 3.9k | risk | risks |
| Technical Pathway Decomposition | 62 | 54 | 7/13 | medium | 0d | 2.3k | analysis | models |
| Anthropic Core Views | 62 | 53 | 7/13 | medium | 1d | 3.1k | safety-agenda | responses |
| Preference Optimization Methods | 62 | 49 | 7/13 | medium | 0d | 2.8k | approach | responses |
| Autonomous Weapons Escalation Model | 62 | 45 | 8/13 | medium | 0d | 2.6k | analysis | models |
| Corrigibility Failure Pathways | 62 | 73 | 8/13 | medium | 0d | 1.9k | analysis | models |
| Giving What We Can | 62 | 23 | 8/13 | high | 1d | 1.7k | organization | organizations |
| Open Source AI Safety | 62 | 49 | 8/13 | medium | 0d | 2.0k | approach | responses |
| Responsible Scaling Policies | 62 | 51 | 8/13 | medium | 1d | 3.4k | policy | responses |
| Large Language Models | 62 | 90 | 9/13 | medium | 0d | 3.7k | concept | capabilities |
| Capability-Alignment Race Model | 62 | 76 | 9/13 | medium | 0d | 1.8k | analysis | models |
| Deceptive Alignment Decomposition Model | 62 | 85 | 9/13 | medium | 0d | 2.1k | analysis | models |
| Worldview-Intervention Mapping | 62 | 52 | 9/13 | medium | 0d | 2.2k | analysis | models |
| OpenAI | 62 | 72 | 10/13 | high | 1d | 3.8k | organization | organizations |
| Eliezer Yudkowsky: Track Record | 61 | 26 | 4/13 | medium | 1d | 4.2k | - | people |
| Leopold Aschenbrenner | 61 | 27 | 4/13 | high | 1d | 2.6k | person | people |
| Why Alignment Might Be Hard | 61 | 95 | 6/13 | high | 0d | 7.5k | argument | debates |
| Racing Dynamics Impact Model | 61 | 79 | 7/13 | medium | 0d | 1.6k | analysis | models |
| Samotsvety | 61 | 46 | 7/13 | high | 1d | 2.3k | organization | organizations |
| Goal Misgeneralization Probability Model | 61 | 87 | 8/13 | medium | 0d | 1.7k | analysis | models |
| Mesa-Optimization Risk Analysis | 61 | 54 | 8/13 | medium | 0d | 1.6k | analysis | models |
| Multipolar Trap Dynamics Model | 61 | 59 | 8/13 | medium | 0d | 1.4k | analysis | models |
| Scheming Likelihood Assessment | 61 | 80 | 8/13 | medium | 0d | 1.5k | analysis | models |
| AI-Enabled Authoritarian Takeover | 61 | 78 | 8/13 | medium | 0d | 4.0k | risk | risks |
| Emergent Capabilities | 61 | 58 | 8/13 | medium | 1d | 3.0k | risk | risks |
| Epic Page Conventions | 60 | 50 | 1/13 | medium | 0d | 439 | internal | internal |
| LAWS Proliferation Model | 60 | 73 | 4/13 | medium | 0d | 5.3k | analysis | models |
| 1Day Sooner | 60 | 33 | 4/13 | high | 0d | 1.7k | organization | organizations |
| NTI \| bio (Nuclear Threat Initiative - Biological Program) | 60 | 66 | 4/13 | high | 0d | 1.8k | organization | organizations |
| Schmidt Futures | 60 | 46 | 4/13 | medium | 1d | 3.0k | organization | organizations |
| Yann LeCun: Track Record | 60 | 24 | 4/13 | medium | 1d | 2.8k | - | people |
| Jeffrey Epstein's Connections to AI Researchers | 60 | 42 | 5/13 | medium | 0d | 2.9k | - | history |
| Blueprint Biosecurity | 60 | 33 | 5/13 | high | 0d | 1.1k | organization | organizations |
| Nick Beckstead | 60 | 58 | 5/13 | medium | 1d | 1.9k | person | people |
| Sam Altman: Track Record | 60 | 64 | 5/13 | medium | 0d | 1.9k | - | people |
| NIST AI Risk Management Framework | 60 | 40 | 5/13 | medium | 0d | 4.7k | policy | responses |
| Recoding America | 60 | 62 | 5/13 | medium | 1d | 1.8k | resource | responses |
| IBBIS (International Biosecurity and Biosafety Initiative for Science) | 60 | 75 | 6/13 | high | 0d | 1.8k | organization | organizations |
| Bletchley Declaration | 60 | 53 | 6/13 | medium | 1d | 2.0k | policy | responses |
| Epistemic Sycophancy | 60 | 68 | 6/13 | medium | 0d | 3.5k | risk | risks |
| Open vs Closed Source AI | 60 | 52 | 7/13 | medium | 1d | 2.2k | crux | debates |
| Instrumental Convergence Framework | 60 | 54 | 7/13 | medium | 0d | 2.4k | analysis | models |
| Rethink Priorities | 60 | 88 | 7/13 | high | 0d | 3.7k | organization | organizations |
| SecureDNA | 60 | 29 | 7/13 | high | 0d | 1.1k | organization | organizations |
| Will MacAskill | 60 | 33 | 7/13 | high | 1d | 2.1k | person | people |
| Seoul AI Safety Summit Declaration | 60 | 57 | 7/13 | medium | 1d | 2.8k | policy | responses |
| Anthropic Stakeholders | 60 | 85 | 7/13 | medium | 1d | 952 | table | organizations |
| Compounding Risks Analysis | 60 | 78 | 8/13 | medium | 0d | 1.8k | analysis | models |
| Expected Value of AI Safety Research | 60 | 54 | 8/13 | high | 0d | 1.4k | analysis | models |
| FTX Future Fund | 60 | 72 | 8/13 | medium | 1d | 2.3k | organization | organizations |
| MATS ML Alignment Theory Scholars program | 60 | 32 | 8/13 | high | 1d | 2.5k | organization | organizations |
| Proliferation | 60 | 57 | 8/13 | medium | 0d | 2.4k | risk | risks |
| Large Language Models | 60 | 94 | 9/13 | high | 0d | 6.1k | capability | capabilities |
| EA Shareholder Diversification from Anthropic | 60 | 64 | 10/13 | medium | 1d | 2.2k | concept | organizations |
| Authentication Collapse Timeline Model | 59 | 75 | 4/13 | medium | 0d | 6.3k | analysis | models |
| Situational Awareness LP | 59 | 30 | 5/13 | high | 1d | 2.2k | organization | organizations |
| Flash Dynamics Threshold Model | 59 | 73 | 6/13 | medium | 0d | 2.9k | analysis | models |
| Institutional Adaptation Speed Model | 59 | 78 | 6/13 | medium | 0d | 3.2k | analysis | models |
| Trust Erosion Dynamics Model | 59 | 57 | 6/13 | medium | 0d | 2.5k | analysis | models |
| Pause AI | 59 | 83 | 6/13 | high | 0d | 2.2k | organization | organizations |
| Feedback Loop & Cascade Model | 59 | 36 | 7/13 | medium | 0d | 2.2k | analysis | models |
| AI-Era Epistemic Infrastructure | 59 | 70 | 7/13 | medium | 0d | 2.7k | approach | responses |
| Mechanistic Interpretability | 59 | 40 | 7/13 | medium | 1d | 3.6k | approach | responses |
| International AI Coordination Game | 59 | 36 | 8/13 | medium | 0d | 1.9k | analysis | models |
| Multi-Actor Strategic Landscape | 59 | 36 | 8/13 | medium | 0d | 1.9k | analysis | models |
| Survival and Flourishing Fund (SFF) | 59 | 29 | 8/13 | high | 1d | 4.8k | organization | organizations |
| Agent Foundations | 59 | 26 | 8/13 | medium | 0d | 2.2k | approach | responses |
| Corrigibility Research | 59 | 24 | 9/13 | medium | 1d | 2.4k | safety-agenda | responses |
| AGI Timeline | 59 | 56 | 10/13 | medium | 1d | 2.0k | concept | forecasting |
| Anthropic Pre-IPO DAF Transfers | 58 | 32 | 4/13 | medium | 0d | 3.1k | analysis | organizations |
| Trust Cascade Failure Model | 58 | 76 | 5/13 | medium | 0d | 4.4k | analysis | models |
| CSER (Centre for the Study of Existential Risk) | 58 | 36 | 5/13 | high | 1d | 2.1k | organization | organizations |
| AI-Bioweapons Timeline Model | 58 | 44 | 6/13 | medium | 0d | 2.6k | analysis | models |
| Microsoft AI | 58 | 43 | 6/13 | high | 1d | 8.0k | organization | organizations |
| Marc Andreessen (AI Investor) | 58 | 30 | 6/13 | high | 1d | 3.2k | person | people |
| Compute Governance: AI Chips Export Controls Policy | 58 | 64 | 6/13 | high | 0d | 2.2k | policy | responses |
| Adversarial Training | 58 | 26 | 7/13 | medium | 0d | 1.8k | approach | responses |
| AI Content Authentication | 58 | 22 | 7/13 | medium | 0d | 2.4k | approach | responses |
| Goal Misgeneralization Research | 58 | 43 | 7/13 | medium | 0d | 2.0k | approach | responses |
| The Case AGAINST AI Existential Risk | 58 | 90 | 8/13 | medium | 1d | 1.7k | argument | debates |
| Dense Transformers | 58 | 80 | 8/13 | medium | 1d | 3.4k | concept | intelligence-paradigms |
| Eli Lifland | 58 | 27 | 8/13 | high | 1d | 1.1k | person | people |
| Apollo Research | 58 | 41 | 9/13 | high | 0d | 2.9k | organization | organizations |
| Frontier Model Forum | 58 | 84 | 9/13 | medium | 1d | 2.9k | organization | organizations |
| Expertise Atrophy Cascade Model | 57 | 73 | 5/13 | medium | 0d | 4.2k | analysis | models |
| Irreversibility Threshold Model | 57 | 54 | 5/13 | medium | 0d | 3.1k | analysis | models |
| Winner-Take-All Concentration Model | 57 | 35 | 5/13 | medium | 0d | 3.1k | analysis | models |
| Cyber Offense-Defense Balance Model | 57 | 59 | 6/13 | medium | 0d | 2.7k | analysis | models |
| Societal Response & Adaptation Model | 57 | 78 | 7/13 | medium | 0d | 1.9k | analysis | models |
| ARC (Alignment Research Center) | 57 | 39 | 7/13 | medium | 1d | 3.7k | organization | organizations |
| China AI Regulations | 57 | 72 | 7/13 | medium | 0d | 3.3k | policy | responses |
| Heavy Scaffolding / Agentic Systems | 57 | 37 | 8/13 | medium | 0d | 2.8k | concept | intelligence-paradigms |
| Authentication Collapse | 57 | 57 | 8/13 | medium | 0d | 1.9k | risk | risks |
| Critical Insights | 56 | 12 | 3/13 | medium | 0d | 1.2k | - | project |
| Whistleblower Dynamics Model | 56 | 71 | 4/13 | medium | 0d | 6.4k | analysis | models |
| Automation Bias (AI Systems) | 56 | 16 | 5/13 | medium | 0d | 2.9k | risk | risks |
| Regulatory Capacity Threshold Model | 56 | 58 | 6/13 | medium | 0d | 1.4k | analysis | models |
| Wikipedia and AI Content | 56 | 43 | 6/13 | medium | 0d | 1.8k | concept | responses |
| Collective Intelligence / Coordination | 56 | 80 | 7/13 | medium | 0d | 2.7k | capability | intelligence-paradigms |
| Long-Term Future Fund (LTFF) | 56 | 31 | 7/13 | high | 1d | 4.8k | organization | organizations |
| Prediction Markets (AI Forecasting) | 56 | 21 | 7/13 | medium | 0d | 1.4k | approach | responses |
| Autonomous Weapons | 56 | 17 | 7/13 | medium | 0d | 2.9k | risk | risks |
| Fact System Strategy | 55 | 10 | 1/13 | medium | 0d | 2.3k | internal | internal |
| Red Queen Bio | 55 | 35 | 3/13 | high | 0d | 1.5k | organization | organizations |
| Wiki Generation Architecture: Multi-Agent Multi-Pass Design | 55 | 75 | 3/13 | medium | 0d | 5.0k | internal | internal |
| Biosecurity Organizations (Overview) | 55 | 66 | 4/13 | medium | 0d | 1.1k | - | organizations |
| AI Trust Cascade Failure | 55 | 18 | 4/13 | high | 0d | 3.2k | risk | risks |
| AI Surveillance and US Democratic Erosion | 55 | 85 | 4/13 | medium | 0d | 2.6k | risk | risks |
| Controlled Vocabulary for Longtermist Analysis | 55 | 13 | 4/13 | medium | 0d | 1.1k | - | reports |
| AI Revenue Sources | 55 | 67 | 5/13 | high | 0d | 3.1k | organization | organizations |
| Ajeya Cotra | 55 | 55 | 5/13 | high | 1d | 1.9k | person | people |
| Neuromorphic Hardware | 55 | 37 | 6/13 | medium | 0d | 4.5k | capability | intelligence-paradigms |
| Disinformation Detection Arms Race Model | 55 | 89 | 6/13 | medium | 0d | 2.7k | analysis | models |
| LongtermWiki Impact Model | 55 | 34 | 6/13 | medium | 0d | 2.1k | analysis | models |
| Forecasting Research Institute | 55 | 36 | 6/13 | high | 1d | 3.9k | organization | organizations |
| Turion | 55 | 30 | 6/13 | high | 0d | 509 | organization | organizations |
| Probing / Linear Probes | 55 | 21 | 6/13 | medium | 0d | 2.7k | approach | responses |
| Texas TRAIGA Responsible AI Governance Act | 55 | 18 | 6/13 | medium | 0d | 2.2k | policy | responses |
| Rogue AI Scenarios | 55 | 39 | 6/13 | high | 0d | 4.0k | risk | risks |
| About This Wiki | 55 | 12 | 6/13 | medium | 0d | 1.1k | internal | internal |
| Neuro-Symbolic Hybrid Systems | 55 | 74 | 7/13 | medium | 0d | 2.9k | capability | intelligence-paradigms |
| Sparse / MoE Transformers | 55 | 39 | 7/13 | medium | 0d | 2.7k | capability | intelligence-paradigms |
| Anthropic Impact Assessment Model | 55 | 50 | 7/13 | medium | 0d | 1.7k | analysis | models |
| Sam Bankman-Fried | 55 | 68 | 7/13 | medium | 1d | 3.2k | person | people |
| EU AI Act | 55 | 42 | 7/13 | medium | 0d | 3.5k | policy | responses |
| MAIM (Mutually Assured AI Malfunction) | 55 | 48 | 7/13 | medium | 0d | 1.4k | policy | responses |
| Reward Modeling | 55 | 20 | 7/13 | medium | 0d | 1.9k | approach | responses |
| Planning for Frontier Lab Scaling | 55 | 6 | 8/13 | medium | 0d | 3.3k | analysis | models |
| Safety Spending at Scale | 55 | 6 | 8/13 | medium | 0d | 2.6k | analysis | models |
| Coefficient Giving | 55 | 36 | 8/13 | high | 1d | 3.9k | organization | organizations |
| William and Flora Hewlett Foundation | 55 | 81 | 8/13 | medium | 0d | 3.0k | organization | organizations |
| AI for Human Reasoning Fellowship | 55 | 25 | 8/13 | medium | 1d | 2.2k | approach | responses |
| Cooperative AI | 55 | 81 | 8/13 | medium | 0d | 2.0k | approach | responses |
| Is EA Biosecurity Work Limited to Restricting LLM Biological Use? | 55 | 40 | 8/13 | medium | 1d | 2.0k | analysis | responses |
| AI-Driven Trust Decline | 55 | 62 | 8/13 | medium | 0d | 1.5k | risk | risks |
| Pre-TAI Capital Deployment: $100B-$300B+ Spending Analysis | 55 | 6 | 9/13 | medium | 0d | 3.1k | analysis | models |
| AI Preference Manipulation | 55 | 91 | 9/13 | medium | 0d | 969 | risk | risks |
| Cross-Link Automation Proposal | 54 | 10 | 3/13 | medium | 0d | 634 | - | reports |
| Surveillance Chilling Effects Model | 54 | 75 | 4/13 | medium | 0d | 2.3k | analysis | models |
| AI-Augmented Forecasting | 54 | 66 | 5/13 | medium | 0d | 2.5k | approach | responses |
| X Community Notes | 54 | 22 | 6/13 | medium | 0d | 1.8k | project | responses |
| XPT (Existential Risk Persuasion Tournament) | 54 | 56 | 6/13 | medium | 0d | 2.0k | project | responses |
| Disinformation | 54 | 66 | 6/13 | medium | 0d | 3.0k | risk | risks |
| Biological / Organoid Computing | 54 | 81 | 7/13 | medium | 0d | 2.6k | capability | intelligence-paradigms |
| State-Space Models / Mamba | 54 | 34 | 7/13 | medium | 0d | 3.5k | capability | intelligence-paradigms |
| AI Winner-Take-All Dynamics | 54 | 77 | 7/13 | medium | 0d | 1.5k | risk | risks |
| Government Regulation vs Industry Self-Governance | 54 | 75 | 8/13 | medium | 0d | 1.7k | crux | debates |
| World Models + Planning | 54 | 75 | 8/13 | medium | 1d | 2.2k | capability | intelligence-paradigms |
| Deepfakes Authentication Crisis Model | 53 | 64 | 4/13 | medium | 0d | 4.7k | analysis | models |
| Media-Policy Feedback Loop Model | 53 | 84 | 4/13 | medium | 0d | 2.8k | analysis | models |
| Robin Hanson | 53 | 25 | 4/13 | high | 0d | 2.9k | person | people |
| EA Institutions' Response to the FTX Collapse | 53 | 62 | 5/13 | medium | 1d | 4.2k | - | history |
| Sycophancy Feedback Loop Model | 53 | 76 | 5/13 | medium | 0d | 3.2k | analysis | models |
| Coalition for Epidemic Preparedness Innovations | 53 | 37 | 5/13 | high | 0d | 2.2k | organization | organizations |
| Why Alignment Might Be Easy | 53 | 52 | 6/13 | medium | 1d | 4.1k | argument | debates |
| EA and Longtermist Wins and Losses | 53 | 50 | 6/13 | high | 1d | 7.7k | - | history |
| FTX Red Flags: Pre-Collapse Warning Signs That Were Overlooked | 53 | 52 | 7/13 | medium | 0d | 3.7k | - | history |
| Light Scaffolding | 53 | 75 | 7/13 | medium | 0d | 2.0k | capability | intelligence-paradigms |
| ForecastBench | 53 | 20 | 7/13 | medium | 0d | 1.9k | project | responses |
| Epistemic Learned Helplessness | 53 | 62 | 7/13 | medium | 0d | 1.5k | risk | risks |
| Risk Pages Style Guide | 53 | 13 | 7/13 | medium | 0d | 425 | internal | internal |
| Novel / Unknown Approaches | 53 | 44 | 8/13 | medium | 0d | 3.3k | capability | intelligence-paradigms |
| AI Impacts | 53 | 89 | 8/13 | high | 0d | 1.5k | organization | organizations |
| Colorado AI Act (SB 205) | 53 | 23 | 8/13 | medium | 0d | 3.5k | policy | responses |
| Frontier Lab Cost Structure | 53 | 6 | 9/13 | medium | 0d | 3.1k | analysis | models |
| Jaan Tallinn | 53 | 28 | 10/13 | high | 1d | 1.1k | person | people |
| Expertise Atrophy Progression Model | 52 | 76 | 4/13 | medium | 0d | 2.5k | analysis | models |
| Donations List Website | 52 | 61 | 4/13 | medium | 1d | 3.5k | project | responses |
| Automation Bias Cascade Model | 52 | 52 | 5/13 | medium | 0d | 3.7k | analysis | models |
| Frontier AI Company Comparison (2026) | 52 | 67 | 5/13 | medium | 1d | 3.7k | - | organizations |
| Post-Incident Recovery Model | 52 | 43 | 6/13 | medium | 0d | 1.9k | analysis | models |
| Gwern Branwen | 52 | 27 | 6/13 | high | 1d | 2.7k | person | people |
| AI Megaproject Infrastructure | 52 | 7 | 7/13 | medium | 0d | 2.7k | analysis | models |
| UK AI Safety Institute | 52 | 32 | 7/13 | high | 1d | 3.6k | organization | organizations |
| AGI Development | 52 | 50 | 8/13 | medium | 1d | 2.3k | - | forecasting |
| Minimal Scaffolding | 52 | 79 | 8/13 | medium | 0d | 2.5k | capability | intelligence-paradigms |
| AI Talent Market Dynamics | 52 | 6 | 8/13 | medium | 0d | 3.7k | analysis | models |
| LongtermWiki Vision Document | 51 | 9 | 4/13 | medium | 0d | 1.2k | - | project |
| Fraud Sophistication Curve Model | 51 | 43 | 5/13 | medium | 0d | 3.5k | analysis | models |
| Parameter Interaction Network | 51 | 73 | 6/13 | medium | 0d | 1.3k | analysis | models |
| Genetic Enhancement / Selection | 51 | 79 | 7/13 | medium | 1d | 3.6k | capability | intelligence-paradigms |
| Epoch AI | 51 | 88 | 7/13 | high | 1d | 4.6k | organization | organizations |
| Future of Humanity Institute (FHI) | 51 | 51 | 8/13 | high | 1d | 4.2k | organization | organizations |
| Meta AI (FAIR) | 51 | 30 | 9/13 | high | 1d | 3.0k | organization | organizations |
| AI Risk Public Education | 51 | 62 | 9/13 | medium | 0d | 2.0k | approach | responses |
| Content Pipeline Architecture: Faster Page Creation | 50 | 80 | 1/13 | medium | 0d | 2.4k | internal | internal |
| Reasoning Traces: Making Every Claim's Derivation Auditable | 50 | 90 | 1/13 | medium | 0d | 2.4k | internal | internal |
| Knowledge Base Architecture | 50 | 10 | 2/13 | medium | 0d | 1.6k | internal | internal |
| Canonical Facts & Calc Usage Guide | 50 | 12 | 3/13 | medium | 0d | 1.0k | internal | internal |
| Cooperate-Bot | 50 | 45 | 4/13 | medium | 0d | 1.5k | concept | models |
| Value Aligned Research Advisors | 50 | 30 | 4/13 | high | 1d | 1.7k | organization | organizations |
| FTX Collapse and EA's Public Credibility | 50 | 62 | 5/13 | medium | 0d | 2.2k | - | history |
| Longtermism's Philosophical Credibility After FTX | 50 | 58 | 5/13 | medium | 1d | 3.7k | - | history |
| Autonomous Cooperative Agents | 50 | 55 | 5/13 | medium | 0d | 1.3k | concept | models |
| AI Futures Project | 50 | 83 | 5/13 | high | 0d | 2.4k | organization | organizations |
| Chan Zuckerberg Initiative | 50 | 33 | 5/13 | high | 1d | 4.8k | organization | organizations |
| Founders Fund | 50 | 48 | 5/13 | high | 0d | 3.0k | organization | organizations |
| Global Partnership on Artificial Intelligence (GPAI) | 50 | 49 | 5/13 | high | 0d | 2.5k | organization | organizations |
| Biosecurity Interventions (Overview) | 50 | 47 | 5/13 | medium | 0d | 627 | - | responses |
| AI Acceleration Tradeoff Model | 50 | 73 | 6/13 | medium | 0d | 3.5k | analysis | models |
| Epistemic Collapse Threshold Model | 50 | 44 | 6/13 | medium | 0d | 1.4k | analysis | models |
| Swift Centre | 50 | 46 | 6/13 | high | 0d | 2.3k | organization | organizations |
| Grokipedia | 50 | 29 | 6/13 | medium | 1d | 1.2k | project | responses |
| Arb Research | 50 | 41 | 7/13 | high | 0d | 1.7k | organization | organizations |
| FutureSearch | 50 | 74 | 7/13 | high | 0d | 1.7k | organization | organizations |
| Good Judgment (Forecasting) | 50 | 65 | 7/13 | high | 1d | 3.7k | organization | organizations |
| Manifund | 50 | 31 | 7/13 | high | 1d | 3.8k | organization | organizations |
| Metaculus | 50 | 31 | 7/13 | high | 0d | 4.6k | organization | organizations |
| MIRI (Machine Intelligence Research Institute) | 50 | 32 | 7/13 | high | 1d | 1.9k | organization | organizations |
| Nuño Sempere | 50 | 83 | 7/13 | high | 1d | 2.6k | person | people |
| AI Model Specifications | 50 | 40 | 7/13 | medium | 0d | 2.7k | policy | responses |
| Deepfakes | 50 | 16 | 7/13 | medium | 0d | 1.5k | risk | risks |
| Lionheart Ventures | 50 | 65 | 8/13 | high | 0d | 2.2k | organization | organizations |
| Manifest (Forecasting Conference) | 50 | 75 | 8/13 | high | 0d | 991 | organization | organizations |
| AI Knowledge Monopoly | 50 | 15 | 8/13 | medium | 0d | 1.9k | risk | risks |
| Research-First Page Creation Pipeline | 49 | 10 | 3/13 | medium | 0d | 1.1k | - | reports |
| Tools & Platforms (Overview) | 49 | 39 | 4/13 | medium | 1d | 842 | - | responses |
| Epistemic Collapse | 49 | 86 | 5/13 | medium | 0d | 779 | risk | risks |
| Is Interpretability Sufficient for Safety? | 49 | 50 | 6/13 | medium | 1d | 2.0k | crux | debates |
| Brain-Computer Interfaces | 49 | 37 | 7/13 | medium | 0d | 3.0k | capability | intelligence-paradigms |
| Dustin Moskovitz (AI Safety Funder) | 49 | 28 | 7/13 | high | 1d | 4.8k | person | people |
| LongtermWiki Strategy Brainstorm | 48 | 9 | 2/13 | medium | 0d | 2.1k | - | project |
| AI Safety Organizations (Overview) | 48 | 52 | 4/13 | medium | 0d | 952 | - | organizations |
| xAI | 48 | 30 | 5/13 | high | 1d | 2.1k | organization | organizations |
| Design Sketches for Collective Epistemics | 48 | 70 | 5/13 | medium | 0d | 1.4k | approach | responses |
| Public Opinion Evolution Model | 48 | 71 | 6/13 | medium | 0d | 2.8k | analysis | models |
| QURI (Quantified Uncertainty Research Institute) | 48 | 37 | 6/13 | high | 1d | 4.4k | organization | organizations |
| Rating System | 48 | 11 | 6/13 | medium | 0d | 874 | internal | internal |
| Whole Brain Emulation | 48 | 47 | 7/13 | medium | 1d | 3.5k | capability | intelligence-paradigms |
| AI-Assisted Knowledge Management | 48 | 29 | 7/13 | medium | 0d | 2.2k | concept | responses |
| Secure AI Project | 47 | 82 | 6/13 | high | 1d | 1.6k | organization | organizations |
| Should We Pause AI Development? | 47 | 46 | 8/13 | medium | 1d | 1.2k | crux | debates |
| AI-Assisted Research Workflows: Best Practices | 46 | 11 | 4/13 | medium | 0d | 2.2k | - | reports |
| Causal Diagram Visualization: Tools & Best Practices | 46 | 12 | 4/13 | medium | 0d | 1.7k | - | reports |
| Future of Life Institute (FLI) | 46 | 76 | 7/13 | high | 1d | 6.1k | organization | organizations |
| Canada AIDA | 46 | 70 | 8/13 | medium | 0d | 3.3k | policy | responses |
| Knowledge Graph Ontology: Design & Implementation Status | 45 | 85 | 1/13 | medium | 0d | 3.7k | internal | internal |
| Anthropic Founder Pledges: Interventions to Increase Follow-Through | 45 | 36 | 4/13 | medium | 1d | 4.0k | analysis | models |
| Safe Superintelligence Inc (SSI) | 45 | 32 | 4/13 | high | 1d | 2.5k | organization | organizations |
| Community Notes for Everything | 45 | 41 | 4/13 | medium | 0d | 1.6k | approach | responses |
| AI Content Provenance Tracing | 45 | 49 | 4/13 | medium | 0d | 2.7k | approach | responses |
| Key Near-Term AI Risks | 45 | 80 | 4/13 | medium | 0d | 2.9k | risk | risks |
| Cooperative Funding Mechanisms | 45 | 40 | 5/13 | medium | 0d | 1.6k | concept | models |
| Seldon Lab | 45 | 86 | 5/13 | high | 1d | 2.8k | organization | organizations |
| AI-Assisted Rhetoric Highlighting | 45 | 18 | 5/13 | medium | 0d | 2.4k | approach | responses |
| Timelines Wiki | 45 | 78 | 5/13 | medium | 1d | 1.3k | project | responses |
| AI System Reliability Tracking | 45 | 62 | 6/13 | medium | 1d | 2.6k | approach | responses |
| Singapore Consensus on AI Safety Research Priorities | 45 | 5 | 6/13 | medium | 0d | 1.2k | policy | responses |
| Stampy / AISafety.info | 45 | 19 | 6/13 | medium | 0d | 1.3k | project | responses |
| 80,000 Hours | 45 | 51 | 7/13 | high | 1d | 3.8k | organization | organizations |
| Elon Musk (Funder) | 45 | 33 | 7/13 | medium | 1d | 1.6k | analysis | organizations |
| Longview Philanthropy | 45 | 46 | 7/13 | high | 0d | 3.5k | organization | organizations |
| Vitalik Buterin (Funder) | 45 | 29 | 7/13 | high | 0d | 1.3k | organization | organizations |
| Issa Rice | 45 | 26 | 7/13 | high | 1d | 1.9k | person | people |
| Epistemic Virtue Evals | 45 | 22 | 7/13 | medium | 0d | 1.5k | approach | responses |
| Demis Hassabis | 45 | 29 | 9/13 | high | 1d | 3.2k | person | people |
| AI Governance & Policy (Overview) | 44 | 72 | 4/13 | high | 0d | 519 | - | responses |
| Cause-Effect Diagram Style Guide | 44 | 10 | 4/13 | medium | 0d | 867 | internal | internal |
| Content Database System | 44 | 11 | 5/13 | medium | 0d | 746 | internal | internal |
| Deep Learning Revolution (2012-2020) | 44 | 91 | 6/13 | high | 1d | 9.1k | historical | history |
| LessWrong | 44 | 33 | 6/13 | high | 1d | 1.9k | organization | organizations |
| Helen Toner | 43 | 27 | 4/13 | high | 1d | 5.5k | person | people |
| Evan Hubinger | 43 | 76 | 5/13 | high | 1d | 4.4k | person | people |
| GovAI | 43 | 51 | 6/13 | high | 0d | 1.7k | organization | organizations |
| Manifold (Prediction Market) | 43 | 65 | 6/13 | high | 1d | 4.1k | organization | organizations |
| AI-Driven Legal Evidence Crisis | 43 | 70 | 6/13 | medium | 0d | 1.1k | risk | risks |
| Historical Revisionism | 43 | 15 | 7/13 | medium | 0d | 1.3k | risk | risks |
| CSET (Center for Security and Emerging Technology) | 43 | 34 | 8/13 | high | 1d | 3.8k | organization | organizations |
| Accident Risks (Overview) | 42 | 73 | 4/13 | high | 0d | 452 | - | risks |
| Cause-Effect Graph Demo | 42 | 74 | 4/13 | low | 0d | 275 | - | guides |
| Mainstream Era (2020-Present) | 42 | 47 | 6/13 | high | 1d | 4.3k | historical | history |
| CAIS (Center for AI Safety) | 42 | 89 | 7/13 | high | 1d | 2.9k | organization | organizations |
| Is Scaling All You Need? | 42 | 55 | 8/13 | medium | 0d | 1.0k | crux | debates |
| Geoffrey Hinton | 42 | 28 | 8/13 | high | 1d | 2.0k | person | people |
| AI-Driven Economic Disruption | 42 | 57 | 9/13 | medium | 0d | 1.7k | risk | risks |
| Automation Tools | 41 | 10 | 4/13 | medium | 0d | 1.3k | internal | internal |
| Government AI Safety Organizations (Overview) | 41 | 52 | 5/13 | medium | 0d | 333 | - | organizations |
| Yann LeCun | 41 | 62 | 5/13 | high | 1d | 4.4k | person | people |
| AI Forecasting Benchmark Tournament | 41 | 23 | 6/13 | medium | 0d | 1.7k | project | responses |
| Squiggle | 41 | 16 | 6/13 | medium | 0d | 1.9k | project | responses |
| Toby Ord | 41 | 26 | 7/13 | high | 1d | 2.5k | person | people |
| Dario Amodei | 41 | 31 | 8/13 | high | 0d | 2.6k | person | people |
| Page Coverage Guide | 40 | 10 | 2/13 | medium | 0d | 1.1k | internal | internal |
| AI-Powered Investigation | 40 | 6 | 3/13 | medium | 0d | 2.8k | capability | capabilities |
| AI for Accountability and Anti-Corruption | 40 | 7 | 3/13 | medium | 0d | 2.0k | approach | responses |
| AI-Powered Deanonymization | 40 | 6 | 3/13 | medium | 0d | 1.9k | risk | risks |
| Importance Ranking System | 40 | 7 | 3/13 | medium | 0d | 573 | internal | internal |
| Misuse Risks (Overview) | 40 | 59 | 4/13 | high | 0d | 366 | - | risks |
| Lighthaven (Event Venue) | 40 | 31 | 7/13 | high | 0d | 2.5k | organization | organizations |
| Sam Altman | 40 | 27 | 7/13 | high | 1d | 6.7k | person | people |
| MIT AI Risk Repository | 40 | 53 | 7/13 | medium | 0d | 1.1k | project | responses |
| AI-Powered Investigation Risks | 40 | 7 | 7/13 | medium | 0d | 2.3k | risk | risks |
| Holden Karnofsky | 40 | 30 | 8/13 | high | 1d | 1.8k | person | people |
| Paul Christiano | 39 | 28 | 7/13 | high | 1d | 1.1k | person | people |
| Yoshua Bengio | 39 | 27 | 8/13 | high | 1d | 1.8k | person | people |
| Sentinel (Catastrophic Risk Foresight) | 39 | 29 | 10/13 | high | 1d | 2.1k | organization | organizations |
| Wikipedia Views | 38 | 14 | 4/13 | medium | 1d | 3.9k | project | responses |
| EA Global | 38 | 78 | 5/13 | high | 1d | 3.4k | organization | organizations |
| Elon Musk (AI Industry) | 38 | 28 | 6/13 | high | 1d | 4.8k | person | people |
| AI Doomer Worldview | 38 | 21 | 6/13 | high | 1d | 2.2k | concept | worldviews |
| Models Style Guide | 38 | 45 | 6/13 | high | 0d | 1.0k | internal | internal |
| Council on Strategic Risks | 38 | 42 | 7/13 | high | 0d | 1.9k | organization | organizations |
| Lightning Rod Labs | 38 | 65 | 7/13 | high | 0d | 1.9k | organization | organizations |
| Vidur Kapur | 38 | 25 | 7/13 | high | 1d | 1.3k | person | people |
| Epistemic Risks (Overview) | 37 | 58 | 4/13 | high | 0d | 409 | - | risks |
| Structural Risks (Overview) | 37 | 58 | 4/13 | high | 0d | 432 | - | risks |
| SquiggleAI | 37 | 15 | 5/13 | medium | 0d | 1.6k | project | responses |
| AI-Induced Cyber Psychosis | 37 | 79 | 5/13 | high | 0d | 935 | risk | risks |
| Mermaid Diagram Style Guide | 37 | 12 | 5/13 | medium | 0d | 422 | internal | internal |
| CHAI (Center for Human-Compatible AI) | 37 | 69 | 7/13 | high | 1d | 1.2k | organization | organizations |
| Conjecture | 37 | 36 | 7/13 | high | 1d | 1.6k | organization | organizations |
| Google DeepMind | 37 | 35 | 8/13 | high | 1d | 2.7k | organization | organizations |
| Frontier AI Labs (Overview) | 36 | 52 | 4/13 | high | 0d | 398 | - | organizations |
| Community Building Organizations (Overview) | 35 | 29 | 5/13 | high | 1d | 326 | - | organizations |
| AI Labor Transition & Economic Resilience | 35 | 38 | 6/13 | medium | 0d | 1.6k | approach | responses |
| Metaforecast | 35 | 48 | 6/13 | medium | 0d | 1.6k | project | responses |
| Eliezer Yudkowsky | 35 | 82 | 8/13 | high | 1d | 3.2k | person | people |
| RoastMyPost | 35 | 17 | 9/13 | medium | 0d | 677 | project | responses |
| Knowledge Base Style Guide | 34 | 12 | 5/13 | high | 0d | 596 | internal | internal |
| Response Pages Style Guide | 34 | 9 | 6/13 | medium | 0d | 274 | internal | internal |
| Polymarket | 33 | 28 | 3/13 | high | 0d | 2.8k | organization | organizations |
| When Will AGI Arrive? | 33 | 92 | 4/13 | medium | 0d | 1.0k | crux | debates |
| Evaluation & Detection (Overview) | 32 | 63 | 3/13 | medium | 0d | 99 | - | responses |
| Track Records (Overview) | 32 | 33 | 4/13 | medium | 0d | 182 | - | people |
| Model Style Guide | 32 | 12 | 6/13 | high | 0d | 2.6k | internal | internal |
| Factor Diagram Naming: Research Report | 31 | 11 | 4/13 | high | 0d | 1.6k | - | reports |
| Early Warnings (1950s-2000) | 31 | 80 | 5/13 | high | 0d | 5.7k | historical | history |
| The MIRI Era (2000-2015) | 31 | 86 | 6/13 | high | 0d | 5.2k | historical | history |
| Approaches (Overview) | 30 | 39 | 3/13 | medium | 0d | 129 | - | responses |
| Stuart Russell | 30 | 27 | 5/13 | high | 1d | 4.1k | person | people |
| Astralis Foundation | 30 | 6 | 8/13 | high | 0d | 971 | organization | organizations |
| Project Roadmap | 29 | 14 | 3/13 | high | 0d | 520 | internal | internal |
| Gap Analysis | 28 | 14 | 2/13 | medium | 0d | 181 | - | insight-hunting |
| Quantitative Claims | 28 | 13 | 2/13 | high | 0d | 304 | - | insight-hunting |
| AI-Accelerated Reality Fragmentation | 28 | 15 | 5/13 | high | 0d | 750 | risk | risks |
| Training Methods (Overview) | 27 | 62 | 3/13 | medium | 0d | 88 | - | responses |
| Chris Olah | 27 | 79 | 6/13 | high | 1d | 3.4k | person | people |
| Jan Leike | 27 | 82 | 6/13 | high | 1d | 2.6k | person | people |
| Ilya Sutskever | 26 | 34 | 5/13 | high | 1d | 3.3k | person | people |
| Neel Nanda | 26 | 85 | 6/13 | high | 1d | 644 | person | people |
| Gratified | 25 | 14 | 5/13 | high | 0d | 1.3k | organization | organizations |
| Nick Bostrom | 25 | 82 | 5/13 | high | 1d | 1.2k | person | people |
| Kalshi (Prediction Market) | 25 | 37 | 7/13 | high | 0d | 3.5k | organization | organizations |
| AI Watch | 23 | 23 | 5/13 | high | 1d | 1.7k | project | responses |
| Org Watch | 23 | 20 | 6/13 | high | 1d | 1.1k | project | responses |
| Theoretical Foundations (Overview) | 22 | 42 | 3/13 | medium | 0d | 93 | - | responses |
| Deployment & Control (Overview) | 21 | 42 | 3/13 | medium | 0d | 62 | - | responses |
| Interpretability (Overview) | 21 | 52 | 3/13 | medium | 0d | 62 | - | responses |
| Policy & Governance (Overview) | 21 | 42 | 3/13 | medium | 0d | 45 | - | responses |
| Daniela Amodei | 21 | 28 | 6/13 | high | 1d | 2.8k | person | people |
| Architecture Scenarios Table | 20 | 43 | 3/12 | low | 0d | 0 | - | knowledge-base |
| Deployment Architectures Table | 20 | 35 | 3/12 | low | 0d | 0 | - | knowledge-base |
| Evaluation Types Table | 20 | 35 | 3/12 | low | 0d | 0 | - | models |
| Safety Approaches Table | 20 | 19 | 3/12 | low | 0d | 0 | - | responses |
| Safety Generalizability Table | 20 | 19 | 3/12 | low | 0d | 0 | - | responses |
| Accident Risks Table | 20 | 18 | 3/12 | low | 0d | 0 | - | risks |
| Research Report Style Guide | 20 | 13 | 4/13 | high | 0d | 846 | internal | internal |
| X.com Platform Epistemics | 20 | 18 | 6/13 | medium | 0d | 2.2k | approach | responses |
| Stub Pages Style Guide | 19 | 14 | 3/13 | medium | 0d | 172 | internal | internal |
| Connor Leahy | 19 | 48 | 4/13 | high | 1d | 2.9k | person | people |
| Dan Hendrycks | 19 | 87 | 5/13 | high | 1d | 2.7k | person | people |
| Entity Relationship Graph | 18 | 49 | 2/12 | low | 0d | 223 | - | dashboard |
| Insights Index | 14 | 13 | 3/12 | low | 0d | 130 | - | insight-hunting |
| Is AI Existential Risk Real? | 12 | 94 | 4/13 | medium | 0d | 32 | crux | debates |
| Browse by Tag | 10 | 74 | 1/13 | medium | 0d | 25 | - | tools |
| Interactive Views & Tables | 8 | 56 | 4/13 | medium | 0d | 156 | - | guides |
| External Resources | 4 | 76 | 1/13 | medium | 0d | 27 | - | tools |
| LongtermWiki Strategy Brainstorm | 4 | 48 | 4/13 | high | 0d | 2.1k | internal | internal |
| LongtermWiki Value Proposition | 4 | 12 | 4/13 | high | 1d | 4.5k | internal | internal |
| Venture Capital (Overview) | 3 | 86 | 4/13 | medium | 0d | 178 | - | organizations |
| The Foundation Layer | 3 | 5 | 5/13 | high | 0d | 1.2k | organization | organizations |
| Longtermist Funders (Overview) | 3 | 89 | 6/13 | high | 0d | 1.3k | - | organizations |
| Parameters Strategy | 3 | 39 | 6/13 | high | 0d | 1.4k | internal | internal |
| EA Funding Absorption Capacity | 3 | 41 | 7/13 | medium | 0d | 2.0k | concept | organizations |
| LongtermWiki Vision | 2 | 59 | 5/13 | high | 0d | 938 | internal | internal |
| Concepts Directory | - | 64 | 0/13 | low | 0d | 25 | - | knowledge-base |
| Incidents | - | - | 0/13 | low | 0d | 202 | - | incidents |
| Active Agents | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Agent Sessions | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Auto-Update News | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Auto-Update Runs | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Citation Accuracy | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Citation Content | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Divisions Dashboard | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Entities | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Canonical Facts Dashboard | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Funding Programs Dashboard | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Grants Dashboard | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Groundskeeper Runs | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Hallucination Evals | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Hallucination Risk | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Improve Runs | - | - | 0/12 | low | 0d | 0 | internal | internal |
| KB Fact Verifications | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Page Changes | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Pages | - | - | 0/12 | low | 0d | 0 | internal | internal |
| People Coverage | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Website Consistency Audit (February 2026) | - | - | 0/13 | medium | 0d | 2.1k | - | reports |
| Session Insights | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Suggested Pages | - | - | 0/12 | low | 0d | 0 | internal | internal |
| System Health | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Update Schedule | - | - | 0/12 | low | 0d | 0 | internal | internal |
| Content Quality Dashboard | - | - | 0/13 | low | 0d | 282 | - | dashboard |
| LongtermWiki Project | - | - | 0/13 | low | 0d | 190 | - | project |
| Entity Coverage | - | - | 0/12 | low | 0d | 0 | - | kb |
| Facts Explorer | - | - | 0/12 | low | 0d | 0 | - | kb |
| KB Data Overview | - | - | 0/12 | low | 0d | 0 | - | kb |
| Properties Explorer | - | - | 0/12 | low | 0d | 0 | - | kb |
| Records Explorer | - | - | 0/12 | low | 0d | 0 | - | kb |
| Publications | - | - | 0/12 | low | 0d | 0 | - | sources |
| Resources | - | - | 0/12 | low | 0d | 0 | - | sources |
| Sources | - | - | 0/12 | low | 0d | 0 | - | sources |
| Future Projections | - | - | 1/13 | low | 0d | 114 | - | future-projections |
| Metrics & Indicators | - | - | 1/13 | low | 0d | 65 | - | metrics |
| Fast Takeoff | 0 | 81 | 1/13 | low | 0d | 7 | concept | models |
| Adversarial Robustness | 0 | 67 | 1/13 | low | 0d | 7 | concept | responses |
| AI Executive Order | 0 | 66 | 1/13 | medium | 0d | 7 | policy | responses |
| AI Safety Summit | 0 | 66 | 1/13 | medium | 0d | 7 | historical | responses |
| Benchmarking | 0 | 88 | 1/13 | low | 0d | 7 | concept | responses |
| AI Content Moderation | 0 | 66 | 1/13 | low | 0d | 7 | concept | responses |
| Natural Abstractions | 0 | 84 | 1/13 | low | 0d | 7 | concept | responses |
| Prosaic Alignment | 0 | 68 | 1/13 | low | 0d | 7 | safety-agenda | responses |
| AI Value Learning | 0 | 7 | 1/13 | low | 0d | 7 | safety-agenda | responses |
| Autonomous Replication | 0 | 92 | 1/13 | medium | 0d | 7 | risk | risks |
| Bio Risk | 0 | 16 | 1/13 | medium | 0d | 7 | risk | risks |
| Cyber Offense | 0 | 16 | 1/13 | medium | 0d | 7 | risk | risks |
| Dual-Use AI Technology | 0 | 77 | 1/13 | low | 0d | 7 | concept | risks |
| Anthropic Pages Refactor Notes | - | 9 | 1/13 | medium | 0d | 466 | internal | internal |
| Internal | - | - | 1/13 | low | 0d | 118 | internal | internal |
| Research: Adaptive Page Length & Summary Systems | - | 10 | 1/13 | medium | 0d | 1.9k | internal | internal |
| Internal Reports | - | - | 1/13 | low | 0d | 171 | - | reports |
| Server Communication Investigation | - | 15 | 1/13 | medium | 0d | 3.5k | internal | internal |
| Wiki-Server Environment Architecture | - | - | 1/13 | medium | 0d | 1.6k | internal | internal |
| PR Dashboard | - | - | 1/12 | low | 0d | 0 | internal | internal |
| AI Capabilities | - | - | 2/13 | low | 0d | 196 | - | capabilities |
| Key Cruxes | - | - | 2/13 | low | 0d | 223 | - | cruxes |
| Key Debates | - | - | 2/13 | low | 0d | 173 | - | debates |
| Knowledge Base | - | - | 2/13 | medium | 0d | 349 | - | knowledge-base |
| Intelligence Paradigms | - | - | 2/13 | low | 0d | 191 | - | intelligence-paradigms |
| Analytical Models | - | - | 2/13 | low | 0d | 205 | - | models |
| Transformative AI | 0 | 93 | 2/13 | low | 0d | 7 | concept | models |
| Organizations | - | - | 2/13 | low | 0d | 144 | - | organizations |
| Open Philanthropy | - | 31 | 2/13 | medium | 1d | 76 | organization | organizations |
| People | - | - | 2/13 | low | 0d | 175 | - | people |
| AI Safety Field Building | 0 | 67 | 2/13 | low | 0d | 7 | crux | responses |
| Safety Responses | - | - | 2/13 | low | 0d | 246 | - | responses |
| AI Risks | - | - | 2/13 | low | 0d | 211 | - | risks |
| Data System Authority Rules | - | - | 2/13 | medium | 0d | 736 | internal | internal |
| Wiki Gap Analysis — February 2026 | - | 13 | 2/13 | medium | 0d | 1.1k | internal | internal |
| Insight Hunting | - | - | 2/13 | medium | 0d | 326 | - | insight-hunting |
| Database Schema Overview | - | - | 2/12 | medium | 0d | 1.0k | internal | internal |
| Forecasting | - | - | 3/13 | low | 0d | 229 | - | forecasting |
| History | - | - | 3/13 | medium | 0d | 320 | - | history |
| Worldviews | - | - | 3/13 | low | 0d | 286 | - | worldviews |
| System Architecture | - | 10 | 3/13 | medium | 0d | 2.5k | internal | internal |
| Common Writing Principles | 0 | 74 | 3/13 | high | 0d | 1.6k | internal | internal |
| GitHub Integrations for Multi-Agent Coordination | - | 15 | 3/13 | medium | 0d | 2.8k | - | reports |
| Entity Type Reference | - | 9 | 3/13 | medium | 0d | 2.4k | - | schema |
| Schema Documentation | - | - | 3/13 | medium | 0d | 376 | - | schema |
| Table Candidates | 0 | 9 | 3/13 | medium | 0d | 47 | - | insight-hunting |
| Documentation Maintenance | - | 10 | 4/13 | medium | 0d | 636 | internal | internal |
| Schema Diagrams | - | 11 | 4/13 | medium | 0d | 667 | - | schema |
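Rows in this table follow a fixed pipe-delimited column order (title, quality, importance, coverage as passed/total items, hallucination risk, recency, word count, entity type, category), and the summary counts above the table (e.g. pages at high hallucination risk) can be recomputed by tallying the risk column. A minimal sketch of parsing such rows, assuming only the column order documented in the header — the sample rows are copied verbatim from the table, and `parse_row` is an illustrative helper, not part of the wiki's tooling:

```python
from collections import Counter

# Sample rows copied from the table above; "-" marks an unrated cell.
rows = [
    "| Deepfakes | 50 | 16 | 7/13 | medium | 0d | 1.5k | risk | risks |",
    "| Polymarket | 33 | 28 | 3/13 | high | 0d | 2.8k | organization | organizations |",
    "| Fast Takeoff | 0 | 81 | 1/13 | low | 0d | 7 | concept | models |",
]

def parse_row(line):
    """Split one markdown table row into a dict, following the header's column order."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    title, quality, importance, coverage, risk = cells[:5]
    passed, total = coverage.split("/")
    return {
        "title": title,
        "quality": None if quality == "-" else int(quality),
        "importance": None if importance == "-" else int(importance),
        "coverage": (int(passed), int(total)),  # passing items / total checked
        "risk": risk,
    }

pages = [parse_row(r) for r in rows]
# Tally risk levels the same way the "N high hallucination risk" summary would.
risk_counts = Counter(p["risk"] for p in pages)
print(risk_counts["high"])  # → 1
```

Run over the full table, the same tally should reproduce the header's high-risk count; rows with `-` in the quality column are excluded from the "rated" average rather than treated as zero.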