# Insights Index
This page collects discrete insights from across the project, calibrated for AI safety researchers/experts.
| Dimension | Question | Scale |
|---|---|---|
| Surprising | Would this update an informed AI safety researcher? | 1-5 |
| Important | Does this affect high-stakes decisions or research priorities? | 1-5 |
| Actionable | Does this suggest concrete work, research, or interventions? | 1-5 |
| Neglected | Is this getting less attention than it deserves? | 1-5 |
Types: claim (factual), research-gap, counterintuitive, quantitative, disagreement, neglected
See the Critical Insights framework for the theoretical basis.
## Adding Insights
Insights are stored in `src/data/insights.yaml`; each entry follows the schema below. Be harsh on the `surprising` score - most well-known AI safety facts should rate 1-2 for experts.
```yaml
- id: "XXX"
  insight: "Your insight here - a compact, specific claim."
  source: /path/to/source-page
  tags: [relevant, tags]
  type: claim  # or: research-gap, counterintuitive, quantitative, disagreement, neglected
  surprising: 2.5  # Would this update an expert? (most should be 1-3)
  important: 4.2
  actionable: 3.5
  neglected: 3.0
  compact: 4.0
  added: "2025-01-21"
```
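A minimal validation sketch for new entries, assuming PyYAML is available and that `insights.yaml` is a top-level list of entries with the fields shown above. The `validate` helper is hypothetical, not part of the project's tooling:

```python
# Hypothetical validator for src/data/insights.yaml (assumes PyYAML:
# pip install pyyaml). Checks the schema shown above: allowed types,
# 1-5 score ranges, and an ISO-formatted "added" date.
import datetime

import yaml

TYPES = {"claim", "research-gap", "counterintuitive", "quantitative",
         "disagreement", "neglected"}
SCORED = ("surprising", "important", "actionable", "neglected", "compact")


def validate(path="src/data/insights.yaml"):
    with open(path) as f:
        insights = yaml.safe_load(f)
    for entry in insights:
        assert entry["type"] in TYPES, f"{entry['id']}: unknown type"
        for field in SCORED:
            assert 1 <= entry[field] <= 5, f"{entry['id']}: {field} outside 1-5"
        datetime.date.fromisoformat(entry["added"])  # raises if malformed
    return insights
```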
When hunting for insights, prioritize counterintuitive findings, research gaps, specific quantitative claims, and neglected topics.
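One way to surface candidates matching that guidance from an already-loaded list - the type filter follows the priorities above, but the sort key (`neglected + surprising`) is an illustrative assumption, not a project-defined ranking:

```python
# Hypothetical helper: filter to the prioritized types, then rank by an
# assumed composite of the "neglected" and "surprising" scores.
PRIORITY_TYPES = {"counterintuitive", "research-gap", "quantitative", "neglected"}


def priority_candidates(insights):
    picks = [e for e in insights if e["type"] in PRIORITY_TYPES]
    return sorted(picks, key=lambda e: e["neglected"] + e["surprising"],
                  reverse=True)
```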