Content Quality Dashboard
This dashboard provides an overview of content quality, staleness tracking, and entity relationship health across the knowledge base.
Summary Statistics
| Metric | Value |
|---|---|
| Total Entities | 380 |
| Risks | 61 |
| Responses | 38 |
| Models | 78 |
| Avg Quality | 3.4 |
| Gaps Found | 149 |
Quality Distribution
| Score | Rating | Count |
|---|---|---|
| 0 | Unrated | 65 |
| 1 | Stub | 11 |
| 2 | Draft | 45 |
| 3 | Adequate | 151 |
| 4 | Good | 165 |
| 5 | Excellent | 33 |

| Band | Count | Description |
|---|---|---|
| Low Quality (1-2) | 56 | Needs significant improvement |
| Adequate (3) | 151 | Meets basic standards |
| High Quality (4-5) | 198 | Well-developed content |

Average Quality Score: 3.4 / 5.0
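The 3.4 / 5.0 figure appears to be a count-weighted mean over rated pages only, with the 65 unrated pages excluded. A minimal TypeScript sketch of that calculation (the `distribution` object is hand-copied from the table above, not read from the actual dashboard data source):

```ts
// Quality distribution from the table above: score -> page count (unrated pages excluded).
const distribution: Record<number, number> = { 1: 11, 2: 45, 3: 151, 4: 165, 5: 33 };

const entries = Object.entries(distribution);
const ratedPages = entries.reduce((sum, [, count]) => sum + count, 0);                       // 405
const weightedSum = entries.reduce((sum, [score, count]) => sum + Number(score) * count, 0); // 1379

console.log((weightedSum / ratedPages).toFixed(1)); // "3.4"
```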
Content by Type
| Type | Count |
|---|---|
| model | 78 |
| ai-transition-model-subitem | 57 |
| risk | 54 |
| researcher | 28 |
| ai-transition-model-parameter | 22 |
| concept | 21 |
| policy | 21 |
| crux | 12 |
| capability | 11 |
| organization | 11 |
| ai-transition-model-metric | 10 |
| ai-transition-model-scenario | 10 |
| intervention | 9 |
| safety-agenda | 8 |
| ai-transition-model-factor | 7 |
| lab-research | 6 |
| historical | 5 |
| argument | 4 |
| lab | 4 |
| analysis | 1 |
| lab-academic | 1 |
Recently Updated
| Entity | Last Updated |
|---|---|
| Misalignment Potential | 2026-01 |
| Misuse Potential | 2026-01 |
| AI Capabilities | 2026-01 |
| AI Uses | 2026-01 |
| AI Ownership | 2026-01 |
| Civilizational Competence | 2026-01 |
| Transition Turbulence | 2026-01 |
| Existential Catastrophe | 2026-01 |
| Long-term Trajectory | 2026-01 |
| Compute | 2026-01 |
Entity Gaps
Risks Without Responses
These risks don’t have any response pages linking to them:
- Misalignment Potential (ai-transition-model-factor)
- Misuse Potential (ai-transition-model-factor)
- AI Capabilities (ai-transition-model-factor)
- AI Uses (ai-transition-model-factor)
- AI Ownership (ai-transition-model-factor)
- Civilizational Competence (ai-transition-model-factor)
- Transition Turbulence (ai-transition-model-factor)
- AI Authoritarian Tools (risk)
- Autonomous Weapons (risk)
- Cyber Psychosis (risk)
- ...and 14 more
Responses Without Risk Links
These response pages don’t link to any risks:
- AI Safety Institutes (AISIs) (policy)
- Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (policy)
- Artificial Intelligence and Data Act (AIDA) (policy)
- China AI Regulatory Framework (policy)
- Colorado Artificial Intelligence Act (policy)
- Compute Thresholds (policy)
- Compute Monitoring (policy)
- International Compute Regimes (policy)
- EU AI Act (policy)
- US AI Chip Export Controls (policy)
- ...and 14 more
Orphaned Entities
These entities have no incoming or outgoing links (a sketch of this check follows the list):
- Compute Forecast Model Sketch (ai-transition-model-subitem)
- Existential Catastrophe (ai-transition-model-subitem)
- Long-term Trajectory (ai-transition-model-subitem)
- Societal Adaptability (ai-transition-model-subitem)
- AI Control Concentration (ai-transition-model-subitem)
- Alignment Robustness (ai-transition-model-subitem)
- Biological Threat Exposure (ai-transition-model-subitem)
- Coordination Capacity (ai-transition-model-subitem)
- Cyber Threat Exposure (ai-transition-model-subitem)
- Economic Stability (ai-transition-model-subitem)
- ...and 91 more
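The exact queries live in the crux tooling, but the orphan check above amounts to a simple link-graph scan. A rough TypeScript sketch, assuming a simplified entity shape (the `Entity` interface and `findOrphans` helper are hypothetical, not the real data model):

```ts
interface Entity {
  id: string;
  type: string;
  outgoingLinks: string[]; // ids of entities this page links to
}

// An entity is orphaned when it links to nothing and nothing links to it.
function findOrphans(entities: Entity[]): Entity[] {
  const hasIncoming = new Set(entities.flatMap((e) => e.outgoingLinks));
  return entities.filter(
    (e) => e.outgoingLinks.length === 0 && !hasIncoming.has(e.id),
  );
}
```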
Enhancement Queue
Pages prioritized for improvement based on the gap between importance and quality (importance - quality × 10). In the table below, Quality and Importance are both shown on a 0-100 scale, so Gap = Importance - Quality. High importance combined with low quality makes a page a priority target for enhancement; a minimal sketch of this ranking follows the table.
| Page | Quality | Importance | Gap | Category |
|---|---|---|---|---|
| Safe Superintelligence Inc (SSI) | 45 | 75 | 30 | organizations |
| Expected Value of AI Safety Research | 58 | 82 | 24 | models |
| Reality Fragmentation | 28 | 52 | 24 | risks |
| EU AI Act | 55 | 78 | 23 | responses |
| Large Language Models | 60 | 82 | 22 | capabilities |
| Regulatory Capacity Threshold Model | 50 | 72 | 22 | models |
| When Will AGI Arrive? | 33 | 54 | 21 | debates |
| The Case FOR AI Existential Risk | 66 | 87 | 21 | debates |
| Flash Dynamics Threshold Model | 55 | 76 | 21 | models |
| Instrumental Convergence Framework | 58 | 78 | 20 | models |
| Scheming Likelihood Assessment | 63 | 83 | 20 | models |
| Dangerous Capability Evaluations | 64 | 84 | 20 | responses |
| RLHF / Constitutional AI | 63 | 83 | 20 | responses |
| Corrigibility Failure | 62 | 82 | 20 | risks |
| Power-Seeking AI | 67 | 87 | 20 | risks |
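A minimal sketch of the ranking, assuming a simplified page-score shape (the `PageScore` interface and `enhancementQueue` helper are illustrative, not the actual crux implementation):

```ts
interface PageScore {
  id: string;
  quality: number;    // 0-100, as in the Quality column
  importance: number; // 0-100, as in the Importance column
}

// Rank pages by gap = importance - quality, largest gap first.
function enhancementQueue(pages: PageScore[], limit = 15) {
  return pages
    .map((p) => ({ ...p, gap: p.importance - p.quality }))
    .sort((a, b) => b.gap - a.gap)
    .slice(0, limit);
}

// Example: Safe Superintelligence Inc (SSI) scores 45/75, giving gap 30, the top entry above.
```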
To improve these pages, use the /improving-pages skill or run:
npm run crux -- content improve <page-id>
Link Health
| Metric | Value |
|---|---|
| Total Links | 69 |
| Valid Links | 65 |
| Broken Links | 0 |
| Health Score | 94% |
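The health score presumably reflects the share of links that resolve (65 of 69, about 94%). A one-line sketch of that calculation, with the variable names being assumptions:

```ts
const totalLinks = 69;
const validLinks = 65;

// Percentage of links that resolve, rounded to a whole number.
const healthScore = Math.round((validLinks / totalLinks) * 100); // 94
```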
To get current link health status, run:
npm run crux -- validate refs
Running Validation
To run the full validation suite locally:
npm run validate
Individual validators:
npm run crux -- validate templates   # Template compliance
npm run crux -- validate data        # Entity data integrity
npm run crux -- validate refs        # Internal link validation
npm run crux -- validate unified     # Formatting and escaping rules