Metrics & Indicators

This section documents metrics and indicators that can be tracked to monitor AI safety progress and risk dynamics. Quantitative measurement helps move debates beyond intuition toward evidence-based assessment.

Measures of AI system capabilities:

  • Benchmark performance trends
  • Capability emergence timing
  • Task completion rates

Infrastructure indicators:

  • Training compute trends
  • Chip production and access
  • Energy consumption
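Training compute trends are often summarized as a doubling time. As a minimal sketch (the FLOP figures and dates below are hypothetical placeholders, not measured values), the implied doubling time can be computed from two observations:

```python
import math

def doubling_time_years(c0: float, c1: float, years: float) -> float:
    """Doubling time implied by growth from c0 to c1 FLOP over `years` years."""
    doublings = math.log2(c1 / c0)  # number of doublings observed
    return years / doublings

# Hypothetical data points: 1e24 FLOP growing to 1e26 FLOP over three years.
print(round(doubling_time_years(1e24, 1e26, 3.0), 2))  # → 0.45
```

A hundredfold increase is about 6.6 doublings, so three years of such growth implies a doubling time of roughly five and a half months.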

Alignment and safety research activity:

  • Publication rates and citations
  • Researcher headcount
  • Funding levels

Technical alignment indicators:

  • Interpretability coverage
  • Evaluation robustness
  • Safety case development

Lab practice indicators:

  • Pre-deployment testing time
  • Safety team size ratios
  • Responsible Scaling Policy (RSP) adoption rates

Policy and institutional indicators:

  • Regulatory progress
  • International coordination
  • Standards development

Perception and discourse:

  • Public awareness surveys
  • Expert probability estimates
  • Media coverage trends
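Expert probability estimates from surveys are frequently pooled into a single headline number. One common approach is the geometric mean of odds; the sketch below assumes hypothetical survey responses and is not tied to any particular dataset:

```python
import math

def pooled_probability(probs):
    """Pool expert probabilities via the geometric mean of odds."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    odds = math.exp(mean_log_odds)
    return odds / (1 + odds)

# Hypothetical responses (each expert's probability of some risk event):
print(round(pooled_probability([0.01, 0.05, 0.20]), 3))  # → 0.049
```

Pooling in odds space keeps the aggregate sensitive to confident low estimates, whereas a simple arithmetic mean of the same three responses would give about 0.087.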

Socioeconomic impacts:

  • Automation displacement rates
  • AI investment levels
  • Productivity changes

Good metrics help:

  • Detect early warning signs of increasing risk
  • Evaluate intervention effectiveness over time
  • Enable accountability for labs and governments
  • Ground debates in data rather than speculation
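The first point above, detecting early warning signs, usually reduces to comparing a metric's recent readings against its baseline. As a minimal sketch (the window, threshold, and sample readings are illustrative assumptions, not a recommended rule):

```python
def early_warning(series, window=3, threshold=1.5):
    """Flag when the mean of the last `window` readings exceeds `threshold`
    times the mean of the preceding `window` readings."""
    if len(series) < 2 * window:
        return False  # not enough history to establish a baseline
    recent = sum(series[-window:]) / window
    baseline = sum(series[-2 * window:-window]) / window
    return recent > threshold * baseline

# Hypothetical quarterly readings of some risk-relevant metric:
print(early_warning([10, 11, 10, 12, 20, 25]))  # → True
```

Real monitoring would add seasonality adjustment and uncertainty estimates, but even a simple rule like this turns "the metric seems to be rising" into a reproducible claim.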

See individual metric pages for current values, historical trends, and data sources.