Meta & Structural Indicators
- Debate: There is a massive 72% to 8% public preference for slowing versus speeding AI development, creating a large democratic deficit as AI development is primarily shaped by optimistic technologists rather than risk-concerned publics. (S: 4.0, I: 4.5, A: 4.5)
- Gap: Near-miss reporting for AI safety has overwhelming industry support (76% strongly agree) but virtually no actual implementation, a critical gap compared to aviation safety culture. (S: 3.5, I: 4.0, A: 5.0)
- Quantitative: Policy responses to major AI developments lag significantly; the EU AI Act took 29 months from GPT-4's release to enforceable provisions, and responses to major risks average 1-3 years across jurisdictions. (S: 3.5, I: 4.5, A: 4.0)
These metrics assess the structural and meta-level conditions that determine society's ability to navigate AI development safely. Unlike direct capability or safety metrics, these measure the quality of the broader systems (governance institutions, information environments, coordination mechanisms) that mediate AI's societal impact.
Overview
Structural indicators help answer: Is society equipped to handle AI risks? They track whether institutions can make good decisions, whether information environments support informed debate, and whether coordination mechanisms can address collective action problems.
Key distinctions:
- Direct metrics (capabilities, safety research): what AI can do and how safe it is
- Structural metrics (these): whether society can govern AI effectively
Many of these metrics are conceptual or only partially measured; they represent important dimensions we should track, even if comprehensive data doesn't yet exist.
1. Information Environment Quality
Measured Indicators
Freedom House "Freedom on the Net" Score
- Latest (2025): United States remains "Free" but with declining scores
- Concerns about misinformation ahead of the 2024 elections contributed to an "unreliable information environment"
- United Kingdom saw a decline due to false information leading to riots in summer 2024
- Interpretation: Score based on internet freedom, content controls, and users' rights
- Source: Freedom on the Net 2025 (Freedom House)
RSF World Press Freedom Index
- 2025 Global Average: Economic indicator at an "unprecedented, critical low"
- Global press freedom now classified as a "difficult situation" for the first time in the Index's history
- Disinformation prevalence: 138/180 countries report political actors involved in disinformation campaigns
- 31 countries report "systematic" disinformation involvement
- 2024 US-specific: Press freedom violations increased to 49 arrests/charges and 80 assaults on journalists (vs 15 and 45 in 2023)
- Source: RSF World Press Freedom Index 2025 (Reporters Without Borders)
Trust in Institutions (Edelman Trust Barometer 2024)
- Business: 63% trust (only trusted institution)
- Government: Low trust, 42% trust government leaders
- Media: Actively distrusted
- Mass/Elite divide on AI: 16-point gap in US (43% high-income vs 27% low-income trust AI)
- Innovation management: 2:1 margin believe innovation is poorly managed
- Source: 2024 Edelman Trust Barometer (Edelman)
Conceptual Indicators (Limited Direct Measurement)
AI-Specific Misinformation Prevalence
- Conceptual metric: % of AI-related claims in public discourse that are false or misleading
- Proxy data: 62% of voters primarily concerned (vs 21% excited) about AI (AIPI polling)
- Elite/public gap: "Large disconnect between elite discourse and what American public wants" (AI Policy Institute)
- Challenge: No systematic tracking of AI misinformation rates
- Source: AI Policy Institute Polling
2. Institutional Decision-Making Quality
Measured Proxies
World Bank Worldwide Governance Indicators (WGI)
- Government Effectiveness dimension: Quality of public services, bureaucracy competence, civil service independence, policy credibility
- Scale: -2.5 to +2.5 (approximately normalized with mean ~0), also reported on a 0-100 percentile-rank scale (see the sketch after this list)
- Latest: 2024 methodology update covering 214 economies, 1996-2023 data
- Data sources: 35 cross-country sources including household surveys, firm surveys, expert assessments
- Limitation: "Inputs" focused (institutional capacity) rather than "outputs" (decision quality)
- Source: World Bank WGI 2024 (World Bank)
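To make the 0-100 mapping concrete, the sketch below computes a percentile rank directly from a set of country estimates: a country's rank is the share of countries scoring below it. The figures are made-up placeholders rather than World Bank data, and the sketch ignores ties and the margins of error the WGI reports alongside each estimate.

```python
# Minimal sketch: mapping WGI-style estimates (roughly -2.5 to +2.5) onto a
# 0-100 percentile rank, i.e. the share of other countries scoring below a
# given country. Estimates are illustrative placeholders, not World Bank data.
government_effectiveness = {
    "Country A": 1.6,
    "Country B": 0.4,
    "Country C": -0.2,
    "Country D": -1.1,
}

def percentile_rank(country: str, estimates: dict[str, float]) -> float:
    others = [v for k, v in estimates.items() if k != country]
    below = sum(1 for v in others if v < estimates[country])
    return 100 * below / len(others)

for country in government_effectiveness:
    print(f"{country}: percentile rank {percentile_rank(country, government_effectiveness):.0f}")
```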
V-Dem Digital Society Index
- Coverage: Measures government internet censorship, social media monitoring, online media fractionalization
- Note: 2024-specific data on the information environment not retrieved, but the framework exists
- Source: V-Dem Institute (v-dem.net)
Conceptual Indicators
AI Policy Quality Index
- Conceptual metric: Expert assessment of whether AI policies address actual risks proportionately
- Current status: No standardized index exists
- Proxy: Mixed signals: EU AI Act implemented and a US executive order issued, but critiques of regulatory lag persist
Evidence-Based Policy Rate for AI
- Conceptual metric: % of major AI policy decisions informed by rigorous evidence
- Challenge: Would require systematic policy analysis across jurisdictions
- Current: Anecdotal evidence suggests variable quality
3. Elite vs Public Opinion Divergence on AI
Measured Divergence
Expert vs Public Trust Gap (Pew Research 2024)
- Finding: "Experts are far more positive and enthusiastic about AI than the public"
- Methodology: 5,410 US adults (Aug 2024) vs 1,013 AI experts (Aug-Oct 2024)
- Experts: Identified via authors/presenters at 21 AI conferences in 2023-2024
- Source: Pew Research: Public and AI Experts (Pew Research Center)
AI Policy Institute Polling (2024)
- Development pace preference: 72% prefer slowing AI development vs 8% prefer speeding up
- Risk vs excitement: 62% primarily concerned vs 21% primarily excited
- Catastrophic risk belief: 86% believe AI could accidentally cause catastrophic event
- Liability: 73% believe AI companies should be held liable for harm
- Regulation preference: 67% think AI modelsβ power should be restricted
- Elite disconnect quote: "Large disconnect between elite discourse or discourse in labs and what American public wants" (Daniel Colson, AIPI Executive Director)
- Source: AI Policy Institute Polling
Trust Gap in AI Companies (Edelman 2024)
- Technology sector vs AI innovation: 26-point gap (76% trust tech sector vs 50% trust AI)
- AI company trust decline: From 62% (5 years ago) to 54% (2024)
- Rejection willingness: 43% will actively reject AI products if innovation poorly managed
- Source: Edelman Trust Barometer 2024 - AI Insights (Edelman)
Interpretation
Magnitude: Large and growing gap between expert optimism and public concern
Direction: Public more risk-focused; experts more capability-focused
Policy implication: Democratic deficit if AI development primarily shaped by technologists
4. Time from AI Risk Identification to Policy Response
Measured Cases
EU AI Act Timeline (Response to GPT-class models)
- GPT-3 release: June 2020
- EU AI Act proposal: April 2021 (10 months after GPT-3)
- GPT-4 release: March 2023
- EU AI Act agreement: December 2023 (9 months after GPT-4)
- AI Act signed: June 2024
- Entered force: August 2024
- GPAI provisions applicable: August 2025 (29 months after GPT-4)
- Full applicability: August 2026
- Interpretation: ~2.5 years from GPT-4 to enforceable rules on GPAI models
- Source: EU AI Act Implementation Timeline
US Executive Order on AI
- GPT-4 release: March 2023
- Executive Order 14110: October 30, 2023 (7 months after GPT-4)
- Limitation: Executive order, not legislation; limited enforceability
- Source: Biden Administration AI Executive Order
AI Safety Institutes
- UK AISI announced: November 2023 (Bletchley Park AI Safety Summit)
- US AISI operational: Early 2024
- AISI Network launched: May 2024 (Seoul AI Summit)
- First AISI Network meeting: November 2024 (San Francisco)
- Lag interpretation: ~8-20 months from GPT-4 to safety institute operations
- Source: AISI International Network (OECD)
Conceptual Metric
Average Policy Lag Time
- Conceptual metric: Median time from risk becoming evident to enforceable policy
- Challenge: Defining when a "risk becomes evident" vs when a "risk exists"
- Current estimate: 1-3 years for major risks based on available cases (see the sketch after this list)
- Comparison: Aviation safety regulations often follow major accidents within months
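As a rough illustration, the sketch below computes month-level lags for the cases documented above, taking the GPT-4 release as the (debatable) point at which the risk became evident. Which milestones count as "policy" drives the result: including lighter-weight responses such as the executive order and institute launches pulls the median down toward a year, while counting only enforceable rules pushes it toward the top of the 1-3 year range.

```python
from datetime import date
from statistics import median

# Cases documented on this page. The trigger date (GPT-4 release) is one
# defensible choice among several; day-of-month values are approximate.
GPT4_RELEASE = date(2023, 3, 14)

policy_milestones = {
    "US Executive Order 14110": date(2023, 10, 30),
    "UK AISI announced": date(2023, 11, 1),
    "AISI Network launched": date(2024, 5, 1),
    "EU AI Act GPAI provisions applicable": date(2025, 8, 1),
}

def months_between(start: date, end: date) -> int:
    """Whole-month difference, ignoring day-of-month precision."""
    return (end.year - start.year) * 12 + (end.month - start.month)

lags = {name: months_between(GPT4_RELEASE, d) for name, d in policy_milestones.items()}
for name, lag in sorted(lags.items(), key=lambda kv: kv[1]):
    print(f"{lag:>3} months  {name}")

print(f"Median policy lag across these milestones: {median(lags.values())} months")
```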
5. Coordination Failure Rate on AI Governance
Measured Indicators
G7 Hiroshima AI Process Code of Conduct
- Status: Adopted but "provides little guidance" on implementation
- Critique: "Staffed by diplomats who lack depth of in-house technical expertise"
- Implementation gap: The Code instructs organizations to "identify, evaluate, mitigate risks" without how-to guidance
- Source: CSIS: G7 Hiroshima AI Process
OECD AI Principles (2019, updated 2024)
- Adherents: 47 countries including EU
- Compliance mechanism: None (non-binding)
- Monitoring: AI Policy Observatory tracks implementation but no enforcement
- Implementation rate: Variableβno systematic tracking of adherence
- Source: OECD AI Principles 2024 Update (OECD)
International AI Safety Institute Network
- Members (Nov 2024): 10 countries/regions (Australia, Canada, EU, France, Japan, Kenya, Korea, Singapore, UK, US)
- Challenges identified:
- Confidentiality and security concerns
- Legal incompatibilities between national mandates
- Varying technical capacities
- Global South institutes risk becoming "token members"
- Most institutes still hiring/setting priorities as of 2024
- Coordination body: None yet (recommended but not established)
- Success metric: Too early to assess
- Source: AISI Network Analysis (Sumaya Nur Adan, 2024)
Conceptual Indicators
Coordination Success Rate
- Conceptual metric: % of identified coordination problems that achieve multilateral solutions
- Current status: Low coordination success on binding agreements
- Examples of failure:
- No binding international compute governance
- No global model registry
- Fragmented incident reporting systems
- Limited cross-border enforcement
- Examples of partial success:
- AISI Network formation
- OECD Principles (soft coordination)
- G7/G20 discussions ongoing
Race-to-the-Bottom Index
- Conceptual metric: Evidence of jurisdictions weakening standards to attract AI companies
- Current: Anecdotal concerns but no systematic measurement
- Source: International Governance of AI (Springer, peer-reviewed)
6. Democratic vs Authoritarian AI Adoption Rates
Measured Data
AI Surveillance Adoption
- China's market dominance: Exports AI surveillance to "nearly twice as many countries as United States"
- Chinese surveillance camera market: Hikvision + Dahua = 34% global market share (2024)
- Global reach: PRC-sourced AI surveillance in 80+ countries (authoritarian and democratic)
- China's domestic deployment: Over half the world's 1 billion surveillance cameras located in China
- Source: Global Expansion of AI Surveillance (Carnegie Endowment)
Export Patterns
- China's bias: "Significant bias in exporting to autocratic regimes"
- Huawei "Safe City" agreements (2009-2018): 70%+ involved countries rated "partly free" or "not free" by Freedom House
- Nuance: "China is exporting surveillance tech to liberal democracies as much as targeting authoritarian markets"
- Impact finding: Mature democracies did not experience erosion when importing surveillance AI; weak democracies exhibited backsliding regardless of supplier
- Source: Data-Centric Authoritarianism (NED)
Authoritarian Advantage Factors
- China's structural advantages for AI surveillance:
- Lax data privacy laws
- Government involvement in production/research
- Large population for training data
- Societal acceptance of state surveillance
- Strong AI industrial sectors
- Source: AI and Authoritarian Governments
Conceptual Indicator
Democratic vs Authoritarian AI Capability Gap
- Conceptual metric: Relative AI capability development in democracies vs autocracies
- Proxy: US vs China capability race
- US: 40 notable AI models (2024) vs China: 15 models
- US private investment: $109.1B vs China: $9.3B
- But China's DeepSeek/Qwen/Kimi are "closing the gap on reasoning and coding"
- Interpretation: US maintains edge but China rapidly improving
- Source: State of AI Report 2025
7. Concentration of AI Capability (Herfindahl Index)
Measured Market Concentration
Enterprise LLM Market Share (2024-2025)
- Anthropic: 32% usage share, 40% revenue share
- OpenAI: 25% usage share, 27% revenue share (down from 50% in 2023)
- Google: 20% usage share
- Meta (Llama): 9%
- DeepSeek: 1%
- Approximate HHI: (0.32² + 0.25² + 0.20² + 0.09² + 0.01²) × 10,000 ≈ 2,130 from the listed shares alone; allocating the remaining ~13% of the market adds only modestly, so roughly 2,100-2,300 (see the sketch below)
- Interpretation: "Moderate concentration" (HHI 1,500-2,500); top 3 control ~77%
- Source: 2025 State of Generative AI in Enterprise (Menlo Ventures)
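A minimal sketch of the HHI calculation from the usage shares above. The roughly 13% of the market not attributed to a named vendor is treated as many small players, an assumption that leaves the index essentially unchanged.

```python
# Herfindahl-Hirschman Index from the enterprise usage shares listed above.
usage_share = {
    "Anthropic": 0.32,
    "OpenAI": 0.25,
    "Google": 0.20,
    "Meta (Llama)": 0.09,
    "DeepSeek": 0.01,
}

def hhi(shares):
    """HHI on the conventional 0-10,000 scale (shares as fractions of 1)."""
    return sum(s ** 2 for s in shares) * 10_000

print(f"Partial HHI from listed vendors: {hhi(usage_share.values()):.0f}")
# US antitrust convention: <1,500 unconcentrated, 1,500-2,500 moderately
# concentrated, >2,500 highly concentrated.
```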
Frontier Model Development Concentration
- US dominance: 40 notable models (2024) vs China: 15, Europe: 3
- Competition assessment: "OpenAI retains narrow lead at frontier, but competition intensified"
- China status: "Credible #2" with DeepSeek, Qwen, Kimi
- Source: Stanford AI Index 2025 (Stanford HAI)
Investment/Funding Concentration
- Foundation model funding (2025): $80B (40% of all global AI funding)
- OpenAI + Anthropic: 14% of all global venture investment across all sectors
- Big Tech backing: "Interconnected web of 90+ partnerships" among Google, Apple, Microsoft, Meta, Amazon, Nvidia
- Regulatory concern: UK CMA and US FTC investigating concentration via partnerships/investments
- Source: Big Tech's Cloud Oligopoly
Conceptual Extensions
Compute Concentration
- Conceptual metric: HHI for GPU/training compute access
- Challenge: Private compute capacity not publicly reported
- Known: Nvidia dominance in AI chips; hyperscaler concentration (AWS, Azure, GCP)
- Implication: Capability concentration may exceed market share concentration
Talent Concentration
- Conceptual metric: % of top AI researchers at small number of organizations
- Challenge: Defining "top researchers" and tracking mobility
- Proxy: Conference authorship concentration, hiring trends
8. Societal Resilience to AI Disruption
Conceptual Framework
WEF Global Risks Report 2024 - Resilience Assessment
- Key finding: "Weakened economies and societies may only require smallest shock to edge past tipping point of resilience"
- Current crises eroding resilience: COVID-19 aftermath, Russia-Ukraine war "exposed cracks in societies"
- Long-term erosion: "Decades of investment in human development slowly being chipped away"
- Conflict risk: "Corroding societal resilience risk creating conflict contagion"
- Source: WEF Global Risks Report 2024 (World Economic Forum)
Measured Proxies
Economic Disruption Preparedness
- Social safety nets: Vary widely by country (unemployment insurance, retraining programs)
- Financial instruments: Insurance, catastrophe bonds, public risk pools
- Challenge: No unified "AI disruption resilience" score exists
Digital Literacy and Misinformation Resilience
- Recommendation: "Digital literacy campaigns on misinformation and disinformation"
- Current: No systematic measurement of population-level AI/digital literacy
- Proxy: General digital skills indices exist but not AI-specific
Institutional Adaptive Capacity
- Indicators: R&D investment in climate modeling/energy transition (analogous to AI preparedness)
- Infrastructure resilience: Building codes, disaster preparedness
- Limitation: No AI-specific resilience metrics
Conceptual Indicators
Labor Market Adaptability Index
- Conceptual metric: How quickly workers can reskill/transition as AI automates tasks
- Proxy data: Historical adjustment rates to automation, education system responsiveness
- Challenge: AI may disrupt faster than historical automation
Democratic Resilience to AI-Driven Polarization
- Conceptual metric: Ability of democratic institutions to function under AI-amplified disinformation
- Current concerns: Misinformation in 2024 elections (US, UK)
- No systematic tracking: Would require longitudinal study
9. Rate of AI-Caused Incidents/Accidents
Measured Incident Data
AI Incident Database (AIID)
- Total incidents: 2,000+ documented incidents (as of 2024)
- Coverage: "Intelligent systems causing safety, fairness, or other real-world problems"
- Growth: From 1,200+ reports to 2,000+ (rapid increase)
- Limitation: Voluntary reporting, variable severity, unclear baseline
- Source: AI Incident Database
AIAAIC Repository
- Start date: June 2019
- Coverage: "Incidents and controversies driven by AI, algorithms, automation"
- Goal: "Systematically documenting incidents where AI systems cause or contribute to harms"
- Scope: Broader than AIID; includes technical failures and social impacts
- Source: AIAAIC Repository
OECD AI Incidents Monitor (AIM)
- Launch: Part of OECD AI Policy Observatory
- Focus: Policy-relevant cases aligned with governance interests
- Collaboration: Partnership on AI, Center for Advancement of Trustworthy AI
- Limitation: More selective than AIAAIC (policy focus vs comprehensive coverage)
- Source: OECD AIM (OECD)
Interpretation Challenges
Incident Rate per AI System
- Conceptual metric: Incidents per 1,000 or 10,000 deployed AI systems
- Challenge: Unknown denominator; there is no comprehensive count of deployed systems
- Current: Absolute incident counts are rising, but it is unclear whether the rate is rising (see the sketch below)
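The sketch below illustrates the denominator problem: the same count of documented incidents implies very different per-system rates depending on an assumed (and currently unknowable) number of deployed systems. The deployment figures are hypothetical placeholders, not estimates.

```python
# Denominator problem: ~2,000 documented incidents divided by different
# assumed deployment counts gives wildly different per-system rates.
documented_incidents = 2_000

for assumed_deployed_systems in (10_000, 100_000, 1_000_000, 10_000_000):
    rate = documented_incidents / assumed_deployed_systems * 1_000
    print(f"{assumed_deployed_systems:>10,} deployed systems (assumed) -> "
          f"{rate:6.2f} incidents per 1,000 systems")
```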
Severity Distribution
- Available: Incident databases categorize by harm type (safety, fairness, rights)
- Missing: Standardized severity scales across databases
- Incompatibility: "Both databases have vastly different and incompatible structures"
- Source: Standardised Schema for AI Incident Databases (Agarwal & Nene, 2025, arXiv)
Baseline Comparison
- Question: Are AI incident rates high compared to other technologies at similar maturity?
- Challenge: No established baseline or reference class
- Aviation analogy: Aviation incident rates are well-tracked and declining over time; AI lacks comparable infrastructure
10. Near-Miss Reporting Rate
Industry Position
AI Lab Support for Near-Miss Reporting
- Strong agreement: 76% strongly agree, 20% somewhat agree
- Statement: "AGI labs should report accidents and near misses to appropriate state actors and other AGI labs"
- Source mechanism: AI incident database
- Source: EA Forum: Incident Reporting for AI Safety (Stein-Perlman et al., 2023)
Regulatory Frameworks Emerging
US Executive Order 14110
- Provision: Addressed "safety" and "rights" protections
- Limitation: Not comprehensive near-miss framework
- State-level: New York State bill would require incident reporting to Attorney General (safety incidents only)
- Source: Designing Incident Reporting Systems (Wei & Heim, 2025, arXiv)
EU AI Act Incident Reporting
- Requirement: A single incident-reporting requirement
- Definition: Includes both βrights incidentsβ and βsafety incidentsβ
- Limitation: Does not explicitly distinguish near-misses from harms
- Source: EU AI Act
Proposed Framework Properties (Shrishak 2023)
- Voluntary reporting: Essential for capturing near-misses not covered by mandatory serious incident reporting
- Non-punitive: Consensus that self-reporting should not lead to punishment since no harm occurred
- Accessible: Low barriers to submission
- Actionable: Information useful for other developers (a minimal illustrative schema follows below)
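To show how these properties might translate into practice, here is a minimal, hypothetical report schema; the field names are illustrative assumptions rather than part of any cited framework, but they aim to keep submission lightweight (accessible), avoid identifying the reporter by default (non-punitive), and capture the mitigation so other developers can act on it (actionable).

```python
# Hypothetical near-miss report schema; field names are illustrative only.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class NearMissReport:
    reported_on: date
    system_description: str          # what kind of system, not necessarily who built it
    failure_mode: str                # what nearly went wrong
    detection: str                   # how the issue was caught before harm occurred
    mitigation: str                  # what was done, so other developers can act on it
    severity_if_unmitigated: str     # e.g. "minor", "serious", "critical"
    reporter_contact: Optional[str] = None  # optional: supports anonymous reporting
    tags: list[str] = field(default_factory=list)

report = NearMissReport(
    reported_on=date(2024, 6, 1),
    system_description="customer-facing LLM assistant",
    failure_mode="model generated instructions for a restricted activity",
    detection="output filter flagged the response before delivery",
    mitigation="filter rules tightened; prompt added to regression suite",
    severity_if_unmitigated="serious",
    tags=["jailbreak", "content-filter"],
)
print(report)
```

Keeping the reporter field optional is one way to encode the voluntary, non-punitive norm borrowed from aviation safety culture.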
Current Reporting Rate
Actual Near-Miss Reporting Rate
- Conceptual metric: % of near-miss events that get reported to databases or regulators
- Current estimate: Unknown, likely very low
- Challenge: "Current systems fail to capture numerous near-miss incidents that narrowly avoid accidents"
- Comparison: Aviation near-miss reporting well-established; AI has no equivalent system yet
- Source: Developing Near-Miss Reporting System
Culture Gap
- Aviation standard: Open, non-punitive reporting is norm
- AI current state: "Lack of comprehensive and reliable data regarding frequency and characteristics"
- Needed shift: "Building culture of safety for AI requires understanding failure modes, which starts with reporting past incidents"
Data Quality Summary
| Metric | Status | Data Quality | Update Frequency |
|---|---|---|---|
| Information environment quality | Measured | High (Freedom House, RSF) | Annual |
| Institutional decision-making | Proxy | Medium (WGI covers general governance, not AI-specific) | Annual |
| Elite/public opinion divergence | Measured | Medium (multiple polls, varying methods) | Quarterly-Annual |
| Policy response time | Measured | High (specific cases documented) | Case-by-case |
| Coordination failure rate | Conceptual | Low (qualitative assessments only) | Ad hoc |
| Democratic vs authoritarian adoption | Measured | Medium (surveillance tech tracked, general AI capabilities less clear) | Annual |
| AI capability concentration (HHI) | Measured | Medium (market share known, compute concentration estimated) | Quarterly-Annual |
| Societal resilience | Conceptual | Low (framework exists, no AI-specific index) | Annual (WEF) |
| AI incident rate | Measured | Medium (absolute counts good, rates unclear due to denominator problem) | Continuous |
| Near-miss reporting rate | Conceptual | Very low (frameworks proposed, actual reporting minimal) | Not measured |
Key Gaps and Limitations
Measurement Challenges
- Denominator problems: Incident rates require knowing # of deployed systems (unknown)
- Counterfactuals: Measuring "coordination failure rate" requires knowing what coordination was possible
- Lag indicators: Most metrics (incidents, trust, governance quality) are lagging, not leading
- Attribution: Hard to isolate AI's contribution to institutional quality or societal resilience
- Standardization: Different databases use incompatible schemas (incidents, governance)
Conceptual Gaps
- No unified resilience metric: Individual components exist but no composite "AI disruption resilience score"
- Weak coordination metrics: Qualitative assessments dominate; no quantitative coordination success rate
- Missing baselines: Few comparisons to other technologies at similar development stages
- Democratic processes: No metrics for how democratic institutions specifically handle AI (vs general governance)
Research Priorities
High-value additions:
- Standardized AI incident severity scale
- Near-miss reporting infrastructure and culture-building
- Democratic resilience to AI-specific challenges (not just general governance)
- Coordination success metrics (track multilateral agreements, implementation rates)
- AI-specific institutional capacity assessment (beyond general WGI)
Interpretation Guidance
Using These Metrics
For Risk Assessment:
- Low trust + weak institutions + high elite/public gap = governance failure more likely
- Rising incidents + low near-miss reporting = learning from failures inadequate
- High concentration + weak coordination = race dynamics and power concentration risks
For Forecasting:
- Policy lag times (1-3 years) inform timeline expectations for future risks
- Trust trends predict regulatory pressure and public backlash likelihood
- Coordination challenges suggest multilateral solutions face high barriers
For Intervention:
- Improving near-miss reporting culture = high-leverage, low-cost
- Building institutional AI literacy = addresses decision-making quality
- Bridging elite/public gap = essential for democratic legitimacy
Cautions
- Correlation ≠ causation: Weak governance may cause AI risks OR AI risks may weaken governance
- Selection effects: Reported incidents overrepresent visible, Western, English-language cases
- Gaming: Once metrics are targets, they can be manipulated (Goodhart's Law)
- Aggregation: Composite indices hide important variation across dimensions
Sources
Primary Data Sources
- Freedom House: Freedom on the Net 2025
- Reporters Without Borders: World Press Freedom Index 2025
- Edelman Trust Barometer 2024
- World Bank: Worldwide Governance Indicators (2024 update)
- AI Policy Institute Polling
- Pew Research Center: Public and AI Experts
- EU AI Act Implementation Timeline
- OECD: AI Principles (2024 update)
- AI Incident Database (AIID)
- Partnership on AI: AI Incident Database
- AIAAIC Repository
Analysis and Research
- Stanford HAI: AI Index 2025
- State of AI Report 2025
- Menlo Ventures: State of Generative AI in Enterprise 2025
- World Economic Forum: Global Risks Report 2024
- Carnegie Endowment: Global Expansion of AI Surveillance
- NED: Data-Centric Authoritarianism
- CSIS: G7 Hiroshima AI Process
- IAPS: International Network of AI Safety Institutes (Sumaya Nur Adan, 2024)
- OECD: AI Safety Institute Network Role
- Future of Life Institute: AI Safety Index 2024
- EA Forum: Incident Reporting for AI Safety (Stein-Perlman et al., 2023)
- Wei, K. & Heim, L. (2025): Designing Incident Reporting Systems (arXiv)
- Agarwal, A. & Nene, M. J. (2025): Standardised Schema for AI Incident Databases (arXiv)
Regulatory and Policy Documents
- Biden Administration AI Executive Order 14110 (White House)
- European Commission: EU AI Act
- G7 Hiroshima AI Process Code of Conduct