Longterm Wiki
Updated 2026-04-12

Actor Power Scorecard

Concept

A structured scorecard rating ~40 AI-era actors across governments, labs, compute providers, military bodies, and civil society on likelihood, magnitude, and expected impact; useful as a living reference for tracking power dynamics in AI development, though scores are inherently subjective snapshots and the methodology conflates capability with intent. The self-aware Methodological Caveats section substantially mitigates concerns about false precision.

The Actor Power Scorecard is a structured reference table complementing the AI Safety Multi-Actor Strategic Landscape analysis. It scores roughly forty specific actors—national governments, AI laboratories, capital providers, compute infrastructure suppliers, military commands, industry bodies, philanthropists, and think tanks—on three primary dimensions: the probability that the actor will actively exert power over AI development trajectories (likelihood, 0–1), the magnitude of that power if exercised (magnitude, 1–5), and the expected impact computed as their product. Each row also records a primary mechanism, an indicative time horizon, and brief evidence notes. The scorecard is intended as a living document; the changelog at the bottom tracks substantive revisions.

For conceptual framing of why actor power matters for AI safety, see the companion article on AI-Driven Concentration of Power and the associated Concentration of Power Systems Model.


Methodology

Scoring Dimensions

Likelihood (0–1) estimates the probability that the actor will make a consequential move—regulatory, financial, technical, or military—affecting frontier AI development within its stated time horizon. A score of 1.0 indicates near-certainty of ongoing, high-salience action; 0.1 indicates an actor with latent capability that rarely exercises it.

Magnitude (1–5) estimates the scale of impact conditional on the actor acting. The scale is roughly logarithmic:

| Score | Interpretation |
|---|---|
| 1 | Marginal: affects a single product line or research programme |
| 2 | Moderate: affects a company's strategy or a national research budget |
| 3 | Significant: shapes industry norms, standards, or multi-country policy |
| 4 | Major: shifts global supply chains, geopolitical posture, or frontier capabilities |
| 5 | Transformative: could restructure international power balances or AI development trajectories on a civilisational scale |

Expected Impact = Likelihood × Magnitude. This is an ordinal product useful for relative comparison, not a calibrated probability estimate. Scores above 3.0 warrant close monitoring; scores above 4.0 represent actors whose decisions are plausibly trajectory-altering.

Time Horizon is categorised as:

  • Short — 0–2 years
  • Medium — 2–7 years
  • Long — 7+ years
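The scoring schema above can be sketched in code. This is a minimal illustration only: the class, field names, validation, and tier labels are this sketch's own assumptions, not part of the wiki's data model, and it encodes only the three base horizon categories even though the tables combine them (e.g. "Short–Long").

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScorecardRow:
    """One scorecard row as described in the methodology (illustrative)."""
    actor: str
    likelihood: float  # 0-1: probability of a consequential move within the horizon
    magnitude: int     # 1-5: roughly logarithmic scale of impact if the actor acts
    horizon: str       # base categories: "Short" (0-2y), "Medium" (2-7y), "Long" (7y+)

    def __post_init__(self):
        if not 0.0 <= self.likelihood <= 1.0:
            raise ValueError("likelihood must be in [0, 1]")
        if self.magnitude not in range(1, 6):
            raise ValueError("magnitude must be an integer in 1..5")
        if self.horizon not in {"Short", "Medium", "Long"}:
            raise ValueError("unknown time horizon")

    @property
    def expected_impact(self) -> float:
        # Ordinal product for relative comparison, not a calibrated estimate.
        return round(self.likelihood * self.magnitude, 2)

def monitoring_tier(row: ScorecardRow) -> str:
    """Thresholds from the methodology: >4.0 plausibly trajectory-altering,
    >3.0 warrants close monitoring."""
    if row.expected_impact > 4.0:
        return "plausibly trajectory-altering"
    if row.expected_impact > 3.0:
        return "warrants close monitoring"
    return "routine tracking"
```

For example, a likelihood of 0.95 and magnitude of 5 yields an expected impact of 4.75, placing the actor in the trajectory-altering tier.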

Limitations

Scores reflect a specific snapshot of actor capabilities and incentives and will become stale. Magnitude scores are especially uncertain for actors whose power rests on novel or untested mechanisms (e.g., a newly capitalised lab or an emerging regulatory framework). Likelihood scores conflate willingness and capability; the two can diverge sharply. Users should treat this table as a structured starting point for deliberation rather than a definitive ranking.


Quick Assessment

The five highest expected-impact actors across all categories are:

| Actor | Likelihood | Magnitude | Expected Impact |
|---|---|---|---|
| US Federal Government | 0.95 | 5 | 4.75 |
| China State (CCP/State Council) | 0.95 | 5 | 4.75 |
| NVIDIA | 0.90 | 5 | 4.50 |
| TSMC | 0.85 | 5 | 4.25 |
| US Department of Defense | 0.85 | 5 | 4.25 |

Main Scorecard Table

The table is sorted by Expected Impact descending. All scores are current best estimates and subject to revision.
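The descending sort and the tied ranks seen in the summary at the end of this page can be produced with standard competition ranking (tied scores share a rank and the next distinct score skips ahead: 1, 1, 3, …). A sketch, with an illustrative helper name of my own:

```python
def rank_by_expected_impact(rows):
    """Sort (actor, expected_impact) pairs descending and assign
    competition ranks: tied scores share a rank, the next distinct
    score takes its 1-based position in the sorted order."""
    ordered = sorted(rows, key=lambda r: r[1], reverse=True)
    ranked, prev_score, prev_rank = [], None, 0
    for position, (actor, score) in enumerate(ordered, start=1):
        rank = prev_rank if score == prev_score else position
        ranked.append((rank, actor, score))
        prev_score, prev_rank = score, rank
    return ranked
```

Because Python's sort is stable, actors with equal scores keep their input order, so ties can still reflect an editorial ordering.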

Governments

| Actor | Likelihood | Magnitude | Expected Impact | Time Horizon | Primary Mechanism | Evidence / Notes |
|---|---|---|---|---|---|---|
| US Federal Government | 0.95 | 5 | 4.75 | Short–Long | Export controls, compute policy, federal R&D funding, national security directives, standards via NIST | Maintains world's largest AI R&D ecosystem; export controls on advanced chips (BIS rules) constrain global supply; Executive Orders have repeatedly reshaped lab obligations; DoD and intelligence community are major procurement drivers |
| China State (CCP/State Council) | 0.95 | 5 | 4.75 | Short–Long | State-directed industrial policy, domestic chip development, regulatory mandates, military-civil fusion | National AI strategy targets self-sufficiency; domestically mandated deployment across public sector; PLA integration; DeepSeek demonstrates frontier capability emerging from state-adjacent ecosystem |
| EU Commission | 0.85 | 4 | 3.40 | Short–Medium | Binding regulation (EU AI Act), GDPR enforcement, market access conditions | EU AI Act creates tiered obligations for "general-purpose" and "high-risk" AI; Brussels Effect means standards propagate globally; enforcement credibility being tested in 2025–2026 |
| UK Government | 0.70 | 3 | 2.10 | Short–Medium | Frontier AI Safety Institute (now DSIT), voluntary commitments, Bletchley Process diplomacy | Punches above its weight via convening power (Bletchley, Seoul, Paris summits); AI Safety Institute produces evaluations but lacks binding authority; modest domestic compute investment |
| India Government | 0.65 | 3 | 1.95 | Medium–Long | IndiaAI Mission compute procurement, data localisation, standards diplomacy | IndiaAI Mission targets $1B+ state investment in GPU clusters; large domestic market creates leverage; regulatory posture still forming; significant talent export to frontier labs |
| UAE (TII / ADMO) | 0.60 | 3 | 1.80 | Medium | Sovereign capital deployment (G42, MGX), Falcon model family, compute diplomacy | Sovereign wealth channels large compute purchases; active in brokering US–Gulf AI partnerships; Falcon series demonstrates frontier ambition; geopolitical positioning between US and China creates uncertainty |
| Japan Government | 0.55 | 3 | 1.65 | Medium | G7 Hiroshima AI Process, compute subsidies, AIST research | Hosts major multilateral process; domestic investment in semiconductor resurgence (Rapidus); limited frontier lab presence reduces direct capability leverage |
| France Government | 0.55 | 3 | 1.65 | Short–Medium | Mistral investment, EU negotiating position, Bletchley/Paris summits | Mistral partly reflects French strategic interest in European AI sovereignty; Macron government publicly champions AI investment; shapes EU AI Act implementation |
| South Korea Government | 0.50 | 3 | 1.50 | Medium | Samsung/SK Hynix HBM supply, domestic AI chip policy | Controls critical HBM memory supply chain (Samsung, SK Hynix); government export-control alignment with US determines supply availability for AI accelerators |

AI Laboratories

| Actor | Likelihood | Magnitude | Expected Impact | Time Horizon | Primary Mechanism | Evidence / Notes |
|---|---|---|---|---|---|---|
| OpenAI | 0.90 | 4 | 3.60 | Short–Medium | Frontier model deployment, API ecosystem lock-in, policy lobbying, safety norm-setting | Largest deployed user base; GPT series sets de facto capability benchmarks; $157B valuation (late 2024); safety commitments (RSP) influence industry norms; Microsoft dependency creates structural tension |
| Google DeepMind | 0.90 | 4 | 3.60 | Short–Medium | Gemini deployment at Google scale, TPU infrastructure, research publications, safety research | Alphabet integration gives unmatched distribution; TPU supply chain partially insulates from NVIDIA dependency; AlphaFold/AlphaMissense illustrate scientific leverage; strong safety research output |
| Anthropic | 0.85 | 4 | 3.40 | Short–Medium | Claude deployment, Constitutional AI research, policy engagement, RSP standard | Well-capitalised (Amazon $4B+ commitment); Claude series competitive at frontier; Constitutional AI and RSP have influenced safety norm development; Dario Amodei is a prominent public voice on risk |
| Meta AI | 0.85 | 4 | 3.40 | Short–Medium | Open-weights Llama releases, scale of compute, social-media distribution | Llama releases structurally shift the open/closed frontier debate; Meta's $65B+ 2025 capex plan expands compute; open-weights releases propagate capabilities widely, raising dual-use concerns |
| Microsoft (as lab via Azure AI) | 0.80 | 4 | 3.20 | Short–Medium | Azure compute, OpenAI integration, enterprise deployment, GitHub Copilot | Microsoft's cloud position means most OpenAI capacity runs on Azure; enterprise deployment at scale; GitHub Copilot dominates developer tooling; regulatory exposure in EU/UK from CMA review |
| xAI (Elon Musk) | 0.75 | 4 | 3.00 | Short–Medium | Grok deployment, Colossus cluster, Twitter/X distribution, regulatory interference | Colossus reportedly among largest single training clusters; X platform provides data and distribution; Musk's regulatory relationships and public statements affect policy climate; internal safety culture largely uncharted |
| DeepSeek | 0.70 | 4 | 2.80 | Short–Medium | Open-weight frontier models, chip-efficient training, geopolitical signal | R1 and V3 releases demonstrated competitive frontier performance at significantly lower reported compute cost; open-weight releases expand global access; its China base draws national-security scrutiny in Western markets |
| Mistral | 0.65 | 3 | 1.95 | Short–Medium | Open-weight European models, EU regulatory positioning | Backed by EU-friendly investors; Mistral Large competitive in enterprise; open-weight releases support EU AI sovereignty narrative; constrained by capital and compute relative to US hyperscalers |
| Apple | 0.60 | 3 | 1.80 | Short–Medium | On-device AI (Apple Intelligence), App Store policy, chip design (M-series) | 2B+ device installed base means on-device deployment at scale; privacy-preserving inference approach; App Store policies shape which AI tools reach consumers; M-series chips reduce NVIDIA dependency for inference |

Investors and Capital Providers

| Actor | Likelihood | Magnitude | Expected Impact | Time Horizon | Primary Mechanism | Evidence / Notes |
|---|---|---|---|---|---|---|
| Microsoft (as investor) | 0.90 | 4 | 3.60 | Short | $13B+ OpenAI commitment, Azure exclusivity, board observer seat | Structural dependency means Microsoft decisions shape OpenAI's roadmap and safety posture; exclusivity constrains OpenAI's multi-cloud options |
| Amazon (as investor) | 0.80 | 4 | 3.20 | Short–Medium | $4B+ Anthropic commitment, AWS compute credits, Bedrock distribution | AWS is primary compute provider for Anthropic; Bedrock positions Anthropic models for enterprise; investment structure gives Amazon indirect influence over Anthropic priorities |
| Open Philanthropy | 0.80 | 3 | 2.40 | Short–Medium | Grants to safety labs, policy organisations, field-building | Dominant funder of AI safety research ecosystem; grants to Anthropic, MIRI, CAIS, academic programmes; shapes research agendas and talent pipelines |
| SoftBank (Vision Fund) | 0.65 | 3 | 1.95 | Short–Medium | Arm Holdings stake, $100B Stargate pledge, portfolio AI companies | Arm's architecture underpins most mobile and increasingly data-centre chips; Stargate commitments, if executed, would represent one of the largest single AI infrastructure investments |
| Sequoia Capital | 0.60 | 3 | 1.80 | Short–Medium | AI startup investment, founder relationships, narrative influence | Major positions in multiple frontier-adjacent companies; partner commentary shapes investor and founder sentiment; less direct than hyperscaler investments |
| Jaan Tallinn (as philanthropist) | 0.65 | 2 | 1.30 | Short–Medium | CSER, Future of Life Institute, Survival and Flourishing Fund grants | Co-founded CSER and FLI; grantmaking shapes safety research priorities; smaller absolute capital than Open Philanthropy but high strategic focus on existential risk |
| Coefficient Giving (and aligned donors) | 0.55 | 2 | 1.10 | Medium | Coordinated philanthropic grants to safety ecosystem | Emerging donor coordination vehicle targeting AI safety; smaller than Open Philanthropy; influence primarily through grant recipient selection |

Compute Providers and Hardware

| Actor | Likelihood | Magnitude | Expected Impact | Time Horizon | Primary Mechanism | Evidence / Notes |
|---|---|---|---|---|---|---|
| NVIDIA | 0.90 | 5 | 4.50 | Short–Medium | GPU supply (H100/B200 series), CUDA software moat, pricing power | ≈80%+ data-centre AI chip market share; CUDA ecosystem creates deep switching costs; supply constraints directly gate frontier training runs; US export controls executed through NVIDIA SKU differentiation |
| TSMC | 0.85 | 5 | 4.25 | Short–Long | Leading-edge fab (3nm/2nm), sole manufacturer for most frontier chips | Produces chips for NVIDIA, Apple, AMD, and most AI accelerator designers; geographic concentration in Taiwan creates geopolitical single point of failure; no credible near-term alternative for sub-3nm |
| ASML | 0.80 | 5 | 4.00 | Medium–Long | EUV lithography monopoly, export license chokepoint | Only manufacturer of EUV machines required for sub-7nm; Dutch export licensing is a US-aligned control mechanism; long lead times mean supply effects manifest over years, not months |
| AWS (Amazon cloud) | 0.85 | 4 | 3.40 | Short | Trainium/Inferentia chips, data-centre capacity, Bedrock | Second-largest cloud; custom silicon reduces NVIDIA dependency for inference; Anthropic agreement makes AWS a structural part of safety-relevant lab infrastructure |
| Google Cloud / TPU | 0.80 | 4 | 3.20 | Short–Medium | TPU v4/v5 supply, GCP capacity, internal DeepMind compute | TPUs provide partial NVIDIA independence; internal compute advantage for DeepMind; GCP availability shapes which external researchers can access frontier compute |
| Microsoft Azure | 0.80 | 4 | 3.20 | Short | OpenAI exclusivity, ND H100 clusters, enterprise cloud | Dominant compute provider for OpenAI; ND H100 clusters among largest commercially available; Azure's availability terms effectively gate OpenAI deployment capacity |
| SK Hynix / Samsung | 0.65 | 4 | 2.60 | Short–Medium | HBM memory supply (HBM3/HBM3E), DRAM supply chain | HBM is required for high-throughput AI accelerators; SK Hynix leads HBM3E supply to NVIDIA; South Korean government policy affects export commitments |
| Intel | 0.45 | 3 | 1.35 | Medium–Long | Gaudi accelerators, IFS foundry ambitions, x86 server base | Gaudi 3 is a credible but distant second to NVIDIA H100 in some workloads; IFS foundry could provide a Western TSMC alternative but is years from frontier-node readiness |

Military and Intelligence

| Actor | Likelihood | Magnitude | Expected Impact | Time Horizon | Primary Mechanism | Evidence / Notes |
|---|---|---|---|---|---|---|
| US Department of Defense (DoD) | 0.85 | 5 | 4.25 | Short–Long | Procurement ($billions in AI contracts), DARPA research, export-control enforcement, autonomous weapons policy | Major AI procurement customer; DARPA funds long-horizon research; Joint AI Center (JAIC / CDAO) shapes military adoption norms; DoD policy on autonomous weapons has international norm implications |
| People's Liberation Army (PLA) | 0.80 | 5 | 4.00 | Medium–Long | Military-civil fusion, autonomous systems development, information warfare | Military-civil fusion blurs boundary between commercial and defence AI in China; PLA investment in autonomous systems and AI-enabled ISR; cyber and information operations use AI at scale |
| US Intelligence Community (IC) | 0.75 | 4 | 3.00 | Short–Medium | Classified AI deployment, supply-chain security reviews, export-control inputs | IC shapes export control policy through national-security assessments; operates classified AI at scale (Palantir, Booz Allen contracts); CFIUS reviews constrain foreign AI investment |
| GCHQ / UK NCSC | 0.55 | 3 | 1.65 | Short–Medium | Cyber AI, safety standard inputs, Five Eyes intelligence sharing | NCSC publishes AI security guidance; Five Eyes coordination shapes allied approaches to AI risk; smaller footprint than NSA but influential in standards |

Industry Consortia and Standards Bodies

| Actor | Likelihood | Magnitude | Expected Impact | Time Horizon | Primary Mechanism | Evidence / Notes |
|---|---|---|---|---|---|---|
| Frontier Model Forum (FMF) | 0.75 | 3 | 2.25 | Short–Medium | Voluntary safety commitments, red-teaming standards, policy lobbying | Founded by Anthropic, Google, Microsoft, OpenAI; sets voluntary norms that can pre-empt harder regulation; membership signals reputational commitment; lacks enforcement mechanism |
| NIST (AI Safety Institute) | 0.70 | 3 | 2.10 | Short–Medium | AI Risk Management Framework, AISI evaluations, standards development | NIST RMF is de facto US standard; AISI evaluations could become procurement requirements; influence depends on whether voluntary frameworks gain legal traction |
| ISO/IEC JTC 1/SC 42 | 0.50 | 3 | 1.50 | Medium | International AI standards (ISO 42001), procurement reference | ISO 42001 AI management system standard; slow-moving but used in international procurement and EU conformity assessments |
| Partnership on AI | 0.45 | 2 | 0.90 | Medium | Multi-stakeholder norms, research, civil society liaison | Broad membership but limited enforcement; useful for convening; influence has waned relative to more focused bodies |

Philanthropists and Civil Society

| Actor | Likelihood | Magnitude | Expected Impact | Time Horizon | Primary Mechanism | Evidence / Notes |
|---|---|---|---|---|---|---|
| Open Philanthropy | 0.85 | 3 | 2.55 | Short–Medium | Grant-making to safety research, policy, field-building | See investor section; treated here separately for civil-society role in shaping discourse and talent |
| Future of Life Institute (FLI) | 0.70 | 2 | 1.40 | Short–Medium | Open letters, policy advocacy, grants, AI Safety Index | FLI open letters have generated significant public and policy attention; 2025 AI Safety Index scores AGI developers; Tallinn-connected |
| Center for AI Safety (CAIS) | 0.70 | 2 | 1.40 | Short–Medium | Research, statement coordination, policy engagement | "Statement on AI risk" (2023) signed by prominent researchers; research agenda on catastrophic risk; CAIS grants and fellowships build pipeline |
| 80,000 Hours | 0.65 | 2 | 1.30 | Medium | Career advice, talent pipeline to safety roles | Directs high-ability individuals toward AI safety careers; influence is slow-moving but structurally important for field composition |
| Centre for Effective Altruism | 0.60 | 2 | 1.20 | Medium | Community infrastructure, funding coordination, narrative | CEA supports EA community infrastructure; AI safety is largest funding recipient within EA ecosystem; reputational risks from FTX collapse affected credibility |

Think Tanks and Research Organisations

| Actor | Likelihood | Magnitude | Expected Impact | Time Horizon | Primary Mechanism | Evidence / Notes |
|---|---|---|---|---|---|---|
| CSET (Georgetown) | 0.70 | 3 | 2.10 | Short–Medium | Policy research, congressional briefings, talent pipeline | Highly cited by US government staff; produces export-control and compute policy analysis that feeds directly into BIS rulemaking; CSET-trained alumni in key government roles |
| RAND Corporation | 0.65 | 3 | 1.95 | Short–Medium | DoD-contracted research, AI policy reports, wargaming | Long-standing DoD contractor; AI wargaming shapes military doctrine; RAND reports used in congressional testimony |
| Epoch AI | 0.65 | 2 | 1.30 | Short–Medium | Compute trend analysis, scaling law tracking, public data | Epoch AI data on training compute and parameter counts is frequently referenced in policy and research contexts; small team, high epistemic leverage |
| Future of Humanity Institute (FHI) | 0.40 | 3 | 1.20 | Medium–Long | Existential risk research, forecasting, technical safety | FHI closed in 2024; alumni (Bostrom, Ord, Drexler-adjacent community) continue influencing discourse; legacy publications remain widely cited |
| Brookings Institution | 0.50 | 2 | 1.00 | Short–Medium | AI governance reports, media presence, congressional testimony | Broad policy influence but less AI-specific depth than CSET; useful for mainstream political legitimacy |

Summary: Top Actors by Expected Impact

| Rank | Actor | Category | Expected Impact |
|---|---|---|---|
| 1 | US Federal Government | Government | 4.75 |
| 1 | China State (CCP/State Council) | Government | 4.75 |
| 3 | NVIDIA | Compute | 4.50 |
| 4 | TSMC | Compute | 4.25 |
| 4 | US DoD | Military | 4.25 |
| 6 | ASML | Compute | 4.00 |
| 6 | PLA | Military | 4.00 |
| 8 | OpenAI | Lab | 3.60 |
| 8 | Google DeepMind | Lab | 3.60 |
| 8 | Microsoft (as investor) | Investor | 3.60 |

Methodological Caveats and Criticisms

Several structural limitations deserve explicit acknowledgement.

Conflation of capability and intent. Likelihood scores blend an actor's capability to exert power with its willingness to do so. These can diverge sharply: ASML has enormous magnitude but its likelihood score depends heavily on Dutch and US government decisions, not ASML's own strategic preferences. Future iterations should separate capability and intent into distinct columns.

Static snapshot problem. The scorecard captures a moment. NVIDIA's dominance could erode rapidly if AMD, Groq, or state-backed alternatives (Huawei Ascend, Google TPU) mature. DeepSeek's emergence as a frontier actor in 2024–2025 was not anticipated in earlier analyses. Users should weight recent evidence heavily and treat any row with a long time horizon as especially uncertain.

Aggregation obscures heterogeneity. "US Federal Government" aggregates the White House, Congress, BIS, NIST, DoD, and IC—entities with sometimes conflicting priorities. Disaggregating these would increase analytical precision at the cost of table manageability.

Missing actors. The table focuses on actors with clear, direct leverage over frontier AI development. It underweights: (a) civil society movements and organised labour (e.g., SAG-AFTRA's AI provisions in the 2023 strike settlement have set precedents for performance data rights); (b) sub-national actors (California AI legislation, Texas compute infrastructure); (c) international organisations (ITU, UN AI Advisory Body, IAEA-parallel proposals); (d) organised research communities (NeurIPS, ICML programme committees) whose publication norms shape what is considered publishable and therefore fundable.

Magnitude scale is ordinal, not cardinal. A magnitude-5 actor is not necessarily five times as impactful as a magnitude-1 actor. The logarithmic intention is imprecisely implemented. Calibrating this against historical case studies (e.g., how much did the H100 export-control rule actually slow Chinese frontier development?) would strengthen the scoring.

Power-seeking feedback loops. The scorecard treats actor scores as independent, but many are deeply entangled. If OpenAI achieved AGI first, its magnitude score would jump immediately, and US Federal Government leverage would rise with it, since a US-based frontier leader amplifies the power of the state that can regulate, contract with, or compel it. The Power-Seeking AI risk and AI-Driven Concentration of Power analyses address these feedback dynamics more systematically.


Key Uncertainties

  • US export control durability: BIS rules on H100/A100 equivalents have already been revised multiple times. Whether controls tighten, loosen, or fragment under geopolitical pressure is the single largest near-term uncertainty for compute access scores.
  • China's semiconductor self-sufficiency timeline: SMIC progress on sub-7nm, Huawei Ascend 910C performance, and state investment levels will determine whether China's magnitude score should be revised sharply upward within 3–5 years.
  • Open-weights proliferation effect: Meta's Llama releases and DeepSeek's open-weight models mean frontier capabilities are increasingly accessible without going through high-magnitude actors. This could flatten the scorecard's power concentration story over the medium term.
  • AGI discontinuity: Any actor that achieves transformative AI capabilities significantly ahead of others would see its magnitude score become effectively unbounded. The scorecard does not model discontinuous jumps.
  • Regulatory enforcement credibility: EU AI Act magnitude depends entirely on whether the Commission enforces it against large US labs. GDPR enforcement history suggests significant latency between rule-making and material consequences.

Changelog

| Date | Change | Author |
|---|---|---|
| 2026-04-12 | Initial table created; 40 actors scored across 8 categories | LongtermWiki |

Submit suggested revisions or new actor nominations via the wiki discussion page.


Sources

All scores and notes in this article are derived from synthesised public sources. No URLs were available in the underlying research data for this article; citations are therefore descriptive.

Related Wiki Pages

Risks

  • AI-Driven Concentration of Power

Analysis

  • Concentration of Power Systems Model
  • AI Power and Influence Map
  • AI Proliferation Risk Model

Organizations

  • Epoch AI
  • Open Philanthropy
  • Future of Humanity Institute
  • Machine Intelligence Research Institute
  • Center for AI Safety
  • 80,000 Hours

Other

  • Dario Amodei