Longterm Wiki

Training Compute Thresholds: Features and Functions in AI Regulation | GovAI

government

Credibility Rating

4/5
High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Centre for the Governance of AI

A GovAI policy paper providing technical and regulatory grounding for compute-based AI governance thresholds; directly relevant to understanding the rationale behind the training compute thresholds used in the EU AI Act (10^25 FLOP) and the US AI Executive Order (10^26 FLOP).

Metadata

Importance: 72/100 · working paper · analysis

Summary

This paper evaluates training compute as a regulatory metric for identifying high-risk general-purpose AI models, arguing it is currently the best available proxy due to its correlation with capabilities, early measurability, and external verifiability. The authors position compute thresholds as an initial filter to trigger further scrutiny—such as evaluations and risk assessments—rather than as standalone determinants of mitigation requirements. The paper directly informs real-world regulatory frameworks including the EU AI Act and US executive orders.

Key Points

  • Training compute is currently the most suitable metric for regulatory oversight of GPAI models due to its quantifiability, early availability, and external verifiability.
  • Compute thresholds should function as an initial filter triggering further scrutiny (e.g., capability evaluations, risk assessments), not as direct determinants of mitigation measures.
  • Compute is an imperfect risk proxy—some high-risk models may fall below thresholds while some above may pose limited risk—so threshold design requires care.
  • The paper directly engages with how compute thresholds are implemented in US (Executive Order 14110) and EU (AI Act) regulatory frameworks.
  • As algorithmic efficiency improves over time, fixed compute thresholds will need regular recalibration to remain meaningful proxies for risk.
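The threshold logic described above can be sketched numerically. A common back-of-the-envelope heuristic from the scaling-laws literature (not a formula from the paper itself) estimates training compute as roughly 6 FLOP per parameter per training token; the function names and the example model below are hypothetical illustrations.

```python
# Hedged sketch: estimating whether a training run crosses a regulatory
# compute threshold. The 6*N*D approximation (~6 FLOP per parameter per
# training token) is a common heuristic, not the paper's own method.

def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOP via 6 * N * D."""
    return 6.0 * n_params * n_tokens

def crosses_threshold(n_params: float, n_tokens: float,
                      threshold: float = 1e26) -> bool:
    """Check the estimate against a regulatory threshold.
    US Executive Order 14110 uses 10^26 operations; the EU AI Act
    presumes systemic risk at 10^25 FLOP."""
    return estimate_training_flop(n_params, n_tokens) >= threshold

# Hypothetical 70B-parameter model trained on 15 trillion tokens:
flop = estimate_training_flop(70e9, 15e12)  # ~6.3e24 FLOP
```

Under this heuristic, such a model would fall below both thresholds; per the paper's framing, a model that does cross a threshold would thereby trigger further scrutiny (evaluations, risk assessments), not mitigation requirements directly.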

Cited by 2 pages

Page | Type | Quality
Pause Advocacy | Approach | 91.0
Compute Thresholds | Concept | 91.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 4 KB
Training Compute Thresholds: Features and Functions in AI Regulation | GovAI 

Regulators in the US and EU are using thresholds based on training compute (the number of computational operations used in training) to identify general-purpose artificial intelligence (GPAI) models that may pose risks of large-scale societal harm. We argue that training compute is currently the most suitable metric for identifying GPAI models that deserve regulatory oversight and further scrutiny. Training compute correlates with model capabilities and risks, is quantifiable, can be measured early in the AI lifecycle, and can be verified by external actors, among other advantageous features. These features make compute thresholds considerably more suitable than other proposed metrics to serve as an initial filter that triggers additional regulatory requirements and scrutiny. However, training compute is an imperfect proxy for risk. As such, compute thresholds should not be used in isolation to determine appropriate mitigation measures. Instead, they should be used to detect potentially risky GPAI models that warrant regulatory oversight (such as through notification requirements) and further scrutiny (such as via model evaluations and risk assessments), the results of which may inform which mitigation measures are appropriate. In fact, this appears largely consistent with how compute thresholds are used today. As GPAI technology and market structures evolve, regulators should update compute thresholds and incorporate complementary metrics into regulatory review processes.

Read paper

Theme: AI Regulation

Date: August 7, 2024

Authors: Lennart Heim, Leonie Koessler


 Related publications

 AI Regulation

 Requirements for Model Specifications in the EU GPAI Code of Practice

 March 2026

 Policy Brief

 Alan Chan

 The EU GPAI Code of Practice commits Signatories to providing a description of intended model behavior...

 AI Regulation

 Labeling of AI Agent Activity in Article 50 of the EU AI Act

 November 2025

 Policy Brief 

 Alan Chan

 The online activities of AI agents could distort human beliefs and behaviors. For example, humans could mistake...

 AI Regulation

 From Turing to Tomorrow: The UK's Approach to AI Regulation

 July 2025

 Research Paper

 Oliver Ritchie, Markus Anderljung, Tom Rachman

The UK has pursued a distinctive path in AI regulation: less cautious than the EU but more willing to address risks than...


... (truncated, 4 KB total)
Resource ID: d76d92e6cd91fb5d | Stable ID: sid_IFguPvLpss