Longterm Wiki

analysis in AI & Society

paper

Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Springer

A 2025 open-access paper in AI & Society relevant to anyone designing or evaluating AI safety policy, highlighting how safety regulations can be co-opted by powerful incumbents rather than serving public interest.

Metadata

Importance: 62/100 · journal article · analysis

Summary

This paper argues that AI safety regulation is particularly vulnerable to regulatory capture, where powerful incumbents exploit safety rules for economic or political advantage. It details the specific harms and injustices that captured AI safety regulations could produce, and critically reviews existing proposals to mitigate this risk, cautioning that well-intentioned safety frameworks may be weaponized by dominant industry players.

Key Points

  • AI safety regulation faces high regulatory capture risk due to the technical complexity, high stakes, and concentration of power among a few large AI firms.
  • Captured AI safety regulations could entrench incumbents, stifle competition, and create unjust barriers that harm smaller developers and the public.
  • Despite broad public support (62-79% in surveys) for AI regulation, the design of the regulatory process itself is as important as its goals.
  • Existing proposals to mitigate regulatory capture in AI safety are reviewed and found to be insufficient or inadequately developed.
  • The paper bridges political economy literature on regulatory capture with AI governance, offering a framework for evaluating AI safety policy design.

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 1 KB
# AI safety and regulatory capture
Authors: Thomas Metcalf
Journal: AI & SOCIETY
Published: 2025-08-03
DOI: 10.1007/s00146-025-02534-0
## Abstract

Researchers, politicians, and the general public support safety regulations on the production and use of AI technology. Yet regulations on new technology are susceptible to the harmful phenomenon of regulatory capture, in which organizations and institutions with economic or political power exert that power to use regulations to unjustly enrich themselves. Only a few authors have tried to raise the alarm about regulatory capture in AI safety and even fewer have described the problem and its implications in detail. Therefore, this paper has three related goals. The first goal is to argue for caution: AI safety is a field with enormous potential for such regulatory capture. Second, this paper explores, in detail, a variety of harms and injustices that captured AI-safety regulations are likely to create. The third goal, in the penultimate section, is to review and critique a few proposals that might mitigate the problem of regulatory capture of AI safety.
Resource ID: 6879cecd935a2b0c | Stable ID: sid_9eHEJzoxU0