RAND research on AI regulatory capture
Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: RAND Corporation
A RAND Corporation research brief relevant to AI governance discussions around regulatory independence; useful for understanding institutional risks in AI oversight design and policy debates about who should set AI safety standards.
Metadata
Importance: 58/100 · Tags: policy brief, analysis
Summary
This RAND research brief examines the risk of regulatory capture in AI governance, where AI developers and industry actors may unduly influence the regulatory bodies meant to oversee them. It analyzes structural vulnerabilities in AI oversight mechanisms and offers policy recommendations to mitigate industry capture of AI safety regulations.
Key Points
- Regulatory capture occurs when regulated industries gain disproportionate influence over the agencies meant to oversee them, posing particular risks in the fast-moving AI sector.
- The concentration of AI expertise in private industry creates an information asymmetry that can disadvantage regulators and increase capture risk.
- Revolving-door dynamics between government and AI companies may compromise the independence of regulatory bodies.
- Structural safeguards such as diverse stakeholder representation, transparency requirements, and independent technical capacity can reduce capture risk.
- Effective AI governance requires proactive institutional design to prevent industry interests from shaping safety standards in self-serving ways.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Governance-Focused Worldview | Concept | 67.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 16 KB
Managing Industry Influence in U.S. AI Policy | RAND
The Wayback Machine - https://web.archive.org/web/20260214181841/https://www.rand.org/pubs/research_briefs/RBA3679-1.html
RB-A3679-1
Managing Industry Influence in U.S. AI Policy
Kevin Wei, Carson Ezell, Nicholas Gabrieli, Chinmay Deshpande
Research Summary · Published Dec 13, 2024
Key Findings
AI companies' policy influence: As of 2024, AI industry actors are attempting to influence U.S. AI policy through many direct and indirect channels, primarily through agenda-setting, advocacy activities, influence in academia and research, and information management. Industry influence could cause regulatory capture when it results in policy outcomes that are detrimental to the public interest.
Recommendations for policymakers: To manage industry influence in U.S. AI policy and prevent regulatory capture, policymakers should
- invest in building robust civil society institutions, such as with independent funding streams;
- consider procedural and institutional safeguards, including robust ethics requirements;
- build technical capacit
... (truncated, 16 KB total)
Resource ID: ac694cfb5ffc2a60 | Stable ID: sid_xAj0A4MUpK