Stanford HAI: 2025 AI Index Report - Policy and Governance
Credibility Rating
High (4/5). High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Stanford HAI
This is the policy chapter of Stanford HAI's annual AI Index, a widely-cited empirical report useful for understanding the current landscape of AI regulation and government activity relevant to safety governance discussions.
Metadata
Importance: 62/100 · Tags: organizational report, reference
Summary
The Stanford HAI 2025 AI Index Report's policy chapter tracks the rapid growth of AI-related legislation, national government AI investment strategies, and emerging international frameworks for AI safety collaboration. It provides empirical data on how governments worldwide are responding to AI development through regulatory and institutional mechanisms.
Key Points
- Documents significant year-over-year growth in AI-related legislation across multiple countries and jurisdictions.
- Tracks government investment trends in AI research, infrastructure, and safety initiatives at the national level.
- Covers international AI safety collaboration efforts, including bilateral and multilateral agreements and summits.
- Provides comparative data on different regulatory approaches across regions (e.g., EU, US, China, UK).
- Serves as an annual benchmark for the state of AI governance globally.
Review
The Stanford HAI AI Index Report reveals a dramatic acceleration in AI policy and governance efforts worldwide. State-level AI legislation in the U.S. surged from a single law in 2016 to 131 laws in 2024, demonstrating a rapid and expansive regulatory response to AI's growing impact. Governments are simultaneously investing heavily in AI infrastructure, with countries including Canada, China, France, India, and Saudi Arabia committing billions of dollars to AI and semiconductor development, signaling a global recognition of AI's strategic importance.
Particularly notable is the international coordination around AI safety, with multiple countries establishing AI safety institutes following the inaugural AI Safety Summit in November 2023. The report records a 21.3% increase in AI mentions in legislative proceedings across 75 countries in 2024, underscoring the global policy community's heightened focus on AI governance. The expansion of state deepfake regulations and the proliferation of federal AI-related regulations in the U.S. further illustrate an emerging comprehensive approach to managing AI's societal implications, balancing innovation with risk mitigation.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| US Executive Order on Safe, Secure, and Trustworthy AI | Policy | 91.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 7, 2026 · 3 KB
Policy and Governance | The 2025 AI Index Report | Stanford HAI

1. U.S. states are leading the way on AI legislation amid slow progress at the federal level. In 2016, only one state-level AI-related law was passed, increasing to 49 by 2023. In the past year alone, that number more than doubled to 131. While proposed AI bills at the federal level have also increased, the number passed remains low.
2. Governments across the world invest in AI infrastructure. Canada announced a $2.4 billion AI infrastructure package, while China launched a $47.5 billion fund to boost semiconductor production. France committed $117 billion to AI infrastructure, India pledged $1.25 billion, and Saudi Arabia's Project Transcendence includes a $100 billion investment in AI.
3. Across the world, mentions of AI in legislative proceedings keep rising. Across 75 major countries, AI mentions in legislative proceedings increased by 21.3% in 2024, rising to 1,889 from 1,557 in 2023. Since 2016, the total number of AI mentions has grown more than ninefold.
4. AI safety institutes expand and coordinate across the globe. In 2024, countries worldwide launched international AI safety institutes. The first emerged in November 2023 in the U.S. and the U.K. following the inaugural AI Safety Summit. At the AI Seoul Summit in May 2024, additional institutes were pledged in Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the European Union.
5. The number of U.S. AI-related federal regulations skyrockets. In 2024, 59 AI-related regulations were introduced, more than double the 25 recorded in 2023. These regulations came from 42 unique agencies, twice the 21 agencies that issued them in 2023.
6. U.S. states expand deepfake regulations. Before 2024, only five states (California, Michigan, Washington, Texas, and Minnesota) had enacted laws regulating deepfakes in elections. In 2024, 15 more states, including Oregon, New Mexico, and New York, introduced similar measures. Additionally, by 2024, 24 states had passed regulations targeting deepfakes.
Resource ID:
4213de3094dc4264 | Stable ID: sid_3pXYa7C6E3