US-China perspectives on extreme AI risks and global governance
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Comparative analysis of US and China expert perspectives on extreme AI risks and international governance, providing insights into how leading technical and policy figures in each country conceptualize AI safety challenges and cooperation possibilities.
Paper Details
Metadata
Abstract
The United States and China will play an important role in navigating safety and security challenges relating to advanced artificial intelligence. We sought to better understand how experts in each country describe safety and security threats from advanced artificial intelligence, extreme risks from AI, and the potential for international cooperation. Specifically, we compiled publicly-available statements from major technical and policy leaders in both the United States and China. We focused our analysis on advanced forms of artificial intelligence, such as artificial general intelligence (AGI), that may have the most significant impacts on national and global security. Experts in both countries expressed concern about risks from AGI, risks from intelligence explosions, and risks from AI systems that escape human control. Both countries have also launched early efforts designed to promote international cooperation around safety standards and risk management practices. Notably, our findings only reflect information from publicly available sources. Nonetheless, our findings can inform policymakers and researchers about the state of AI discourse in the US and China. We hope such work can contribute to policy discussions around advanced AI, its global security threats, and potential international dialogues or agreements to mitigate such threats.
Summary
This study analyzes publicly available statements from technical and policy leaders in the United States and China to understand how experts in each country perceive safety and security threats from advanced AI, particularly artificial general intelligence (AGI). The research finds that experts in both countries share concerns about AGI risks, intelligence explosions, and loss of human control over AI systems. Both nations have initiated early efforts toward international cooperation on safety standards and risk management. The findings aim to inform policymakers and researchers about AI safety discourse in these two major powers and support discussions on mitigating global AI security threats through potential international agreements.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| China AI Regulatory Framework | Policy | 57.0 |
Cached Content Preview
US-China perspectives on extreme AI risks and global governance
Akash R. Wasil, Georgetown University (author correspondence: aw1404@georgetown.edu)
Tim Durgin, Independent
1 Introduction
Artificial intelligence is a transformative technology with major implications for national and global security. Many AI experts have expressed concerns about AI-related national and global security threats. Examples include risks from AI-enabled biological weapons, risks from autonomous AI systems that escape human control, and risks from AI applied in military operations (Bengio et al., 2024; Hendrycks et al., 2023).
The United States and China are the world’s leaders in AI development. In 2022, the AI market in the US amounted to USD 103.7B (Statista-1, 2024), while the Chinese AI market was approximately USD 40B (Statista-2, 2024). As of 2022, 26 percent of the world’s top AI researchers came from China, while 28 percent came from the United States (Yang, 2024). US companies dominate the top of the large language model (LLM) rankings: examples include OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini (Guinness, 2024). While most Chinese LLMs lag behind these systems, there has been notable progress in recent years. Specifically, Moonshot AI’s Kimi model can, under certain conditions, achieve performance comparable to GPT-4 (Zhang, 2023).
We aimed to acquire a better understa
... (truncated, 29 KB total)