AGILE Index on Global AI Safety Readiness
A February 2025 index from Chinese AI safety institutions benchmarking 40 nations on AI safety readiness; useful for comparative governance research but reflects a specific institutional and geopolitical perspective.
Summary
The GIAIS is a systematic cross-national assessment framework evaluating 40 countries across six pillars: governance environment, national institutions, governance instruments, research status, international participation, and existential safety preemption. Developed by Chinese AI safety institutions under the AGILE framework, it finds that developed countries are better prepared, international cooperation is nascent, and existential safety planning is lacking globally. It aims to serve as a diagnostic tool to identify gaps and encourage international coordination.
Key Points
- Covers 40 countries across 6 pillars and 12 dimensions, providing one of the few structured comparative assessments of national AI safety readiness.
- Key finding: no country scores well on existential safety preemption, highlighting a near-universal gap in long-term AI risk planning.
- Developed countries generally outperform others, but the global AI safety environment is described as increasingly severe.
- International AI safety cooperation is forming but lacks broad participation; governance instruments (laws, policies) exist only in select countries.
- Produced by Beijing-AISI and the Chinese Academy of Sciences, offering a non-Western institutional perspective on global AI safety governance.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Agentic AI | Capability | 68.0 |
Cached Content Preview
Global Index for AI Safety
AGILE Index on Global AI Safety Readiness
Feb 2025
GOVERNANCE ENVIRONMENT
• Cybersecurity Status
• AI Safety Incidents
GOVERNANCE INSTRUMENTS
• National Laws & Regulations related to AI Safety
• Technical & Policy Frameworks for AI Safety
NATIONAL INSTITUTIONS
• National AI Safety Institutes/Networks/Labs/Consortiums
RESEARCH STATUS
• AI Safety Publications
• AI Safety Patents
INTERNATIONAL PARTICIPATION
• Government Engagement
• Industry Engagement
• Academia & Civil Society Engagement
EXISTENTIAL SAFETY PREEMPTION
• Government Engagement
• Industry Engagement
• Center for Long-term Artificial Intelligence (CLAI)
• Beijing Institute of AI Safety and Governance (Beijing-AISI)
• Beijing Key Laboratory of Safe AI and Superalignment
• Institute of Automation, Chinese Academy of Sciences
-- 1 of 16 --
Towards A Global AI Safety Readiness Assessment
As artificial intelligence (AI) technologies experience explosive growth and proliferate across global
industries, their transformative potential is increasingly accompanied by complex safety and security
risks. From malicious exploitation and deceptive applications to privacy breaches, unintended
consequences, and existential risks, the dual-use nature of AI and its global impact demand
comprehensive safeguards and strengthened international cooperation. Against this backdrop,
understanding how countries navigate AI safety challenges—through policy innovation, technical
safeguards, and multilateral cooperation—has become critical to shaping a safe and sustainable
future.
Developed under the theoretical framework of the AI Governance InternationaL Evaluation (AGILE)
Index, this Global Index for AI Safety (GIAIS) provides a systematic assessment of national
capabilities, current status, and preparedness in addressing AI safety challenges. The index covers
six pillars: Governance Environment for AI Safety, National Institutions Targeting AI Safety,
Governance Instruments for AI Safety, Research Status on AI Safety, International Participation on
AI Safety, and Existential Safety Preemption. Together these currently comprise 12 dimensions
depicting the AI safety readiness of 40 countries.
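This excerpt does not disclose the index's weighting or aggregation formula, so as a rough illustration only, the pillar/dimension structure and an equal-weight composite (an assumption, not the report's method) could be sketched as:

```python
# Illustrative sketch only: the report excerpt does not specify its weighting
# or aggregation scheme, so equal weights are ASSUMED at every level here.
from statistics import mean

# Six pillars and twelve dimensions as listed on the report's cover page.
# The two existential-safety sub-dimensions are reconstructed from a garbled
# line and labeled "(preemption)" to keep the dictionary keys unique.
PILLARS = {
    "Governance Environment": ["Cybersecurity Status", "AI Safety Incidents"],
    "Governance Instruments": [
        "National Laws & Regulations related to AI Safety",
        "Technical & Policy Frameworks for AI Safety",
    ],
    "National Institutions": [
        "National AI Safety Institutes/Networks/Labs/Consortiums",
    ],
    "Research Status": ["AI Safety Publications", "AI Safety Patents"],
    "International Participation": [
        "Government Engagement",
        "Industry Engagement",
        "Academia & Civil Society Engagement",
    ],
    "Existential Safety Preemption": [
        "Government Engagement (preemption)",
        "Industry Engagement (preemption)",
    ],
}

def pillar_scores(dim_scores: dict) -> dict:
    """Average each pillar's dimension scores (equal weights assumed)."""
    return {p: mean(dim_scores[d] for d in dims) for p, dims in PILLARS.items()}

def composite_score(dim_scores: dict) -> float:
    """Equal-weight mean over the six pillar averages (assumed scheme)."""
    return mean(pillar_scores(dim_scores).values())
```

Under this assumed scheme, a country scoring 60 on every dimension would receive a composite of 60; the actual AGILE/GIAIS methodology may weight pillars or dimensions differently.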
Through this assessment, we find:
• Developed countries are generally better prepared to address AI safety challenges.
• The global AI safety environment has become increasingly severe in recent years.
• National AI safety institutions are rapidly emerging in various forms.
• Related laws, policies, and tools are being implemented, but only in some countries.
• AI safety research has surged, focusing on topics such as alignment and privacy security.
• International AI safety cooperation is forming but needs wider participation.
• AI existential safety preemption and planning are lacking in all countries.
The assessment does not seek to categorize countries as either paragons or laggards. AI’s safety
challenges affect us all, and no country can
... (truncated, 29 KB total)