AI Standards Development
International and national organizations that develop technical standards for AI systems, including measurement methodologies, safety requirements, and evaluation protocols. Key bodies include ISO/IEC JTC 1/SC 42, NIST, and IEEE, whose standards increasingly inform government AI regulation.
Related Pages
NIST AI Risk Management Framework (AI RMF)
US federal voluntary framework for managing AI risks, with 40-60% Fortune 500 adoption and influence on federal policy through Executive Orders, bu...
Large Language Models
Foundation models trained on text that demonstrate emergent capabilities and represent the primary driver of current AI capabilities and risks, wit...
AI Disinformation
AI enables disinformation campaigns at unprecedented scale and sophistication, transforming propaganda operations through automated content generat...
EU AI Act
The world's first comprehensive AI regulation, adopting a risk-based approach to regulate foundation models and general-purpose AI systems
China AI Regulatory Framework
Comprehensive analysis of China's iterative, sector-specific AI regulatory framework, covering 5+ major regulations affecting 50,000+ companies wit...
Approaches
Other
Quick Facts
- Status: Rapidly developing
Sources
- ISO/IEC JTC 1/SC 42 Artificial Intelligence (ISO)
- IEEE Ethically Aligned Design (IEEE)
- EU AI Act Standardisation (European Commission)