European Commission: EU AI Act
Credibility Rating
4/5
High (4). High quality: established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: European Union
Data Status
Full text fetched Dec 28, 2025
Summary
The EU AI Act is a pioneering legal framework classifying AI systems by risk levels and setting strict rules for high-risk and potentially harmful AI applications to protect fundamental rights and ensure safety.
Key Points
- First comprehensive legal framework worldwide regulating AI across risk categories
- Prohibits eight specific AI practices that pose unacceptable risks to fundamental rights
- Introduces strict compliance requirements for high-risk AI systems
- Establishes a European AI Office for implementation and enforcement
Review
The European Commission's AI Act represents a landmark global initiative in AI governance, introducing a comprehensive, risk-based regulatory approach to artificial intelligence. By categorizing AI systems into four risk levels (unacceptable, high, transparency, and minimal risk), the Act aims to balance innovation with fundamental rights protection and public safety. The methodology combines proactive prohibition of clearly dangerous AI practices with stringent compliance requirements for high-risk systems, including rigorous risk assessment, dataset quality controls, transparency obligations, and human oversight mechanisms. This nuanced approach sets a precedent for responsible AI development, addressing critical concerns about algorithmic bias, privacy violations, and potential societal harm. While the Act provides a robust framework, its long-term effectiveness will depend on implementation, technological adaptation, and international collaboration in AI governance.
Cited by 5 pages
| Page | Type | Quality |
|---|---|---|
| Should We Pause AI Development? | Crux | 47.0 |
| International AI Coordination Game Model | Analysis | 59.0 |
| AI Policy Effectiveness | Analysis | 64.0 |
| Model Registries | Policy | 68.0 |
| AI-Driven Institutional Decision Capture | Risk | 73.0 |
Cached Content Preview
HTTP 200 | Fetched Feb 23, 2026 | 16 KB
AI Act | Shaping Europe’s digital future
AI Act
The AI Act is the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.
The AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) is the first-ever comprehensive legal framework on AI worldwide. The aim of the rules is to foster trustworthy AI in Europe. For any questions on the AI Act, check out the AI Act Single Information platform.
The AI Act sets out risk-based rules for AI developers and deployers regarding specific uses of AI. The AI Act is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Continent Action Plan, the AI Innovation Package and the launch of AI Factories. Together, these measures guarantee safety, fundamental rights and human-centric AI, and strengthen uptake, investment and innovation in AI across the EU.
To facilitate the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative that seeks to support future implementation, engage with stakeholders, and invite AI providers and deployers from Europe and beyond to comply with the key obligations of the AI Act ahead of time. In parallel, the AI Act Service Desk provides information and support for a smooth and effective implementation of the AI Act across the EU.
Why do we need rules on AI?
The AI Act ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes.
For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.
Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.
A Risk-based Approach
The AI Act defines 4 levels of risk for AI systems:
Unacceptable risk
All AI systems considered a clear threat to the safety, livelihoods and rights of people are banned. The AI Act prohibits eight practices, namely:
harmful AI-based manipulation and deception
harmful AI-based exploitation of vulnerabilities
social scoring
individual criminal offence risk assessment or prediction
untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
emotion recognition in workplaces and education institutions
biometric categorisation to deduce certain protected characteristics
real-time remote bi
... (truncated, 16 KB total)
Resource ID: acc5ad4063972046 | Stable ID: NDRlMGJiNj