EU AI Act

Comprehensive overview of the EU AI Act's risk-based regulatory framework, particularly its two-tier approach to foundation models that distinguishes between standard and systemic risk AI systems. The analysis provides valuable implementation details and governance structure but cuts off before addressing key criticisms and global implications.

Scope: Risk-based
Type: Binding Regulation
Related Policies: Compute Governance, UK AI Safety Institute
Related Organizations: GovAI

Quick Assessment

| Dimension | Assessment |
|---|---|
| Type | Comprehensive AI regulation |
| Scope | EU member states (with extraterritorial reach) |
| Adopted | May 21, 2024 |
| Entry into Force | August 1, 2024 |
| Full Applicability | August 2, 2026 |
| Key Innovation | Risk-based tiered regulation of foundation models |
| Maximum Penalties | €35M or 7% of global turnover |
| Enforcement Body | European AI Office |
| Official Website | europarl.europa.eu |

Overview

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework regulating artificial intelligence, representing a landmark attempt to govern AI systems based on their potential risks to safety, fundamental rights, and society.12 The regulation adopts a risk-based approach that classifies AI systems into four tiers—unacceptable risk (prohibited), high-risk (strict obligations), limited risk (transparency requirements), and minimal risk (largely unregulated)—with special provisions for foundation models and general-purpose AI (GPAI) systems.34

Originally proposed by the European Commission in April 2021, the Act underwent intense negotiations before political agreement was reached on December 9, 2023.56 The regulation entered into force on August 1, 2024, with phased implementation: prohibitions on unacceptable-risk systems took effect February 2, 2025, GPAI model obligations apply from August 2, 2025, and most high-risk provisions become fully applicable by August 2, 2026.78

The regulation has sparked significant controversy, particularly regarding its approach to foundation models. Critics argue it may stifle European AI innovation while supporters contend it provides necessary safeguards and legal certainty. The Act's two-tiered framework for GPAI models—distinguishing between standard foundation models and those posing "systemic risk"—emerged from contentious negotiations between the European Parliament, Council, and member states including France, Germany, and Italy.910

Legislative History

Proposal and Development (2021-2023)

The European Commission first proposed the AI Act on April 21, 2021, initiating the EU's effort to create comprehensive AI regulation focused on risk-based rules for AI systems.1112 The proposal was driven by concerns over AI's potential harms and aimed to balance innovation with safety through prohibitions on unacceptable risks and obligations for high-risk systems.

Key early milestones included:

  • November 29, 2021: EU Council presidency shared the first compromise text, adjusting rules on social scoring, biometrics, and high-risk AI13
  • December 1, 2021: European Parliament assigned lead negotiators Brando Benifei (S&D, Italy) and Dragoş Tudorache (Renew, Romania)14
  • September 5, 2022: Parliament's JURI committee adopted its opinion on the AI Act15
  • December 6, 2022: EU Council adopted its general approach for negotiations16

Foundation Models Controversy (2023)

Foundation models became highly controversial during the legislative process, marking a significant shift from the initial 2021 Commission proposal, which had focused on risk-based categorization of AI applications rather than regulating the models themselves.17 The emergence of powerful systems like ChatGPT in late 2022 catalyzed intense debate over how to regulate these general-purpose systems.

The European Parliament introduced formal provisions on foundation models in June 2023 when it adopted its negotiating position.18 This set up a fundamental conflict:

  • Parliament's position: Advocated for treating foundation models similar to high-risk systems, with quality management systems, EU database registration, and strict obligations19
  • Council/Commission position: Favored lighter regulation through voluntary codes of conduct, with delayed and less stringent requirements20

France, Germany, and Italy emerged as key opponents of strict foundation model regulation, arguing it would harm European AI competitiveness and innovation.21 Negotiations broke down in November 2023 as these member states opposed tiered rules for high-impact models developed mostly by non-EU firms, threatening to derail the entire Act.22

Political Agreement and Adoption (2023-2024)

After marathon negotiations, political agreement was reached on December 9, 2023.2324 The compromise distinguished between:

  1. General GPAI models: Subject to transparency obligations, copyright disclosure requirements, and technical documentation
  2. Systemic risk GPAI models: Additional requirements for risk assessments, incident reporting, evaluations, and cybersecurity measures

Subsequent milestones included:

  • February 13, 2024: Parliament committees approved the draft (71-8 vote); EU member states unanimously endorsed the text25
  • February 21, 2024: European AI Office launched within the Commission to oversee GPAI implementation26
  • March 13, 2024: European Parliament passed the Act (523 for, 46 against, 49 abstentions)27
  • May 21, 2024: European Council formally adopted the regulation28
  • July 12, 2024: Published in the Official Journal of the European Union29
  • August 1, 2024: Entered into force30

Risk-Based Regulatory Framework

Four-Tier Classification System

The AI Act establishes four risk levels for AI systems, each with corresponding obligations:

| Risk Level | Examples | Requirements | Timeline |
|---|---|---|---|
| Unacceptable | Social scoring, subliminal manipulation, real-time biometric ID in public (with exceptions), untargeted facial recognition scraping | Complete prohibition | February 2, 202531 |
| High-Risk | CV-scanning tools, critical infrastructure AI, AI in education/employment/law enforcement, product safety systems | Mandatory risk assessments, EU database registration, conformity assessment, CE marking, human oversight, lifecycle monitoring | August 2, 202632 |
| Limited | Chatbots, deepfakes, emotion recognition systems | Transparency obligations (label AI-generated content, disclose AI interaction) | August 2, 202633 |
| Minimal | Spam filters, video games, AI-enabled inventory systems | No specific obligations beyond existing laws | Already applicable34 |
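The tiers and applicability dates above can be captured in a small lookup. This is an illustrative sketch only: the enum, mapping, and function names are my own, with the dates taken from the table.

```python
from datetime import date
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Dates on which each tier's obligations become applicable (per the table).
APPLICABILITY = {
    RiskTier.UNACCEPTABLE: date(2025, 2, 2),  # prohibitions
    RiskTier.HIGH: date(2026, 8, 2),          # high-risk obligations
    RiskTier.LIMITED: date(2026, 8, 2),       # transparency obligations
    RiskTier.MINIMAL: None,                   # no specific obligations
}

def obligations_apply(tier: RiskTier, on: date) -> bool:
    """True if the tier's obligations are in force on the given date."""
    start = APPLICABILITY[tier]
    return start is not None and on >= start

print(obligations_apply(RiskTier.UNACCEPTABLE, date(2025, 6, 1)))  # True
print(obligations_apply(RiskTier.HIGH, date(2025, 6, 1)))          # False
```

Minimal-risk systems always return `False` here because no tier-specific obligations ever attach to them.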

Prohibited AI Practices

The Act bans AI practices deemed to pose unacceptable risks to fundamental rights:3536

  1. Subliminal manipulation techniques that exploit vulnerabilities
  2. Social scoring systems by public authorities
  3. Real-time biometric identification in public spaces (with limited law enforcement exceptions)
  4. Emotion recognition in workplaces and educational institutions
  5. Untargeted scraping of facial images from the internet or CCTV
  6. Inference of sensitive characteristics (race, political opinions, sexual orientation)
  7. Biometric categorization systems that discriminate
  8. AI systems that manipulate human behavior to circumvent free will

These prohibitions took effect on February 2, 2025, making them the first enforceable provisions of the Act.37

Foundation Models and GPAI Regulation

Defining General-Purpose AI

The Act regulates foundation models under the category of "general-purpose AI" (GPAI) models, defined as AI systems trained on large amounts of data that can perform a wide variety of tasks across different applications.3839 This includes large language models like GPT-4, Claude, and Llama, as well as multimodal foundation models.

GPAI models are regulated separately from application-specific AI systems because of their adaptability across multiple downstream uses, some of which may be high-risk even if the model itself was not designed for those purposes.40

Two-Tier GPAI Framework

The Act establishes a two-tier approach for regulating foundation models, distinguishing between standard GPAI and "systemic risk" GPAI:4142

Tier 1: All GPAI Models (Standard Obligations)

Applicable to all general-purpose AI model providers, regardless of size or capability:4344

  • Technical documentation detailing model architecture, training data, and capabilities
  • Transparency to downstream deployers about model limitations and intended uses
  • Information summaries about copyrighted training data content
  • Policy to comply with EU copyright law (Directive (EU) 2019/790)
  • Data governance and quality management practices
  • EU database registration for models integrated into high-risk systems

Tier 2: Systemic Risk GPAI Models (Enhanced Obligations)

Designated based on compute thresholds (>10²⁵ FLOPs for training) or Commission assessment of capabilities, market impact, and potential for widespread harm:4546

  • Model evaluations and adversarial testing to identify systemic risks
  • Assessment and mitigation of risks to health, safety, fundamental rights, environment, democracy, and rule of law
  • Serious incident reporting to the AI Office
  • Cybersecurity measures and monitoring of downstream applications
  • Enhanced quality management systems
  • Regular reporting to the AI Office on risk management measures
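The compute-based presumption can be sketched as a back-of-the-envelope check. Note the caveats: the Act refers only to cumulative training compute exceeding 10²⁵ FLOPs; the 6 × parameters × tokens estimate is a common scaling heuristic, not part of the regulation, and all names here are illustrative.

```python
# Systemic-risk presumption threshold from the Act: >10^25 training FLOPs.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the common 6*N*D heuristic
    (not defined by the Act, which sets only the FLOPs threshold)."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the Act's threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 ≈ 6.3e24 FLOPs, below the threshold.
print(presumed_systemic_risk(7e10, 1.5e13))   # False
# A hypothetical 1.8T-parameter model on 13T tokens: ≈1.4e26 FLOPs, above it.
print(presumed_systemic_risk(1.8e12, 1.3e13))  # True
```

The Commission can also designate models below the threshold based on capabilities and market impact, which no formula captures.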

Implementation Timeline for GPAI

GPAI obligations apply via a phased approach:4748

  • August 2, 2025: GPAI obligations take effect for general-purpose AI models newly placed on the market
  • August 2, 2027: Existing GPAI models placed on the market before August 2, 2025 must achieve full compliance49

New GPAI models released after August 2, 2025 must comply immediately, while existing models receive a two-year grace period.50
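The grace-period rule above reduces to a simple date comparison. A minimal sketch, with illustrative constant and function names:

```python
from datetime import date

GPAI_OBLIGATIONS_START = date(2025, 8, 2)      # new models comply from here
LEGACY_COMPLIANCE_DEADLINE = date(2027, 8, 2)  # deadline for pre-existing models

def gpai_compliance_deadline(placed_on_market: date) -> date:
    """Date by which a GPAI model must comply, per the phased timeline."""
    if placed_on_market < GPAI_OBLIGATIONS_START:
        # Models already on the market get the two-year grace period.
        return LEGACY_COMPLIANCE_DEADLINE
    # New models must comply immediately upon market placement.
    return placed_on_market

print(gpai_compliance_deadline(date(2024, 3, 1)))   # 2027-08-02
print(gpai_compliance_deadline(date(2026, 1, 15)))  # 2026-01-15
```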

GPAI Code of Practice

To facilitate compliance, the AI Office developed a voluntary GPAI Code of Practice through a multi-stakeholder process involving nearly 1,000 participants.51 The code provides guidance on:

  • Transparency requirements for training data
  • Copyright compliance mechanisms
  • Risk assessment methodologies
  • Governance structures for systemic risk models

The Chairs and Vice-Chairs were a crucial component of the Code of Practice drafting process:5253

| Name | Role | Background |
|---|---|---|
| Nuria Oliver | Chair | Director, ELLIS Alicante Foundation; PhD in AI from MIT; IEEE/ACM Fellow |
| Yoshua Bengio | Chair | Turing Award winner; Professor at Université de Montréal; Founder of Mila |
| Alexander Peukert | Co-Chair (Copyright) | Professor of Civil/Commercial/Information Law, Goethe University Frankfurt |
| Marietje Schaake | Chair | Fellow at Stanford Cyber Policy Center & Institute for Human-Centred AI |
| Daniel Privitera | Vice-Chair | Founder/Executive Director, KIRA Center; Lead Writer of International Scientific Report on Advanced AI Safety |
| Markus Anderljung | Vice-Chair | Director of Policy & Research, Centre for the Governance of AI |

The Commission and AI Board have confirmed this code as an adequate voluntary compliance tool that may serve as a mitigating factor when determining fines for violations.54

Governance and Enforcement

European AI Office

The AI Office, established within the European Commission's Directorate-General for Communication Networks, Content and Technology (DG CNECT), serves as the primary enforcement body for GPAI models.5556 Launched on February 21, 2024, it became operational for enforcement purposes on August 2, 2025.57

Enforcement is split between levels: the European Commission, acting through the AI Office, enforces the Act's GPAI provisions at EU level, while most other provisions are enforced by the EU Member States.58 Key AI Office figures include:

  • Lucilla Sioli: Heads the AI Office (former DG CNECT Director for AI)
  • Dragoş Tudorache: Heads the unit on risks of very capable GPAI models (former co-leader of the AI Act in Parliament)
  • Kilian Gross: Leads the unit on compliance, uniform enforcement, and investigations (key EU AI Act negotiator)
  • Juha Heikkilä: AI Office Adviser for International Affairs

The AI Office has significant enforcement powers:59

  • Requesting information and documentation from GPAI providers
  • Conducting model evaluations and capability assessments
  • Requiring risk mitigation measures
  • Recalling models from the market
  • Imposing fines up to 3% of global annual turnover or €15 million, whichever is higher

Multi-Level Governance Structure

Beyond the AI Office, a multi-level governance structure distributes responsibility, with national authorities carrying out market surveillance for most AI systems:6061

  • AI Board: Comprises representatives from EU member states; coordinates national enforcement and provides technical expertise
  • Advisory Forum: Brings together stakeholders from industry, academia, civil society, and social partners
  • Scientific Panel: Independent experts advise the AI Office on evaluating foundation model capabilities and monitoring material safety risks
  • National Authorities: Handle complaints and enforcement for high-risk AI systems not covered by the AI Office
  • European Data Protection Supervisor (EDPS): Has authority to impose fines on EU institutions for non-compliance

Penalty Structure

The Act establishes tiered penalties based on violation severity:62

| Violation Type | Maximum Fine |
|---|---|
| Prohibited AI systems (unacceptable risk) | €35 million or 7% of global annual turnover |
| High-risk system violations | €15 million or 3% of global annual turnover |
| Incorrect information to authorities | €7.5 million or 1.5% of global annual turnover |

For startups and SMEs, the Act caps fines to avoid disproportionate financial burden.63 The turnover-based calculation means violations by major technology companies could result in penalties exceeding hundreds of millions of euros.
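The "€X million or Y% of turnover" formula resolves to whichever amount is higher, inverted for SMEs, whose fines are capped at the lower of the two under the Act's proportionality rule. A minimal sketch with illustrative names (percentages passed as whole numbers to keep the arithmetic exact):

```python
def max_fine(fixed_eur: float, pct_percent: float,
             global_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine: the higher of the fixed amount and the turnover share;
    for SMEs, the lower of the two (proportionality cap)."""
    turnover_based = global_turnover_eur * pct_percent / 100
    if is_sme:
        return min(fixed_eur, turnover_based)
    return max(fixed_eur, turnover_based)

# Prohibited-practice violation by a firm with €200B global turnover:
print(max_fine(35e6, 7, 200e9))               # 14000000000.0 (7% = €14B)
# The same violation by an SME with €10M turnover:
print(max_fine(35e6, 7, 10e6, is_sme=True))   # 700000.0
```

For a small company, the turnover percentage dominates downward; for a large one, it dominates upward, which is why hyperscale providers face the largest nominal exposure.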

Controversies and Criticisms

Innovation vs. Regulation Tensions

The Act's approach to foundation models has sparked intense debate over whether it strikes the right balance between safety and innovation. France, Germany, and Italy actively opposed strict GPAI regulations during negotiations, arguing they would harm European AI competitiveness against US and Chinese firms.6465 French startup Mistral AI, German company Aleph Alpha, and other European AI developers expressed concerns that compliance burdens would disadvantage them against better-resourced non-EU competitors like OpenAI and Anthropic.

Critics warn the Act's regulatory approach contrasts sharply with the US, where voluntary industry standards and executive orders provide more flexible governance frameworks.66

Conversely, proponents argue the Act creates legal certainty that will ultimately attract investment and protect downstream deployers from compliance burdens, enabling a trustworthy AI ecosystem.6768 Regulatory sandboxes and the AI innovation package (launched January 2024) aim to support European startups and SMEs in developing compliant AI systems.69

Definitional Ambiguities and Scope Concerns

The Act leaves several definitional questions open; for instance, it is unclear whether fine-tuning an existing model constitutes placing a novel foundation model on the market:7071

  • "Systemic risk" definition: The criteria for designating models as posing systemic risk—including compute thresholds, capabilities, market impact, and scalability—lack precision and may become outdated as technology advances
  • "Substantial modification" threshold: Uncertainty about when fine-tuning or adaptation of foundation models triggers full compliance obligations
  • Training data disclosure: Requirements for "summaries" of copyrighted training data content remain poorly defined, creating intellectual property disputes
  • Dual-use potential: Even controlled foundation models retain dual-use capabilities, raising questions about the effectiveness of use-case-based regulation

The Act's broad definition of AI systems, derived from OECD frameworks, applies to virtually all EU organizations using AI, creating significant compliance challenges.72

Epistemic and Methodological Limitations

Academic critics have identified fundamental epistemic gaps in the Act's risk assessment approach.73 The argument holds that traditional risk assessments fail for probabilistic, socio-technical foundation models, creating false regulatory confidence. Specifically:

  • The Act overlooks deployment contexts, institutional oversight, and governance structures in favor of technical fixes
  • Fixed AI categories risk rapid obsolescence as technology evolves, unlike more adaptive governance frameworks
  • Causal links between model capabilities and societal harms are assumed without robust evidence, ignoring real-world socio-technical factors
  • The Act lacks anticipatory mechanisms for iterative revision based on emerging AI research

Stanford's Center for Research on Foundation Models (CRFM) noted that the compute-based threshold for systemic risk (10²⁵ FLOPs) diverges from their proposal to focus on demonstrated market impact, potentially missing dangerous models that don't meet the compute threshold.74

Missing Safeguards for Advanced AI Safety

AI safety researchers have criticized the Act for insufficient provisions addressing existential risks from advanced AI systems.7576 While the Act mandates evaluations for systemic risks, it lacks:

  • Requirements for AI alignment research (ensuring advanced systems' goals match human values)
  • Provisions for third-party researcher access to models for safety evaluations
  • Adverse event reporting mechanisms comparable to pharmaceutical or aviation safety systems
  • Explicit coverage of scenarios involving misaligned superintelligent systems

The focus on immediate deployment risks (manipulation, discrimination, privacy violations) rather than model-level capabilities means the Act may not adequately address risks from future highly capable AI systems. Organizations like the AI Now Institute (in a report by 50+ experts) warned that foundation models have inherent risks in their training data and architectures that use-case regulation cannot fully address.77

Enforcement Challenges and Implementation Delays

Member States will each designate their own national competent authorities, which may produce divergent interpretations and enforcement practices:7879

  • Regulatory capacity: Whether the AI Office and national authorities have sufficient expertise and resources to effectively monitor rapidly evolving foundation models
  • Extraterritorial reach: Questions about enforcing requirements on non-EU providers, especially for open-source models uploaded from outside the EU
  • Compliance burden: Particularly for SMEs and researchers who may lack resources for extensive documentation and risk assessments
  • Loopholes: Compute-intensive models may quickly become outdated as a metric; workarounds for training data disclosure requirements

Proposed amendments have sought to address some concerns by delaying high-risk system obligations to December 2027, but privacy advocates like Max Schrems have criticized provisions allowing AI training on special category data to fix bias as undermining GDPR protections.80

The Act's phased implementation has also faced criticism. While prohibited systems were banned in February 2025, full GPAI compliance is not required until between August 2025 and August 2027, creating a window in which potentially risky systems can operate under legacy frameworks.81

Relationship to Other Regulatory Frameworks

The AI Act operates alongside several other EU digital regulations:82

  • GDPR: Data protection requirements for AI training data; controversies over "legitimate interests" basis for processing
  • Digital Services Act (DSA): Enforcement of AI-generated content moderation; recent cases include €120M fines against platform X in December 2025 for failing to control AI-generated sexual content
  • Copyright Directive: Article 50 transparency deadline in 2026 for training data disclosure
  • Proposed Chat Control: Ongoing debates about scanning for child abuse material, with temporary measures extended to April 2026

The January 21, 2026 joint opinion by the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) criticized unclear divisions between the AI Office's role for GPAI supervision and EDPS oversight, calling for amendments to avoid governance overlaps.83

Shifting Political Priorities (2025-2026)

Recent political discourse has shown a shift from the Act's original fundamental rights framing toward concerns about over-regulation harming competitiveness.84 As of early 2026:

  • European policymakers increasingly emphasize the risk that strict AI rules may cause Europe to fall further behind the US and China
  • Proposals circulate for simplifying GPAI obligations and extending transition periods
  • The contrast with US deregulation under recent executive orders raises questions about regulatory arbitrage
  • Parliament debates in January 2026 focused on stronger DSA-AI Act synergies for enforcement against deepfakes and illegal AI-generated content

These developments suggest the Act's implementation may evolve significantly based on economic performance and global competitive dynamics.

International Cooperation and Global Impact

Extraterritorial Reach

The AI Act applies not only to providers and deployers operating within the EU, but also to organizations outside the EU whose AI systems target or affect EU users.8586 This extraterritorial scope, similar to GDPR's global reach, means:

  • US, UK, and other non-EU AI companies must comply if their systems are used by EU customers
  • American employers using AI for EU-targeted outputs (e.g., automated hiring tools processing EU applicants) fall under the Act's jurisdiction
  • Multinational organizations must navigate compliance across different regulatory regimes

The Act's global influence extends through its potential to serve as a model for other jurisdictions considering AI regulation, though critics question whether it will inspire adoption or enable regulatory arbitrage as companies shift development to less restrictive environments.87

Bilateral and Multilateral Engagement

The AI Office leads the Commission's international engagement in the field of AI:88

  • Bilateral cooperation: Partnerships with Canada, US, India, Japan, South Korea, Singapore, Australia, and UK
  • Multilateral forums: Participation in G7, G20, OECD, and Global Partnership on AI discussions
  • US-EU AI collaboration: January 27, 2023 agreement to conduct joint research on AI applications in extreme weather forecasting, emergency response, health, electric grid optimization, and agriculture89

The AI Pact, initiated in May 2023, fosters voluntary industry commitment to implement AI Act requirements ahead of legal deadlines and serves as a coordination forum across jurisdictions.90

Support for Innovation

The legislators recognize that AI tools and systems can be strong drivers of business innovation, and do not want companies, especially SMEs, to be hamstrung by excessive regulation or crowded out by industry giants with outsized influence.9192 Support measures include:

  • Regulatory sandboxes: National authorities establish controlled environments for testing AI before market launch
  • AI innovation package: Launched January 2024 to support European startups and SMEs
  • GenAI4EU initiative: Stimulates generative AI adoption across strategic EU industrial ecosystems
  • AI Factories and Gigafactories: Infrastructure investments for AI development
  • InvestAI Facility: Funding mechanism for trustworthy AI projects
  • AI Skills Academy: Planned educational initiative to build EU AI talent

An AI Observatory tracks AI trends and assesses impacts across specific sectors, while the Apply AI Alliance serves as a coordination forum bringing together AI providers, industry, public sector, academia, and civil society.93

Key Uncertainties

Several major questions remain about the Act's effectiveness and evolution:

  1. Enforcement effectiveness: Will the AI Office and national authorities have sufficient capacity to meaningfully oversee rapidly advancing foundation models, particularly those developed by well-resourced non-EU companies?

  2. Innovation impact: Will the regulatory framework ultimately protect European competitiveness by providing legal certainty, or will compliance burdens drive AI development and deployment to less restrictive jurisdictions?

  3. Risk assessment validity: Can the Act's risk-based approach adequately address the rapidly evolving capabilities of foundation models, or will fixed categories and compute thresholds quickly become obsolete?

  4. Advanced AI safety: Does the Act provide sufficient safeguards for potential risks from highly capable future AI systems, or does its focus on deployment-level harms miss model-level dangers?

  5. Regulatory arbitrage: How will the Act's approach interact with US deregulation and Chinese state-directed AI development? Will global companies develop separate systems for different jurisdictions or push for regulatory harmonization?

  6. Implementation consistency: Will the phased rollout and proposed delays (e.g., extending high-risk obligations to December 2027) undermine the Act's effectiveness, or will they provide necessary flexibility for organizations to adapt?

  7. Systemic risk designation: How will the Commission operationalize its discretion to designate foundation models as posing systemic risk beyond the compute threshold? Will this process be transparent and predictable?

  8. Open-source implications: How will the Act affect open-source foundation models like Llama, Mistral, and others? Will compliance burdens disproportionately affect open development compared to proprietary systems?

Sources

Footnotes

  1. Software Improvement Group - EU AI Act Summary
  2. European Commission - Regulatory Framework for AI
  3. IBM - EU AI Act Topics
  4. DNV - Introduction to the EU's AI Act
  5. Eyreact - When was EU AI Act passed
  6. Alexander Thamm - EU AI Act Timeline
  7. Artificial Intelligence Act - Implementation Timeline
  8. DataGuard - EU AI Act Timeline
  9. IAPP - Contentious Areas in the EU AI Act Trilogues
  10. Time - EU AI Regulation Foundation Models
  11. Eyreact - When was EU AI Act passed
  12. Alexander Thamm - EU AI Act Timeline
  13. Artificial Intelligence Act - Developments
  14. Artificial Intelligence Act - Developments
  15. Artificial Intelligence Act - Developments
  16. Alexander Thamm - EU AI Act Timeline
  17. AI Regulation - Regulating Foundation Models in the AI Act
  18. Artificial Intelligence Act - Developments
  19. IAPP - Contentious Areas in the EU AI Act Trilogues
  20. IAPP - Contentious Areas in the EU AI Act Trilogues
  21. Artificial Intelligence Act Newsletter 40
  22. Artificial Intelligence Act Newsletter 40
  23. Eyreact - When was EU AI Act passed
  24. IAPP - Contentious Areas in the EU AI Act Trilogues
  25. Artificial Intelligence Act - Developments
  26. Artificial Intelligence Act - Developments
  27. Eyreact - When was EU AI Act passed
  28. Artificial Intelligence Act - Developments
  29. Artificial Intelligence Act - The Act
  30. Software Improvement Group - EU AI Act Summary
  31. Software Improvement Group - EU AI Act Summary
  32. Software Improvement Group - EU AI Act Summary
  33. IBM - EU AI Act Topics
  34. IBM - EU AI Act Topics
  35. Software Improvement Group - EU AI Act Summary
  36. Artificial Intelligence Act - Overview
  37. Software Improvement Group - EU AI Act Summary
  38. ModelOp - EU AI Act
  39. IBM - EU AI Act Topics
  40. AI Regulation - Regulating Foundation Models in the AI Act
  41. Stanford CRFM - EU AI Act
  42. AI Regulation - Regulating Foundation Models in the AI Act
  43. IBM - EU AI Act Topics
  44. European Commission - GPAI Models FAQ
  45. Stanford CRFM - EU AI Act
  46. AI Regulation - Regulating Foundation Models in the AI Act
  47. Software Improvement Group - EU AI Act Summary
  48. Artificial Intelligence Act - Implementation Timeline
  49. DataGuard - EU AI Act Timeline
  50. DataGuard - EU AI Act Timeline
  51. European Commission - Navigating AI Act
  52. European Commission - Meet Chairs Leading GPAI Code
  53. Artificial Intelligence Act - Code of Practice
  54. European Commission - GPAI Models FAQ
  55. Freshfields - EU AI Act Unpacked 9
  56. Euronews - Meet the Europeans Behind AI Regulation
  57. Artificial Intelligence Act - Developments
  58. Freshfields - EU AI Act Unpacked 9
  59. European Commission - GPAI Models FAQ
  60. AI Regulation - Regulating Foundation Models in the AI Act
  61. Usercentrics - EU AI Regulation
  62. Orrick - EU AI Act 6 Steps
  63. Usercentrics - EU AI Regulation
  64. Artificial Intelligence Act Newsletter 40
  65. Time - EU AI Regulation Foundation Models
  66. DLA Piper - Comparing US AI EO and EU AI Act
  67. EY - EU AI Act Guide
  68. Artificial Intelligence Act Newsletter 40
  69. European Commission - European Approach to AI
  70. Kaizenner - Reflections on AI Act
  71. EA Forum - EU AI Act Needs Definition of High-Risk Foundation
  72. ModelOp - EU AI Act
  73. Tech Policy Press - False Confidence in EU AI Act
  74. Stanford CRFM - EU AI Act
  75. EY - EU AI Act Guide
  76. Stanford CRFM - EU AI Act
  77. Time - EU AI Regulation Foundation Models
  78. Kaizenner - Reflections on AI Act
  79. CFG - EU AI Act
  80. Crowell - EU AI Act GDPR Changes
  81. BSR - EU AI Act Where Do We Stand in 2025
  82. European Parliament - Tackling AI Deepfakes
  83. Hunton - EDPB and EDPS Opinion
  84. ERA Ideas on Europe - Revisiting What Problems the EU AI Act Solves
  85. Orrick - EU AI Act 6 Steps
  86. Ogletree - EU AI Act for US Employers
  87. Tech Policy Press - Expert Predictions 2026

  88. European Commission - Navigating AI ActEuropean Commission - Navigating AI Act 2

  89. Jones Day - US EU AI CollaborationJones Day - US EU AI Collaboration

  90. European Commission - Navigating AI ActEuropean Commission - Navigating AI Act

  91. Citation rc-0e6e (data unavailable — rebuild with wiki-server access) 2

  92. Usercentrics - EU AI RegulationUsercentrics - EU AI Regulation

  93. European Commission - European Approach to AIEuropean Commission - European Approach to AI

References

2. OpenAI
3. Gemini 1.0 Ultra (Google DeepMind)
9. Stanford HAI
12. Claude (Anthropic)

Claims (1)

The Act's global influence extends through its potential to serve as a model for other jurisdictions considering AI regulation, though critics question whether it will inspire adoption or enable regulatory arbitrage as companies shift development to less restrictive environments.

Verified accurate (100%), Feb 22, 2026. Source passage: "We will also see whether emerging frameworks such as the EU AI Act and the new Council of Europe AI convention inspire parallel efforts elsewhere or give way to regulatory arbitrage."

The EU AI Act establishes a comprehensive regulatory framework for artificial intelligence, classifying AI systems by risk levels and imposing transparency and safety requirements.

Citation verification: 56 verified, 1 flagged, 10 unchecked of 84 total

Related Pages

Top Related Pages

Organizations

ControlAI · GovAI

Risks

Deepfakes · AI-Driven Institutional Decision Capture

Approaches

Third-Party Model Auditing · Open Source AI Safety

Analysis

Short AI Timeline Policy Implications · AI Safety Intervention Effectiveness Matrix · MIT AI Risk Repository

Key Debates

AI Governance and Policy · Government Regulation vs Industry Self-Governance

Policy

Model Registries · Evals-Based Deployment Gates · AI Whistleblower Protections

Other

Yoshua Bengio · Geoffrey Hinton

Concepts

Large Language Models · Agentic AI · EA Longtermist Wins Losses

Historical

Mainstream Era