EU AI Act

| Aspect | Details |
| --- | --- |
| Type | Comprehensive AI regulation |
| Scope | EU member states (with extraterritorial reach) |
| Adopted | May/June 2024 |
| Entry into Force | August 1, 2024 |
| Full Applicability | August 2, 2026 |
| Key Innovation | Risk-based tiered regulation of foundation models |
| Maximum Penalties | €35M or 7% of global turnover |
| Enforcement Body | European AI Office |

The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive legal framework regulating artificial intelligence, representing a landmark attempt to govern AI systems based on their potential risks to safety, fundamental rights, and society.12 The regulation adopts a risk-based approach that classifies AI systems into four tiers—unacceptable risk (prohibited), high-risk (strict obligations), limited risk (transparency requirements), and minimal risk (largely unregulated)—with special provisions for foundation models and general-purpose AI (GPAI) systems.34

Originally proposed by the European Commission in April 2021, the Act underwent intense negotiations before political agreement was reached on December 9, 2023.56 The regulation entered into force on August 1, 2024, with phased implementation: prohibitions on unacceptable-risk systems took effect February 2, 2025, GPAI model obligations apply from August 2, 2025, and most high-risk provisions become fully applicable by August 2, 2026.78

The regulation has sparked significant controversy, particularly regarding its approach to foundation models. Critics argue it may stifle European AI innovation while supporters contend it provides necessary safeguards and legal certainty. The Act’s two-tiered framework for GPAI models—distinguishing between standard foundation models and those posing “systemic risk”—emerged from contentious negotiations between the European Parliament, Council, and member states including France, Germany, and Italy.910

The European Commission first proposed the AI Act on April 21, 2021, initiating the EU’s effort to create comprehensive AI regulation focused on risk-based rules for AI systems.1112 The proposal was driven by concerns over AI’s potential harms and aimed to balance innovation with safety through prohibitions on unacceptable risks and obligations for high-risk systems.

Key early milestones included:

  • November 29, 2021: EU Council presidency shared the first compromise text, adjusting rules on social scoring, biometrics, and high-risk AI13
  • December 1, 2021: European Parliament assigned lead negotiators Brando Benifei (S&D, Italy) and Dragoş Tudorache (Renew, Romania)14
  • September 2022: Parliament’s JURI committee adopted its opinion on the AI Act15
  • December 6, 2022: EU Council adopted its general approach for negotiations16

Foundation models became highly controversial during the legislative process, marking a significant shift from the initial 2021 Commission proposal, which had focused on risk-based categorization of AI applications rather than regulating the models themselves.17 The emergence of powerful systems like ChatGPT in late 2022 catalyzed intense debate over how to regulate these general-purpose systems.

The European Parliament introduced formal provisions on foundation models in June 2023 when it adopted its negotiating position.18 This set up a fundamental conflict:

  • Parliament’s position: Advocated for treating foundation models similar to high-risk systems, with quality management systems, EU database registration, and strict obligations19
  • Council/Commission position: Favored lighter regulation through voluntary codes of conduct, with delayed and less stringent requirements20

France, Germany, and Italy emerged as key opponents of strict foundation model regulation, arguing it would harm European AI competitiveness and innovation.21 Negotiations broke down in November 2023 as these member states opposed tiered rules for high-impact models developed mostly by non-EU firms, threatening to derail the entire Act.22

Political Agreement and Adoption (2023-2024)

After marathon negotiations, political agreement was finally reached on December 9, 2023, with a compromise two-tier system for foundation models.2324 The compromise distinguished between:

  1. General GPAI models: Subject to transparency obligations, copyright disclosure requirements, and technical documentation
  2. Systemic risk GPAI models: Additional requirements for risk assessments, incident reporting, evaluations, and cybersecurity measures

Subsequent milestones included:

  • February 13, 2024: Parliament committees approved the draft (71-8 vote), and EU member states unanimously endorsed the text25
  • February 21, 2024: European AI Office launched within the Commission to oversee GPAI implementation26
  • March 13, 2024: European Parliament passed the Act (523 for, 46 against, 49 abstentions)27
  • May 21, 2024: European Council formally adopted the regulation28
  • July 12, 2024: Published in the Official Journal of the European Union29
  • August 1, 2024: Entered into force30

The AI Act establishes four risk levels for AI systems, each with corresponding obligations:

| Risk Level | Examples | Requirements | Timeline |
| --- | --- | --- | --- |
| Unacceptable | Social scoring, subliminal manipulation, real-time biometric ID in public (with exceptions), untargeted facial recognition scraping | Complete prohibition | February 2, 2025³¹ |
| High-Risk | CV-scanning tools, critical infrastructure AI, AI in education/employment/law enforcement, product safety systems | Mandatory risk assessments, EU database registration, conformity assessment, CE marking, human oversight, lifecycle monitoring | August 2, 2026³² |
| Limited | Chatbots, deepfakes, emotion recognition systems | Transparency obligations (label AI-generated content, disclose AI interaction) | August 2, 2026³³ |
| Minimal | Spam filters, video games, AI-enabled inventory systems | No specific obligations beyond existing laws | Already applicable³⁴ |
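The tier-to-obligation mapping above is mechanical enough to express as a lookup table. A minimal sketch in Python (the enum and obligation strings are an illustrative condensation of the table, not terminology from the Act):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict conformity obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no Act-specific obligations

# Illustrative condensation of the requirements column above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["complete prohibition (from 2025-02-02)"],
    RiskTier.HIGH: [
        "mandatory risk assessment",
        "EU database registration",
        "conformity assessment and CE marking",
        "human oversight and lifecycle monitoring",
    ],
    RiskTier.LIMITED: ["label AI-generated content", "disclose AI interaction"],
    RiskTier.MINIMAL: ["none beyond existing laws"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the condensed obligation checklist for a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```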

The Act bans eight specific AI practices deemed to pose unacceptable risks to fundamental rights:3536

  1. Subliminal manipulation techniques that exploit vulnerabilities
  2. Social scoring systems by public authorities
  3. Real-time biometric identification in public spaces (with limited law enforcement exceptions)
  4. Emotion recognition in workplaces and educational institutions
  5. Untargeted scraping of facial images from the internet or CCTV
  6. Inference of sensitive characteristics (race, political opinions, sexual orientation)
  7. Biometric categorization systems that discriminate
  8. AI systems that manipulate human behavior to circumvent free will

These prohibitions took effect on February 2, 2025, making them the first enforceable provisions of the Act.37

The Act regulates foundation models under the category of “general-purpose AI” (GPAI) models, defined as AI systems trained on large amounts of data that can perform a wide variety of tasks across different applications.3839 This includes large language models like GPT-4, Claude, and Llama, as well as multimodal foundation models.

GPAI models are regulated separately from application-specific AI systems because of their adaptability across multiple downstream uses, some of which may be high-risk even if the model itself was not designed for those purposes.40

The Act establishes a two-tier approach for regulating foundation models, distinguishing between standard GPAI and “systemic risk” GPAI:4142

Tier 1: All GPAI Models (Standard Obligations)

Applicable to all general-purpose AI model providers, regardless of size or capability:4344

  • Technical documentation detailing model architecture, training data, and capabilities
  • Transparency to downstream deployers about model limitations and intended uses
  • Information summaries about copyrighted training data content
  • Policy to comply with EU copyright law (Directive (EU) 2019/790)
  • Data governance and quality management practices
  • EU database registration for models integrated into high-risk systems

Tier 2: Systemic Risk GPAI Models (Enhanced Obligations)

Designated based on compute thresholds (>10²⁵ FLOPs for training) or Commission assessment of capabilities, market impact, and potential for widespread harm:4546

  • Model evaluations and adversarial testing to identify systemic risks
  • Assessment and mitigation of risks to health, safety, fundamental rights, environment, democracy, and rule of law
  • Serious incident reporting to the AI Office
  • Cybersecurity measures and monitoring of downstream applications
  • Enhanced quality management systems
  • Regular reporting to the AI Office on risk management measures
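The 10²⁵ FLOP presumption above can be made concrete with the common back-of-the-envelope estimate that dense-transformer training compute is roughly 6 × parameters × training tokens. That heuristic is not part of the Act; the sketch below only illustrates how a provider might check its position relative to the threshold:

```python
THRESHOLD_FLOP = 1e25  # presumption of systemic risk for GPAI models

def training_flop(params: float, tokens: float) -> float:
    """Rough dense-transformer training compute via the common 6*N*D heuristic."""
    return 6 * params * tokens

# A hypothetical 70B-parameter model trained on 15T tokens:
flop = training_flop(70e9, 15e12)
print(f"{flop:.2e}", flop > THRESHOLD_FLOP)   # 6.30e+24 False -> below threshold

# The same architecture trained on roughly 24T tokens would cross the line:
print(training_flop(70e9, 24e12) > THRESHOLD_FLOP)  # True
```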

Foundation model obligations follow a specific timeline:4748

  • May 2, 2025: Commission codes of practice ready for voluntary compliance49
  • July 18, 2025: Commission published draft GPAI guidelines for stakeholder consultation50
  • August 2, 2025: GPAI obligations take effect for all general-purpose AI models
  • August 2, 2027: Existing GPAI models placed on the market before August 2025 must achieve full compliance51

New GPAI models released after August 2, 2025 must comply immediately, while existing models receive a two-year grace period.52
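This timing rule reduces to a single conditional on the date a model was placed on the market. A minimal sketch (the function name is illustrative; the dates come from the timeline above):

```python
from datetime import date

GPAI_APPLICABILITY = date(2025, 8, 2)   # GPAI obligations take effect
LEGACY_DEADLINE = date(2027, 8, 2)      # grace-period end for pre-existing models

def gpai_compliance_deadline(placed_on_market: date) -> date:
    """Models on the market before 2025-08-02 get until 2027-08-02;
    anything placed later must comply from the day it is placed."""
    if placed_on_market < GPAI_APPLICABILITY:
        return LEGACY_DEADLINE
    return placed_on_market

print(gpai_compliance_deadline(date(2024, 11, 1)))   # 2027-08-02
print(gpai_compliance_deadline(date(2026, 1, 15)))   # 2026-01-15
```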

To facilitate compliance, the AI Office developed a voluntary GPAI Code of Practice through a multi-stakeholder process involving nearly 1,000 participants.53 The code provides guidance on:

  • Transparency requirements for training data
  • Copyright compliance mechanisms
  • Risk assessment methodologies
  • Governance structures for systemic risk models

Chairs and vice-chairs leading the code’s development include:5455

| Name | Role | Background |
| --- | --- | --- |
| Nuria Oliver | Chair | Director, ELLIS Alicante Foundation; PhD in AI from MIT; IEEE/ACM Fellow |
| Yoshua Bengio | Chair | Turing Award winner; Professor at Université de Montréal; Founder of Mila |
| Alexander Peukert | Co-Chair (Copyright) | Professor of Civil/Commercial/Information Law, Goethe University Frankfurt |
| Marietje Schaake | Chair | Fellow at Stanford Cyber Policy Center & Institute for Human-Centred AI |
| Daniel Privitera | Vice-Chair | Founder/Executive Director, KIRA Center; Lead Writer of International Scientific Report on Advanced AI Safety |
| Markus Anderljung | Vice-Chair | Director of Policy & Research, Centre for the Governance of AI |

The Commission and AI Board have confirmed this code as an adequate voluntary compliance tool that may serve as a mitigating factor when determining fines for violations.56

The AI Office, established within the European Commission’s Directorate-General for Communication Networks, Content and Technology (DG CNECT), serves as the primary enforcement body for GPAI models.5758 Launched on February 21, 2024, it became operational for enforcement purposes on August 2, 2025.59

Key leadership includes:60

  • Lucilla Sioli: Heads the AI Office (former DG CNECT Director for AI)
  • Dragoş Tudorache: Heads the unit on risks of very capable GPAI models (former co-leader of the AI Act in Parliament)
  • Kilian Gross: Leads the unit on compliance, uniform enforcement, and investigations (key EU AI Act negotiator)
  • Juha Heikkilä: AI Office Adviser for International Affairs

The AI Office has significant enforcement powers, including:61

  • Requesting information and documentation from GPAI providers
  • Conducting model evaluations and capability assessments
  • Requiring risk mitigation measures
  • Recalling models from the market
  • Imposing fines up to 3% of global annual turnover or €15 million, whichever is higher

The Act establishes several governance bodies:6263

  • AI Board: Comprises representatives from EU member states; coordinates national enforcement and provides technical expertise
  • Advisory Forum: Brings together stakeholders from industry, academia, civil society, and social partners
  • Scientific Panel: Independent experts advise the AI Office on evaluating foundation model capabilities and monitoring material safety risks
  • National Authorities: Handle complaints and enforcement for high-risk AI systems not covered by the AI Office
  • European Data Protection Supervisor (EDPS): Has authority to impose fines on EU institutions for non-compliance

The Act establishes tiered penalties based on violation severity:6465

| Violation Type | Maximum Fine |
| --- | --- |
| Prohibited AI systems (unacceptable risk) | €35 million or 7% of global annual turnover |
| High-risk system violations | €15 million or 3% of global annual turnover |
| Incorrect information to authorities | €7.5 million or 1.5% of global annual turnover |

For startups and SMEs, fines are capped at proportionate levels to prevent a disproportionate financial burden.66 The turnover-based calculation means violations by major technology companies could result in penalties running into the hundreds of millions of euros.
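Because each cap is expressed as a fixed amount or a share of global annual turnover, whichever is higher, the applicable maximum reduces to a simple max(). A minimal sketch (the tier keys and the example turnover figure are illustrative, not from the Act):

```python
# (fixed cap in EUR, share of global annual turnover) per violation tier,
# taken from the penalty table above
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Maximum fine: the fixed cap or the turnover share, whichever is higher.
    (For SMEs the Act provides for lower, proportionate caps instead.)"""
    fixed, share = PENALTY_TIERS[tier]
    return max(fixed, share * global_turnover_eur)

# A hypothetical provider with EUR 50 billion global annual turnover:
print(f"{max_fine('prohibited_practice', 50e9):,.0f}")  # 3,500,000,000
```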

The Act’s approach to foundation models has sparked intense debate over whether it strikes the right balance between safety and innovation. France, Germany, and Italy actively opposed strict GPAI regulations during negotiations, arguing they would harm European AI competitiveness against US and Chinese firms.6768 French startup Mistral AI, German company Aleph Alpha, and other European AI developers expressed concerns that compliance burdens would disadvantage them against better-resourced non-EU competitors like OpenAI and Anthropic.

Critics point to Europe’s existing lag in AI development—trailing US leaders in compute resources, funding, and technical talent—and warn the Act may widen this gap.69 The regulatory approach contrasts sharply with the US, where voluntary industry standards and executive orders provide more flexible governance frameworks.70

Conversely, proponents argue the Act creates legal certainty that will ultimately attract investment and protect downstream deployers from compliance burdens, enabling a trustworthy AI ecosystem.7172 Regulatory sandboxes and the AI innovation package (launched January 2024) aim to support European startups and SMEs in developing compliant AI systems.73

Definitional Ambiguities and Scope Concerns

Multiple aspects of the GPAI framework have been criticized for vagueness:7475

  • “Systemic risk” definition: The criteria for designating models as posing systemic risk—including compute thresholds, capabilities, market impact, and scalability—lack precision and may become outdated as technology advances
  • “Substantial modification” threshold: Uncertainty about when fine-tuning or adaptation of foundation models triggers full compliance obligations
  • Training data disclosure: Requirements for “summaries” of copyrighted training data content remain poorly defined, creating intellectual property disputes
  • Dual-use potential: Even controlled foundation models retain dual-use capabilities, raising questions about the effectiveness of use-case-based regulation

The Act’s broad definition of AI systems, derived from OECD frameworks, applies to virtually all EU organizations using AI, creating significant compliance challenges.76

Academic critics have identified fundamental epistemic gaps in the Act’s risk assessment approach.77 The argument is that traditional risk assessment methods fail for probabilistic, socio-technical foundation models, creating false regulatory confidence. Specifically:

  • The Act overlooks deployment contexts, institutional oversight, and governance structures in favor of technical fixes
  • Fixed AI categories risk rapid obsolescence as technology evolves, unlike more adaptive governance frameworks
  • Causal links between model capabilities and societal harms are assumed without robust evidence, ignoring real-world socio-technical factors
  • The Act lacks anticipatory mechanisms for iterative revision based on emerging AI research

Stanford’s Center for Research on Foundation Models (CRFM) noted that the compute-based threshold for systemic risk (10²⁵ FLOPs) diverges from their proposal to focus on demonstrated market impact, potentially missing dangerous models that don’t meet the compute threshold.78

AI safety researchers have criticized the Act for insufficient provisions addressing existential risks from advanced AI systems.7980 While the Act mandates evaluations for systemic risks, it includes:

  • No requirements for AI alignment research (ensuring advanced systems’ goals match human values)
  • No provisions for third-party researcher access to models for safety evaluations
  • No adverse event reporting mechanisms comparable to pharmaceutical or aviation safety systems
  • No explicit coverage of scenarios involving misaligned superintelligent systems

The focus on immediate deployment risks (manipulation, discrimination, privacy violations) rather than model-level capabilities means the Act may not adequately address risks from future highly capable AI systems. Organizations like the AI Now Institute (in a report by 50+ experts) warned that foundation models have inherent risks in their training data and architectures that use-case regulation cannot fully address.81

Enforcement Challenges and Implementation Delays

Several practical enforcement concerns have emerged:8283

  • Regulatory capacity: Whether the AI Office and national authorities have sufficient expertise and resources to effectively monitor rapidly evolving foundation models
  • Extraterritorial reach: Questions about enforcing requirements on non-EU providers, especially for open-source models uploaded from outside the EU
  • Compliance burden: Particularly for SMEs and researchers who may lack resources for extensive documentation and risk assessments
  • Loopholes: The compute threshold may quickly become outdated as a designation metric, and workarounds exist for training data disclosure requirements

Proposed amendments have sought to address some concerns by delaying high-risk system obligations to December 2027, but privacy advocates such as Max Schrems have criticized provisions that allow AI training on special-category data for bias mitigation, arguing they undermine GDPR protections.84

The Act’s phased implementation has also faced criticism. While prohibited systems were banned in February 2025, full GPAI compliance is not required until between August 2025 and August 2027, creating a window in which potentially risky systems can operate under legacy frameworks.85

Relationship to Other Regulatory Frameworks

The Act intersects with multiple existing EU regulations, creating potential overlap and confusion:8687

  • GDPR: Data protection requirements for AI training data; controversies over “legitimate interests” basis for processing
  • Digital Services Act (DSA): Enforcement of AI-generated content moderation; recent cases include €120M fines against platform X in December 2025 for failing to control AI-generated sexual content
  • Copyright Directive: Article 50 transparency deadline in 2026 for training data disclosure
  • Proposed Chat Control: Ongoing debates about scanning for child abuse material, with temporary measures extended to April 2026

The January 21, 2026 joint opinion by the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) criticized unclear divisions between the AI Office’s role for GPAI supervision and EDPS oversight, calling for amendments to avoid governance overlaps.88

Recent political discourse has shown a shift from the Act’s original fundamental rights framing toward concerns about over-regulation harming competitiveness.89 As of early 2026:

  • European policymakers increasingly emphasize the risk that strict AI rules may cause Europe to fall further behind the US and China
  • Proposals circulate for simplifying GPAI obligations and extending transition periods
  • The contrast with US deregulation under recent executive orders raises questions about regulatory arbitrage
  • Parliament debates in January 2026 focused on stronger DSA-AI Act synergies for enforcement against deepfakes and illegal AI-generated content

These developments suggest the Act’s implementation may evolve significantly based on economic performance and global competitive dynamics.

International Cooperation and Global Impact

The AI Act applies not only to providers and deployers operating within the EU, but also to organizations outside the EU whose AI systems target or affect EU users.9091 This extraterritorial scope, similar to GDPR’s global reach, means:

  • US, UK, and other non-EU AI companies must comply if their systems are used by EU customers
  • American employers using AI for EU-targeted outputs (e.g., automated hiring tools processing EU applicants) fall under the Act’s jurisdiction
  • Multinational organizations must navigate compliance across different regulatory regimes

The Act’s global influence extends through its potential to serve as a model for other jurisdictions considering AI regulation, though critics question whether it will inspire adoption or enable regulatory arbitrage as companies shift development to less restrictive environments.92

The EU actively engages in international AI governance through multiple channels:93

  • Bilateral cooperation: Partnerships with Canada, US, India, Japan, South Korea, Singapore, Australia, and UK
  • Multilateral forums: Participation in G7, G20, OECD, and Global Partnership on AI discussions
  • US-EU AI collaboration: January 27, 2023 agreement to conduct joint research on AI applications in extreme weather forecasting, emergency response, health, electric grid optimization, and agriculture94

The AI Pact, initiated in May 2023, fosters voluntary industry commitment to implement AI Act requirements ahead of legal deadlines and serves as a coordination forum across jurisdictions.95

To balance regulation with competitiveness, the Act includes several innovation support mechanisms:9697

  • Regulatory sandboxes: National authorities establish controlled environments for testing AI before market launch
  • AI innovation package: Launched January 2024 to support European startups and SMEs
  • GenAI4EU initiative: Stimulates generative AI adoption across strategic EU industrial ecosystems
  • AI Factories and Gigafactories: Infrastructure investments for AI development
  • InvestAI Facility: Funding mechanism for trustworthy AI projects
  • AI Skills Academy: Planned educational initiative to build EU AI talent

An AI Observatory tracks AI trends and assesses impacts across specific sectors, while the Apply AI Alliance serves as a coordination forum bringing together AI providers, industry, public sector, academia, and civil society.98

Several major questions remain about the Act’s effectiveness and evolution:

  1. Enforcement effectiveness: Will the AI Office and national authorities have sufficient capacity to meaningfully oversee rapidly advancing foundation models, particularly those developed by well-resourced non-EU companies?

  2. Innovation impact: Will the regulatory framework ultimately protect European competitiveness by providing legal certainty, or will compliance burdens drive AI development and deployment to less restrictive jurisdictions?

  3. Risk assessment validity: Can the Act’s risk-based approach adequately address the rapidly evolving capabilities of foundation models, or will fixed categories and compute thresholds quickly become obsolete?

  4. Advanced AI safety: Does the Act provide sufficient safeguards for potential risks from highly capable future AI systems, or does its focus on deployment-level harms miss model-level dangers?

  5. Regulatory arbitrage: How will the Act’s approach interact with US deregulation and Chinese state-directed AI development? Will global companies develop separate systems for different jurisdictions or push for regulatory harmonization?

  6. Implementation consistency: Will the phased rollout and proposed delays (e.g., extending high-risk obligations to December 2027) undermine the Act’s effectiveness, or will they provide necessary flexibility for organizations to adapt?

  7. Systemic risk designation: How will the Commission operationalize its discretion to designate foundation models as posing systemic risk beyond the compute threshold? Will this process be transparent and predictable?

  8. Open-source implications: How will the Act affect open-source foundation models like Llama, Mistral, and others? Will compliance burdens disproportionately affect open development compared to proprietary systems?

  1. Software Improvement Group - EU AI Act Summary

  2. European Commission - Regulatory Framework for AI

  3. IBM - EU AI Act Topics

  4. DNV - Introduction to the EU’s AI Act

  5. Eyreact - When was EU AI Act passed

  6. Alexander Thamm - EU AI Act Timeline

  7. Artificial Intelligence Act - Implementation Timeline

  8. DataGuard - EU AI Act Timeline

  9. IAPP - Contentious Areas in the EU AI Act Trilogues

  10. Time - EU AI Regulation Foundation Models

  11. Eyreact - When was EU AI Act passed

  12. Alexander Thamm - EU AI Act Timeline

  13. Artificial Intelligence Act - Developments

  14. Artificial Intelligence Act - Developments

  15. Artificial Intelligence Act - Developments

  16. Alexander Thamm - EU AI Act Timeline

  17. AI Regulation - Regulating Foundation Models in the AI Act

  18. Artificial Intelligence Act - Developments

  19. IAPP - Contentious Areas in the EU AI Act Trilogues

  20. IAPP - Contentious Areas in the EU AI Act Trilogues

  21. Artificial Intelligence Act Newsletter 40

  22. Artificial Intelligence Act Newsletter 40

  23. Eyreact - When was EU AI Act passed

  24. IAPP - Contentious Areas in the EU AI Act Trilogues

  25. Artificial Intelligence Act - Developments

  26. Artificial Intelligence Act - Developments

  27. Eyreact - When was EU AI Act passed

  28. Artificial Intelligence Act - Developments

  29. Artificial Intelligence Act - The Act

  30. Software Improvement Group - EU AI Act Summary

  31. Software Improvement Group - EU AI Act Summary

  32. Software Improvement Group - EU AI Act Summary

  33. IBM - EU AI Act Topics

  34. IBM - EU AI Act Topics

  35. Software Improvement Group - EU AI Act Summary

  36. Artificial Intelligence Act - Overview

  37. Software Improvement Group - EU AI Act Summary

  38. ModelOp - EU AI Act

  39. IBM - EU AI Act Topics

  40. AI Regulation - Regulating Foundation Models in the AI Act

  41. Stanford CRFM - EU AI Act

  42. AI Regulation - Regulating Foundation Models in the AI Act

  43. IBM - EU AI Act Topics

  44. European Commission - GPAI Models FAQ

  45. Stanford CRFM - EU AI Act

  46. AI Regulation - Regulating Foundation Models in the AI Act

  47. Software Improvement Group - EU AI Act Summary

  48. Artificial Intelligence Act - Implementation Timeline

  49. Artificial Intelligence Act - Implementation Timeline

  50. Artificial Intelligence Act Newsletter 86

  51. DataGuard - EU AI Act Timeline

  52. DataGuard - EU AI Act Timeline

  53. European Commission - Navigating AI Act

  54. European Commission - Meet Chairs Leading GPAI Code

  55. Artificial Intelligence Act - Code of Practice

  56. European Commission - GPAI Models FAQ

  57. Freshfields - EU AI Act Unpacked 9

  58. Euronews - Meet the Europeans Behind AI Regulation

  59. Artificial Intelligence Act - Developments

  60. Freshfields - EU AI Act Unpacked 9

  61. European Commission - GPAI Models FAQ

  62. AI Regulation - Regulating Foundation Models in the AI Act

  63. Usercentrics - EU AI Regulation

  64. IBM - EU AI Act Topics

  65. Orrick - EU AI Act 6 Steps

  66. Usercentrics - EU AI Regulation

  67. Artificial Intelligence Act Newsletter 40

  68. Time - EU AI Regulation Foundation Models

  69. Artificial Intelligence Act Newsletter 40

  70. DLA Piper - Comparing US AI EO and EU AI Act

  71. EY - EU AI Act Guide

  72. Artificial Intelligence Act Newsletter 40

  73. European Commission - European Approach to AI

  74. Kaizenner - Reflections on AI Act

  75. EA Forum - EU AI Act Needs Definition of High-Risk Foundation

  76. ModelOp - EU AI Act

  77. Tech Policy Press - False Confidence in EU AI Act

  78. Stanford CRFM - EU AI Act

  79. EY - EU AI Act Guide

  80. Stanford CRFM - EU AI Act

  81. Time - EU AI Regulation Foundation Models

  82. Kaizenner - Reflections on AI Act

  83. CFG - EU AI Act

  84. Crowell - EU AI Act GDPR Changes

  85. BSR - EU AI Act Where Do We Stand in 2025

  86. Hunton - EDPB and EDPS Opinion

  87. European Parliament - Tackling AI Deepfakes

  88. Hunton - EDPB and EDPS Opinion

  89. ERA Ideas on Europe - Revisiting What Problems the EU AI Act Solves

  90. Orrick - EU AI Act 6 Steps

  91. Ogletree - EU AI Act for US Employers

  92. Tech Policy Press - Expert Predictions 2026

  93. European Commission - Navigating AI Act

  94. Jones Day - US EU AI Collaboration

  95. European Commission - Navigating AI Act

  96. European Commission - European Approach to AI

  97. Usercentrics - EU AI Regulation

  98. European Commission - European Approach to AI