Longterm Wiki

Mistral AI and the EU AI Act: Foundational Model Regulation Debate

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: TechCrunch

This TechCrunch article covers the EU AI Act trilogue negotiations in November 2023, focusing on debates over regulating foundational/general-purpose AI models and Mistral AI's lobbying position, which is directly relevant to AI governance and policy.

Metadata

Importance: 42/100 · news article · news

Summary

The article reports on divisions in EU AI Act trilogue negotiations over how to regulate foundational/general-purpose AI models. French startup Mistral AI is highlighted as opposing the European Parliament's tiered regulatory approach, arguing it creates bureaucratic friction disadvantaging European AI startups. Mistral claims to support AI safety goals but objects to the evolving complexity of the framework.

Key Points

  • EU AI Act trilogue talks are stalled over how to regulate foundational/general-purpose AI models, a key unresolved issue.
  • The European Parliament proposed tiered obligations for foundational model makers, including transparency and documentation of training data.
  • Mistral AI denies lobbying to block regulation but argues the framework has become overly bureaucratic and disadvantages EU startups vs. US giants.
  • Member States in the Council are pushing back against the Parliament's approach, creating a legislative stalemate.
  • Critics suggest some AI companies may prefer regulatory stalemate to allow self-regulation rather than binding hard law.

Cached Content Preview

HTTP 200 · Fetched Apr 14, 2026 · 29 KB
Divisions over how to set rules for applying artificial intelligence are complicating talks between European Union lawmakers trying to secure a political deal on draft legislation in the next few weeks, as we reported earlier this week. Key among the contested issues is how the law should approach upstream AI model makers.

French startup Mistral AI has found itself at the center of this debate after it was reported to be leading a lobbying charge to row back on the European Parliament’s proposal pushing for a tiered approach to regulating generative AI. What to do about so-called foundational models — or the (typically general purpose and/or generative) base models that app developers can tap into to build out automation software for specific use-cases — has turned into a major bone of contention for the EU’s AI Act.

The Commission originally proposed the risk-based framework for regulating applications of artificial intelligence back in April 2021. And while that first draft didn’t have much to say about generative AI (beyond suggesting some transparency requirements for technologies like AI chatbots), much has happened at the blistering edge of developments in large language models (LLMs) and generative AI since then.

So when parliamentarians took up the baton earlier this year, setting their negotiating mandate as co-legislators, they were determined to ensure the AI Act would not be outrun by developments in the fast-moving field. MEPs settled on pushing for different layers of obligations — including transparency requirements for foundational model makers. They also wanted rules for all general purpose AIs, aiming to regulate relationships in the AI value chain to avoid liabilities being pushed onto downstream deployers. For generative AI tools specifically, they suggested transparency requirements aimed at limiting risks in areas like disinformation and copyright infringement — such as an obligation to document material used to train models.

But the parliament’s effort has met opposition from some Member States in the Council during trilogue talks on the file — and it’s not clear whether EU lawmakers will find a way through the stalemate on issues like how (or indeed whether) to regulate foundational models with so little time left to reach a political compromise.

More cynical tech industry watchers might suggest legislative stalemate is the objective for some AI giants, who — for all their public calls for regulation — may prefer to set their own rules rather than bend to hard laws.

For its part, Mistral denies lobbying to block regulation of AI. Indeed, the startup claims to support the EU’s goal of regulating the safety and trustworthiness of AI apps. But it says it has concerns about more recent versions of the framework — arguing lawmakers are turning a proposal that started as a straightforward piece of product safety legislation

... (truncated, 29 KB total)
Resource ID: ae30bf045438337d | Stable ID: sid_4BPixMgGuA