Longterm Wiki

AI Safety Newsletter #35: Lobbying on AI Regulation

web

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Center for AI Safety

A newsletter from the Center for AI Safety covering AI lobbying trends, new multimodal models from OpenAI and Google, and regulatory challenges, relevant to AI governance and policy discussions.

Metadata

Importance: 52/100 · blog post · news

Summary

This newsletter covers the surge in AI lobbying in the US, where the number of organizations lobbying on AI nearly tripled from 158 to 451 between 2022 and 2023, including tech giants opposing safety regulations. It also discusses the new multimodal models GPT-4o and Google's Project Astra and their potential regulatory implications under the EU AI Act.

Key Points

  • AI lobbying groups in the US nearly tripled from 158 to 451 organizations between 2022 and 2023, with tech giants like IBM, Meta, and Nvidia leading opposition to safety regulations.
  • OpenAI released GPT-4o ('omni'), a multimodal model handling text, images, video, and audio, representing a shift toward live verbal AI interaction.
  • Google DeepMind demoed Project Astra, a real-time video-understanding model intended as a step toward autonomous AI agents.
  • Both GPT-4o and Project Astra may face legal challenges in the EU under the AI Act's prohibition on emotion inference systems in workplace/education contexts.
  • Influential opponents of AI regulation include Andreessen Horowitz and Charles Koch, who are investing tens of millions to block safety-oriented legislation.

1 FactBase fact citing this source

Cached Content Preview

HTTP 200 · Fetched Apr 7, 2026 · 14 KB
AI Safety Newsletter #35: Lobbying on AI Regulation 

 Plus, New Models from OpenAI and Google, and Legal Regimes for Training on Copyrighted Data

Corin Katzke, Julius Simonelli, and Dan Hendrycks · May 16, 2024

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

OpenAI and Google Announce New Multimodal Models

 In the current paradigm of AI development, there are long delays between the release of successive models. Progress is largely driven by increases in computing power, and training models with more computing power requires building large new data centers. 

 More than a year after the release of GPT-4, OpenAI has yet to release GPT-4.5 or GPT-5, which would presumably be trained on 10x or 100x more compute than GPT-4, respectively. These models might be released over the next year or two, and could represent large spikes in AI capabilities.

But OpenAI did announce a new model last week, called GPT-4o. The “o” stands for “omni,” referring to the fact that the model can use text, images, videos, and audio as inputs or outputs. This new model modestly outperforms OpenAI’s previous models on standard benchmarks of conversational skill and coding ability. More importantly, it suggests a potential change in how people interact with AI systems, moving from text-based chatbots to live verbal discussions.

 OpenAI employees talking with GPT-4o in a live demo of the new model. 

Google DeepMind demoed a similar model, called Project Astra. It can watch videos and discuss them in real time. This model is intended to be part of a path towards building AI agents that can act autonomously in the world. Google also announced improvements to its Gemini series of closed-source models and its Gemma series of open-source models.

One interesting note for those interested in AI policy is that these models could be deemed illegal in the European Union. The EU AI Act prohibits:

 the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons. 

 Users can ask multimodal AI systems like GPT-4o and Project Astra to look at a person’s face and assess whether they’re happy, sad, angry, or surprised. Does this mean that these models wi

... (truncated, 14 KB total)
Resource ID: kb-cfdceda4543e3a6a