Longterm Wiki

Wave of State-Level AI Bills Raises First Amendment Problems

web

This FIRE article analyzes First Amendment implications of state-level AI regulation bills, arguing that compelled disclosure requirements (watermarking, disclaimers) may be unconstitutional, which is relevant to AI governance and the legal constraints on AI safety policy tools.

Metadata

Importance: 42/100 · opinion piece · analysis

Summary

FIRE analyst John Coleman argues that many state-level AI bills violate the First Amendment by compelling speech through mandatory watermarking, disclaimers, and metadata disclosures on AI-generated content. He contends that existing laws already cover illegal AI-assisted speech (fraud, defamation), and that new disclosure mandates face heightened constitutional scrutiny. The piece uses the Ninth Circuit's X Corp. v. Bonta ruling as precedent.

Key Points

  • The First Amendment protects AI-generated speech just as it protects human-generated speech — there is no 'AI exception' to constitutional protections.
  • Many state AI bills compel speech through watermarking, disclaimers, and metadata requirements, which generally violates the First Amendment.
  • Existing laws already prohibit illegal AI-assisted speech such as fraud, defamation, and criminal conduct, reducing the need for new restrictions.
  • The Ninth Circuit's X Corp. v. Bonta ruling reaffirmed that even admirable transparency goals must yield to constitutional constraints.
  • Commercial speech exceptions allowing compelled factual disclosures are narrow and unlikely to cover blanket AI-content disclosure mandates.

Cited by 1 page

Page                              | Type     | Quality
US State AI Legislation Landscape | Analysis | 70.0

Cached Content Preview

HTTP 200 · Fetched Apr 21, 2026 · 17 KB
 There’s no ‘artificial intelligence’ exception to the First Amendment
 by John Coleman
 February 13, 2025 

 AI is enhancing our ability to communicate, much like the printing press and the internet did in the past. And lawmakers nationwide are rushing to regulate its use, introducing hundreds of bills in states across the country. Unfortunately, many AI bills we’ve reviewed would violate the First Amendment — just as FIRE warned last month. It’s worth repeating that First Amendment doctrine does not reset itself after each technological advance. It protects speech created or modified with artificial intelligence software just as it does speech created without it.

 On the flip side, AI’s involvement doesn’t change the illegality of acts already forbidden by existing law. There are some narrow, well-defined categories of speech not protected by the First Amendment — such as fraud, defamation, and speech integral to criminal conduct — that states can and do already restrict. In that sense, the use of AI is already regulated, and policymakers should first look to enforcement of those existing laws to address their concerns with AI. Further restrictions on speech are both unnecessary and likely to face serious First Amendment problems, which I detail below.

 Constitutional background: Watermarking and other compelled disclosure of AI use 

 We’re seeing a lot of AI legislation that would require a speaker to disclose their use of AI to generate or modify text, images, audio, or video. Generally, this includes requiring watermarks on images created with AI, mandating disclaimers in audio and video generated with AI, and forcing developers to add metadata to images created with their software. 
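To make the metadata requirement concrete, here is a minimal sketch of the kind of machine-readable provenance record such a mandate might compel a developer to attach to generated images. The field names and structure are hypothetical illustrations, not any actual statutory format, and the example uses only a hash binding rather than a real watermarking scheme:

```python
import json
import hashlib

def make_provenance_record(image_bytes: bytes, tool: str) -> str:
    """Build a hypothetical AI-disclosure record for a generated image.

    The record labels the content as AI-generated, names the generating
    tool, and binds itself to the image via a SHA-256 digest so the
    disclosure cannot be silently reattached to different content.
    """
    record = {
        "ai_generated": True,       # the compelled disclosure itself
        "generator": tool,          # hypothetical "identify your software" field
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)

# Example: a record for some (fake) image bytes.
manifest = make_provenance_record(b"\x89PNG...example bytes", "ExampleImageModel")
print(manifest)
```

Even this toy example shows why FIRE treats metadata as speech: the record asserts facts ("this is AI-generated," "this tool made it") that the speaker is being forced to utter.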

 Many of these bills violate the First Amendment by compelling speech. Government-compelled speech — whether an opinion, a fact, or even just metadata — is generally anathema to the First Amendment. That’s for good reason: compelled speech undermines everyone’s right of conscience and fundamental autonomy to control their own expression.

 To illustrate: Last year, in X Corp. v. Bonta, the U.S. Court of Appeals for the Ninth Circuit reviewed a California law that required social media companies to post and report information about their content moderation practices. FIRE filed an amicus curiae — “friend of the court” — brief in that case, arguing the posting and reporting requirements unconstitutionally compel social media companies to speak about topics on which they’d like to remain silent. The Ninth Circuit agreed, holding the law was likely unconstitutional. While acknowledging the state had an interest in providing transparency, the court reaffirmed that “

... (truncated, 17 KB total)
Resource ID: 98dcc75fead24988 | Stable ID: sid_865Mrp6ZMA