Longterm Wiki

Expert Predictions on What's at Stake in AI Policy in 2026


Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: TechPolicy.Press

A practitioner-oriented policy commentary from consumer advocacy group Public Citizen, useful for tracking the U.S. political and regulatory context around AI governance as of early 2026, particularly the federal-state dynamic and real-world harm accumulation.

Metadata

Importance: 42/100 · opinion piece · commentary

Summary

Public Citizen advocates J.B. Branch and Ilana Beller survey the AI policy landscape heading into 2026, cataloging real-world AI harms from 2025 and assessing the political and regulatory battles ahead. The piece highlights Congressional inaction at the federal level contrasted with active state-level legislation, and frames key tensions around who controls AI, who bears its costs, and whether democratic institutions can keep pace with rapid deployment.

Key Points

  • By end of 2025, AI harms had become concrete and widespread—including child safety failures, deepfakes in elections, and AI-linked mental health crises.
  • Congress passed only one AI-related law in 2025 (TAKE IT DOWN Act on nonconsensual intimate images), while states were more active with bipartisan legislation.
  • The Trump administration's approach to AI policy, including executive orders and deregulatory posture, is expected to shape federal AI governance in 2026.
  • Key 2026 battles include who bears liability for AI harms, federal vs. state regulatory authority, and the role of democratic oversight over AI deployment.
  • Synthetic media and deepfakes emerged as a significant political and social threat, used by candidates and public figures in the 2025 election cycle.

Cited by 2 pages

| Page | Type | Quality |
| --- | --- | --- |
| Short AI Timeline Policy Implications | Analysis | 62.0 |
| EU AI Act | Policy | 55.0 |

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 19 KB
Expert Predictions on What’s at Stake in AI Policy in 2026 | TechPolicy.Press (Perspective)

 J.B. Branch, Ilana Beller / Jan 6, 2026 J.B. Branch is the Big Tech accountability advocate for Public Citizen’s Congress Watch division, and Ilana Beller leads Public Citizen’s state legislative work relating to artificial intelligence. 

 US President Donald Trump displays a signed executive order as (L-R) Sen. Ted Cruz (R-TX), Commerce Secretary Howard Lutnick and White House AI and crypto czar David Sacks look on in the Oval Office of the White House on December 11, 2025 in Washington, DC. (Photo by Alex Wong/Getty Images)

For years, debates over the regulation of artificial intelligence required a degree of speculation about its potential harms. But even as the technology continues to evolve, it is clear that by the end of 2025 AI had ceased to be an “emerging” policy issue. Real-world harms are accumulating rapidly, putting pressure on lawmakers to answer their constituents’ concerns. The stage is set for important political and legal battles that will play out in 2026 and will define who controls AI, who bears the costs of its harms, and whether democratic governments and regulators can keep pace.

Indeed, some of 2025’s most revealing moments seemed like scripts from the dystopian science fiction series Black Mirror. Leaked Meta documents revealed that executives signed off on allowing AI to have “sensual” conversations with children. In Baltimore, an AI-powered security system mistook a student’s bag of Doritos for a gun, prompting school administrators to summon the police. An AI-enabled teddy bear was yanked from store shelves after reports that it discussed sexual topics and encouraged children to harm their parents. Psychiatrists across the United States increasingly warned about the growing problem of AI “psychosis,” even as OpenAI was sued for allegedly coaching a teen to commit suicide.

Last year, AI-generated synthetic media became even more prevalent in the political arena, as the tools to produce it became easier to use. President Donald Trump openly shared AI-generated images and videos to ridicule opponents. In Virginia, a congressional candidate received serious pushback for debating an AI-generated avatar of his opponent. Senator Amy Klobuchar (D-MN) confronted the reality of AI impersonation and voice fraud firsthand when a deepfake of her spewing vulgarities about actress Sydney Sweeney appeared, while former New York governor and losing New York City mayoral candidate Andrew Cuomo deployed the technology against his opponent, Zohran Mamdani.

 While Congress failed to take action on AI in 2025—apart from the passage of the TAKE IT DOWN Act, which addresses nonconsensual intimate images—state lawmakers were busy passing bipartisan laws aimed at election deepfakes, algorithmic discrimination, consumer scams, and the use of AI in sensitive domains like health ca

... (truncated, 19 KB total)
Resource ID: 753fa09705230d91 | Stable ID: sid_A1iRUN0aT8