Longterm Wiki

The Authoritarian Risks of AI Surveillance

web

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Lawfare

Published on Lawfare, a national security and law-focused outlet, this piece is relevant to AI governance discussions about dual-use risks and the geopolitical dimensions of AI deployment, particularly for researchers studying how AI could enable large-scale societal control.

Metadata

Importance: 55/100
Tags: opinion piece, analysis

Summary

This Lawfare article examines how AI-powered surveillance technologies can be exploited by authoritarian regimes to monitor, control, and suppress populations. It explores the political and governance risks posed by the proliferation of AI surveillance tools, both domestically and through export to repressive governments.

Key Points

  • AI surveillance tools (facial recognition, predictive policing, social scoring) dramatically amplify state capacity for population control and repression.
  • Authoritarian governments can use AI surveillance to target dissidents, minorities, and political opponents with unprecedented precision and scale.
  • Export of AI surveillance technology by democratic nations to authoritarian regimes raises serious human rights and geopolitical concerns.
  • Weak international governance frameworks currently allow widespread proliferation of surveillance AI with limited accountability.
  • Domestic misuse of surveillance AI in democracies risks gradual erosion of civil liberties and normalization of authoritarian practices.

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 20 KB
The Authoritarian Risks of AI Surveillance | Lawfare
By Matthew Tokson
 
 Concerns about authoritarianism loom large in American politics. Against this backdrop, another phenomenon may be pushing democracies toward authoritarianism: artificial intelligence (AI) law enforcement. AI surveillance and policing systems are currently used by authoritarian nations around the world. Evidence suggests that these systems are effective in suppressing political unrest and entrenching existing regimes. Concerningly, AI surveillance and policing systems have also become increasingly prevalent in cities across the United States.

As I explain in a new article, AI law enforcement tends to undermine democratic government, promote authoritarian drift, and entrench existing authoritarian regimes. AI-based systems can reduce structural checks on executive authority and concentrate power among fewer and fewer people. In the wrong hands, they can help authorities detect subversive behavior and discourage or punish dissent, while enabling corruption, selective enforcement, and other abuses. These effects are already visible in today's relatively primitive AI systems, and they will become increasingly dangerous to democracy as AI technology improves.

 AI Law Enforcement from China to the U.S. 

To get a sense of the capabilities of AI law enforcement, look to present-day China. Analysts estimate that over half of the world's surveillance cameras are in China, and many of those cameras use AI facial recognition. AI algorithms identify people and track their movements, allowing the government to monitor their activities and their meetings with others. Iris scans act as a visual fingerprint, identifying people even when they wear masks. Spy drones fly above China's cities, recording activities in ever-sharper detail. AI analytics can spot unlawful or anomalous actions as minor as littering. In recent years, Chinese authorities have installed facial recognition cameras inside residential buildings, hotels, and even karaoke bars. The goal of installing these systems is, according to a Fujian province police department, "controlling and managing people."

Increasingly, AI is used not just for surveillance but also for policing. Semi-autonomous AI police robots operate without human input most of the time. In China, these robots patrol public places and use facial recognition to scan for people wanted by law enforcement. When such a person is detected, the robot follows them until the police arrive. Other robots knock suspects over or fire a "net gun" to immobilize them.

 These AI systems also facilitate the overt oppression of minority groups. Xinjiang province is the home of the Uyghurs, 

... (truncated, 20 KB total)
Resource ID: ae842d471373d0fb | Stable ID: sid_3QPGuRz6Dq