
The Authoritarian Risks of AI Surveillance

Matthew Tokson
Thursday, May 1, 2025, 1:00 PM

AI-powered surveillance facilitates authoritarianism across the globe. Here’s how courts and lawmakers could stop it from happening here.

Interactive data visualization of facial recognition technology in New York. (Ars Electronica/Amnesty International, www.flickr.com/photos/arselectronica/52974170333/in/photostream/, CC BY-NC-ND 2.0, creativecommons.org/licenses/by-nc-nd/2.0/deed.en)

Concerns about authoritarianism loom large in American politics. Against this backdrop, another phenomenon may be pushing democracies toward authoritarianism: artificial intelligence (AI) law enforcement. AI surveillance and policing systems are currently used by authoritarian nations around the world. Evidence suggests that these systems are effective in suppressing political unrest and entrenching existing regimes. Concerningly, AI surveillance and policing systems have also become increasingly prevalent in cities across the United States.

As I explain in a new article, AI law enforcement tends to undermine democratic government, promote authoritarian drift, and entrench existing authoritarian regimes. AI-based systems can reduce structural checks on executive authority and concentrate power among fewer and fewer people. In the wrong hands, they can help authorities detect subversive behavior and discourage or punish dissent, while enabling corruption, selective enforcement, and other abuses. These effects are already visible in today’s relatively primitive AI systems, and they’ll become increasingly dangerous to democracy as AI technology improves.

AI Law Enforcement from China to the U.S. 

To get a sense of the capabilities of AI law enforcement, look to present-day China. Analysts estimate that over half of the world’s surveillance cameras are in China, and many of those cameras use AI facial recognition. AI algorithms identify people and track their movements, allowing the government to monitor their activities and their meetings with others. Iris scans act as a visual fingerprint, identifying people even when they wear masks. Spy drones fly above China’s cities, recording activities in ever-sharper detail. AI analytics can spot unlawful or anomalous actions, down to littering. In recent years, Chinese authorities have installed facial recognition cameras inside residential buildings, hotels, and even karaoke bars. The goal of these systems, according to a Fujian province police department, is “controlling and managing people.”

Increasingly, AI is used not just for surveillance but also for policing. Semi-autonomous AI police robots operate without human input a majority of the time. In China, these police robots patrol public places and use facial recognition to scan for people wanted by law enforcement. When such a person is detected, the robot begins following them until the police arrive. Other robots knock suspects over or fire a “net gun” to immobilize them.

These AI systems also facilitate the overt oppression of minority groups. Xinjiang province is home to the Uyghurs, a Turkic ethnic group persecuted for their religion and their ethnicity. Xinjiang is also ground zero for China’s AI surveillance apparatus. It is full of digital checkpoints, where AI facial recognition cameras track Uyghurs’ movements, matching their faces to photos previously taken by police at mandatory “health checks.” When a Uyghur reaches the edge of their neighborhood, the AI system takes note. Some Uyghurs are required to install surveillance apps on their phones that alert police if Arabic script or Quran verses are detected. Uyghurs outside of Xinjiang face special scrutiny, from phone trackers seeking to identify them by their phone apps (like a Uyghur-to-Chinese dictionary), to facial recognition cameras identifying them by their facial features and triggering a “Uyghur alarm.”

The widespread use of AI-based surveillance and policing technologies, including police robots, can be observed in other authoritarian nations, particularly in the Middle East. But similar technologies have also become increasingly popular in the United States, giving local police departments surveillance power that would, in the words of a senior ACLU attorney, “really shock most people.” In cities like Chicago and New York, police have created integrated networks of thousands or tens of thousands of AI surveillance cameras, which they use to monitor public streets and investigate crimes. These systems can identify and sort objects or categories of activity. Police can search digital databases of AI-processed video footage for items, cars, or clothing of interest. AI software can also identify anomalous behavior or suspicious activities captured by a surveillance camera. For example, a car in a store’s parking lot after closing time, or activity in an alley at night, might be flagged as suspicious by an AI monitoring system. These AI analytics are in their infancy, but their use will spread as the technology advances. They can allow a police department to monitor a large space in remarkable detail without employing large numbers of officers or paying officers to watch thousands of hours of video.
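To make the flagging mechanism concrete, the sketch below shows the kind of rule-based logic an AI monitoring system might apply to events emitted by an upstream object-detection model. The event schema, locations, and hours here are hypothetical illustrations, not a description of any deployed product.

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical schema for events emitted by an upstream video-analytics
# model (object detection plus scene metadata); all names are illustrative.
@dataclass
class DetectedEvent:
    object_type: str   # e.g., "car", "person"
    location: str      # e.g., "store_parking_lot", "alley"
    timestamp: time    # local time of the detection

# Illustrative "after hours" window and watched locations, set by an operator.
AFTER_HOURS_START = time(22, 0)
AFTER_HOURS_END = time(6, 0)
WATCHED_LOCATIONS = {"store_parking_lot", "alley"}

def is_after_hours(t: time) -> bool:
    # The window wraps past midnight, so this is an OR rather than an AND.
    return t >= AFTER_HOURS_START or t <= AFTER_HOURS_END

def flag_suspicious(event: DetectedEvent) -> bool:
    """Flag activity in a watched location outside normal hours for human review."""
    return event.location in WATCHED_LOCATIONS and is_after_hours(event.timestamp)

# A car in a store's parking lot at 2:30 a.m. would be flagged.
print(flag_suspicious(DetectedEvent("car", "store_parking_lot", time(2, 30))))  # True
```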

Facial recognition is already widely used by police departments in America. It is employed in New York, Los Angeles, Chicago, Detroit, and San Diego, and by hundreds of other state and local law enforcement agencies. Facial recognition matches have already been used as the basis for arrests, including wrongful arrests of suspects misidentified by AI algorithms. Due to disparities in AI training data, facial recognition technology in the United States has proved systematically less accurate for people who are Black, East Asian, American Indian, or female.
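Mechanically, facial recognition systems typically reduce each face to a numerical embedding and report the gallery identity whose embedding is most similar, provided its score clears a threshold. The sketch below, using made-up vectors and a hypothetical two-person gallery, illustrates one route to the misidentifications described above: the system reports the least-dissimilar face, which is not necessarily the right person.

```python
import numpy as np

# Hypothetical gallery of face embeddings; real systems derive these vectors
# from a learned encoder, and these numbers are purely illustrative.
gallery = {
    "person_A": np.array([0.12, 0.85, 0.51]),
    "person_B": np.array([0.90, 0.10, 0.42]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, threshold: float):
    """Return the closest gallery identity and its score, or None if no
    identity clears the threshold."""
    name, score = max(
        ((n, cosine_similarity(probe, v)) for n, v in gallery.items()),
        key=lambda pair: pair[1],
    )
    return (name, score) if score >= threshold else (None, score)

# A probe face that merely resembles person_A can still be reported as a
# "match" -- the system returns the least-dissimilar gallery face, which
# is how an innocent lookalike can become an arrestee.
probe = np.array([0.20, 0.80, 0.55])
print(best_match(probe, threshold=0.7))
```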

In recent years, police departments in the United States have also begun to adopt semi-autonomous robots. New York City has deployed several such robots, including two robot dogs created by Boston Dynamics that feature surveillance and infrared cameras and a robot arm that can open doors. These robot dogs have also been used by the Massachusetts State Police and in Los Angeles, Honolulu, Miami, Houston, St. Petersburg, and Michigan. The Department of Homeland Security has announced plans to use similar four-legged robots to patrol the U.S. border and detect illegal entry. New York also recently deployed a 400-pound, 5-foot-tall cone-shaped robot with a 360-degree camera, LiDAR, sonar, GPS, and 16 microphones. About a dozen other police departments are now using the same robot. Although semi-autonomous police robots are in an early stage of development, their use is likely to increase substantially as they become cheaper and more effective over time. 

AI Law Enforcement and Authoritarian Drift

AI law enforcement tends to facilitate authoritarianism, both at the national level and in smaller jurisdictions. It does this in several ways. First, AI law enforcement concentrates power among fewer and fewer people. The number of people needed to operate a largely automated police force is far smaller than the number needed to operate a traditional, pre-AI police force. This decreases the number of officers a commander must enlist to monopolize the use of force in a given jurisdiction, making autocracy easier to establish and maintain. At the extreme end, a single person with control over an automated enforcement system could achieve dominance over an area without any human allies or supporters.

This phenomenon makes authoritarian control substantially easier to achieve and maintain. Traditionally, dictators tend to rise through political or military ranks, amassing power by leading a political faction that triumphs in a domestic struggle, or by winning elections in a nation seeking relief from a crisis. Even local officials with tyrannical power over a smaller jurisdiction maintain that power through political or personal means, persuading others to follow their commands. In a world of AI enforcement technology, the difficulties of maintaining power and loyalty are reduced, and dominant force can be marshaled by any small group of persons with command privileges over the AI systems that police the jurisdiction.

Second, replacing human police or military officers with automated agents increases loyalty to those in command and reduces the likelihood of discretionary restraint. In 2024, when South Korean President Yoon Suk Yeol declared martial law and sent military troops to take over the South Korean National Assembly building, they were thwarted by staffers and lawmakers who barricaded doorways and pushed the soldiers back. The soldiers and military police did not fire on the unarmed resisters. One opposition party spokeswoman grabbed a soldier’s rifle; he pulled it back and pointed it at her. “Aren’t you ashamed of yourself?” she asked, and he backed down.

In a democracy, soldiers are likely to hesitate before firing on peaceful protesters or opposition lawmakers. But automated systems will not hesitate to follow orders, and shame will not prevent them from using deadly force when commanded. They will do as ordered. A violent force that is totally loyal to its administrator, and willing to do anything the administrator requests, empowers authoritarians and facilitates human rights abuses. Likewise, human employees of an autocratic regime can act as whistleblowers and documenters of abuses. But fully automating law enforcement or surveillance can eliminate the possibility of whistleblowing and reduce transparency.

Third, AI law enforcement decreases the cost and increases the pervasiveness of government surveillance, overcoming traditional barriers to panoptic monitoring. Automated enforcement tools offer autocracies the deterrent power of a massive police force without the need to pay human police officers. Low-cost cameras, drones, and robots can pervasively monitor a large area, allowing for more effective nationwide surveillance and the consolidation of power in a central authority. These tools facilitate social scoring programs, in which regimes rate individuals according to their trustworthiness, loyalty, and compliance with regime commands. They also make it substantially easier to detect enemies of the state and people who openly oppose the regime.

Pervasive AI surveillance makes large-scale political organization more difficult, making coups or mass protests harder to organize and less likely to occur. Early evidence suggests that fewer people protest when public safety agencies acquire AI surveillance technology. AI surveillance systems can also burnish an autocracy’s international image, as autocracies may “end up looking less violent because they have better technology for chilling unrest before it happens.” By suppressing conflict before it can arise, these technologies may reduce the need for bloodshed, but they do so by entrenching existing autocrats.

In addition, AI enforcement tools are often more effective than human police officers. AI surveillance systems can monitor an area around the clock, without lapses in attention. Police robots carry sensors that can see beyond human sight and detect sounds beyond human hearing, drones can cover more territory than human officers, and robotic sensors can identify trace amounts of drugs or explosives. Robots are often more resistant to physical force, including gunshots, than human officers are, and they have no fear and little sense of self-preservation, which makes them more effective combatants.

The pro-authoritarian effects of AI enforcement can give rise to several forms of abuse and discrimination. For example, AI surveillance can facilitate selective enforcement. Pervasive surveillance systems may be able to generate probable cause against almost all residents of an area, at least for low-level crimes like speeding or jaywalking. Under current law, this would give police the ability to arrest virtually anyone and to search their persons, their cars, and (to a more limited extent) their homes incident to arrest. The government might use this enormous discretion to target individuals for discriminatory or political purposes.

Automated law enforcement systems controlled by one or a few individuals can also facilitate corruption and lawlessness. Local officials with a monopoly of force in an area and few other humans to oversee their activities may be especially tempted to engage in self-dealing and personal enrichment. Powerful local sheriffs have a long history of corruption, and the reduction in transparency and increase in enforcement power associated with AI enforcement are likely to exacerbate this problem. Automated law enforcement agents can also give executive officials the ability to dictate law and policy in a given area, with little consideration for actual governing law. The legal right to protest a government official matters little if police robots attack or arrest anyone who protests.

What Can Courts and Lawmakers Do About It?

The Fourth Amendment protects individuals’ privacy against unlawful government surveillance. But it can also be used to check authoritarianism in ways that go beyond privacy protection. Anti-authoritarian theories of the Fourth Amendment can lay the theoretical groundwork for a novel approach to technologies like AI. Following the guidance of these theories, courts should analyze AI-based enforcement systems on the assumption that such systems will be both pervasive and aggressively employed. These systems’ risks should be addressed proactively, before an unregulated, automated system capable of eroding democracy establishes a monopoly of force in a given jurisdiction.

Turning to the specific context of AI-enabled surveillance camera systems, courts should strongly favor judicial oversight on Fourth Amendment grounds, because of the potential for such systems to rapidly expand and facilitate authoritarian drift. At the same time, the standard of reasonableness applied to such systems need not be as stringent as the standard that often applies in the privacy context, namely, an individualized warrant supported by probable cause. An anti-authoritarian approach might permit retrospective video observation, occurring after a reported crime and limited to locations or persons obviously implicated in the crime. Courts or legislatures might further check authoritarian abuses by limiting these investigations to a subset of serious crimes and by prohibiting the AI systems themselves from generating crime reports via automated analytic detection.
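As a rough illustration, the safeguards just described can be thought of as a gate in front of the video database. The sketch below assumes a hypothetical list of qualifying offenses and a flag recording whether the triggering report came from a human; all names are illustrative rather than a description of any real system.

```python
from dataclasses import dataclass

# Hypothetical enumeration of offenses serious enough to justify
# retrospective video observation; the list and field names are illustrative.
SERIOUS_CRIMES = {"homicide", "kidnapping", "armed_robbery"}

@dataclass
class CrimeReport:
    offense: str
    filed_by_human: bool  # False if generated by the system's own analytics

def may_query_footage(report: CrimeReport) -> bool:
    """Permit a retrospective video query only for an enumerated serious
    crime reported by a human, never for a report the AI generated itself."""
    return report.filed_by_human and report.offense in SERIOUS_CRIMES

print(may_query_footage(CrimeReport("armed_robbery", filed_by_human=True)))  # True
print(may_query_footage(CrimeReport("jaywalking", filed_by_human=True)))     # False
print(may_query_footage(CrimeReport("homicide", filed_by_human=False)))      # False
```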

The Fourth Amendment law of policing should also be adapted to effectively regulate police robots. Many current doctrines, premised on the safety and fallibility of human police officers, are poor fits for automated law enforcement agents. For example, human police officers are permitted to use deadly force in many situations. Police robots, which are (at least currently) considered property rather than persons, should never be permitted to use deadly force to defend themselves. Even where police robots might defend human civilians from death or serious bodily harm, the default approach should be for human operators to manually direct any use of deadly force.
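A minimal sketch of that human-in-the-loop default, under the assumption of a simple authorization interface (all names hypothetical), might look like this:

```python
from enum import Enum, auto

class Force(Enum):
    NONE = auto()
    NON_DEADLY = auto()
    DEADLY = auto()

def authorize(requested: Force, human_operator_approved: bool) -> Force:
    """Deadly force is never self-authorized by the robot: absent explicit
    human approval, the request is refused. Defense of the robot itself
    (property, not a person) is never a justification."""
    if requested is Force.DEADLY and not human_operator_approved:
        return Force.NONE
    return requested

print(authorize(Force.DEADLY, human_operator_approved=False))  # Force.NONE
print(authorize(Force.DEADLY, human_operator_approved=True))   # Force.DEADLY
```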

Police robots’ uses of non-deadly force should also be sharply limited, because many of the factors that justify the use of such force by human officers—pain, fear, confusion, and stress—do not apply to police robots. Further, the use of highly effective, non-deadly automated force can facilitate authoritarianism by intimidating disfavored groups and speakers, suppressing protest, and eliminating the possibility of evading wrongful policing. Other specific Fourth Amendment rules premised on officer safety should also be revised, from pat-downs for weapons during an investigative stop, to “protective sweeps” through houses incident to an arrest, to searches of a vehicle for weapons without probable cause. By restricting police robots’ ability to search protected areas, courts can limit the potential for overenforcement and abuse.

More broadly, courts assessing police robots’ use of force under a totality of the circumstances test should consider AI-specific factors related to the possibility of authoritarian drift. Whether a human officer is controlling a robot in use-of-force situations, or the extent of civilian involvement in shaping the protocols for police robots, may be especially important in determining the constitutional reasonableness of a given use of force.

Relatedly, greater civilian oversight of policing and surveillance can give local communities a means to prevent AI-driven authoritarian drift. Local civilian oversight boards might be empowered to determine which crimes AI systems can police, to review citizen complaints regarding AI enforcement, or even to approve or disapprove general programmatic uses of AI systems. This could help shift power away from law enforcement and reallocate it to communities, checking the authoritarian tendencies of automated enforcement systems. By adapting the principles of the anti-authoritarian Fourth Amendment to the new frontier of AI law enforcement, legal actors can restrain the authoritarian effects of AI enforcement technologies.


Matthew Tokson is a Professor of Law at the University of Utah S.J. Quinney College of Law, writing on the Fourth Amendment, cyberlaw, and artificial intelligence, among other topics. He is also an affiliate scholar with Northeastern University's Center for Law, Innovation and Creativity. He previously served as a law clerk to the Honorable Ruth Bader Ginsburg and to the Honorable David H. Souter of the United States Supreme Court, and as a senior litigation associate in the criminal investigations group of WilmerHale in Washington, D.C.