
China’s Total Information Awareness: Second-Order Challenges

Ashley Deeks
Tuesday, January 16, 2018, 9:00 AM



Every day seems to bring a new article about China’s pervasive use of facial recognition technology. Both the New York Times and the Washington Post have reported how widely China is using this technology, collecting and storing video from cameras on every street corner and road, at apartment building entrances, and in businesses, malls, transportation hubs, and public toilets. The Chinese government seeks to consolidate this information with people’s criminal and medical records, travel plans, online purchases, and comments on social media, and to link it all to every citizen’s identification card and face, forming a single, all-knowing database.

Similarly, the Wall Street Journal produced a chilling long-form article tracking a journalist’s trip through the Xinjiang region. The piece details not just facial recognition software but also more intrusive measures, including DNA collection, iris scanning, voice-pattern analysis, phone scanners, ID card swipes, and security checkpoints, all aimed at suppressing unrest among the predominantly Muslim Uighur population. The piece frames life in Xinjiang as a forecast of what’s to come in China more broadly.

These developments feel relatively distant, both geographically and as a matter of current U.S. domestic practice. Our government does not collect video feeds from cameras in public toilets and private apartment buildings. Nor does it possess a database containing every citizen’s photograph. Nevertheless, federal and local government agencies in the United States are increasing their use of facial recognition software at the border and in law enforcement contexts. A range of second-order questions deserves attention as facial recognition software continues to improve and as its use expands, both within and beyond China’s borders.

One challenge relates to U.S. intelligence collection. If the Chinese government can recognize every person on the street and easily track a person’s comings and goings, this will make it even harder for foreign intelligence agencies to operate inside the country. Not only will U.S. and other Western intelligence agents be even easier to follow (electronically), but the Chinese government will also be able to identify Chinese nationals who might be working with Western intelligence services, perhaps by using machine learning and pattern detection to extract patterns of life. China’s facial recognition efforts thus strengthen its counterintelligence capabilities.

A second challenge is posed by the fact that this technology surely will spread to other (probably authoritarian) countries. China seems committed to becoming a (maybe the) leader in artificial intelligence, and is promoting startups that focus on this area. No doubt China will seek to export AI technology to other states that want a high level of government and social control over their populations. Sooner or later, the United States therefore will need to decide what it thinks about the use of pervasive video surveillance and, more specifically, whether this kind of surveillance violates basic human rights norms.

I have not been able to find U.S. government statements, law review articles, or statements from nongovernmental organizations articulating views about whether such pervasive surveillance implicates Article 17 of the International Covenant on Civil and Political Rights (ICCPR), which states in part that “[n]o one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence.” (If such documents exist, I would welcome reader feedback.) Without doubt, some level of public surveillance poses no difficulty under the covenant; after all, one generally lacks a reasonable expectation of privacy for acts performed in public. States such as the U.K. make wide use of closed-circuit cameras in public spaces, and London’s Metropolitan Police reportedly trialed facial recognition software at a public gathering in August 2017. The FBI operates a face recognition service for criminal investigations that searches a database of 30 million photographs to find matches for, say, a still photo taken from a surveillance camera. U.S. Customs and Border Protection uses a more limited technology that compares a photo of a person seeking admission to the United States with the photograph in his or her passport. The Electronic Privacy Information Center lists a few other contexts in which U.S. actors (including companies such as Facebook) are using facial recognition tools.

And yet constant surveillance in the public sphere implicates some of the same concerns that several members of the Supreme Court raised in United States v. Jones. Justice Samuel Alito, joined by three other justices, worried that long-term government monitoring of a person’s movements in public places might not pass Fourth Amendment muster. In her concurrence, Justice Sonia Sotomayor suggested that the court’s jurisprudence might not be adequate in “cases of electronic or other novel modes of surveillance that do not depend upon a physical invasion on property.” She noted, “GPS monitoring generates a precise, comprehensive record of a person's public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations,” a record that comprehensive closed-circuit tracking of people’s movements would generate as well. Jones is a domestic case, to be sure, but the underlying question of whether total government surveillance of one’s public actions ultimately violates privacy rights resonates with the international law concept of privacy as well.

China is not a party to the ICCPR, so it need not be troubled as a matter of international law by charges that this level of surveillance runs afoul of international privacy rules. However, many other states are parties to the agreement. How will the United States react, if at all, when those states begin to employ China’s techniques? Will the United States argue that pervasive public surveillance violates the covenant? Where and how will it draw the line between permissible and impermissible levels of surveillance? Does the agreement’s privacy provision even have enough teeth to offer a shield against Chinese-level facial surveillance?

A third, related challenge will arise in the context of the U.S. military. The military has already employed facial recognition software in the conflicts in Iraq and Afghanistan, and it seems safe to assume that the technologies used in wartime will continue to improve as private companies fine-tune and then sell their algorithms. What if the U.S. military creates large facial recognition databases during conflicts such as those in Iraq or Afghanistan, to protect U.S. and allied troops and to detect combatants or terrorists? When the time comes for the military to leave, should it turn that database and software over to the host state? What protections exist to guard against abuse, especially where the host state has indicated that it will use these tools to harass or harm its population?

There is undoubtedly much more to say about widespread facial surveillance and related surveillance algorithms, which raise questions about accuracy and the difficulty of correcting inaccurate identifications; about the ways in which such surveillance chills free speech and association; and about who will have access to the information. However, even this short list of second-order challenges suggests that we should start thinking now about how to respond to deep surveillance states. The U.S. government, the military, intelligence agencies, human rights groups, and the public all have a role here. Credit these news articles for allowing us to see what a modern total information awareness society looks like, and to imagine what one would feel like.


Ashley Deeks is the Class of 1948 Professor of Scholarly Research in Law at the University of Virginia Law School and a Faculty Senior Fellow at the Miller Center. She serves on the State Department’s Advisory Committee on International Law. In 2021-22 she worked as the Deputy Legal Advisor at the National Security Council. She graduated from the University of Chicago Law School and clerked on the Third Circuit.
