'One Nation Under CCTV': The U.K. Tackles Facial Recognition Technology

Shannon Togawa Mercer, Ashley Deeks
Monday, May 7, 2018, 8:00 AM

"One Nation Under CCTV," 2007 mural by Banksy (Flickr/ogglog)

“Artificial Intelligence Could Soon Enhance Real-Time Police Surveillance” reads a recent Wall Street Journal headline. Technology companies are working with U.S. police departments to develop facial recognition technology for body cameras—but the United States isn’t alone in its exploration and development of facial recognition technology.

One of us has written on the Chinese government’s use of facial recognition software (FRS), and together we outlined FRS’s costs and benefits in the public and private sectors. This piece will address a question we posed in the latter post: “how would a responsible, privacy-respecting state use facial recognition software?”

Enter the United Kingdom.

In 1998, London became one of the first municipalities in the world to use CCTV with facial recognition in public areas. Now, the U.K. government is in the middle of a fraught, but still quite high-level, debate over what regulation of facial recognition technology is necessary. In this piece, we focus on the regulatory environment surrounding U.K. law enforcement’s use of FRS.

In short, the U.K. government has not yet passed any new regulations specific to law enforcement’s use of FRS; instead, law enforcement relies on existing codes of conduct while Parliament and the London municipal government attempt to formulate new policies. At the same time, U.K. law enforcement is moving forward to test and fund further uses of FRS.

Legal Framework

The Home Office—a ministerial department whose responsibilities range from securing the U.K. border and controlling immigration to handling terrorist threats and preventing crime—holds general responsibility for the regulation of FRS. Two independent commissioners, the Biometrics Commissioner and the Surveillance Camera Commissioner, have overlapping mandates in the as-yet poorly defined area of facial recognition technology regulation. Both offices, established by the Protection of Freedoms Act of 2012, have involved themselves in the dialogue about FRS regulation, but the exact division of responsibility between them is still ambiguous. Additionally, the Information Commissioner’s Office—an independent government body established to monitor and uphold information rights—has weighed in with its own recommendations for the use of CCTV technology, which are tied to those promulgated by the Surveillance Camera Commissioner under a memorandum of understanding between the two offices to “ensure effective cooperation.” The Surveillance Camera Commissioner is also formulating non-regulatory compliance mechanisms. Separately, the independent Forensic Science Regulator is responsible for setting standards for forensic science services, some of which may involve custody images (photographs taken of individuals after arrest) and FRS.

The Home Office has outlined which laws and codes of conduct apply to FRS as used in conjunction with CCTV. In a Sept. 12, 2017, response to the question “What assessment [has then-Home Secretary Amber Rudd] made of the effectiveness of current legislation regulating the use of CCTV cameras with facial recognition and biometric tracking capabilities?” Member of Parliament and Minister of State for Policing and Fire Services Nick Hurd responded:

There is no legislation regulating the use of CCTV cameras with facial recognition and biometric tracking capabilities. However, the Surveillance Camera Code of Practice requires any police use of facial recognition or other biometric characteristic recognition systems to be clearly justified and proportionate in meeting the stated purpose. The retention of facial images by the police is governed by data protection legislation and by Authorised Professional Practice produced by the College of Policing.

The Home Office reaffirmed this response in part in a Dec. 11, 2017 written question and answer exchange.

There appear to be two stages at which the U.K. government believes that facial recognition technology can, and perhaps should, be regulated: first, when parties are deciding whether to use FRS, and second, when any information about individuals will be retained in databases as input for the technology. There is currently very little law governing either stage. As Lord Brian Paddick mentioned in a recent House of Lords debate, these decisions are often “left to the police alone to decide for themselves.”

There is, however, some law creating a framework around FRS use. The Police and Criminal Evidence Act of 1984 allows the police to take facial photographs of anyone detained. In 2007, the U.K. government put forward a CCTV Strategy that acknowledged the connection between developing CCTV and future technologies. The Strategy noted, “Improving the quality of CCTV images will support the development of … future technologies such as facial recognition.” More directly, the Strategy laid out principles for the retention of CCTV images: “It has long been accepted that CCTV recordings should routinely be kept between 28 and 31 days before recording over.” (As we describe below, the standards for retention have shifted over time, but at the time of the 2007 Strategy, this rule would have applied to CCTV images used in FRS.) The Protection of Freedoms Act of 2012—which regulates the gathering, processing, and retention of biometric data—applies only indirectly to the use of FRS, by mandating a code of practice for surveillance camera systems. The Data Protection Act (DPA) is the primary U.K. legislation controlling how personal data is used by the public and private sectors, and it contains extensive regulation of the processing and control of data. Particularly relevant to the retention of images for comparison against faces viewed through FRS, the DPA classifies “custody images” as personal data. No other enacted legislation applies directly.

The Surveillance Camera Code of Practice includes two directly relevant provisions:

3.2.3 Any use of facial recognition or other biometric characteristic recognition systems needs to be clearly justified and proportionate in meeting the stated purpose, and be suitably validated. It should always involve human intervention before decisions are taken that affect an individual adversely.

4.12.1 Any use of technologies such as . . . facial recognition systems which may rely on the accuracy of information generated elsewhere such as databases provided by others should not be introduced without regular assessment to ensure the underlying data is fit for purpose.

(As was mentioned above, the Information Commissioner’s Office has also put forth its own code of practice, which works in conjunction with the Surveillance Camera Code of Practice. It briefly mentions automated facial recognition technologies as subject to the data protection regulations in the codes of practice described herein.)

The Authorised Professional Practice (APP) on the Management of Police Information (MoPI) governs the use and retention of images by police in more detail. It is the Home Office’s stance that MoPI governs the review and retention of custody images, implying that the 2007 CCTV Strategy guidelines are no longer applicable. MoPI requires chief officers to “have regard” to the codes of practice and contains rules elucidating retention requirements.

MoPI Section 7.4 states:

This subsection sets out the framework for decision making on the retention of police information. The key points relating to the National Retention Assessment Criteria are:

The infringement of an individual's privacy created by the retention of their personal information must satisfy the proportionality test;

Forces should be confident that any records they dispose of are no longer needed for policing purposes;

There should be a consistent approach to the retention of police information.

All records which are accurate, adequate, up to date and necessary for policing purposes will be held for a minimum of six years from the date of creation. This six-year minimum helps to ensure that forces have sufficient information to identify offending patterns over time, and helps guard against individuals' efforts to avoid detection over lengthy periods.

Beyond the six-year period, there is a requirement to review whether it is still necessary to keep the record for a policing purpose. The review process specifies that forces may retain records only for as long as they are necessary. The template in Appendix 4 provides guidance on establishing whether or not information is still needed for a policing purpose.

The national retention criteria asks a series of questions, focused on known risk factors, in an effort to draw reasonable and informed conclusions about the risk of harm presented by individuals or offenders. These questions are: Is there evidence of a capacity to inflict serious harm? … Are there any concerns in relation to children or vulnerable adults? … Did the behaviour involve a breach of trust? … Is there evidence of established links or associations which might increase the risk of harm? … Are there concerns in relation to substance misuse? … Are there concerns that an individual's mental state might exacerbate risk? …

Where the answer to any of the questions above is 'Yes' then information relating to the individual being assessed should be retained and reviewed again at intervals designated by the review schedule given in Appendix 4 ….

There may be other circumstances not covered by the criteria listed above, where forces consider that they have a genuine need to retain records. Wherever a record is assessed as being necessary and proportionate to the purpose it serves, it can be retained ….

In February 2017, the Home Office produced the “Review of the Use and Retention of Custody Images” in response to the 2012 High Court ruling in RMC and FJ v. Commissioner of Police of the Metropolis and Secretary of State for the Home Department (discussed further below). In it, the Home Office made recommendations for updating the MoPI guidance on the retention of custody images, including giving unconvicted individuals the right to request the deletion of their images, requiring police to automatically review images to ensure they retain only those they need, and segmenting off records that “no longer have a policing purpose” into archives outside of the operational environment. (MoPI 2.2.2 defines “policing purposes” as “(a) protecting life and property, (b) preserving order, (c) preventing the commission of offences, (d) bringing offenders to justice, and (e) any duty or responsibility of the police arising from common or statute law.”)

Local and National Government Discussions

But the current oversight system is far from perfect. Surveillance Camera Commissioner Tony Porter wrote in his January 2018 annual report, “Whilst the system may appear to be more proportionate than the previous process, I feel that given the burgeoning use of AFR [automatic facial recognition], the public interest will benefit from a greater degree of independent scrutiny and transparency….”

Parties in both the London municipal government and Parliament have also found the existing guidelines insufficient. On Feb. 7, 2018, the Greater London Authority (GLA) Oversight Committee asked Mayor of London Sadiq Khan to disallow the use of FRS by the Metropolitan Police Service (MPS)—the police force responsible for the greater London area—pending the development of a legal framework for its use. Khan responded on March 5, providing the GLA with background information and assurances:

I have been assured that the MPS has previously engaged with officers of both the Information Commissioner and the Surveillance and Biometrics Commissioner, briefing them on the purpose and parameters of the trial use of facial recognition technology. Civil liberties groups Big Brother Watch and Liberty have also been engaged and were invited to observe the operational use of the trial system.

Khan described the current oversight system, including “impact assessments, evaluation of past deployments and preparation for future trials.” He also announced that the London Policing Ethics Panel, an independent body, would review the MPS’s use of facial recognition technology. Finally, Khan expressed confidence in the MPS’s preparations for the May 2018 implementation of new EU data regulations: the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED).

While conversations about the legal framework governing FRS are taking place (though they have not yet produced a concrete policy), several local police forces are also conducting trial runs of FRS. In 2015, the use of FRS at the Download music festival in Leicestershire became a topic of controversy, both because the police didn’t inform the public of its use and because many of the concert attendees were under 18 years old. More recently, the police’s use of FRS at the 2017 Notting Hill Carnival—which resulted in 35 false matches and a mistaken arrest—appears to have motivated the national government to investigate FRS use more aggressively. The South Wales Police also used automated facial recognition technology at the 2017 Champions League final. As mentioned in our last Lawfare post, some researchers and advocacy groups have put forward evidence of racial and gender inaccuracy in FRS; the potential for bias continues to be of great concern to U.K. civil rights organizations.

The U.K. government appears to be aware that existing codes of conduct are not keeping up with the technology’s developing use. The Biometrics Commissioner himself has said that he was not consulted during the Notting Hill Carnival FRS trial, largely because there is no legal requirement that the police consult him or his office before using FRS. The U.K. government promised a “biometrics strategy” as far back as 2013, but its release has been significantly delayed; the current anticipated publication date is June 2018 (according to news reports and the Home Office Minister of State Baroness Williams of Trafford).

Parliament, too, has been struggling with how to regulate FRS, but it has only just begun what will undoubtedly be a complicated legislative process. Currently, discussions are happening at a high level of abstraction—and they will likely remain there until the Home Office delivers the biometrics strategy.

On Feb. 6, 2018, the House of Commons Science and Technology Committee held an evidence session to discuss the strategy with the Minister for Countering Extremism and the Forensic Science Regulator. A brief exchange during this session between the chairperson of the Committee and the Home Office Minister captures one view about the need for legislation to regulate FRS:

Chair: Internationally, the Chinese are using this technology quite a lot, and there seems to be quite a lot of development here. We have to be careful about the direction in which we are going, do we not? There has been a lot of criticism of the Chinese for their use of facial recognition, and we have to be careful that we maintain exactly the principles that you articulate.

Baroness Williams of Trafford: That is where I thank God that we live in a democratic country.

A month later, the House of Lords debated facial recognition technology as well. Baroness Jones of Moulsecoomb brought the debate to the floor because she

believe[s] that the use of automated facial recognition technology represents a turning point in our civil liberties and human rights in the U.K. It has barely been acknowledged anywhere that this could be a problem…The current system—or, more correctly, the lack of a current system—means that there is no law, no oversight and no policy regulating police use of automated facial recognition. The limited trials that we know about have shown that it can be completely ineffective and potentially discriminatory.

She then called on the government to order police forces to cease the use of FRS and remove the thousands of images of unconvicted people from the police database.

Lord Scriven highlighted that, in relation to FRS, “There is no legislation, codified regulation or independent oversight and therefore public trust will be diminished.” He cited a Freedom of Information request posed to various police offices:

It is telling that out of 32 forces that responded, 27 could not provide any national or local guidance for the use of biometric facial recognition technology—27 out of 32. In addition, 32 out of 32 had not done a privacy impact assessment. Five stated that the Home Office has a PIA and they were using that. Has the Home Office done a PIA on the police use of facial recognition technology? If so, when did it share the assessment with police forces and where is it public? There is no body with oversight powers or independent checks—none whatever.

Comments from Lord Harris of Haringey reflected the other side of the debate:

I just wonder whether we want to have a regulatory system that ties the hands of the police and security forces behind their back under such circumstances when those techniques are available. Of course there should be a regulatory framework, but if there is, it should apply universally. I leave it to the Government to work out how they would enforce such a regulatory framework in other sectors.

In this vein, other members emphasized that FRS provides benefits in counterterrorism and in identifying hostile reconnaissance activity.

Despite the legal and policy uncertainty, the U.K. government appears to be forging ahead, soliciting bidders for a £4.6 million contract to provide FRS to U.K. law enforcement and government authorities. The South Wales Police have already received a £2 million grant for automated facial recognition software.

Judicial Assessments of FRS-Related Issues

The courts have encountered FRS issues indirectly, primarily in connection with Article 8 of the European Convention on Human Rights (ECHR), which protects the right to respect for private life, home, and correspondence. Under Article 8, a government may interfere with these rights only where the interference is in accordance with the law and necessary in a democratic society. In the U.K. Supreme Court case Bank Mellat v. Her Majesty’s Treasury, Lord Reed formulated an applicable test for legality and necessity:

(1) whether the objective of the measure is sufficiently important to justify the limitation of a protected right, (2) whether the measure is rationally connected to the objective, (3) whether a less intrusive measure could have been used without unacceptably compromising the achievement of the objective, and (4) whether, balancing the severity of the measure’s effects on the rights of the persons to whom it applies against the importance of the objective, to the extent that the measure will contribute to its achievement, the former outweighs the latter.

Although the legal parameters of Article 8 are beyond the scope of this post, there are some rulings pertinent to this discussion. In a 2012 ruling concerning the lawfulness of the retention of data, RMC and FJ v. MPS, the U.K. High Court found that the “indefinite retention of the claimant’s [custody photographs] was an unjustified interference with their rights under” Article 8.

Similarly, the European Court of Human Rights (ECtHR) found in Peck v. the United Kingdom that video surveillance of public places in which the visual data is recorded, stored, and disclosed to the public falls within the scope of Article 8. In Reklos v. Greece—a case in which the ECtHR held that taking photographs of a newborn baby without the parents’ consent constituted a violation of Article 8—the court wrote, “[a] person’s image constitutes one of the chief attributes of his or her personality, as it reveals the person’s unique characteristics and distinguishes the person from his or her peers.” The ECtHR reasoned that the right to the protection of one’s image is one of the essential components of personal development and presupposes the right to control the use of that image. The court also commented on the danger of retention: “The baby’s image was thus retained in the hands of the photographer in an identifiable form with the possibility of subsequent use against the wishes of the person concerned and/or his parents.”

However, courts also undertake proportionality assessments: in Murray v. the United Kingdom, the ECtHR found that the taking and retention of a photograph of a suspected terrorist without her consent was not disproportionate to the legitimate terrorist-prevention aims of a democratic society. And in Van der Velden v. the Netherlands and W v. the Netherlands, the ECtHR found that the retention of biometric data of individuals already convicted caused “relatively slight” harm. These decisions sketch out the parameters within which any regulations of FRS will be evaluated, especially considering that the U.K. plans to remain a party to the ECHR after Brexit.

The principle articulated in Murray may be an important source of friction for the ECtHR: what are the parameters of the “legitimate terrorist-prevention aims of a democratic society”? In the 2015 case Zakharov v. Russia, which dealt with the secret interception of mobile phone communications, the court outlined the interests to be balanced:

As to the question whether an interference was “necessary in a democratic society” in pursuit of a legitimate aim...when balancing the interest of the respondent State in protecting its national security through secret surveillance measures against the seriousness of the interference with an applicant’s right to respect for his or her private life, the national authorities enjoy a certain margin of appreciation in choosing the means for achieving the legitimate aim of protecting national security....In view of the risk that a system of secret surveillance set up to protect national security may undermine or even destroy democracy under the cloak of defending it, the Court must be satisfied that there are adequate and effective guarantees against abuse. The assessment depends on all the circumstances of the case, such as the nature, scope and duration of the possible measures, the grounds required for ordering them, the authorities competent to authorise, carry out and supervise them, and the kind of remedy provided by the national law. The Court has to determine whether the procedures for supervising the ordering and implementation of the restrictive measures are such as to keep the “interference” to what is “necessary in a democratic society.”

In Szabó and Vissy v. Hungary, the ECtHR further specified that the “powers of secret surveillance of citizens” are acceptable only if they are, at a general level, “strictly necessary for safeguarding the democratic institutions” or “strictly necessary, as a particular consideration, for obtaining vital intelligence in an individual operation”—two standards that are certainly different from, and may be more rigorous and specific than, “necessary in a democratic society.”

The Court of Justice of the European Union (CJEU) has also tackled these issues. In Tele2 Sverige AB v. Post- och telestyrelsen and Secretary of State for the Home Department v. Watson and others, the court found the retention of communications data to be subject to the requirements of both Article 7 of the Charter of Fundamental Rights (respect for private and family life, home, and communications) and Article 8 (the right to the protection of personal data, fair data processing for specified purposes and with consent, the right of access to data, and the right to rectification), as well as to a balancing test. The court wrote: “[T]he obligation [to retain communications data] must be proportionate, within a democratic society, to the objective of fighting serious crime, which means that the serious risks engendered by the obligation, in a democratic society, must not be disproportionate to the advantages which it offers in the fight against serious crime.”

While data retained for the use of FRS—namely photographs—may not be communications data, it is personal data, and there is reason to believe that litigants will continue to challenge the use of FRS and that the courts will have to consider how proportionality principles apply in this context.

Moreover, any future regulations—or at least regulations put in place before Brexit—will also have to align with the soon-to-be-implemented EU Law Enforcement Directive (Directive 2016/680) and, as applicable, the GDPR (Regulation 2016/679). Recital 51 of the LED already includes language that will influence the regulation of FRS:

The risk to the rights and freedoms of natural persons, of varying likelihood and severity, may result from data processing which could lead to physical, material or non-material damage, in particular: where the processing may give rise to discrimination, identity theft or fraud, financial loss, damage to the reputation, loss of confidentiality of data protected by professional secrecy, unauthorised reversal of pseudonymisation or any other significant economic or social disadvantage; where data subjects might be deprived of their rights and freedoms or from exercising control over their personal data; where personal data are processed which reveal racial or ethnic origin, political opinions, religion or philosophical beliefs or trade union membership; where genetic data or biometric data are processed in order to uniquely identify a person or where data concerning health or data concerning sex life and sexual orientation or criminal convictions and offences or related security measures are processed; where personal aspects are evaluated, in particular analysing and predicting aspects concerning performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements, in order to create or use personal profiles; where personal data of vulnerable natural persons, in particular children, are processed; or where processing involves a large amount of personal data and affects a large number of data subjects.

The United Kingdom may be on the cusp of new FRS regulations—but for now, the technology seems to be developing faster than the government’s ability to ensure its responsible use. For those interested in exploring how a democracy is navigating the costs and benefits of FRS, it is worth keeping an eye on the level of control exercised over FRS in the U.K.


Shannon Togawa Mercer is a senior associate at WilmerHale. Her practice focuses on complex global data protection, privacy, and cybersecurity matters. Ms. Togawa Mercer has extensive experience counseling clients on cross-border data protection and privacy compliance as well as cyber incident response. She has practiced in London and Washington, D.C., and previously served as Managing Editor and Senior Editor at Lawfare. Ms. Togawa Mercer also served as a National Security and Law associate at the Hoover Institution.
Ashley Deeks is the Class of 1948 Professor of Scholarly Research in Law at the University of Virginia Law School and a Faculty Senior Fellow at the Miller Center. She serves on the State Department’s Advisory Committee on International Law. In 2021-22 she worked as the Deputy Legal Advisor at the National Security Council. She graduated from the University of Chicago Law School and clerked on the Third Circuit.
