Facial Recognition in UK Policing: Are we compromising anonymity for security?
- mollyruthfinlay
- Mar 18, 2022
- 6 min read

How does Big Data inform Facial Recognition Technology?
Facial Recognition Technology (FRT) uses machine learning algorithms to identify distinguishing characteristics that are unique to each face. After detecting a face within an image, Facial Recognition (FR) software analyses the geometry of the face, measuring, for example, the distance between a person’s eyes or the distance between their forehead and chin. This, along with other uniquely individual characteristics such as DNA, voice patterns or fingerprints, is known as biometric data. By compiling this data, FR software creates a “facial signature” or “faceprint”: a set of unique characteristics that distinguishes one face from every other face. Once a faceprint has been extracted, FR software searches for a match in its database of existing photos, linking a human identity to the unique set of characteristics it has identified.
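To make that matching step concrete, the sketch below shows, in Python, how a faceprint comparison might work in principle. The faceprint values, watchlist contents and similarity threshold are purely illustrative assumptions; real FR systems derive faceprints from trained neural networks rather than a handful of hand-picked numbers, and each vendor's matching logic differs.

```python
# A minimal, illustrative sketch of faceprint matching (not any vendor's actual system).
# Assumes each face has already been reduced to a numeric "faceprint" vector.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two faceprints: 1.0 means identical, 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical watchlist mapping identities to stored faceprints (made-up values).
watchlist = {
    "person_of_interest_a": np.array([0.12, 0.85, 0.33, 0.41]),
    "person_of_interest_b": np.array([0.90, 0.10, 0.55, 0.25]),
}

def find_match(probe: np.ndarray, threshold: float = 0.95):
    """Return the closest identity and its score, or (None, score) if nothing is close enough."""
    scores = {name: cosine_similarity(probe, faceprint)
              for name, faceprint in watchlist.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# A probe faceprint extracted from a live camera frame (again, made-up values).
probe = np.array([0.11, 0.86, 0.30, 0.40])
print(find_match(probe))  # -> ('person_of_interest_a', ~0.999)
```

The threshold is the crucial policy lever in such a system: set it low and the software raises many false alerts; set it high and it misses genuine matches.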
Databases utilised by FRT exist for a variety of reasons. American facial recognition company Clearview AI sells FR software powered by extremely large volumes of data (or Big Data). Its database, the largest known, contains over 20 billion facial images ‘sourced from public-only web sources including news media, mugshot websites, public social media, and other open sources’. That scale illustrates how far the technology already reaches into our private lives.
Uses of Facial Recognition Technology in UK Policing
In UK policing, photo databases largely consist of individuals who are of interest to, or have already interacted with, a police service in some way. In 2017, South Wales Police (SWP) trialled the use of Automated Facial Recognition (AFR) technology during the UEFA Champions League Final. Deploying vans fitted with AFR technology in and around Cardiff city centre, SWP hoped to identify persons of interest by matching live footage against existing photographs in SWP databases. Persons of interest were sorted into categories according to the level of perceived risk they posed, and included individuals posing a serious risk to public safety. Using ‘AFR Locate’ software, SWP were alerted to 2,632 matches over four days. These translated into just 78 true positives (around 3%) and resulted in one arrest.
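For clarity, the ‘true positive’ percentage quoted above is simply the share of alerts that turned out to be genuine matches; a quick check using the trial figures (this measures only alert precision, and says nothing about faces the system missed entirely):

```python
# True-positive rate of alerts from SWP's 2017 Champions League trial (figures quoted above).
alerts = 2632           # total AFR Locate alerts over four days
true_positives = 78     # alerts confirmed as genuine matches
precision = true_positives / alerts
print(f"{precision:.1%} of alerts were genuine matches")  # ~3.0%
```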
Since 2016, the Metropolitan Police Service (MPS) has been trialling AFR technology at events and in crowded spaces such as shopping centres in London. In 2020, the MPS announced it would deploy live facial recognition cameras operationally on London streets, and in 2021 the London Mayor approved plans allowing the MPS to use historic images from CCTV feeds and social media in a bid to track down suspects. Since then, a report by HMICFRS has found that retrospective facial recognition is currently being used by six police forces in England and Wales.
How is this compromising our anonymity?
The prevalence of facial recognition software in our society is making it increasingly difficult to remain anonymous in public. Since the outbreak of Covid-19, you might have knowingly, or unknowingly, had your biometric data analysed by a thermal facial recognition or infrared camera to detect a potential fever. Alternatively, you might simply have been ‘scanned’ and analysed against a police database in your own city.
In 2020, Ed Bridges challenged South Wales Police’s use of Live Facial Recognition in public. Having first had his image captured in Cardiff city centre in 2017, and a second time at a peaceful protest at Cardiff International Arena in 2018, Bridges legally challenged SWP’s use of FRT, arguing that, as a law-abiding citizen, its use had breached his privacy rights as well as data protection and equality laws. Indeed, human rights organisation Liberty highlight that the use of Facial Recognition Technology by UK police to manage public behaviour at large-scale events such as peaceful protests inherently undermines the democratic value of those events.
Liberty go on to argue that the extensive use of Facial Recognition Technology within our society is oppressive, destroying our privacy and undermining our freedom of expression. Article 8 of the Human Rights Act 1998 protects the right to ‘respect for your private and family life’. For this reason, organisations such as Liberty argue that the increasing use of Facial Recognition Technology in public is compromising people’s ability to go about their lives privately. Indeed, after the High Court initially decided that the legal framework around facial recognition provided sufficient safeguards, the Court of Appeal ruled in favour of Ed Bridges, concluding that the use of AFR technology by South Wales Police was in fact unlawful. It found ‘fundamental deficiencies’ in the legal framework, resulting in a breach of Ed Bridges’ rights. Cases such as this highlight the way in which FRT can be used as an intrusive surveillance tool that harvests the public’s biometric data without their knowledge or consent.
Is our security actually being increased?
Political commentators and civil liberties groups have repeatedly drawn attention to the inefficiencies of Facial Recognition Technology. FRT has typically been found to be worse at identifying women and people of colour, and more likely to make inaccurate assessments of these groups. In policing specifically, concerns have been raised about the training of FR software on historical crime datasets, which have been found to be entrenched with racial bias. By increasing the use of FRT in UK policing, we risk replicating or exacerbating biases that already exist within UK police databases.
What this suggests is that the current use of FRT in UK policing is largely ineffective. During a six-month period of SWP’s 2017 trial of AFR in Cardiff, AFR Locate made 2,710 matches; these resulted in 94 true positives (3.46%) and led to four arrests. Such inefficiency and inaccuracy, together with the Bridges v SWP case, led the Court of Appeal to scrutinise SWP’s use of AFR technology, concluding that its use was unlawful and that the force had not adequately checked that the technology did not exhibit gender or racial bias. This evidence does not indicate that the use of FRT in UK policing is increasing public security or safety; rather, it highlights that the state must take more responsibility for the technology it rolls out and for how it impacts different sections of our society.
What are the consequences if no action is taken?
If the UK government continues to expand its implementation of FRT in policing and on our streets, the public will come ever closer to losing the right to anonymity in public. In China, government bodies have been found to use FRT to identify members of the public and manage their behaviour, penalising citizens for jaywalking and publicly shaming those who choose to wear sleepwear outdoors. Though this seems dystopian, influencing citizens’ expression, or their choice to use public spaces and transport or to attend protests, already begins to infringe on the human rights to privacy and choice.
Regardless of the way in which biometric data is used, it is highly personal and individual data; data that cannot be changed in the way a password or contact details might be. Allowing the use of FRT and of biometric data analysis in public spaces is both ethically and legally challenging. Retrospective facial recognition especially ‘can turn back the clock to see who you are, where you've been, what you have done and with whom, over many months or even years’, infringing on people’s freedom of expression and ability to live without fear.
Research conducted by the Ada Lovelace Institute found that British citizens are already ‘concerned about the normalisation of surveillance’ resulting from the increased use of FR technology. It is therefore important to regulate existing FRT within the UK sufficiently, both to prevent complete public mistrust and to preserve systems that rely on Facial Recognition Technology for crucial and positive uses, for example at national border control points.
‘A trade-off between security and privacy’
While Article 9 of the General Data Protection Regulation (GDPR) restricts the processing of special categories of personal data, including biometric data used to identify a living individual, and generally requires explicit consent, it remains unclear whether facial images always fall within GDPR’s scope. This largely depends on the legal justification for processing, since national security or public safety considerations can override the need for confidentiality.
An approach to Facial Recognition Technology that respects civil liberties would require facial recognition databases to hold information only on individuals of specific interest to police forces or to national security. In a state that is not a ‘Big Brother’ state, there is no requirement to identify ordinary law-abiding citizens, nor to analyse or store their biometric data.
We must call upon policymakers not only to ensure that the use of Facial Recognition Technology is not expanded throughout the UK, but also to ensure that existing technology utilised by police forces is appropriately regulated and assessed for equality and racial bias. While multiple US cities have succeeded in banning law enforcement agencies from using FRT, the UK lags behind, relying on organisations such as Liberty to highlight the intrusive and discriminatory effects of FRT and to orchestrate petitions for the attention of the UK Home Secretary. Under the guise of public security, FRT has the potential to become a tool for mass surveillance. If Facial Recognition Technology is as ineffective and biased as studies have suggested, why are we compromising our anonymity for ‘security’?