A Comprehensive Explanation of UK Police Facial Recognition: What You Should Know

The use of live facial recognition (LFR) technology by UK police has been ongoing for nearly a decade, beginning with the Met's deployment at Notting Hill Carnival in 2016. Since then, the Met and South Wales Police have increasingly deployed cameras linked to facial recognition software at events and in busy public areas. The police say the technology helps them find people they could not otherwise identify and acts as a deterrent to criminal behavior. However, it has been controversial from the start, with concerns about its legality, transparency, accuracy (especially for women and people with darker skin tones), and its potential to reinforce institutional racism.

In August 2020, a legal challenge against South Wales Police concluded with the Court of Appeal ruling that the force’s use of facial recognition up to that point had been unlawful, because it had failed to conduct proper assessments and to consider the technology’s potential for discrimination. However, the court located the problem in how the police had approached and deployed the technology, rather than in inherent flaws in the technology itself.

This guide explores how the police have been using facial recognition technology, the ongoing concerns about its proportionality and effectiveness, and plans for its use in 2024 and beyond.

Facial recognition techniques beyond live facial recognition, such as retrospective facial recognition (RFR) and operator-initiated facial recognition (OIFR), are also being used more widely. All of these techniques involve scanning faces and matching them against a watchlist of images compiled by the police, as illustrated in the sketch below. RFR is applied to images captured previously, while OIFR lets officers compare a photo taken in the field against a watchlist using a mobile app.
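At their core, all three techniques rest on the same step: a probe face is converted into a numeric embedding and compared against the embeddings of everyone on a watchlist, with an alert raised only if the similarity clears a preset threshold. The sketch below is purely illustrative and is not any force's actual system; the `embed_face` placeholder, the cosine-similarity measure, and the threshold value are all assumptions made for the example.

```python
import numpy as np

def embed_face(image) -> np.ndarray:
    """Stand-in for a real face-embedding model.

    A deployed system would run a trained neural network here to turn a
    cropped face into a fixed-length vector; flattening and normalising the
    pixels is only a placeholder so the sketch runs end to end.
    """
    v = np.asarray(image, dtype=np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity of two embeddings, in roughly [-1, 1]; higher means more alike.
    return float(np.dot(a, b) / ((np.linalg.norm(a) * np.linalg.norm(b)) + 1e-9))

def match_against_watchlist(probe_image, watchlist, threshold=0.8):
    """Compare one probe face against a watchlist of (name, embedding) pairs.

    Returns (name, score) for the best match if it clears the threshold,
    otherwise None. The threshold is illustrative: real systems tune it to
    trade false matches against missed matches, and alerts are meant to be
    reviewed by a human operator before any action is taken.
    """
    probe = embed_face(probe_image)
    best_name, best_score = None, -1.0
    for name, enrolled in watchlist:
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:
        return best_name, best_score
    return None

# Usage: the same matching step serves LFR (probe from a live camera feed),
# RFR (probe from archived footage) and OIFR (probe from an officer's phone).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    watchlist = [(f"person_{i}", embed_face(rng.random((32, 32)))) for i in range(3)]
    probe = rng.random((32, 32))
    print(match_against_watchlist(probe, watchlist))
```

Framed this way, the only difference between the three techniques is where the probe image comes from and whether the comparison happens in real time; the watchlist matching itself, and therefore the question of who ends up on the watchlist, is common to all of them.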

Critics are alarmed by the expanding use of facial recognition, both because of the sheer volume of photos and video now available for analysis and because of its human rights and privacy implications. Questions also arise about whether facial recognition is necessary and proportionate in a democratic society, particularly given the lack of public debate and the less intrusive investigative methods already available to the police.

Issues of bias and discrimination are closely tied to the use of facial recognition. While the accuracy of the technology has improved, concerns have shifted towards structural bias within policing and how it is reflected in the way the technology is used. Civil society groups argue that the technology can be discriminatory and oppressive, further entrenching existing patterns of discrimination. Some argue that even if the technology achieved 100% accuracy, it would still be problematic because of its effect on power imbalances and criminal justice outcomes.

Watchlists used in facial recognition systems consist of images of people’s faces, often sourced from custody images stored in the Police National Database (PND). The concern is that these watchlists may be disproportionately populated by people from certain demographics or backgrounds, perpetuating bias and discrimination. In addition, millions of custody images are retained unlawfully, raising questions about the lawful basis for placing people who were never convicted of a crime on a watchlist.

The effectiveness of facial recognition in policing is also contested. Critics argue that the technology has been deployed infrequently and on relatively few people, producing limited results. While the Home Office describes facial recognition as a valuable crime prevention tool, questions remain about whether it actually captures serious offenders or mostly generates arrests for lower-level offenses. Some argue that the overt nature of deployments lets wanted individuals simply avoid the cameras, prompting calls for more covert use. Others question whether the technology’s crime prevention value rests more on its chilling effect than on its actual ability to identify wanted individuals.

Currently, there is no dedicated legislation in the UK to regulate the police use of facial recognition technology. Its deployment is governed by a combination of existing laws and regulations, including the Police and Criminal Evidence Act, Data Protection Act, Protection of Freedoms Act, Equality Act, Investigatory Powers Act, Human Rights Act, and common law powers. However, there have been repeated calls for new legal frameworks to govern the use of biometrics and facial recognition by law enforcement.

Despite concerns about its legality, the UK government has continued to push for wider adoption of facial recognition technology by police forces. Policing minister Chris Philp has advocated integrating the technology with police body-worn video cameras and has called for its wider use. However, critics argue that the government’s data reforms could weaken oversight and regulation of police surveillance capabilities.

In conclusion, the police use of facial recognition technology in the UK has sparked controversy over its legality, transparency, accuracy, bias, and effectiveness. Concerns about privacy, human rights, and discrimination persist, as do calls for new legal frameworks and stronger oversight. The future direction of facial recognition in policing remains uncertain while debate over its deployment and regulation continues.
