Essex Police hasn’t taken a close enough look at the potential discrimination problems with its live facial recognition (LFR) technology. Big Brother Watch recently revealed documents that suggest the force’s equality impact assessment misses the mark on how its practices could unfairly impact different groups.
While Essex Police insists it’s thoroughly weighed the issues of bias and fairness, Big Brother Watch argues that the evidence cited in the assessment doesn’t hold up. They point out that Essex relies on dubious comparisons with other algorithms and repeats claims from its technology supplier that the LFR system is free from bias.
For instance, the police claim they set the threshold for the LFR system at 0.6 or higher, asserting this keeps false positives consistent across demographics. But this benchmark is based on tests from a different algorithm used by other forces, not the one Essex Police is using, which comes from the Israeli firm Corsight.
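To make the threshold point concrete, here is a deliberately simplified Python sketch of how an alert threshold is applied to similarity scores. Only the 0.6 setting comes from Essex Police’s account; the score values and function are invented for illustration, and the caveat in the comments is the crux: scores are not comparable across algorithms.

```python
# Simplified, hypothetical sketch of how an LFR alert threshold works.
# Only the 0.6 setting comes from Essex Police's account; the scores and
# function below are invented for illustration.

MATCH_THRESHOLD = 0.6  # alerts are raised only at or above this similarity score

def alerts(similarity_scores: list[float], threshold: float = MATCH_THRESHOLD) -> list[float]:
    """Return the similarity scores that would trigger an operator alert."""
    return [score for score in similarity_scores if score >= threshold]

# The catch: similarity scores are algorithm-specific. A 0.6 from one vendor's
# model does not mean the same thing as a 0.6 from another's, so testing done
# on a different force's algorithm says little about how Corsight's system
# behaves at the same setting.
example_scores = [0.41, 0.58, 0.63, 0.72]  # made-up outputs from one deployment
print(alerts(example_scores))              # prints the two scores at or above 0.6
```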
The assessment also repeats Corsight’s claim that its algorithm is the least biased, citing a 2022 evaluation by the U.S. National Institute of Standards and Technology (NIST). However, NIST’s published results do not appear to back up that claim, and a Freedom of Information response indicated that Essex hadn’t done any extensive testing of the system itself as of mid-January 2025.
Jake Hurfurt from Big Brother Watch emphasized that Essex Police’s lackadaisical approach to assessing this new surveillance method jeopardizes the rights of many individuals. He noted that facial recognition is particularly prone to misidentifying women and people of color. He urged Essex Police to halt the use of facial recognition until they conduct proper testing.
A pivotal moment in this discussion came from a legal challenge involving South Wales Police, where the UK Court of Appeal ruled that the force’s use of LFR was unlawful, in part on privacy grounds. The court also underscored the need for police to properly consider how their practices affect different demographic groups.
Big Brother Watch called for equality assessments to be grounded in solid evidence, noting that Essex’s documentation doesn’t show adequate consideration for issues of discrimination. Academic Karen Yeung criticized the assessment for lacking depth and coherence, primarily relying on data from unrelated systems.
Essex Police stands by its assessment process, insisting they contractually require the software supplier to verify the algorithm’s performance. They highlighted their own deployment statistics, claiming one false positive stemmed from a low-quality image rather than a flaw in the technology.
However, the force did not respond to questions about why its equality impact assessment cited testing of an entirely different algorithm, or why it had not conducted its own evaluations before rolling out the system.
Essex Police claims that the algorithm has a false match rate of 0.0006 and is the least biased available. But a closer look at the false positive statistics behind that claim shows significant disparities between demographic groups; for example, West African women face a much higher false positive rate than Eastern European men.
Additionally, experts point out that the metrics cited for Corsight come from one-to-one comparisons, whereas police deployments involve one-to-many searches, in which every face captured is compared against every entry on a watchlist, multiplying the opportunities for a false match.
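A rough back-of-the-envelope sketch shows why that scaling matters. Only the 0.0006 one-to-one rate comes from the figures cited above; the watchlist size, the number of faces scanned and the assumption that the one-to-one rate carries over unchanged to every watchlist comparison are illustrative simplifications, not details of Essex’s deployments.

```python
# Back-of-the-envelope sketch of why a one-to-one false match rate understates
# one-to-many risk. Only the 0.0006 rate comes from the figures cited; the
# watchlist and crowd sizes are hypothetical, and treating each comparison as
# independent at that rate is a deliberate simplification.

FMR_ONE_TO_ONE = 0.0006   # cited false match rate per single comparison
WATCHLIST_SIZE = 1_000    # hypothetical number of watchlist entries
FACES_SCANNED = 10_000    # hypothetical number of faces captured in one deployment

# Probability that one scanned face falsely matches at least one watchlist entry.
p_false_alert_per_face = 1 - (1 - FMR_ONE_TO_ONE) ** WATCHLIST_SIZE

# Expected number of false alerts across the whole deployment.
expected_false_alerts = p_false_alert_per_face * FACES_SCANNED

print(f"Chance a given face falsely alerts: {p_false_alert_per_face:.1%}")
print(f"Expected false alerts in this scenario: {expected_false_alerts:,.0f}")
```

Even with these toy numbers, a rate that sounds negligible per comparison produces a substantial number of expected false alerts once it is multiplied across a watchlist and a crowd, which is precisely the experts’ concern about quoting one-to-one figures for a one-to-many system.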
The assessment process for watchlists at Essex Police relies on what they call “intelligence cases.” They claim these deployments are proportionate and necessary. Yet there’s lingering doubt about how specific and justified these decisions are. Yeung raised concerns that the police might not be adequately documenting their decisions or meeting their legal obligations.
Overall, questions remain about Essex Police’s approach to facial recognition technology, especially regarding how they consider and mitigate discriminatory impacts in practice.