The UK Information Commissioner’s Office (ICO) just rolled out its AI and biometrics strategy, aiming to foster innovation while ensuring people’s data rights are protected.
Published on June 5, 2025, the strategy zeroes in on the tech applications that carry the most risk but also hold the greatest potential benefits. Think automated decision-making in recruitment, facial recognition used by police, and the development of AI foundation models. The ICO plans to conduct audits and issue guidance to police on using facial recognition fairly and lawfully. It also wants to set clear expectations on how personal data can be used to train generative AI models and to develop a statutory code of practice for organizations using AI.
John Edwards, the Information Commissioner, stressed that trust is key. “The same data protection principles apply now as they always have,” he said, highlighting that public trust hinges on responsible use of personal information. He pointed out that the concern arises not from new technologies themselves but from careless applications that lack proper safeguards.
The ICO also wants to address public worries about transparency, bias, and rights. It plans to secure assurances from developers about how personal information is used, and its audits of police facial recognition systems will check that they are governed correctly and that people's rights are respected.
Dawn Butler, vice-chair of the AI All Party Parliamentary Group (APPG), emphasized that AI isn’t just a technological shift but a societal one. “AI must work for everyone and should be built on fairness, openness, and inclusion,” she said, highlighting how it can impact healthcare, education, and even democracy.
Lord Clement-Jones, co-chair of the APPG, remarked that trust is the foundation of the AI revolution. “Privacy, transparency, and accountability are not impediments to innovation,” he explained, stressing the need to balance speed with public trust and individual rights.
Public concern about how AI and biometrics are used could slow their adoption. The ICO noted waning trust in police biometrics and automated recruitment algorithms. In 2024, only 8% of UK organizations reported using AI decision-making tools, while a mere 7% used facial or biometric recognition.
The ICO’s goal is to empower organizations to adopt emerging AI and biometric technologies in line with data protection law, ensuring stronger protections for individuals. But the regulator is ready to exercise its powers if organizations misuse personal information.
In late May 2025, the Ada Lovelace Institute highlighted noticeable gaps in existing governance of biometric surveillance, particularly concerning live facial recognition technology in policing. They stressed the need for legal clarity and effective governance across various biometric surveillance technologies, including those for cashless payments and emotion detection.
There have been numerous calls for new legal frameworks governing biometric use in law enforcement, and most inquiries and reviews to date have focused on police biometrics alone. Some, however, have gone further: the review by Matthew Ryder QC also examined private-sector uses of biometrics, covering everything from public-private partnerships to workplace monitoring.