The UK data regulator has outlined its approach to regulating biometric and artificial intelligence technologies, with a focus on automated decision-making systems and police facial recognition.
The UK Information Commissioner’s Office (ICO) has launched a new AI and biometrics strategy, which the regulator claims will protect people’s rights while supporting innovation. Published on 5 June 2025, the strategy outlines how the ICO will concentrate its efforts on technology use cases where the risks are concentrated but where “significant potential” exists for public benefit.
These include the use of automated decision-making (ADM) systems in recruitment and public services, the use of facial recognition by police, and the development of AI foundation models.
The specific actions outlined in these areas include audits of, and guidance on, “lawful, fair and proportionate” police use of facial recognition; setting “clear expectations” for how people’s personal data can be used to build generative AI models; and developing a code of practice for organisations’ use of AI. The regulator will also consult with early adopters such as the Department for Work and Pensions to update its guidance on ADM and profiling, and produce a horizon-scanning report on the implications of agentic AI, which is increasingly capable of acting autonomously.

“The same data privacy principles apply today as they have always done – trust is important, and it can be built only by organisations that use people’s personal information in a responsible manner,” said information commissioner John Edwards at the launch. “New technologies are not the threat to public trust, but reckless applications of them outside of the necessary safeguards.”
The strategy also states that the regulator will concentrate its efforts in these areas because “we consistently hear public concerns” about transparency and explainability.
For example, on AI foundation models, the ICO will “secure assurances from developers” about how they use people’s personal data, so that people know how their information is being used. For police facial recognition, it said it would publish guidance clarifying how the technology can be deployed lawfully.
Police facial recognition systems will also be audited, with the findings published, to ensure they are well governed and that people’s rights are protected.
“Artificial intelligence is more than a technology change. It is a change in our society,” said Dawn Butler, vice-chair of the All-Party Parliamentary Group (APPG) on AI, at the strategy’s launch, adding that AI will change the way we access healthcare, go to school, travel and even experience democracy. “AI must work for everyone and not just a select few to change things,” she said, arguing that fairness, transparency and inclusion must be the foundations of the AI revolution.
According to Lord Clement-Jones, co-chair of the AI APPG: “The AI revolution must be founded on trust. Privacy, transparency and accountability do not hinder innovation; they are its foundation. AI is rapidly advancing, moving from generative models to self-driving systems. Increasing speed brings complexity, and complexity is associated with risk. We must ensure that innovation does not compromise public trust, individual freedoms or democratic principles.”
The ICO noted that public concern is particularly high regarding police use of biometrics, recruiters’ use of automated algorithms, and the use of AI to determine eligibility for welfare benefits.
In 2024, only 8% of UK organisations reported using AI decision-making tools when processing personal data, and 7% reported using facial or biometric recognition – both figures only marginally higher than the previous year, the regulator said.
The strategy says its objective is to empower organisations to use these complex and evolving AI and biometric technologies in compliance with data protection law, so that people are protected and can have greater trust and confidence in how organisations use them.

It adds that the regulator “will not hesitate to use formal powers” to protect people’s rights if organisations use personal information recklessly or seek to avoid their responsibilities, and that by intervening proportionately it will create a more level playing field for compliant organisations while ensuring robust protections for people.
An analysis published by the Ada Lovelace Institute in late May 2025 found “significant gaps and fragmentation” in the existing “patchwork governance frameworks” for biometric surveillance technologies. The study focused on shortcomings in UK police use of live facial recognition (LFR), which it identified as the most prominent biometric surveillance use case, but it also noted the need for clear legal guidance and effective governance across the board for “biometric mass-surveillance technologies”.
Other forms of biometrics are also covered, such as fingerprints used for cashless payments in schools, or systems that claim to remotely infer people’s emotions or truthfulness. These technologies may also be deployed in other scenarios, such as supermarkets using LFR to identify shoplifters, or age-verification systems for alcohol purchases.
Both the UK Parliament and civil society have repeatedly called for new legal frameworks to govern UK law enforcement’s use of biometrics.
These include three separate inquiries by the Lords Justice and Home Affairs Committee into shoplifting and police algorithms; calls from two former UK biometrics commissioners, Paul Wiles and Fraser Sampson; an independent legal review by Matthew Ryder QC; the UK’s Equality and Human Rights Commission; and the House of Commons Science and Technology Committee, which called for a moratorium on LFR as early as July 2019.