London’s Mayor Sadiq Khan Announces Metropolitan Police’s Upcoming Trial of a Facial Recognition Mobile App for Officers
Introduction to Operator-Initiated Facial Recognition Technology
The Metropolitan Police Service (Met) is preparing to pilot a new handheld facial recognition system designed to enable officers to perform biometric identity checks instantly in the field. This initiative, confirmed by London Mayor Sadiq Khan, introduces Operator-Initiated Facial Recognition (OIFR) technology through a smartphone application that captures facial images and cross-references them against police databases in real time.
Details of the Pilot Program and Its Scope
The six-month trial will deploy approximately 100 devices, backed by a budget of around £763,000. Mayor Khan emphasized that this technology aims to streamline identity verification during stops, potentially reducing the need for arrests and subsequent processing at police stations. The pilot’s oversight will be managed by the Mayor’s Office for Policing and Crime alongside the London Policing Ethics Panel to ensure ethical and proportionate use. However, Khan noted that the pilot’s continuation beyond the trial phase is not guaranteed.
Controversy and Public Concerns
Despite the announcement, the Met’s official website still states that it does not use operator-initiated facial recognition technology, a discrepancy that came to light only after questioning by London Assembly member Zoe Garbett. Garbett expressed alarm at the lack of transparency, highlighting that this development fundamentally alters the dynamic between law enforcement and the public.
She criticized the timing of the pilot’s disclosure, which coincided with a government consultation on facial recognition regulation that closed in February 2026. Garbett argued that advancing the technology’s deployment before establishing a clear legal framework undermines public trust and threatens civil liberties, especially since UK law does not require individuals to identify themselves to police without valid cause.
Legal and Ethical Challenges Surrounding Facial Recognition
The Home Office is still reviewing responses to its consultation on regulating facial recognition technology, while the High Court continues to assess the legality of the Met’s prior use of live facial recognition (LFR). To date, only a few police forces, including South Wales, Gwent, and Cheshire, have conducted limited OIFR trials.
Concerns have been raised about the disproportionate impact of facial recognition on Black and minority ethnic communities, the absence of explicit legal authority governing its use, and the lack of transparency regarding the financial costs of deployment. Critics warn that without robust safeguards, the technology risks infringing on privacy rights and enabling unwarranted surveillance.
Perspectives from Police and Civil Rights Advocates
Lindsey Chiswick, the Met’s facial recognition lead, described the OIFR tool as an innovative way to verify identities quickly and accurately, potentially reducing unnecessary detentions. She said that biometric data from unmatched scans would be deleted immediately and that the trial would initially involve only a limited number of officers.
Conversely, Jasleen Chaggar, legal and policy officer at Big Brother Watch, condemned the lack of formal policy governing OIFR use, likening the public to “guinea pigs” in an unregulated experiment. She warned that the technology’s ability to instantly identify individuals in public spaces poses a severe threat to anonymity and civil liberties, potentially exposing sensitive personal information.
Chaggar also highlighted the Met’s history of facial recognition pilots quietly becoming permanent fixtures, urging an immediate halt to OIFR trials until comprehensive legislation is enacted to regulate and restrict its everyday application.
Historical Context: Previous Facial Recognition Trials and Their Implications
Academic research by Karen Yeung and Wenlong Li, published in September 2025, analyzed live facial recognition trials conducted by police forces in London, Wales, Berlin, and Nice. Their study concluded that while real-world testing is crucial for understanding AI system performance, existing trials have largely neglected the broader social and ethical consequences, failing to provide clear evidence of operational benefits.
They characterized the UK and European police’s approach to live facial recognition as a largely unregulated “Wild West,” where technology is deployed on local populations without sufficient oversight or safeguards. Specifically, the Met’s trials between 2016 and 2020 were criticized for blurring the line between experimental testing and active policing, with significant legal and social ramifications for individuals flagged by the system.
Further research from the University of Essex’s Human Rights, Big Data & Technology Project in 2019 identified a “presumption to intervene” bias among officers using facial recognition, where police tended to act on system alerts even when incorrect, increasing the risk of unwarranted public interactions.
Looking Ahead: The Need for Clear Regulation and Public Accountability
As facial recognition technology becomes increasingly integrated into law enforcement practices, the call for transparent policies, legal clarity, and ethical oversight grows louder. The Met’s upcoming OIFR trial underscores the urgency of establishing robust frameworks that balance technological innovation with the protection of fundamental rights.
Without such measures, the expansion of surveillance capabilities risks eroding public trust and infringing on privacy, particularly among vulnerable communities. Ensuring that any deployment of facial recognition technology is accompanied by stringent safeguards and meaningful public engagement remains a critical challenge for policymakers and law enforcement alike.