
Show us your face: New Orleans PD reportedly got secret facial recognition alerts

Since early 2023, facial recognition cameras run by a private nonprofit have scanned New Orleans visitors and residents and quietly alerted police, sidestepping oversight and potentially violating city law, according to a new report.

In 2022, the Big Easy’s city government relaxed its ban on the use of facial recognition technology: the tech could be used to investigate violent crimes, but any match had to be verified by a human operator before action was taken.

But an investigation published Monday by the Washington Post found that within a year, police were quietly receiving continuous real-time facial recognition alerts from a privately operated camera network. These alerts came from cameras managed by nonprofit Project NOLA, which runs a sprawling, privately funded surveillance network across the city, the report says.

Project NOLA claims access to more than 5,000 camera feeds in the New Orleans area, with over 200 equipped for facial recognition. The system compares faces against a privately compiled database of more than 30,000 individuals, assembled partly from police mugshots. When a match is detected, officers receive a mobile phone alert with the person’s identity and location, according to the report.

The police were required to notify the city council each time they used facial recognition technology in an investigation or arrest, but reportedly failed to do so. In multiple cases, police reports omitted any mention of the technology, raising concerns that defendants were denied the opportunity to challenge the role facial recognition played in their arrest.

As scrutiny mounted, the police department distanced itself from the operation, saying in a carefully worded statement that it “does not own, rely on, manage, or condone the use by members of the department of any artificial intelligence systems associated with the vast network of Project NOLA crime cameras.”

“Until now, no American police department has been willing to risk the massive public blowback from using such a brazen face recognition surveillance system,” said Nathan Freed Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project, in a press release.

“By adopting this system – in secret, without safeguards, and at tremendous threat to our privacy and security – the City of New Orleans has crossed a thick red line. This is the stuff of authoritarian surveillance states, and has no place in American policing.”

Safeguards are there for a reason, as past cases have already shown. In 2022, Randall Reid was arrested in Georgia after Louisiana deputies used Clearview AI to match his driver’s license photo to surveillance footage from a purse theft, despite his claim that he had never been to the state. He spent six days in jail, incurred thousands in legal fees, and in 2023 filed a federal lawsuit alleging wrongful arrest based solely on a facial recognition match.

In 2020, Detroit police made headlines when they falsely identified and arrested Robert Williams on suspicion of being a shoplifter – Williams later testified to Congress about the experience. A year later, 14-year-old Lamya Robinson was ejected from a roller rink after being falsely pegged as a “97 percent match” to a known troublemaker.

Cases like these helped fuel public backlash and legislative efforts to rein in facial recognition technology. New Orleans was no exception, banning the tech in 2020. But the 2022 ordinance relaxed the rules slightly to allow its use via the Louisiana Fusion Center, which aggregates data from police across the state.

At the time, police assured city officials the technology would only be used as a last resort after other identification methods failed. Sergeant David Barnes testified that any request required supervisory approval and that matches had to be reviewed by multiple staff members before being acted upon.

Project NOLA wasn’t mentioned, and it’s possible police believed that receiving alerts from a private system exempted them from the rules. The nonprofit certainly has the hardware to support real-time surveillance – its website promotes AI-enabled cameras, offered free apart from installation fees, along with cloud storage plans.

An outlay of $300 a year gets you a basic camera system, while $2,200 covers a high-end 4K model with 25x zoom, STARVIS night vision, and AI that automatically tracks people and vehicles, flashing red and blue lights and a spotlight when it detects intruders or suspicious activity. Footage is typically stored for 30 days, though that window has been extended to 90 days in some districts following recent policy changes.

The Post’s investigators started firing off questions to the police and the city in February. On April 8, NOPD boss Anne Kirkpatrick reportedly sent an all-hands memo to staff, saying that an officer had raised concerns about the system and that she had suspended its use.

She wrote that Project NOLA had been asked to suspend alerts to officers until she was “sure that the use of the app meets all the requirements of the law and policies.”

Neither the NOPD nor Project NOLA had commented at the time of publication.
