
Pulitzer Center Update December 12, 2022

Massive Police Facial Recognition Database Now Requiring Policy Limits on Use

Author: Joanne Cavanaugh Simpson

As police AI surveillance tech expands, what's the impact for minority communities?

South Florida Sun Sentinel series supported by the Pulitzer Center also cited in testimony to the U.S. Congress and in President Joe Biden’s AI Bill of Rights


Florida police agencies’ unregulated use of a statewide facial recognition database—a black-box scenario allowing possible civil rights abuse—now faces new restrictions and oversight in the wake of a Pulitzer Center-supported series.

The South Florida Sun Sentinel investigative series last year found that police agencies routinely used the fraught AI technology to run facial scans of peaceful protesters, scanned Black residents at disproportionately high rates, and deployed the tool for misdemeanors, traffic incidents, and unspecified “intel” linked to no crimes.

Starting this year, police departments must create publicly available written policies to access a database holding tens of millions of Floridians’ images. In some cases, new parameters approved by police and elected city officials include much stronger oversight, such as internal audits, and, for the first time in decades, civil rights protections.

Most departments previously had no policies at all, and some had refused to adopt any. Fewer agencies now rely on the database, with the number of users dropping 27 percent after the new requirements took effect.

I spoke with Pinellas County Sheriff Bob Gualtieri, whose department has operated the Face Analysis Comparison & Examination System, or FACES, since it was created in 2001. Gualtieri moved to require written policies following the articles, including “South Florida Police Widely Use Facial Recognition, yet Resist Policies To Curb Abuse,” which he said “contributed” to the change.

Policing agencies “want to use our system, I’m fine with that. But they need to make sure it’s being used properly. They need to ensure their people understand this is simply an investigative tool,” Gualtieri told me. “It is not telling you who did anything.”

A facial recognition system “is not like a vending machine, where you put a coin in and you get a can of soda out,” he added. “You put a photo in, and it’s an indicator. It’s a lead.”

Among other concerns, the yearlong Sun Sentinel analysis of data supplied after public records requests found: South Florida agencies ran facial scans of Black Lives Matter protest organizers; gathered rampant unspecified “intel” during protests; and widely used the controversial biometric technology on Black residents, despite research showing that Black individuals are misidentified at higher rates.

Civil rights advocates have long suspected this kind of facial recognition surveillance. Yet specific uses have proven tough to document, particularly from police agencies’ own records. “It’s unique and rare to have as comprehensive of a picture,” said Jake Wiener, a domestic surveillance law fellow at the Electronic Privacy Information Center (EPIC), of the series. “If you’re skeptical of law enforcement, it’s not a surprise that facial recognition is being disproportionately used on Black and brown people. But it’s very useful to have that data.”

Various advocacy groups, including the ACLU, Open the Government, and Restore the Fourth, cited the 2021 article “South Florida Police Quietly Ran Facial Recognition Scans To Identify Peaceful Protesters.” The Surveillance Technology Oversight Project (S.T.O.P.) publicly condemned the reported use of facial recognition at protests. And the Electronic Frontier Foundation (EFF), as well as Barry Friedman, a New York University School of Law professor, cited the article at a congressional hearing on police use of facial recognition. Congressional bills have followed such input.

Friedman, faculty director of the law school’s Policing Project, testified that federal regulation is needed, citing the article’s documentation of facial recognition scans at a Juneteenth Block Party in Fort Lauderdale. “There is a persistent inclination of those in power to investigate and tamp down dissent. The Framers of our Constitution understood this all too well. And, unfortunately, these uses of surveillance tools also too frequently are aimed at marginalized communities.”

The White House Office of Science and Technology Policy, meanwhile, spent the past year requesting input on harms posed by facial recognition and artificial intelligence for President Joe Biden’s AI Bill of Rights—which was released this October and called for limits on AI-based surveillance. The Project On Government Oversight (POGO) cited the Pulitzer Center-funded series in its response to that fact-finding mission.

“Fort Lauderdale police ran numerous face recognition searches to identify people who might be a ‘possible protest organizer’ or an ‘associate of protest organizer,’” noted POGO, a nonpartisan watchdog group that also cited reported scans “for the purpose of ‘intelligence’ collection, rather than to investigate any criminal offense.”

Last year, 269 agencies, among them Florida police departments, the FBI, and ICE, had access to the database’s 38.5 million images, including mug shots and state driver’s license photos. This year’s policy requirement reduced police facial recognition use: more than a quarter of agencies did not re-up agreements with the upgraded system, FACESNXT, dropping the number of users from 269 to 196, according to the Pinellas County Sheriff’s Office. Pinellas officials had previously conducted limited random audits, yet the updated contracts offered an opportunity to “tighten the parameters,” Gualtieri said.

Overall, the adopted policies vary widely in scope and rigor.

Various police departments reviewed in the series, including West Palm Beach and Boca Raton, now classify possible image matches as an “investigative lead” and “not probable cause for an arrest.” Some policies require internal audits and direct supervisor oversight, with user access revoked for “suspicious activities.” Boca Raton’s department, which ran scans of “Boca mall” protesters, revised its previous rules and now bans the tech from being “used solely to track or identify individuals engaging in political, religious, or other protected free speech activities.”

Fort Lauderdale police, criticized for an aggressive response to the protests after George Floyd’s death, also barred facial scans for “conducting surveillance of persons or groups based solely on their religion, political affiliation, race, ethnicity, gender, sexual orientation, sexual identity, or other constitutionally protected activity or class membership.”

And even Gualtieri’s own Pinellas County Sheriff’s Office addresses civil rights issues for the first time in its policy, noting facial recognition cannot be used “to assist in identifying persons engaged in lawful peaceful assemblies or protests unless such person(s) are directly related to a criminal investigation.”

Some major South Florida departments, such as the Broward Sheriff’s Office and Palm Beach County Sheriff’s Office, had long refused to adopt any policies, even though federal agencies such as the U.S. Department of Justice promote facial recognition limits, as well as privacy, civil rights, and civil liberties protections.

In policies submitted this fall, Broward and Palm Beach limited use to “lead” purposes yet included no civil rights protections. And the Miami-Dade Police Department, whose FACES records provided after the series also indicate racial disparities for Black residents, offered similar limits but no civil rights restrictions.

Overall, experts welcome written policies. “The fact that we’re seeing agencies actually put policies down on paper is a huge step,” said Katie Kinsey, chief of staff at NYU’s Policing Project, which cited the series’ findings. “Explicit restrictions on use for constitutionally protected activities—protests and freedom to associate—is really important.”

Moving forward, advocates point to other needed improvements beyond facial recognition bans or moratoriums, including federal legislation and regulation; warrant requirements; police awareness of “automation bias,” or overconfidence in the tech’s performance that can lead to false arrests; and an understanding of risks, along with community input, before AI-based technologies are used in the first place.

Police promises to use the tech responsibly also remain unverified without audits of real-world outcomes. As this series exposed, facial recognition scans used by law enforcement have disproportionately targeted Black individuals, especially those participating in First Amendment activities. Without proper oversight and policies, the technology was long suspected of unjustly upending people’s lives. As Kinsey noted: “Now we have this reporting to show us that’s true.”

Joanne Cavanaugh Simpson is the Pulitzer Center grantee for this project and an AI Accountability Network Special Adviser.

RELATED INITIATIVES

AI Accountability Network

RELATED TOPICS

Governance

Racial Justice

AI Accountability