Facial Recognition in Law Enforcement: Promises and Pitfalls

by Prathi Chowdri | May 23, 2025

Facial recognition technology (FRT) is no longer science fiction. From unlocking our phones to streamlining airport security, FRT has been quietly integrated into daily life. Most of us don’t bat an eye when we see FRT-enabled cameras providing an extra layer of security at a sporting event, but when this powerful tech is used in policing, the conversation gets a lot more complicated.

Law enforcement agencies must approach this innovative technology cautiously, considering both benefits and risks. Following are five key considerations.

1. FRT Is Already in Use Everywhere

We often think of facial recognition as a new innovation, but it’s already firmly entrenched in many areas of public and private life. According to the U.S. Government Accountability Office, over two-thirds of police agencies use FRT in some capacity, though the predominant applications are controlling access to facilities and computer systems. From unlocking smartphones to scanning faces at border crossings, FRT is part of a growing web of biometric security many of us now take for granted. As of mid-2024, for example, Customs and Border Protection had processed more than 540 million travelers using facial recognition.

In law enforcement, FRT offers huge potential. Agencies can use it to identify suspects from video footage, locate missing persons and even exonerate wrongly accused individuals. But that doesn’t mean the technology is infallible.

2. FRT Is Useful, But Must Be Handled with Care

There are plenty of real-world success stories. In one example, police in Scranton, Pennsylvania, used FRT to identify a sexual assault suspect from social media photos. Arizona authorities used the tool to match a convenience store robbery suspect from surveillance footage. And in Florida, FRT helped clear a man falsely accused of vehicular homicide by locating a crucial witness.

But these wins come with a caveat. FRT is a tool, not a solution. The technology should never be the sole basis for an arrest. Instead, it should generate investigative leads that officers then corroborate through traditional investigative techniques.

Some departments already have a written policy that reflects this. The NYPD’s policy, for example, provides that FRT can be used to identify potential persons of interest but requires corroborating evidence before action is taken: “The facial recognition process does not by itself establish a basis for a stop, probable cause to arrest, or to obtain a search warrant.” Still, even when these policies are followed, accuracy and bias can complicate the process.

3. Accuracy and Bias Are Real Problems

Despite its promise, facial recognition is far from perfect. One major concern is accuracy. Poor-quality probe images, flawed algorithms and inadequate human oversight can all result in false positives. Consider the case of Robert Williams, who was wrongfully arrested in Detroit because of a faulty match from a grainy surveillance image.

Bias is another serious issue. Studies show FRT systems are less accurate at identifying people of color, women, and older adults. A 2018 MIT study found an error rate of nearly 35% for dark-skinned women, compared to less than 1% for lighter-skinned men. These disparities can lead to wrongful arrests and exacerbate existing inequalities in the criminal justice system.

Bias often stems from the training data used to build FRT algorithms. If the system is trained mostly on white male faces, it will struggle to accurately analyze other populations. Although algorithmic performance is improving, law enforcement must remain vigilant to avoid bias-related mistakes.
4. Privacy Concerns Are Not Going Away

Beyond misidentification and bias, the widespread use of FRT raises serious questions about privacy. The technology enables real-time surveillance on a massive scale. If linked to public cameras, FRT can track a person’s movements throughout a city without their knowledge or consent.

According to the ACLU, the FBI’s FRT database includes hundreds of millions of photos, many of them pulled from driver’s license records. In the wrong hands, and without legal restrictions, this information can be used for invasive surveillance, potentially chilling free speech and discouraging public protest.

The European Union has acknowledged these risks with comprehensive legislation. The EU’s AI Act almost completely bans real-time facial recognition by law enforcement. In the U.S., some cities like San Francisco and Portland have passed local bans on law enforcement use of FRT, while states such as Washington and Virginia have enacted laws to add guardrails to its use. For much of the rest of the country, though, police agencies are solely responsible for how they implement the technology and what they do with it.

5. Agencies Must Self-Govern Responsibly

State laws vary in how they address FRT in law enforcement (if at all), and no federal laws yet provide guidance on using the technology. Until clear federal guidelines exist, the obligation to use FRT responsibly falls to individual law enforcement agencies. Following are some best practices for law enforcement:

  • Use FRT only to generate investigative leads and never as the sole basis for arrest.
  • Limit usage to serious crimes until comprehensive policies are developed.
  • Ensure transparency with the public and keep the community informed.
  • Require human review at every stage of the process.
  • Audit for bias and effectiveness on a regular basis.

Agencies should also appoint an internal FRT coordinator to oversee policy compliance and track how the technology is being used in the field. Ultimately, responsible use comes down to leadership.

The onus is on law enforcement to ethically self-govern.

Powerful Tech, Heavy Responsibility

Facial recognition technology offers huge potential benefits to public safety, but those benefits come with strings attached. Accuracy is imperfect. Bias is real. Privacy concerns are valid. And the legal framework is still catching up.

Law enforcement agencies that want to use this technology need to do so with humility and diligence. That means training officers not just in how to use the tools, but when and why. It means documenting every use and auditing results. And it also means recognizing that FRT is not a silver bullet.

Facial recognition technology has immense potential, but its dangers must not be ignored. If public safety is the goal, then law enforcement agencies must treat this tool with the care and caution it deserves. And that’s not just good policing. That’s constitutional policing.

PRATHI CHOWDRI served as a federal trial attorney defending the NYPD against claims of false arrest, excessive force, wrongful conviction and malicious prosecution in the Southern and Eastern Districts of New York from 2005 to 2010. Many of her cases as senior counsel involved complex civil litigation with extensive e-discovery and high-profile claims against the city, its police officers, prosecutors and corrections officers. She also worked as an associate at a private law firm in New York City, where she continued federal trial litigation in both 42 USC § 1983 and medical malpractice cases. Prathi has also served as an adjunct professor in constitutional law at Florida Atlantic University. From 2015 to 2024 she worked as a member of Lexipol’s legal team, ensuring policy and other content conformed to state and federal standards. She is now chief legal advisor and director of strategy for Polis Solutions, focusing on law, policy and civil rights from a policing and AI perspective.