AI in Law Enforcement: Old and New Challenges

Editor’s note: This article originally appeared in The Chief’s Chronicle; New York State Association of Chiefs of Police. Reprinted with permission.

Let me start with a statement that is by no means groundbreaking, but bears repeating: Groupthink is dangerous and an impediment to the successful development of any individual or organization. If we exclusively digest information that reinforces our point of view, we will never reach our full potential.

The challenges facing law enforcement over the past several years have in turn challenged our appetite to digest other perspectives. It is tempting to circle the wagons, talk to those who reinforce our thinking, and ignore opportunities for improvement. But engaging with other perspectives has never been more necessary.

When law enforcement agencies are attacked or condemned for an incident, it typically falls into one of two categories. The first category involves tragedies created by the choices and behaviors of others. There is often little we can do in these situations other than trying to keep the bad from getting worse. The second category involves tragedies that we, as a profession, must accept responsibility for. Some incidents in this category result from the compulsion to keep doing things the way we have always done them, and some result from the belief that the ends justify the means.

We cannot directly control public perception or media portrayals of law enforcement. We can, however, focus on what we can control and fix what needs to be fixed, which should in turn lead to more positive portrayals overall.

So, what does this have to do with artificial intelligence?

The rise of powerful AI tools is already introducing new challenges for law enforcement that could easily compound our reputational issues. But the cultural tenets needed to guide our response to this new technology are ones we should already be using: a strong commitment to ethical behavior and continuous improvement.

AI and Ethics: New Tool, Old Issues

Do an internet search for “examples of police misconduct,” “police officers filing false documents,” or anything even remotely similar, and you will get countless examples and exposés. In past articles I have written about some of these cases: officers planting evidence in vehicles, officers lying about the execution of search warrants and the probable cause to support them, and officers lying about how they found evidence. These cases frustrate me because they tarnish the entire profession. They are examples of why the public and media are so quick to form negative opinions of the police.

How do we change this? Well, we can stop giving the media such ample fodder to report. It shouldn’t be too hard to figure that one out. The worst part of some of these incidents is that nothing significant, if anything, ever happened to the officers involved. That angers me, because it makes things harder for those in the profession who are trying to do the right thing. Is anyone surprised that people resent officers’ power to ticket and arrest them when they read about officers engaging in unethical and criminal behavior?

More pertinent to this article, the proliferation of AI creates additional temptation to engage in such unethical behavior.

During the recent International Association of Chiefs of Police (IACP) conference in San Diego, many chiefs expressed concern that officers have started using AI for writing police reports and probable cause statements. Upon returning home, I decided to give ChatGPT a try to see what it could come up with.

It did not take more than two minutes to see that a potential problem exists. It took me only two inquiries. The first was “write a probable cause statement for a police officer who found heroin in a motor vehicle.” I quickly realized my mistake when ChatGPT relied on the odor of marijuana to justify the subsequent search. This certainly made sense: historically, many searches and arrests were legitimately based on the odor of marijuana. But I live in New York where, as in many other states, marijuana is now legal.

So, I modified my request to “write a probable cause statement for a police officer who found heroin in a motor vehicle, but the odor of marijuana is not present.” The system recovered nicely:

On the 19th day of October, 2023, while on regular patrol duties, I observed a motor vehicle, a red 2020 Toyota Corolla with license plate number XYZ5678, commit a traffic violation by exceeding the speed limit on Elm Street. I initiated a traffic stop at approximately 1:30 PM.

Upon contacting the driver, identified as Mr. John Smith, I noticed his hands were shaking and he seemed excessively nervous. When asked about his speed, Mr. Smith was unable to provide a coherent explanation.

In view of Mr. Smith’s nervous behavior and the traffic violation, I asked Mr. Smith if there was anything in the vehicle that shouldn’t be there, to which he responded, “No.” Based on his unusual nervousness, I asked for his consent to search the vehicle, which he granted.

During the search, I found a small plastic bag containing a white powdery substance hidden inside a small compartment in the dashboard. Preliminary field tests of the substance indicated a positive result for heroin. The substance was seized and will be submitted to the state laboratory for further testing.

Based on these facts and circumstances, there is probable cause to believe that Mr. John Smith was in possession of heroin, a controlled substance, in violation of the applicable state law, specifically Section XYZ of the Penal Code.

A few tweaks to the vehicle details and location, and you have what appears to be a nice little consent search justification.[1]

But there is a problem. Actually, several problems. First, it may not be legally correct. Second, it is not your statement. If you file paperwork containing that statement, you have filed false paperwork. I don’t care if it turns out to be very similar to the facts you had; it is not your statement, and it was not written by you. Once you file it, you lose its integrity, or at the very least its perceived integrity. If it is discovered that you used AI, it is no longer your report.

Reports, court documents, and testimony pertaining to any area of the law, whether it be search and seizure, legal aspects of confessions, or use of force, require specific, articulable facts that arose from that unique incident. It does not matter if similar facts and observations repeat themselves (driving while intoxicated cases come to mind); the facts must be what you observed in that case, period. There are always nuances because every person and every situation is different. Being able to properly and accurately articulate facts is a critical skill for officers. If officers find a way to cut corners, they will never get better, and they will compromise their integrity at the same time.

My fear is that a month or maybe a year from now, some scandal will arise when it comes out that officers in an agency have been using AI in their court submissions. This would predictably lead to a review of all cases filed by those officers and the probable dismissal of many, if not all, of them. At that point it will not matter whether the facts of an individual case were accurately depicted in the AI-created document. The taint of impropriety is all that will matter.

By the way, you may ask, how could someone know? Well, the first thing that comes to mind is by using – yep, you guessed it – AI. Educational institutions are already using tools to scan student essays for evidence of AI-generated content, much the same way plagiarism detectors have been used for many years. It won’t be long before someone figures out a way to analyze voluminous amounts of digital court documents to search for certain patterns.
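To make that concrete, here is a minimal sketch (in Python) of the kind of pattern analysis such a tool might perform: comparing report narratives against one another and flagging pairs that are nearly identical. The folder name, the similarity threshold, and the comparison method are all illustrative assumptions on my part, not a description of any actual detection product.

```python
# A minimal sketch of the pattern analysis described above: flagging report
# narratives that are suspiciously similar to one another. The folder name,
# threshold, and method are hypothetical; real tools are far more sophisticated.
from difflib import SequenceMatcher
from itertools import combinations
from pathlib import Path

SIMILARITY_THRESHOLD = 0.85  # hypothetical cutoff; a real tool would tune this


def load_reports(folder: str) -> dict[str, str]:
    """Read every .txt report narrative in the folder into memory."""
    return {path.name: path.read_text() for path in Path(folder).glob("*.txt")}


def flag_similar_pairs(reports: dict[str, str]) -> list[tuple[str, str, float]]:
    """Compare every pair of narratives and return those that are nearly identical."""
    flagged = []
    for (name_a, text_a), (name_b, text_b) in combinations(reports.items(), 2):
        ratio = SequenceMatcher(None, text_a, text_b).ratio()  # 0.0 to 1.0
        if ratio >= SIMILARITY_THRESHOLD:
            flagged.append((name_a, name_b, ratio))
    return flagged


if __name__ == "__main__":
    reports = load_reports("report_narratives")  # hypothetical folder of exports
    for name_a, name_b, score in flag_similar_pairs(reports):
        print(f"{name_a} and {name_b} are {score:.0%} similar; review recommended")
```

Even a crude comparison like this would surface narratives built from the same boilerplate, which is exactly the kind of repetition supervisors should already be watching for.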

The technology may be new, but the root of the problem is not: Using AI to generate reports is at heart an ethical lapse. I have long encouraged supervisors to be on the lookout for the same supporting fact patterns appearing in an officer’s reports over and over again. Such repetition is an indication that more information is needed about how the officer operates and whether the issue lies with the individual officer or the department as a whole.

AI and Continuous Improvement: Looking for “Drift”

Now let’s think about the usage of AI in the context of the cultural tenet of continuous improvement.

Continuous improvement requires constant evaluation and, if necessary, changes to various tasks and work output. An attitude and culture of continuous improvement means that agencies value and facilitate the development of their members. Further, the organization must be “chronically uneasy” and remain open-minded and welcoming of skepticism toward past practices.[2]

This is the antithesis of management by lack of negative consequences. Law enforcement agencies function in highly complex environments with innumerable variables outside our span of control. In such environments, flawed processes may be in place even if no problems arise. Unless we are looking outside our own agencies and developing that “chronic uneasiness,” we may overlook unsafe, unethical or unconstitutional practices. And too quickly, these practices can become the norm. This has been called “drift”: a slow but incremental change in how things are accomplished.[3] Over time, the original rule is forgotten or intentionally ignored. Or to put it very simply: It is all good, until it isn’t.

When it comes to using AI for writing police reports and other documents, most agencies probably lack rules or procedures because the technology is still relatively new. So, developing those rules is critical. But based on some of the conversations I had at IACP, some officers have already started using AI. If we are committed to continuous improvement, we should dig deeper.

All police officers should have learned in basic school what I already mentioned – that all levels of suspicion developed during an incident must be thoroughly articulated and documented. If officers have been using AI as a shortcut, it is critical to find out why. The answer could range from a lack of confidence and a desire to do better to not knowing or caring what the rules are.

Now, let’s flesh this out a bit. An officer responding to a suspicious subject call approaches the person in question. The person does not appear to be doing anything wrong, and the caller provided few details. The person also does not want to stop, insisting they want to be left alone and allowed to go on their way. The situation progresses and escalates into an arrest.

Here is where we separate the learning organizations from the rest.

First, let’s address the organization. How will most administrators learn about this incident? Will the actions of the officer be reviewed as a matter of course? Or does it depend on whether the incident results in injury to the officer and/or subject? What if the incident results in the seizure of illegal drugs but no injury to anyone? If the answer is that the review will only happen if there is an injury and therefore possible liability for the department, then you are managing by lack of negative consequences. Put another way, if you wait for an injury to occur to initiate a review, you are missing the opportunity to identify and correct the issue at the frontline supervisor level, with the charges appropriately thrown out before someone gets hurt. (Oh, and by the way, if you just read that last sentence and your gut reaction was, “We can’t do that here!”, then you have just identified an area of drift.)

Now let’s look at the reasons officers may use AI in this situation. Suppose the officer was on a call, tried to do what they thought was the right thing, but could not confidently explain how the encounter unfolded. The officer expresses this concern to another officer and is told, “Dude, just use AI.” Discovering why the officer used AI in this situation, or at least felt the need to do so, can help to identify areas of drift. Does the officer lack understanding of their legal limitations? Or did the officer do all the right things but just cannot adequately explain it? Either situation can be addressed by additional training – but getting to that conclusion is the organizational challenge that requires a commitment to continuous improvement.

Another possibility is that the officer knew or should have known the rules but is using AI to justify what they did. This, of course, is an ethical breach that requires proper supervision and discipline. The ends do not justify the means. Once again, however, the organizational challenge is to discover the problem in the first place. Here is the good news: If you already aspire to be a learning organization, it is unlikely the officer would be seeking to justify illegal or unethical behavior – or at the very least, you will have systems in place to quickly detect and correct such behavior.

Moving the Profession Forward

After 40 years in law enforcement, I can tell you with confidence that the challenges will never end. But this is not a bad thing. Recognizing and adapting to both old and new challenges is possible if you aspire to become a learning organization. We need to help officers become better at what they do. Recognizing areas in need of improvement is essential to this goal.

Law enforcement officers and leaders often feel the lens of media scrutiny is unfairly turned on them. AI represents a rapidly evolving area where we can – we must – initiate control measures and apply a continuous improvement mindset. If we do this voluntarily, we will build stronger organizations and a stronger profession. And that is our best chance for a more favorable view in the eyes of the media and the public.

References and Notes

  1. This possibly may suffice to justify a consent search in jurisdictions that follow rulings of the Supreme Court of the United States. But in New York, where I’m located, this would not, in my opinion, rise to the level of founded suspicion, which is what is needed under the New York Constitution.
  2. See generally, Dekker, S. (2014). The field guide to understanding “human error.” 3rd Edition. Burlington, VT: Ashgate Publishing.
  3. Id. For a different way of looking at the issue, it has also been called the “normalization of deviance” – see Vaughan, D. (2016). The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, Enlarged Edition. Chicago: University of Chicago Press.
Michael Ranalli

MIKE RANALLI, ESQ., is a Program Manager II for Lexipol. He retired in 2016 after 10 years as chief of the Glenville (N.Y.) Police Department. He began his career in 1984 with the Colonie (N.Y.) Police Department and held the ranks of patrol officer, sergeant, detective sergeant and lieutenant. Mike is also an attorney and is a frequent presenter on various legal issues including search and seizure, use of force, legal aspects of interrogations and confessions, wrongful convictions, and civil liability. He is a consultant and instructor on police legal issues to the New York State Division of Criminal Justice Services, and has taught officers around New York State for the last 15 years in that capacity. Mike is also a past president of the New York State Association of Chiefs of Police, a member of the IACP Professional Standards, Image & Ethics Committee, and the former Chairman of the New York State Police Law Enforcement Accreditation Council. He is a graduate of the 2009 F.B.I.-Mid-Atlantic Law Enforcement Executive Development Seminar and is a Certified Force Science Analyst.
