AI in Law Enforcement: Understand and Accept the Challenges

By Mike Ranalli, Esq. | June 10, 2025

Editor’s note: This article originally appeared in The Chief’s Chronicle, New York State Association of Chiefs of Police. Reprinted with permission.

Early last year, I wrote an article, “AI in Law Enforcement: Old and New Challenges.” In the intervening period, I have learned a lot about the state of AI in law enforcement as I have traveled around the country presenting on the topic at various conferences. My focus, however, is not on the technical side. While I have learned much about how AI solutions work, my focus remains on the legal, ethical and leadership implications of this rapidly evolving technology.

AI tools like ChatGPT (OpenAI), Gemini (Google) and Claude (Anthropic), to name a few, are built on what are technically known as large language models (LLMs) — AI systems trained on massive datasets to predict and generate human-like text. When these models are deployed in a conversational interface (like a website chat window or a virtual assistant), they are commonly referred to as chatbots. For the purposes of this article, I will focus primarily on the use of these AI-powered chatbots. The range of AI-powered or AI-enhanced specialty products is broad and well beyond the scope of this article.
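
For readers who want a concrete sense of what “predict” means here, the following short Python sketch is a deliberately simplified illustration, not a depiction of how any of these products actually works: it picks the next word from a small, invented probability table, whereas a real LLM learns probabilities over a vocabulary of many thousands of tokens from its training data.

    import random

    # Toy probability table: for a given context, each candidate next word gets a
    # probability. The contexts and numbers below are invented purely for
    # illustration; real models learn these values from massive training datasets.
    NEXT_WORD_PROBS = {
        "the officer wrote the": {"report": 0.72, "citation": 0.18, "memo": 0.10},
        "the suspect fled on": {"foot": 0.85, "a": 0.10, "the": 0.05},
    }

    def predict_next_word(context: str) -> str:
        """Sample the next word in proportion to its probability."""
        candidates = NEXT_WORD_PROBS[context]
        words = list(candidates)
        weights = list(candidates.values())
        return random.choices(words, weights=weights, k=1)[0]

    print(predict_next_word("the officer wrote the"))  # most often prints "report"

The point of the sketch is simply that output is driven by statistical likelihood rather than by understanding, a distinction GPT itself describes later in this article.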

During my travels, I spoke with many chiefs and law enforcement leaders about AI. One thing is clear: Leaders’ depth of understanding of AI, and the uses they permit, vary widely. Some are adopting AI tools for their agencies; many permit members to use AI tools. Anecdotally, however, it appears some agencies that condone such use do not provide department-acquired products. In other words, members determine which tools they want to use and access them with personal logins.

The most alarming takeaway from my conversations with leaders is the belief that, because their agencies are not officially using any AI chatbots or other products, they do not need an agency policy addressing AI use. This is an incorrect assumption. My guess is that in many agencies, at least a few members are using personal chatbot accounts for some official tasks.

As I pointed out in my prior article, assistance in report writing is likely the most common, followed closely by the development of training material. During my presentations, I ask a half-joking question: “Do you have an officer who suddenly became very smart?” That is usually the light bulb moment; I can see chiefs realizing they do, in fact, have officers using AI. This is precisely why all law enforcement agencies need an AI policy — to control what is and is not being used and allowed.

I will discuss more issues related to the need for such policy control later in this article. But first, let’s focus on why I think all leaders should at least consider using AI.

AI Can Be a Valuable Resource

The various chatbot products have their own individual characteristics, but they also have some common features. Since Lexipol provides me with an enterprise version of ChatGPT (GPT), that is what I primarily use. Within GPT there are different models, each with unique strengths and weaknesses.

It is essential to understand that the current versions of AI are not the “self-aware existential threats to humanity” some movies depict. I will let GPT explain the difference between how humans and AI process information, based on the following inquiry I made in GPT-4o, a model well suited to general use:

As part of an article that I am writing, can you please explain in a summary form the difference between how a human thinks and how AI like yourself is based on prediction instead? Does this question make sense to you as written?

And here is the response:

Humans think using experience, emotions, intuition, and reasoning. We* make decisions based on a mix of logic, context, values, and often a “gut feeling.” Human thinking is shaped by consciousness, moral frameworks, and the ability to reflect on abstract concepts.

In contrast, AI like chatbots doesn’t think — it predicts. AI generates responses based on patterns in data it has been trained on. For example, when you ask a question, the AI predicts the most likely next word (or sequence of words) based on everything it has seen before. It doesn’t understand meaning the way humans do; it simulates understanding by analyzing probabilities.

This predictive nature is powerful for structured tasks but lacks human traits like empathy, judgment, or true understanding — something leaders must keep in mind when deploying AI in sensitive law enforcement contexts.

While this response highlights a significant difference in processing, chatbot output can be remarkable, making it difficult to accept that the AI does not actually understand what it is saying.

If you’re new to AI chatbots, here is my first suggestion: Start asking some random questions you would like to know more about. And, if you prefer, you can talk to them — literally talk to them using the voice modes that many have. Practice with various types of prompts and levels of detail.

As an example, here is the prompt I used in ChatGPT model 4.5, which is good for deep thinking and reasoning:

As a law enforcement officer, tell me what you think about when pondering how department leadership impacts agency morale and overall wellness of its members. But think deeply about this and engage in extensive conversational internal monologue and long contemplation expressed thoughtfully.

And after a second or two, here was the result, which I found to be accurate and relevant:

When I reflect deeply on how department leadership affects agency morale and the overall wellness of its members, I find myself pausing to consider the profound ripple effects a single leader — or leadership philosophy — can have on every officer, dispatcher, investigator, and civilian employee who comes to work each day. It’s a thought that demands careful and intentional reflection, not a quick judgment.

At first, my mind goes instinctively toward the frontline experience. I think about the young patrol officer who joins the agency full of optimism and a sense of purpose, only to encounter leaders who perhaps have grown cynical or distant, or who have forgotten the meaning of mentorship. In contrast, my thoughts shift to departments where leadership is engaged, present, and thoughtful — where command staff see themselves as serving their officers just as those officers serve the community. It’s within these two contrasting visions that deeper complexities start emerging.

Agency morale isn’t a simple outcome of a policy or procedure; it’s more accurately an atmosphere, a kind of emotional ecosystem, built slowly over time through daily interactions, gestures of empathy, moments of authentic connection, or the painful absence of these things. A leader’s attitude is contagious — much more than we often acknowledge. Leaders who communicate clearly, transparently, and empathetically can foster a culture of trust. In my quiet contemplation, I see trust as foundational to wellness, not just emotional, but physical and professional as well.

Trust creates safety — psychological safety — which is essential for wellness. In a job fraught with uncertainty, physical risk, and emotional exposure, leadership that prioritizes emotional intelligence sends the message: “You’re safe here; your experiences and feelings matter.” When a department’s leadership dismisses officers’ concerns or makes individuals feel expendable, those wounds run deep. Morale suffers, and gradually, wellness erodes until it becomes a crisis.

I also ponder the interplay between accountability and fairness. Leadership that is consistent and just in handling mistakes creates a sense of stability and fairness, crucial for maintaining morale. Conversely, I’ve observed firsthand how favoritism, double standards, or arbitrary discipline can quickly erode morale. Such actions are corrosive, undermining confidence and leaving officers feeling unsupported or betrayed.

As my contemplation deepens, I’m drawn toward how leadership handles the aftermath of traumatic events — critical incidents, line-of-duty injuries or deaths, or even community-driven scrutiny. These moments test the strength of leadership most profoundly. Good leaders proactively step forward, lean into these challenges, and genuinely invest in the long-term emotional and mental health of their personnel. They understand that wellness initiatives — like peer support, wellness checks, and accessible counseling — aren’t just boxes to check. They recognize these efforts as lifelines for those who serve.

My reflections lead me to also consider the subtler dimensions of leadership influence. Recognition, appreciation, and gratitude from command are powerful motivators, far stronger and deeper in their effects than mere salary increases or improved equipment alone. Feeling genuinely valued by agency leadership creates meaning, builds loyalty, and fosters resilience.

Finally, my thoughts gravitate toward the long-term legacy of leaders. Those at the helm have the profound responsibility — and privilege — of shaping organizational culture. Leadership committed to servant-leadership principles, who see themselves as part of the team rather than above it, inspire not only loyalty and dedication but also encourage officers to embody similar traits. Leaders who model humility, self-reflection, and continuous learning create an environment where wellness isn’t just talked about — it’s embodied and continuously cultivated.

In sum, as I turn this thought over repeatedly in my mind, I become more convinced that agency morale and wellness aren’t merely outcomes — they’re reflections of leadership itself. Department leaders don’t just affect morale; they set its tone, define its shape, and sustain its essence, leaving marks on their people that last far beyond their tenure.

As a long-time student of leadership principles, I could not have written this any better. It is an excellent example of the value of AI as a tool for breaking complex topics down into an understandable overview. These tools are also adept at sorting through vast data tables and spreadsheets.

Once you start posing various questions, you may quickly get hooked. So, while there is value to using these tools, we now need to look at areas of potential risk.

Why a Strong AI Policy Is So Important

While chatbots can offer valuable assistance and make certain processes more efficient, there are also areas of concern that must be addressed in policy. The first concerns privacy and legitimizing your use. Begin by discussing any plans to use AI chatbots for official business with your local IT personnel, legal counsel and municipal leaders; they can help you conduct a risk assessment of the tool you want to use, or your IT department may steer you toward one it has already vetted for wider use.

To address these concerns, leaders must then clearly identify in policy which tools are allowed and under what circumstances:

Protecting sensitive information and complying with Criminal Justice Information Services (CJIS) requirements. By default, most publicly available chatbots (such as those on free or consumer-tier plans) retain user inputs and may use them to improve future versions of the model. This is typically disclosed in the provider’s terms of service but is probably glossed over by many users. Enterprise versions, by contrast, isolate your data and, by default, do not use your inputs to train the model. These versions come at a cost but can expand your ability to use the tools safely. Leaders must ensure their members are not using privately accessed chatbots to conduct official business unless it is for limited and specific purposes you and your IT leaders have approved.

Hallucinations and flawed conclusions may taint results. Chatbots may generate incorrect or fictional information that sounds plausible. If any chatbot response will be used in an official capacity, all sources and citations must be double-checked. I have personally experienced several instances where GPT manufactured source citations, complete with author names and publication years, that did not exist. When I challenged GPT on it, it apologized but could not explain where the sources had come from. Instead, it offered alternate sources that did not adequately support the conclusion in question. Treat unsourced content with caution. I identify such material and ask GPT to specify where it came from; if it cannot give me a specific source, I generally will not use it unless I can confirm the information through my own knowledge or an alternate source.

Overreliance on the tools. One concern I have, based on conversations with officers, is that chatbots will make developing lesson plans and other training materials seem too easy, thus leading to flawed instructional material. I have tested out the creation of lesson plans and drafting of training summaries and articles. My conclusion is that using AI tools for these purposes is only the beginning and must be followed up with independent research and verification of content, preferably by someone with subject matter expertise. Remember that chatbots do not truly understand the content and may misinterpret nuanced situations and information. If a flawed lesson plan is used to train members, it could lead to poor decisions in the field and even be used against the agency in court.

Legal issues must be addressed. As already mentioned, any data placed in any chatbot must be in compliance with CJIS. Agencies also need to be aware of the rapidly evolving legislative landscape regarding governmental use of AI tools. At Lexipol, we track these legislative updates on the state and federal level and issue policy updates to our policy customers as needed.

Use for report writing and other official documents must be authorized. As I pointed out in my prior article, some officers use chatbots to draft their reports for them. I refer you to that article for the ethical, legal and leadership issues involved. A more nuanced use is for officers to draft their own report and then run it through a chatbot to improve it. While this seems more innocuous, it still poses certain risks. If an officer who speaks at one level suddenly starts writing at a much higher level, using words they would not normally use, that may become fodder for cross-examination by a defense attorney. If you are considering allowing officers to do this, seek the full approval and consent of the local district attorney. One New York district attorney told me he would rather know about it up front and bring it out himself on direct examination. Some body camera companies are now offering automatic report generation features. Before investing in such services, make sure your district attorney is on board and understands how they work. If adopted, close supervision will be critical to ensure officers verify the content of each report.

Just the Beginning

For now, chatbots are excellent resources and, when authorized and used properly, can enhance agency operations. Even though I have long had access to legal resources like Westlaw, I always started my research with a Google search. I have since revised that habit and now start with GPT or one of the other AI tools. For me, Google was always the beginning of the process, never the end, and that has not changed with the availability of chatbots and other AI-based tools.

The bottom line: Leaders need to understand there always needs to be a “human in the loop,” verifying and double checking anything used in an official capacity. Draft a strong and clear policy and ensure all your members know what they can and cannot use.


*I asked GPT to explain why it used “we” in the response. It explained: “In that section, I’m stepping into a human-explaining-human mode before shifting to how AI works. I always refer to myself or AI systems as ‘it’ when distinguishing machine behavior, because unlike humans, we (AI) don’t have consciousness or intent.”

MIKE RANALLI, ESQ., is a market development manager for Lexipol, an attorney and a frequent presenter on various legal issues including search and seizure, use of force, legal aspects of interrogations and confessions, wrongful convictions and civil liability. Mike began his career in 1984 with the Colonie (N.Y.) Police Department and held the ranks of patrol officer, sergeant, detective sergeant and lieutenant. He retired in 2016 after 10 years as chief of the Glenville (N.Y.) Police Department. Mike is a consultant and instructor on police legal issues to the New York State Division of Criminal Justice Services, and has taught officers around New York State for the last 19 years in that capacity. He is also a past president of the New York State Association of Chiefs of Police, a former member of the IACP Professional Standards, Image & Ethics Committee, and the former Chairman of the New York State Police Law Enforcement Accreditation Council. He is a graduate of the 2009 F.B.I.-Mid-Atlantic Law Enforcement Executive Development Seminar and is a Certified Force Science Analyst.
