POLICY AND PRACTICE REVIEWS article
Front. Polit. Sci.
Sec. Politics of Technology
Volume 7 - 2025 | doi: 10.3389/fpos.2025.1605619
This article is part of the Research Topic: Accounting for the Use of Powers and Technologies in the Intelligence and Security Sectors
Fundamental considerations for the use of explainable AI in law enforcement
Europol, Den Haag, Netherlands
Explainable AI (XAI) methods have the potential to make the use of AI in law enforcement more understandable and, ultimately, more trustworthy. We argue that explanation requirements differ strongly between use cases and between stakeholders, ranging from law enforcement officers to affected persons. While no currently known XAI method guarantees a full reflection of an AI model's functioning, XAI methods are, after increasing a human's AI literacy, the most promising means of bridging the gap between humans and AI. Even though the benefits of XAI vary strongly with the accuracy of the AI system and need to be balanced against the risks they incur, such as automation bias, we argue that not using XAI implies larger risks than exploring the technologies' benefits and developing them further. To overcome existing shortcomings, we advocate for more collaboration between law enforcement agencies, academia, and industry.
Keywords: Explainable AI, XAI, Law Enforcement, Transparency, Trustworthy AI
Received: 03 Apr 2025; Accepted: 24 Sep 2025.
Copyright: © 2025 ZOCHOLL, Stampouli, Wittfoth and Mounier. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Maximilian ZOCHOLL, maximilian.zocholl@mailbox.org
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.