Editorial
Front. Polit. Sci.
Sec. Politics of Technology
Volume 7 - 2025 | doi: 10.3389/fpos.2025.1611563
This article is part of the Research Topic "Humans in the Loop: Exploring the Challenges of Human Participation in Automated Decision-Making Systems".
Humans in the Loop: Exploring the Challenges of Human Participation in Automated Decision-Making Systems
Provisionally accepted
- 1 Delft University of Technology, Delft, Netherlands
- 2 Inholland University of Applied Sciences, Hoofddorp, Netherlands
- 3 Sustainable Computing Lab, Vienna University of Economics and Business, Vienna, Austria
- 4 Vilnius University, Vilnius, Lithuania
"Human in the loop" (HITL) refers to a process or system design where human oversight, intervention, and collaboration are integrated into Automated Decision Making Systems (ADMS) at strategic points (Mosqueira-Rey et al, 2023). The apparent paradox of inserting human oversight into systems which use algorithms, data analysis, and predefined rules to make decisions with minimal or no human intervention, stems from the realization that gains in efficiency are offset with a potential for serious harm in the instances when ADMS err. Errors can occur through bias amplification in predictive systems, a lack of contextual awareness in high-stakes scenarios and an automated system’s inability to recognize outliers (Angwin & Larson, 2022). HITL is seen as a critical ethical safeguard against AI systems making consequential decisions without appropriate scrutiny. Many emerging AI regulations and frameworks (EU AI Act, NIST AI Risk Management Framework, for instance) explicitly require human oversight for high-risk applications, making HITL not just an ethical choice but a compliance necessity in many contexts.Historically, there was a shift from fully automated systems to more collaborative approaches. Early forms of automated decision-making systems, such as expert systems MYCIN andDENDRAL developed in the 1960s and1970s, operated under a design philosophy that sought to minimize human intervention, which was seen as inefficient or inconsistent. They attempted to capture human expertise in formal rules that could be executed without oversight. A series of high-profile failures catalysed a significant change in ADMS design philosophy, for instance the discovery that the COMPAS Criminal Risk Assessment, used to predict recidivism rates in the American criminal justice system, produced racially biased outcomes, or IBM's Watson Health's "unsafe and incorrect" cancer treatment recommendations. (Hao, 2019;Strickland 2019) By the mid-2010s, researchers and developers increasingly recognized that the most effective systems would combine machine efficiency with human judgment rather than attempting to eliminate the human entirely.Implementing HITL raises questions about the concrete frameworks in which humans interact with automated decisions. For instance, what kind of decision options are humans are provided, what data is made available to inform their decisions, is the time they are allocated to make their decisions sufficient and what level of oversight, accountability and liability are attached to human-made decisions? Most importantly, effective human-machine collaboration requires that human input is meaningful, and not just rubber-stamping decisions from ADMS (Wagner, 2019).The authors in this collection of articles initiate a discussion on the socio-legal and sociotechnical challenges associated with humans participation in ADMS, considering insights from law, social science, philosophy, computer science and engineering. Salvini et al, use case studies in social care, aviation, and vehicle driver monitoring systems to illustrate the challenges and tensions involved in the use of ADMS, and highlight that human oversight of ADMS is neither easily defined nor well implemented. Haitsma, in his analysis of a landmark judgment of the Court of Justice of the European Union in 2022 on discrimination and algorithmic profiling in a border security context, shows that courts dealing with legal challenges to ADMS struggle to assess risks and to prescribe clear safeguards and how to effectively implement them. 
The authors in this collection of articles initiate a discussion of the socio-legal and sociotechnical challenges associated with human participation in ADMS, drawing on insights from law, social science, philosophy, computer science and engineering. Salvini et al. use case studies in social care, aviation, and vehicle driver monitoring systems to illustrate the challenges and tensions involved in the use of ADMS, and highlight that human oversight of ADMS is neither easily defined nor well implemented. Haitsma, in his analysis of a landmark 2022 judgment of the Court of Justice of the European Union on discrimination and algorithmic profiling in a border security context, shows that courts dealing with legal challenges to ADMS struggle to assess risks, to prescribe clear safeguards, and to determine how to implement them effectively.

Constantino and Wagner explore accountability principles for governing intelligence and security services in democratic societies so as to ensure responsible, answerable practices. These proposed principles include acting within duty, explainability, necessity, proportionality, reporting and record keeping, redress, and continuous independent oversight. Human, in his philosophical reflection on the loss of human agency and the threat to human rights in the digital age, argues for a paradigm shift from predominantly "individual-centric" approaches to data protection and consent toward human-compatible, collective approaches. He goes on to propose the establishment of novel sociotechnical mechanisms, such as the "Advanced Data Protection Control (ADPC)", within internet infrastructures to facilitate effective communication between users and stakeholders.

In sum, the articles in this collection contribute to the debate on how HITL should evolve beyond simplistic "human approval" models toward more sophisticated collaborative frameworks in which humans and automated systems complement each other's strengths while mitigating their respective weaknesses. Implementing effective human oversight remains challenging, and, most importantly, responsible development of automated systems means going beyond merely implementing technical safeguards to thoughtfully designing human-machine relationships that align with societal values and priorities.
Keywords: human-in-the-loop (HITL), human oversight, human-algorithm interaction, human factors in automation, human-centered AI, collaborative decision-making
Received: 14 Apr 2025; Accepted: 30 Apr 2025.
Copyright: © 2025 Wagner, Kuebler and Zalnieriute. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Ben Wagner, Delft University of Technology, Delft, 2628 CD, Netherlands
Johanne Kuebler, Sustainable Computing Lab, Vienna University of Economics and Business, Vienna, Austria
Monika Zalnieriute, Vilnius University, Vilnius, LT-01513, Lithuania
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.