ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. AI for Human Learning and Behavior Change
Robots in the Moral Loop: A Field Study of AI Advisors in Ethical Military Decision-Making
Provisionally accepted
- 1 Embry-Riddle Aeronautical University Worldwide and Online, Daytona Beach, United States
- 2 University of Colorado Boulder, Boulder, United States
- 3 Air Force Research Laboratory, Wright-Patterson Air Force Base, United States
- 4 US Air Force Academy, Air Force Academy, United States
Humans now routinely work alongside AI in environments where the ethical consequences of decisions are profound, yet there remains limited understanding of how long-term collaboration with a robotic teammate shapes individuals’ moral judgment. Prior studies have demonstrated that people can be influenced by a robot’s moral recommendations, but such investigations have largely focused on single dilemmas or brief encounters conducted in laboratory settings. To address this gap, we conducted a three-month teaming program with 62 U.S. military cadets who interacted extensively with a Socially Intelligent and Ethical Mission Assistant (SIEMA) embodied either as a humanoid robot or as a human advisor in a field setting. After this sustained collaboration, cadets completed a graded moral dilemma that required balancing the lives of soldiers against those of civilians, during which they received a written recommendation from their SIEMA promoting a utilitarian option. Each participant recorded an initial judgment, then a second judgment after receiving SIEMA’s advice, and finally a third judgment following an opposing recommendation that emphasized civilian protection. Approximately half of the cadets shifted toward the utilitarian option after advice, regardless of whether the source was robotic or human. When subsequently presented with the recommendation to prioritize civilian protection, most of these cadets shifted again, often returning to their original stance. Qualitative analyses of open-ended explanations revealed that cadets justified their choices by invoking outcome-based reasoning, duties of protection, trust in their teammate, and personal values. Our findings demonstrate that robotic advisors can influence nuanced moral decisions and that such influence contributes to shaping future judgments. Accordingly, moral-AI design should present trade-offs transparently, surface competing values concurrently, and rely on human reflection rather than assuming isolated AI prompts will durably reset moral priorities.
Keywords: robot, ethics, AI, decision-making, mixed-methods
Received: 23 Sep 2025; Accepted: 03 Nov 2025.
Copyright: © 2025 Tossell, Kuennen, Momen, Tolston, Funke and de Visser. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Ali Momen, amomen425@gmail.com
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.