Original Research Article
Meaningful Human Control Over Autonomous Systems: A Philosophical Account
Ethics/Philosophy of Technology, Delft University of Technology, Netherlands
Debates on lethal autonomous weapon systems have proliferated in the last five years. Ethical concerns have been voiced about a possible rise in the number of wrongs and crimes in military operations and about the creation of a “responsibility gap” for harms caused by these systems. To address these concerns, the principle of “meaningful human control” has been introduced in the legal-political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what “meaningful human control” exactly means. In this paper we lay the foundation of a philosophical account of meaningful human control, based on the concept of “guidance control” as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of “Responsible Innovation” and “Value-sensitive Design”, our account of meaningful human control is cast in the form of design requirements. We identify two general, necessary conditions to be satisfied for an autonomous system to remain under meaningful human control: first, a “tracking” condition, according to which the system should be able to respond both to the relevant moral reasons of the humans designing and deploying the system and to the relevant facts in the environment in which the system operates; second, a “tracing” condition, according to which the system should be designed in such a way that the outcome of its operations can always be traced back to at least one human along the chain of design and operation. As we think that meaningful human control can become one of the central notions in the ethics of robotics and AI, in the last part of the paper we begin exploring the implications of our account for the design and use of non-military autonomous systems, for instance self-driving cars.
Keywords: autonomous weapon systems, meaningful human control, responsibility gap, ethics of robotics, responsible innovation in robotics, AI ethics, value-sensitive design in robotics, ethics of autonomous systems
Received: 24 Oct 2017;
Accepted: 01 Feb 2018.
Edited by: Ugo Pagallo, Università degli Studi di Torino, Italy
Copyright: © 2018 Santoni de Sio and Van den Hoven. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Dr. Filippo Santoni de Sio, Delft University of Technology, Ethics/Philosophy of Technology, Jaffalaan 5, Delft, 2628BX, Netherlands, email@example.com