In a recent proposal, the European Parliament suggested that robots might need to be considered “electronic persons” for the purposes of social and legal integration. The idea sparked controversy and has been met with both enthusiasm and resistance. Underlying this disagreement, however, is an important moral and legal question: When (if ever) would it be necessary for robots, AI, or other socially interactive, autonomous systems to have some claim to moral and legal standing? When (if ever) would a technological artifact need to be considered more than a mere instrument of human action and have some legitimate claim to independent social status? What are the costs and benefits of a proposal like that advanced by the European Parliament? Or to put it more directly, what will our social world look like, and what do we want it to look like, in the face (or the faceplate) of social robots?
These questions are important and timely because they ask how social robots will be incorporated into existing social organizations and systems. Typically, technological objects, no matter how simple or sophisticated, are considered tools or instruments of human decision making and action. This instrumentalist definition not only has the weight of tradition behind it but has also proved to be a useful instrument for responding to and making sense of innovation in artificial intelligence and robotics. Social robots, however, appear to confront this standard operating procedure with new and unanticipated opportunities and challenges. Consistent with the predictions of the Computers Are Social Actors (CASA) studies and the media equation, users respond to these technological objects as if they were other socially situated entities. Social robots, therefore, appear to be more than just tools, occupying positions where we respond to them as another socially significant Other.
This Research Topic seeks to make sense of the social significance and consequences of technologies that have been deliberately designed and deployed for social presence and interaction. The question that frames the collection is “Should robots have standing?” It is derived from Christopher Stone’s agenda-setting publication in environmental law and ethics, Should Trees Have Standing? Toward Legal Rights for Natural Objects. In extending this mode of questioning to social robots, contributions to this Research Topic will 1) debate whether and to what extent robots can or should have standing, 2) evaluate the benefits and costs of recognizing social status when it involves technological objects and artifacts, and 3) respond to and provide guidance for developing an intelligent and informed plan for the responsible integration of social robots.
Image credit: Robot mural. Ul. Zwierzyniecka, Kraków, Poland. Photograph by David J. Gunkel (2011).