Sec. Ethics in Robotics and Artificial Intelligence
Volume 9 - 2022 | https://doi.org/10.3389/frobt.2022.997386
Human augmentation, not replacement: A research agenda for AI and robotics in the industry
- 1Institute for Human-Centered Engineering, School of Engineering and Computer Science, Bern University of Applied Sciences, Bern, Switzerland
- 2Institute for Data Application and Security, School of Engineering and Computer Science, Bern University of Applied Sciences, Bern, Switzerland
- 3Institute New Work, Business School, Bern University of Applied Sciences, Bern, Switzerland
In discussions of the threats posed by work automation through robotics and AI, human replacement is often the first topic to come up. While it is sometimes seen as something positive, it more often revives the collective fear of people losing their jobs, a fear that has long been stoked by political discourse against immigration (Goldberg, 2015). The difference is that the threat is now a machine, thought to be much more productive than humans or even on the verge of becoming more intelligent than them: the so-called technological singularity (Kurzweil, 2005). In this position paper, we argue that the singularity myth has a negative influence on the current research agenda in artificial intelligence (AI) and robotics. Even if complete human replacement is more myth than reality, new technologies are altering the way we work, posing new challenges for how we manage human-machine interactions, including work alienation, decision-making power and fairness. We call for greater attention to augmentation technologies that empower humans rather than mechanize and deskill them. We lay out the advantages of such a path, stressing that industry can truly benefit from new technologies when human-machine complementarity is leveraged.
Human replacement is not the main threat
To better understand the general skepticism towards the singularity, it is useful to distinguish between “narrow artificial intelligence” (NAI), which aims at efficiently solving a specific, complex problem, and “artificial general intelligence” (AGI), which aims at reproducing human intelligence capabilities (Goertzel, 2007). The two forms of artificial intelligence rest on very different grounds, with most research efforts focusing on NAI (Bundy, 2017). AGI, on the contrary, is “still at the stage of infancy”, and most contributions in the field rely more on “imagination than on trustworthy data” (Braga and Logan, 2019). Indeed, there is no proof that the singularity will ever occur, and if it does, it is very unlikely to happen in the near future (National Science and Technology Council, 2016).
However, despite its highly hypothetical nature, the singularity is widely discussed, which has led to the creation of a modern myth, sometimes referred to as Apocalyptic AI (Geraci, 2008). Indeed, the hope of creating a perfect, immortal human being is strongly anchored in the idea of human self-deification (Zimmermann, 2008). Interestingly, the tenets of this myth have been sustained mainly by experts in the field (Natale and Ballatore, 2020), and it still strongly influences current scientific research and the perception of AI (Bringsjord et al., 2012). One of the most persistent myths is that of full autonomy, as described by Mindell (2015): contrary to this myth, robots and (N)AI will never be completely autonomous because, by design, intentions will always need to be defined by humans, ruling out the possibility of complete human replacement.
Nevertheless, the myth of full autonomy is strongly present in both public and scientific debate. For instance, a 2017 Eurobarometer survey showed that 72% of Europeans believe that “robots and AI steal people’s jobs” (European Commission, 2017). In the same year, a study by Frey and Osborne (2017) predicted that 47% of jobs in the USA were at high risk of disappearing through automation. Another study (Arntz et al., 2016) concluded that only 9% of jobs could be automated, and not necessarily in an economically viable way. The main difference between the two studies is that Frey and Osborne explored which tasks could in principle be automated, while overlooking the fact that these tasks are part of more complex jobs that in their entirety are not suitable for automation (Fernandez-Macias and Bisello, 2020; Parker and Grote, 2022). As pointed out in (Macias et al., 2016; Autor and Salomons, 2018), previous waves of industrialization have already automated most physical labor, and the remaining tasks are completely beyond the current capabilities of robots and (N)AI. In 2020, a report of the European Commission (Klenert et al., 2020) systematically studied the impact of automation on jobs in Europe between 1995 and 2015, concluding that automation had a positive impact on employment in manufacturing. The conclusions found in the literature on job replacement are, however, mixed, owing to differences in methodologies and levels of analysis. A generic survey (Barbieri et al., 2019) highlighted that more detailed analyses that account for the difficulties of automation lead to far less pessimistic predictions, comparable to previous technological revolutions (Dahlin, 2019). While the risk of unemployment should not be underestimated, we believe that the main challenge with AI and robotics lies in the quality of the interaction with the machine.
Human-machine interaction: Mechanization or empowerment?
Even if new automation technologies are unlikely to replace us in the near future, they will alter the way we work (Brynjolfsson and McAfee, 2011). Indeed, with the increasing data-processing power of technology, machines can exercise intentionality over protocols and action selection, thereby challenging the dominance of human agency, autonomy and, ultimately, power (Murray et al., 2021). This calls for greater attention to the potential for worker empowerment or mechanization, shifting the focus from artificial intelligence to augmented intelligence.
To address this issue, we build on the taxonomy of conjoined agency, defined as the “shared capacity between humans and nonhumans to exercise intentionality” (Murray et al., 2021, p. 555). Agency over what to do (action selection) and how to do it (protocol development) can rest either with the human or with the technology, leading to four forms of conjoined agency: assisting, augmenting, arresting, or automating technologies (Table 1). We believe that if agency over action selection rests with the human, the interaction with the machine can help empower them. However, if agency over action selection is transferred to the technology, we witness a form of human mechanization through the interaction (Bui, 2020).
TABLE 1. Empowerment and mechanization in human-machine interactions (adapted from Murray et al., 2021, p. 555).
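As a reading aid, this 2x2 taxonomy can be expressed as a small classifier. The following Python sketch is our own illustration, not code from Murray et al.: the empowerment/mechanization split follows the argument above (the locus of agency over action selection decides the outcome), while the assignment of the four labels to quadrants is an assumed reading that should be checked against Table 1.

```python
from enum import Enum

class Agent(Enum):
    HUMAN = "human"
    TECHNOLOGY = "technology"

def classify(action_selection: Agent, protocol: Agent) -> tuple:
    """Return (form, outcome) for a human-machine workflow.

    Per the argument above, the locus of agency over *action selection*
    decides between empowerment and mechanization; the quadrant labels
    are an assumed reading of Murray et al. (2021), Table 1.
    """
    if action_selection is Agent.HUMAN:
        # Human decides what to do: the interaction can empower.
        form = "assisting" if protocol is Agent.HUMAN else "augmenting"
        return form, "empowerment"
    # Technology decides what to do: the interaction mechanizes.
    form = "arresting" if protocol is Agent.HUMAN else "automating"
    return form, "mechanization"
```

For example, a recommender that proposes options while the operator chooses the action stays in the empowerment half, whereas a system that both defines the procedure and selects the action falls into full automation.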
In the following, we highlight the role of the nature of the interaction for our understanding of augmented intelligence and augmented worker.
Augmented intelligence refers to AI technologies that do not replace human decision-making but rather provide additional information for decision support (IEEE, 2022), and thus falls into the empowerment category described above. Imagine AI-based software automatically extracting the relevant information from thousands of pages of text, with a human making the final decision, bringing in a broader view with more context and reflecting on the outcome. The data-analysis task would take days for a human, while the machine cannot consider any aspects beyond its narrow view of the data. Augmented intelligence thus allows for faster and more efficient human decision-making. However, implementing such technology still poses challenges: the decision-making may result in unfair and discriminatory decisions, but also in workers being unable to contest a decision because it lacks explainability, leading to a loss of employees’ autonomy and job control (Jankauskaitė et al., 2022). Similarly, AI-based worker management (AIWM) systems (European Agency for Safety and Health at Work, 2022) can be empowering for the manager while subjecting the worker to a form of mechanization, highlighting the need to reflect on the power asymmetries that new technology can perpetuate. Likewise, algorithmic and augmenting systems can accentuate discrimination, ranging from unfair decisions for specific groups in risk-assessment systems or exclusion from the labor market (O’Neil, 2016; Draeger and Müller-Eiselt, 2019; Eubanks, 2019) to systems not working, or not working properly, for specific groups. This calls for a sensitive and reflective practice of using data and augmented intelligence in decision-making processes, including a reflection on the moral values with which we, as a society, want to shape the future of human-machine interactions.
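The decision-support pattern described above — the machine narrows the data, the human decides — can be sketched as follows. This is a hypothetical illustration only: `extract` and `human_decide` are placeholders for an information-extraction model and a human-facing interface, not components of any real system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Recommendation:
    summary: str       # machine-condensed evidence from the documents
    confidence: float  # model confidence, disclosed to the human

def augmented_decision(documents: List[str],
                       extract: Callable[[List[str]], Recommendation],
                       human_decide: Callable[[Recommendation], str]) -> str:
    """Decision-support loop: the machine condenses thousands of pages
    into a recommendation, but action selection (the final decision)
    stays with the human, keeping the workflow on the empowerment side."""
    recommendation = extract(documents)   # narrow, data-bound machine task
    return human_decide(recommendation)   # human adds context and judgment
```

The design point is that the return value comes from `human_decide`, never from `extract`: the model informs, and the human selects the action.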
When decisions are taken by machines, specific assessments requiring human intervention can be implemented to safeguard fairness and human integrity (Leicht-Deobald et al., 2019). Indeed, it has been argued that considering the moral perspective is a crucial yet neglected aspect of fairness metrics (Leicht-Deobald et al., 2019; Hertweck et al., 2021), and this consideration cannot be automated. The decision about which forms of discrimination, marginalization, and exclusion are morally justified ultimately rests with the human.
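One fairness metric from this literature, statistical parity (Hertweck et al., 2021), illustrates the division of labor argued for here: the metric itself can be computed mechanically, but judging whether an observed disparity is morally justified cannot. The sketch below is our own illustration, not code from the cited works, and the 0.1 threshold is an arbitrary placeholder for an organizational policy choice.

```python
def statistical_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates across groups.

    decisions: list of 0/1 machine decisions;
    groups: parallel list of group labels for the affected people.
    """
    rates = {}
    for g in set(groups):
        group_decisions = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(group_decisions) / len(group_decisions)
    return max(rates.values()) - min(rates.values())

def needs_human_review(decisions, groups, threshold=0.1):
    # The gap is computed mechanically, but whether a gap is morally
    # justified cannot be automated: above the (illustrative) threshold,
    # the system is flagged for human assessment rather than overridden.
    return statistical_parity_gap(decisions, groups) > threshold
```

Note that crossing the threshold does not trigger an automatic correction; it only routes the case to a human, consistent with keeping the moral judgment outside the machine.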
The notion of the augmented worker, or Operator 4.0 (Romero et al., 2020), refers to an anthropocentric approach to manufacturing (Dworschak and Zaiser, 2014): the machine is seen as a tool to empower workers rather than replace them. Indeed, industry is experiencing a paradigm shift: while the focus has been on human replacement for decades, it is now becoming clear that humans will still be needed in factories for the foreseeable future (Tan et al., 2019; Romero et al., 2020). To cope with the growing versatility of the market, some authors argue further that companies can achieve the largest boosts in performance by leveraging the complementary strengths of humans and machines (Daugherty and Wilson, 2018): the flexibility of human work combined with the efficiency of automation. In practice, however, most manufacturing systems are still implemented following a traditional, technocentric approach (Dworschak and Zaiser, 2014): the work process is determined by the technology. The focus is on performance and repeatability, and the human is expected to work like a machine (Bui, 2020), possibly leading to a feeling of mechanization. Several studies have underlined the danger of a further polarization of the labor market (Holm et al., 2020) between the people who serve the technology, the cyberproletariat (Huws, 2014), and those who hold the skills to control it. However, while in the past machines were complex and costly and workers had few possibilities to modify them or to complement their own skills (Levine, 2019), new technologies such as collaborative robotics are democratizing the technology and making it accessible to non-experts (Villani et al., 2018). AI and robotics become tools that support augmented workers and make them more efficient in their jobs. Rather than reducing workers’ decision control to avoid errors, the technology is used to prevent errors and assist workers in their tasks (Lu et al., 2021).
According to manufacturing approaches such as Kaizen (Janjić et al., 2020) and Agile (Gunasekaran et al., 2019), it is only by giving control back to the worker on the shop floor that the full potential of new technologies can be leveraged to boost productivity, motivation and innovation, as illustrated in Figure 1.
FIGURE 1. The four types of workflows: Manual work, full automation, worker empowerment and worker mechanization. The two axes show the strengths of both humans and machines and the characteristics of the workflow associated with different forms of human-machine interaction.
The idea of a singularity outperforming humans continues to fascinate researchers and practitioners alike. In this position paper, we have argued that the question of the singularity is misleading. We need instead to attend to the questions at the heart of our daily interactions with machines, shifting from the question of human replacement to the question of the quality of human-machine interaction. Taking a closer look at the nature of such conjoined agency, we have differentiated between interactions that tend to mechanize the human and those that empower them. To create viable futures, the emphasis should be on the latter. We have outlined an agenda to bring this potential to fruition, including questions of fairness, discrimination, skill, and power in organizations. Ultimately, it is not the technology itself that creates a threat for humans, but rather the way it is implemented (Clegg, 2000; Moore, 2019). We therefore call for more attention to questions of agency and power when designing technology, to ensure a vital human-in-command approach (De Stefano, 2018) and to create sustainable employment and learning opportunities for all.
SDR, MK-B, NE, and OY contributed equally to the conception of the paper. DS drafted the initial manuscript based on inputs from MK-B, NE, and OY. NE revised the manuscript critically. All authors contributed to the manuscript final revision and approved the submitted version.
The work of SDR, NE, and OY was supported by the Swiss National Science Foundation (SNSF) as part of the National Research Program NRP77 Digital Transformation, grant no. 407740_187298. The publication fees were paid by the Bern University of Applied Sciences.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Arntz, M., Gregory, T., and Zierahn, U. (2016). The risk of automation for jobs in OECD countries: A comparative analysis. OECD Soc. Employ. Migr. Work. Pap., 1–34. doi:10.1787/1815199X
Autor, D., and Salomons, A. (2018). Is automation labor-displacing? Productivity growth, employment, and the labor share. National Bureau of Economic Research. Working Paper Series. doi:10.3386/w24871
Barbieri, L., Mussida, C., Piva, M., and Vivarelli, M. (2019). Testing the employment impact of automation, robots and AI: A survey and some methodological issues. IZA – Institute of Labor Economics. Working Paper 12612. IZA Discussion Papers. Available at: https://www.econstor.eu/handle/10419/207437 (Accessed: 23 August 2022).
Braga, A., and Logan, R. K. (2019). AI and the singularity: A fallacy or a great opportunity? Information 10 (2), 73. doi:10.3390/info10020073
Bringsjord, S., Bringsjord, A., and Bello, P. (2012). “Belief in the singularity is fideistic,” in Singularity hypotheses: A scientific and philosophical assessment. (Berlin, Heidelberg: Springer The Frontiers Collection), 395–412. doi:10.1007/978-3-642-32560-1_19
Brynjolfsson, E., and McAfee, A. (2011). Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. Marina Del Rey, California, United States: Digital Frontier Press.
Bui, L. (2020). Asian roboticism: Connecting mechanized labor to the automation of work. Perspect. Glob. Dev. Technol. 19 (1–2), 110–126. doi:10.1163/15691497-12341544
Bundy, A. (2017). Preparing for the future of artificial intelligence. AI Soc. 32 (2), 285–287. doi:10.1007/s00146-016-0685-0
Clegg, C. W. (2000). Sociotechnical principles for system design. Appl. Ergon. 31 (5), 463–477. doi:10.1016/S0003-6870(00)00009-0
Dahlin, E. (2019). Are robots stealing our jobs? Socius. 5, 237802311984624. doi:10.1177/2378023119846249
Daugherty, R., and Wilson, H. J. (2018). Human + machine: Reimagining work in the age of AI. Boston, Massachusetts: Harvard Business Review Press.
De Stefano, V. (2018). “Negotiating the algorithm”: Automation, artificial intelligence and labour protection. Comp. Labor Law Policy J. 41. doi:10.2139/ssrn.3178233
Draeger, J., and Müller-Eiselt, R. (2019). Wir und die intelligenten maschinen: Wie algorithmen unser leben bestimmen und wir sie für uns nutzen können. Auflage. München: Deutsche Verlags-Anstalt.
Dworschak, B., and Zaiser, H. (2014). Competences for cyber-physical systems in manufacturing – first findings and scenarios. Procedia CIRP 25, 345–350. doi:10.1016/j.procir.2014.10.048
Eubanks, V. (2019). Automating inequality: How high-tech tools profile, police, and punish the poor. First Picador edition. New York: Picador St. Martin’s Press.
European Agency for Safety and Health at Work (2022). Artificial intelligence for worker management: An overview | safety and health at work EU-OSHA. Available at: https://osha.europa.eu/en/publications/artificial-intelligence-worker-management-overview (Accessed: July 16, 2022).
European Commission (2017). Attitudes towards the impact of digitisation and automation on daily life: Report. Brussels: European Commission. LU: Publications Office. Available at: https://data.europa.eu/doi/10.2759/835661 (Accessed: July 18, 2022).
Fernandez-Macias, E., and Bisello, M. (2020). A taxonomy of tasks for assessing the impact of new technologies on work. JRC Working Papers on Labour, Education and Technology. Joint Research Centre Seville site. Available at: https://ideas.repec.org/p/ipt/laedte/202004.html (Accessed: 16 July 2022).
Frey, C. B., and Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technol. Forecast. Soc. Change 114, 254–280. doi:10.1016/j.techfore.2016.08.019
Geraci, R. M. (2008). Apocalyptic AI: Religion and the promise of artificial intelligence. J. Am. Acad. Relig. 76 (1), 138–166. doi:10.1093/jaarel/lfm101
Goertzel, B. (2007). Human-level artificial general intelligence and the possibility of a technological singularity: A reaction to Ray Kurzweil’s The Singularity Is Near, and McDermott’s critique of Kurzweil. Artif. Intell. 171 (18), 1161–1173. doi:10.1016/j.artint.2007.10.011
Goldberg, K. (2015). Robotics: Countering singularity sensationalism. Nature 526 (7573), 320–321. doi:10.1038/526320a
Gunasekaran, A., Yusuf, Y., Geyi, D., Papadopoulos, T., and Kovvuri, D. (2019). Agile manufacturing: An evolutionary review of practices. Int. J. Prod. Res. 57 (15–16), 5154–5174. doi:10.1080/00207543.2018.1530478
Hertweck, C., Heitz, C., and Loi, M. (2021). “On the moral justification of statistical parity,” in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event Canada, March 3 - 10, 2021, 747–757. ACM. doi:10.1145/3442188.3445936
Holm, J. R., Lorenz, E., and Nielsen, P. (2020). Work organization and job polarization. Res. Policy 49 (8), 104015. doi:10.1016/j.respol.2020.104015
Huws, U. (2014). Labor in the global digital economy: The cybertariat comes of age. New York, United States: NYU Press.
IEEE (2022). What is augmented intelligence? - IEEE digital reality. Available at: https://digitalreality.ieee.org/publications/what-is-augmented-intelligence (Accessed June 24, 2022).
Janjić, V., Todorović, M., and Jovanović, D. (2020). Key success factors and benefits of kaizen implementation. Eng. Manag. J. 32 (2), 98–106. doi:10.1080/10429247.2019.1664274
Jankauskaitė, V., Christenko, A., and Paliokaitė, A. (2022). Artificial intelligence for worker management: Existing and future regulations | safety and health at work EU-OSHA. Available at: https://osha.europa.eu/en/publications/artificial-intelligence-worker-management-existing-and-future-regulations (Accessed: July 16, 2022).
Klenert, D., Fernandez-Macias, E., and Antón, J. I. (2020). Do robots really destroy jobs? Evidence from Europe. JRC Working Papers on Labour, Education and Technology 2020–01. Joint Research Centre Seville site. Available at: https://econpapers.repec.org/paper/iptlaedte/202001.htm (Accessed: 16 July 2022).
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York, US: Penguin Publishing Group.
Leicht-Deobald, U., Busch, T., Schank, C., Weibel, A., Schafheitle, S., Wildhaber, I., et al. (2019). The challenges of algorithm-based HR decision-making for personal integrity. J. Bus. Ethics 160, 377–392. doi:10.1007/s10551-019-04204-w
Levine, D. I. (2019). Automation as part of the solution. J. Manag. Inq. 28 (3), 316–318. doi:10.1177/1056492619827375
Lu, Y., Sastre, J., Chand, S., and Wang, L. (2021). Humans are not machines—anthropocentric human–machine symbiosis for ultra-flexible smart manufacturing. Engineering 7 (6), 734–737. doi:10.1016/j.eng.2020.09.018
Macias, E., Hurley, J., and Bisello, M. (2016). What do Europeans do at work? A task-based analysis. Luxembourg: Publications Office of the European Union. doi:10.2806/12545
Mindell, D. A. (2015). Our robots, ourselves: Robotics and the myths of autonomy. New York, United States: Penguin Publishing Group.
Moore, P. V. (2019). “OSH and the future of work: Benefits and risks of artificial intelligence tools in workplaces,” in Digital human modeling and applications in health, safety, ergonomics and risk management. Human body and motion. Editor V. G. Duffy (Cham: Springer International Publishing Lecture Notes in Computer Science), 292–315. doi:10.1007/978-3-030-22216-1_22
Murray, A., Rhymer, J., and Sirmon, D. G. (2021). Humans and technology: Forms of conjoined agency in organizations. Acad. Manage. Rev. 46 (3), 552–571. doi:10.5465/amr.2019.0186
Natale, S., and Ballatore, A. (2020). Imagining the thinking machine: Technological myths and the rise of artificial intelligence. Convergence. 26 (1), 3–18. doi:10.1177/1354856517715164
National Science and Technology Council (2016). Preparing for the future of artificial intelligence. Available at: https://publicintelligence.net/white-house-preparing-artificial-intelligence/ (Accessed: July 16, 2022).
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. First edition. New York: Crown.
Parker, S. K., and Grote, G. (2022). Automation, algorithms, and beyond: Why work design matters more than ever in a digital world. Appl. Psychol. 71 (4), 1171–1204. doi:10.1111/apps.12241
Romero, D., Stahre, J., and Taisch, M. (2020). The Operator 4.0: Towards socially sustainable factories of the future. Comput. Industrial Eng. 139, 106128. doi:10.1016/j.cie.2019.106128
Tan, Q., Tong, Y., Wu, S., and Li, D. (2019). Anthropocentric approach for smart assembly: Integration and collaboration. J. Robotics 2019, 1–8. doi:10.1155/2019/3146782
Villani, V., Sabattini, L., Czerniak, J. N., Mertens, A., and Fantuzzi, C. (2018). MATE robots simplifying my work: The benefits and socioethical implications. IEEE Robot. Autom. Mag. 25 (1), 37–45. doi:10.1109/MRA.2017.2781308
Zimmermann, M. (2008). The singularity: A crucial phase in divine self-actualization? Cosmos Hist. J. Nat. Soc. Philosophy 4, 347.
Keywords: human-machine interaction, artificial intelligence, augmented intelligence, robotics, complementary cooperation
Citation: Dégallier-Rochat S, Kurpicz-Briki M, Endrissat N and Yatsenko O (2022) Human augmentation, not replacement: A research agenda for AI and robotics in the industry. Front. Robot. AI 9:997386. doi: 10.3389/frobt.2022.997386
Received: 18 July 2022; Accepted: 02 September 2022;
Published: 04 October 2022.
Edited by:Ou Ma, University of Cincinnati, United States
Reviewed by:Subir Kumar Saha, Indian Institute of Technology Delhi, India
Zhaokui Wang, Tsinghua University, China
Copyright © 2022 Dégallier-Rochat, Kurpicz-Briki, Endrissat and Yatsenko. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Sarah Dégallier-Rochat, firstname.lastname@example.org