<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="review-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Phys.</journal-id>
<journal-title>Frontiers in Physics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Phys.</abbrev-journal-title>
<issn pub-type="epub">2296-424X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">951296</article-id>
<article-id pub-id-type="doi">10.3389/fphy.2022.951296</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Physics</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Trust in things: A review of social science perspectives on autonomous human-machine-team systems and systemic interdependence</article-title>
<alt-title alt-title-type="left-running-head">Akiyoshi</alt-title>
<alt-title alt-title-type="right-running-head">
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fphy.2022.951296">10.3389/fphy.2022.951296</ext-link>
</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Akiyoshi</surname>
<given-names>Mito</given-names>
</name>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1520243/overview"/>
</contrib>
</contrib-group>
<aff>
<institution>Department of Sociology</institution>, <institution>Senshu University</institution>, <addr-line>Kawasaki</addr-line>, <country>Japan</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/313975/overview">William Frere Lawless</ext-link>, Paine College, United States</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1864489/overview">Michael Wollowski</ext-link>, Rose-Hulman Institute of Technology, United States</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/2030383/overview">Yohei Katano</ext-link>, Meiji University, Japan</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Mito Akiyoshi, <email>mito.akiyoshi@gmail.com</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Interdisciplinary Physics, a section of the journal Frontiers in Physics</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>07</day>
<month>11</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>10</volume>
<elocation-id>951296</elocation-id>
<history>
<date date-type="received">
<day>23</day>
<month>05</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>24</day>
<month>10</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2022 Akiyoshi.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Akiyoshi</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>For Autonomous Human Machine Teams and Systems (A-HMT-S) to function in a real-world setting, trust has to be established and verified in both human and non-human actors. But the nature of &#x201c;trust&#x201d; itself, as established by long-evolving social interaction among humans and as encoded by humans in the emergent behavior of machines, is not self-evident and should not be assumed <italic>a priori</italic>. The social sciences, broadly defined, can provide guidance in this regard, pointing to the situational, context-driven, and sometimes other-than-rational grounds that give rise to trustability, trustworthiness, and trust. This paper introduces social scientific perspectives that illuminate the nature of trust that A-HMT-S must produce as they take root in society. It does so by integrating key theoretical perspectives: the ecological theory of actors and their tasks, theory on the introduction of social problems into the civic sphere, and the material political economy framework developed in the sociological study of markets.</p>
</abstract>
<kwd-group>
<kwd>machine</kwd>
<kwd>algorithm</kwd>
<kwd>artificial intelligence</kwd>
<kwd>interdependence</kwd>
<kwd>sociology</kwd>
<kwd>trust</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>In this paper, Autonomous Human Machine Teams and Systems (A-HMT-S) are defined as teams that include humans and increasingly intelligent and autonomous machines working together [<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B2">2</xref>]. Intelligent machines are defined as machines or algorithms that think by scanning data for patterns, make inferences, and learn by testing inferences [<xref ref-type="bibr" rid="B3">3</xref>]. Advances in deep learning in the 21st century bring this emerging phenomenon closer to reality [<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B2">2</xref>, <xref ref-type="bibr" rid="B4">4</xref>], though the idea of thinking machines was explored decades ago by Turing, Shannon, Wiener, Simon, and others, and was foreshadowed to an extent by Babbage&#x2019;s Difference Engines and Analytical Engine a century before that [<xref ref-type="bibr" rid="B5">5</xref>].</p>
<p>A key advance in the conceptualization of A-HMT-S is that intelligent machines are intended to operate as full-fledged team members collaborating with humans [<xref ref-type="bibr" rid="B4">4</xref>, <xref ref-type="bibr" rid="B6">6</xref>]. Not only do they assist human decision-making and automate information processing, they also make decisions on their own and instruct human workers and other machines [<xref ref-type="bibr" rid="B7">7</xref>]. For example, an artificially intelligent co-worker named Charlie has been developed by Cummings et al. [<xref ref-type="bibr" rid="B4">4</xref>]. Charlie is designed to perform typical white-collar tasks: she gives interviews, takes part in brainstorming sessions, and collaborates in writing papers. But exhibiting recognizable and anthropomorphized agency or human-like identity, as Charlie does, is not central to the definition of A-HMT-S. Such machines may not have the attractive features of <italic>Blade Runner</italic>&#x2019;s replicants, but they have the kind of intelligence that would pass the Turing Test in the specific tasks to which they are assigned. They will also someday pass the &#x201c;toilet test&#x201d;&#x2014;the ability to run unsupervised while humans address their bodily needs [<xref ref-type="bibr" rid="B8">8</xref>]. In short, the defining feature of the intelligent machines that constitute A-HMT-S is that they can model human recognition, learning, and reasoning [<xref ref-type="bibr" rid="B3">3</xref>].</p>
<p>As our machine helpers become increasingly autonomous and intelligent, the interdependence between human and non-human actors deepens. Although that evolution is far from complete and may never end, quasi-A-HMT-S with semi-autonomous machines are now commonplace. They diagnose and treat diseases [<xref ref-type="bibr" rid="B9">9</xref>], drive vehicles [<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B10">10</xref>], fly airplanes [<xref ref-type="bibr" rid="B11">11</xref>, <xref ref-type="bibr" rid="B12">12</xref>], educate students, conduct research [<xref ref-type="bibr" rid="B4">4</xref>], trade stocks and derivatives [<xref ref-type="bibr" rid="B8">8</xref>], market products and services [<xref ref-type="bibr" rid="B13">13</xref>], and fight wars [<xref ref-type="bibr" rid="B2">2</xref>], all with mixed results.</p>
<p>Algorithms lie at the core of these capabilities. In addition to their sheer ubiquity, their complexity and opacity and their anticipated consequences for humanity have motivated interdisciplinary research on societal effects of A-HMT-S [<xref ref-type="bibr" rid="B14">14</xref>, <xref ref-type="bibr" rid="B15">15</xref>]. This paper is part of that interdisciplinary effort, addressing a crucial aspect of the implications of the integration of A-HMT-S into society: trust.</p>
<p>In traditional organizations, trust among workers is essential in achieving quality performance [<xref ref-type="bibr" rid="B16">16</xref>]. The increasing interdependence between humans and intelligent machines poses a series of trust-related questions: as machines become more autonomous, what are the causes and consequences of trust-building in A-HMT-S? What does it mean to trust non-human actors in a system? This paper uses a social-scientific toolkit to address these questions. In doing so, it may help to look back briefly and consider the issue as it was faced by users of the earliest known human tools: handaxes. The user of a handaxe had to &#x201c;trust&#x201d; that its shape and material would be adequate to the task, which usually involved cutting into some kind of organic matter. Since the user probably also made the tool, she or he had an inbuilt basis for trusting it, including trusting that it would not suddenly assume agentive power of its own and diverge from the user&#x2019;s goal, notwithstanding any animistic beliefs that might have been in play. The only other entities with &#x201c;agency&#x201d; in this scenario would have been other proto-humans, and the distribution of trust across the group would have been established by longstanding social norms and rules. In short, the issue of trust was severable from other considerations, and its resolution was an intra-human one.</p>
<p>The history of technology since then has seen that simple allocation of trust be thoroughly complicated by the folding of more and more human capability into the tools themselves&#x2014;at first physical and then mental [<xref ref-type="bibr" rid="B17">17</xref>]. A late medieval cannoneer had to trust the cannon wouldn&#x2019;t blow up in his face, but the location of that trustworthiness still resided in the cannon-maker. A Jacquard loom weaver, on the other hand, didn&#x2019;t have to place her trust in the card-maker because the output of the loom would reveal if the card-punching was accurate. A paddle-wheel steamboat passenger had to trust the boat and its crew, but might have known nothing about the mechanical steam engine governor that could be trusted (usually) to keep the engine speed steady. Today&#x2019;s automobile driver may only partially grasp the extent to which their survival depends on the trustability of dozens of microchips installed in the vehicle by factory workers who were overseeing relatively simple robots, which in turn had to be trusted to work right, with that chain of trust extending all the way back to the machines that designed the machines that designed them. Trust, once a human prerogative, is now diffused across multiple overlapping systems of systems. A-HMT-S is the inheritor of this long process.</p>
<p>But what is trust, and what makes an entity trustworthy? This paper accepts a widely agreed-upon definition of trust as the willingness of a trusting entity (the trustor) to be vulnerable to a trusted entity (the trustee) with respect to a pertinent domain, a trust object, against a backdrop of risk and uncertainty. Trust is therefore not a static thing but a constantly changeable relationship between actors, based on the assessment of each other&#x2019;s behavior in the relationship. One or both parties have just enough evidence to believe that the relationship will work out the way each of them expects it to [<xref ref-type="bibr" rid="B18">18</xref>&#x2013;<xref ref-type="bibr" rid="B20">20</xref>]. Though fragile, it is an absolute, foundational basis of society. That is why Dante in his <italic>Inferno</italic> reserved the lowest circle of hell for people who have betrayed other people&#x2019;s trust. Trustworthiness, meanwhile, is a roughly quantifiable set of properties that the trustee in a relationship displays to the trustor to signal their intentions and probable behavior.</p>
<p>Each dimension of trust&#x2014;trustor, trustee, and trust object&#x2014;is expressed across a spectrum of generality ranging from the most particular to the most highly generalized [<xref ref-type="bibr" rid="B18">18</xref>]. For example, one terrible visit to a physician may imply the withdrawal of trust in that particular doctor, in the category of medical professional she or he represents (e.g., cardiology), or in the entire community of medical experts. Whether a particular visit results in the demise of trust at any level of generality depends on other pertinent variables.</p>
<p>From that starting point, this review will provide a synthesis of key social scientific thinking relevant to the question of trust within and between human and non-human actors. The next section reviews social scientific literature on interpersonal trust, which is compared with human-machine trust in the section after that. Empirical and experimental studies have shown that multiple factors including algorithmic transparency and machine error rates affect the level of trustworthiness that humans ascribe to intelligent machines [<xref ref-type="bibr" rid="B21">21</xref>]. But trust in A-HMT-S is not fully reducible to design issues; we will see that the broader context of interactions between A-HMT-S and other spheres of society is also relevant. In order to examine inter-system trust, the fourth section draws on the urban ecology tradition in sociology, as well as on research on the construction of social problems and the sociology of technology. But rather than introducing concepts in the abstract, it discusses specific incidents that involve precursors of A-HMT-S. By way of conclusion, this paper argues that the issue of trust in A-HMT-S is a specific case of the broader issue of trust in abstract systems and that as such, trust-building spans multiple social ecosystems and is supported or undermined by interactions among them.</p>
</sec>
<sec id="s2">
<title>2 Industrialization and the transmutation of trust</title>
<sec id="s2-1">
<title>2.1 Interpersonal trust</title>
<p>Interpersonal trust is a linchpin of society. As discussed in <xref ref-type="sec" rid="s1">Section 1</xref>, trust processes can be analyzed in terms of the trustor, the trustee, and the trust object, each at varying degrees of generality. Small-scale societies are characterized by particularized trust because interactions tend to be embedded in a local context [<xref ref-type="bibr" rid="B17">17</xref>]. Societies that are more complexly organized require coordination among actors we may not personally know; in such societies, reliance on general trust has become widespread and is essential to their continued existence [<xref ref-type="bibr" rid="B22">22</xref>]. In either case, trust depends on complex mutual understandings that defy easy definition [<xref ref-type="bibr" rid="B23">23</xref>]. This tacit and yet robust trust in others to do what a mesh of overt and latent rules dictates, and which makes the social order possible, is one major focus of ethnomethodology, the sociological and anthropological study of the rules by which people organize their everyday lives [<xref ref-type="bibr" rid="B24">24</xref>].</p>
<p>Interpersonal trust, in this perspective, operates on a provisional basis, and involves a sort of pattern-matching exercise. Confirming every datum imaginable and eliminating all alternate interpretative possibilities are neither possible nor called for unless the veracity of a person&#x2019;s explicit or tacit claim is called into question. A just-good-enough assessment of the situation suffices [<xref ref-type="bibr" rid="B24">24</xref>]. Thus, if someone who &#x201c;looks like a college professor&#x201d; enters a college classroom and approaches the podium, students assume that person is the course instructor and rarely ask for official proof of his or her identity. Additional elements of legitimation may appear in the form of references to the shared institutional structure that encompasses both the professor and the students&#x2014;the topic of the course, the academic calendar, the grading system. As long as the behavior matches the observer&#x2019;s expectations in that setting, provisional trust will be satisfied.</p>
<p>We all do this a hundred times a day without even thinking about it. Social interaction is made possible by everyone&#x2019;s taking everyone else&#x2019;s claims at face value unless some contradictory evidence emerges that requires vetting [<xref ref-type="bibr" rid="B23">23</xref>]. The taken-for-granted nature of social life constitutes a cognitive and emotional common ground that is prior even to shared values and norms&#x2014;things that are thought of as &#x201c;culture&#x201d; in the social scientific sense. Trust evolves over time in organizations through interactions that involve people&#x2019;s values, attitudes, and emotions [<xref ref-type="bibr" rid="B16">16</xref>].</p>
<p>Because interpersonal trustworthiness is not fully or even primarily grounded in the procedure of fact checking, societies vary widely in terms of the level of confidence people have in one another [<xref ref-type="bibr" rid="B25">25</xref>]. This is verifiable by looking at situations where trust is lacking. For example, the mafia-type organized crime syndicates in southern Italy came into being as enforcers of contracts in a low-trust environment [<xref ref-type="bibr" rid="B26">26</xref>, <xref ref-type="bibr" rid="B27">27</xref>]. Farmers who could not trust their counterparties in selling or buying produce and livestock had to turn to proto-mafiosi to guarantee transactions with threats of violence. Similarly, neighborhoods with high crime rates must invest heavily in security, and endure stressful anxiety, whereas individuals in low-crime areas can insouciantly leave their doors unlocked when they go out to run errands. The erosion of trust makes lives difficult. Until destroyed, the operation of trust tends to remain invisible, and yet trust is a public good from which other advantages such as cooperation, tolerance, functioning democracy, and market efficiency come about [<xref ref-type="bibr" rid="B16">16</xref>, <xref ref-type="bibr" rid="B28">28</xref>].</p>
</sec>
<sec id="s2-2">
<title>2.2 Trust in machines and abstract systems</title>
<p>Industrialization extended the scope of trust relationships to include abstract systems [<xref ref-type="bibr" rid="B29">29</xref>]. Individuals and organizations in highly industrialized societies must learn to trust knowledge systems and technologies they do not fully grasp. Again, perfect grounding is precluded and faith is an integral dimension underlying trust. People board trains not knowing how the public transportation system is organized and operated, and they receive mRNA vaccines to protect themselves against viral infections without a detailed understanding of the immune system or vaccine manufacturing. Workers also learn, through trial and error, to trust machines they operate to mass produce goods and services. The threat of deskilling might be seen as a potential source of the erosion of trust in cases of automation, but Zuboff finds that workers adopt and adapt through explorative use of new technologies and achieve reskilling by becoming their adept and creative users [<xref ref-type="bibr" rid="B30">30</xref>].</p>
<p>In our capacity as consumers, too, we have entered a world where we buy things produced by distant others. The rise of advertising and branding is associated with this shift towards mass production, distribution, and consumption which Beniger has called &#x201c;the control revolution&#x201d; [<xref ref-type="bibr" rid="B17">17</xref>]. Advertising and branding are important where interpersonal trust cannot guarantee the quality of goods produced by large-scale organizations and sold anonymously. As Max Weber&#x2019;s celebrated analysis has shown, bureaucracy arises to enable the operation of such organizations by releasing trust from the domain of interpersonal relationships and the immediacy of face-to-face interaction, replacing it with formally defined rules and procedures and a hierarchy of offices [<xref ref-type="bibr" rid="B31">31</xref>].</p>
</sec>
</sec>
<sec id="s3">
<title>3 Difficulties of building trust in A-HMT-S</title>
<p>Although trust in A-HMT-S has unique aspects, in principle the questions it raises are predictable extensions of the centuries-long process that preceded it [<xref ref-type="bibr" rid="B29">29</xref>]. Prior to the development of A-HMT-S, there were systems consisting of human operators and non-autonomous and non-intelligent machines and tools: vehicles, missile systems, nuclear power plants, and so on [<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B17">17</xref>]. I call these complex but non-intelligent tools &#x201c;mundane systems&#x201d; in contrast to A-HMT-S.</p>
<p>Technology scholars Hengstler, Enkel, and Duelli argue that trust in automated systems has two aspects: trust in the automation technology itself and trust in the organizations that develop it, use it, or in which it is embedded [<xref ref-type="bibr" rid="B32">32</xref>]. However, in the case of trust in A-HMT-S, it is neither analytically tractable nor appropriate to separate the technology from its organizations and institutions. The literature on the sociology of technology has demonstrated the futility of treating a technology&#x2019;s capabilities without reference to its users and its context of use. According to the constructionist perspective on technology, there is no such thing as technology per se [<xref ref-type="bibr" rid="B33">33</xref>, <xref ref-type="bibr" rid="B34">34</xref>]. The emergence of A-HMT-S reasserts that point with renewed exigency: in A-HMT-S, the technology implements, enacts, and embodies organizations&#x2019; purposes and goals. Technology is the organization in a literal sense, and <italic>vice versa</italic>.</p>
<p>Shestakofsky conducted participant-observation research at a software firm and found that two types of labor were performed to create dynamic collaboration between humans and autonomous algorithms [<xref ref-type="bibr" rid="B35">35</xref>]. Computational labor addresses machine lag, the problems posed by the limitations of technologies: human teams engage in repetitive information-processing tasks in order to fix gaps in the software infrastructure. Emotional labor, in turn, addresses human lag (clients&#x2019; reluctance to use algorithms) by mediating the relationship between software systems and those clients. These findings suggest that trust among A-HMT-S actors is constructed in the course of collectively defining tasks and negotiating boundaries [<xref ref-type="bibr" rid="B35">35</xref>]. Jarrahi argues that human-AI symbiosis in organizational decision-making is possible when AI supplements human cognition and humans bring a holistic and intuitive approach to dealing with uncertainty [<xref ref-type="bibr" rid="B36">36</xref>].</p>
<p>A theoretical framework that addresses the issue of trust in A-HMT-S may be developed by treating the amalgam of non-humans and humans as-they-are. Studies have shown that human-to-machine trust is affected by various factors: the extent to which the machine exhibits a human-like appearance, cognitive biases in general, automation-specific complacency and bias [<xref ref-type="bibr" rid="B37">37</xref>], algorithmic error rates, epistemic opacity, and the type of task [<xref ref-type="bibr" rid="B38">38</xref>]. Trustworthiness can be ascribed to intelligent machines and form a basis of productive collaboration in A-HMT-S, but the presence of biases and complacency means that humans can over-trust or under-trust intelligent algorithms and their decisions.</p>
<p>The problem with A-HMT-S is that it often involves &#x201c;black box algorithms,&#x201d; epistemically opaque to human observers because they keep self-improving by testing and learning [<xref ref-type="bibr" rid="B9">9</xref>]. Opacity raises concerns among developers, users, and the general public. Lee and See, observing that trust is essential in the adoption of automation systems, recommend such measures as the disclosure of intermediate results of the algorithms to the operators and the simplification of algorithms [<xref ref-type="bibr" rid="B20">20</xref>]. Similarly, Burrell supports greater regulation, algorithmic transparency, and education of the public [<xref ref-type="bibr" rid="B9">9</xref>, <xref ref-type="bibr" rid="B39">39</xref>]. The Defense Advanced Research Projects Agency (DARPA) attempted to address the opacity issue by developing &#x201c;explainable artificial intelligence&#x201d; [<xref ref-type="bibr" rid="B40">40</xref>]. Whether systems that &#x201c;look&#x201d; human, or the visible insertion of actual humans into the decision loop, have any effect on trust and affinity has also been investigated [<xref ref-type="bibr" rid="B41">41</xref>, <xref ref-type="bibr" rid="B42">42</xref>]. It is important, though, to recall that the issue of trust in &#x201c;black-box algorithms&#x201d; is only among the latest developments in the history of trusting increasingly distant others and longer chains of factors.</p>
<p>Dur&#xe1;n and Jongsma argue, using medical AI as a case study, that trust in black-box algorithms can be established by the principle of computational reliabilism (CR) [<xref ref-type="bibr" rid="B9">9</xref>]. Striving for algorithmic transparency, they claim, is a losing strategy because it defeats the purpose of deploying algorithms in the first place. &#x201c;Transparency will not provide solutions to opacity, and therefore having more transparent algorithms is not a guarantee for better explanations, predictions and overall justification of our trust in the result of an algorithm&#x201d; [9, p.331]. They suggest employing a version of the heuristic devices we use to assess the trustworthiness of our social interlocutors. In any given setting, CR assesses the trustworthiness of AI not by using interpretive parameters to check the system&#x2019;s inner state at points 1 through n, but by making multiple empirical inferences that turn out to be &#x201c;good enough&#x201d;: a comparison with known solutions (verification), a comparison with experimental data (validation), robustness analysis, a history of successful or unsuccessful implementation, and expert knowledge. An analogy with human interaction is to judge people by their behavior and set aside speculation about the mental processes that led to that behavior. Epistemological opacity does not have to be removed as long as CR can be established [<xref ref-type="bibr" rid="B9">9</xref>]. This enables users to take advantage of sophisticated black-box analysis while solving the dilemma of being dependent on it without comprehending its workings.</p>
<p>This is particularly important for medical AI, but it is applicable to other domains and to the question of building trust in non-AI abstract systems. It is similar to the satisficing that we saw in the college professor story earlier. Limited as we all are by bounded rationality [<xref ref-type="bibr" rid="B3">3</xref>, <xref ref-type="bibr" rid="B43">43</xref>, <xref ref-type="bibr" rid="B44">44</xref>], humans and organizations have to abandon the ideal of perfect explainability and treat the state of trust as provisional and dynamic. Yet for this very reason provisional trust is a fragile construct that can collapse if challenged by outsiders. And that is likely to happen at the border between A-HMT-S and the other communities across the broader society with which it interacts. At that interface, CR may not help. To address the fact that heterogeneous actors scattered across heterogeneous fields will also be asking themselves questions about the trustworthiness of A-HMT-S, and about the impact of A-HMT-S on their own interests, the next section turns to the ecological perspective that originated in urban sociology.</p>
</sec>
<sec id="s4">
<title>4 A-HMT-S as an ecosystem</title>
<p>Establishing trust in A-HMT-S increasingly entails ethical as well as legal challenges, including transparency, algorithmic fairness, safety, security, and privacy. Challenges in jurisprudence emerge when non-human actors assume human-like characteristics. Scientific as well as practitioner knowledge systems engage in articulating goals and means in trust promotion and production [<xref ref-type="bibr" rid="B45">45</xref>]. Opening up black-box algorithms is often presented as a key to this undertaking. But as we have seen, perfect algorithmic transparency is not always feasible or effective. To identify and better understand trust goals relevant to A-HMT-S, an urban ecology perspective is useful. Urban ecology, a sociological perspective developed by scholars at the University of Chicago in the 1920s, allows us to grasp the dynamic and emergent nature of the trustor and the trustee in interaction because it incorporates heterogeneous actors and can incorporate A-HMT-S as a focus of trust processes. Borrowing its key metaphor and related concepts, such as territorial competition and inter-group cooperation, from biology, it sought to account for the ways different populations distributed themselves across the space of the city and used its resources. In that tradition, authors sometimes use the word &#x201c;ecology&#x201d; to describe what we conventionally understand by the term &#x201c;ecosystem&#x201d; [<xref ref-type="bibr" rid="B8">8</xref>, <xref ref-type="bibr" rid="B46">46</xref>]. To avoid confusion, this paper will use that more conventional term. An ecosystem is an autonomous domain of actors, their tasks, and the relationship between actors and tasks [<xref ref-type="bibr" rid="B46">46</xref>]. It also includes the resources they obtain from the environment, and the other ways they interact with their surroundings. Territorial shifts of populations are seen in terms of invasion and ecological succession, the replacement of one group by another.
For example, the residential patterns of immigrants to major cities in the United States at the turn of the 20th century were determined by their place of work, often in the central business district, as well as by their material means and their social distance from native populations. Neighborhoods that had seen the arrival of immigrants experienced an exodus of middle-class families; the new groups further affected the types of businesses and services in these transitioning neighborhoods. The distribution of populations and the differentiation of space are thus shaped by the ongoing process of interaction among diverse groups.</p>
<p>At this level of analysis, we can think of whole ecosystems as units of interaction. A-HMT-S researchers, developers, and popularizers constitute one such ecosystem. For people outside it to trust &#x201c;what the machines are doing,&#x201d; they have to trust or at least tolerate the ecosystem as a whole, including the motivations and behavior of the humans, the type and amount of environmental resources it uses, and the way it uses them. Outsiders have to satisfy themselves that none of this poses a threat to their individual and collective livelihood or to how they understand the world and act in it. And they have to figure out how to minimize friction at the interface between their own ecosystem and that of the newcomer. As was mentioned earlier, achieving and keeping a state of trust will bring both cognitive and emotional dimensions into play, and the benchmark will tend to be: How well does this new ecosystem play by the taken-for-granted rules of everyday life [<xref ref-type="bibr" rid="B24">24</xref>]?</p>
<p>In the case of medical A-HMT-S, for instance, the system has to build trust relationships with patients, regulators, healthcare providers, insurance providers, and the general public in order to take root in day-to-day medical practice. Computational reliabilism may be a necessary but not sufficient condition for that, as each party may judge the situation by different criteria. Physicians may be most concerned with diagnostic accuracy, while insurance providers may worry about cost-benefit issues and hospital technicians may care about fitting new practices into existing routines. If we recall that trust is a relation of varying generality, as discussed in <xref ref-type="sec" rid="s1">Section 1</xref>, then highly particularized trust in a trust object does not entail trust in the category or ecosystem of which that trust object is an instantiation. A particularized trust object is in fact a construct of multiple ecosystems. Society-wide trust in A-HMT-S is thus a constant balancing act. And as we will see in a later section of this paper, it can be lost when a failure occurs and the system as a whole does not engage in trust-repairing behavior addressed collectively to people living and working in other ecosystems.</p>
<p>MacKenzie, drawing on Abbott, used the ecosystemic perspective in a study of the rise of high-frequency trading (HFT) and its relation to existing trading and regulatory systems [<xref ref-type="bibr" rid="B8">8</xref>]. His research reveals the ripple effects of technological decisions as they impinge on the interests of other domains. HFT is a type of A-HMT-S made possible by machines that can analyze opportunities and execute orders at a speed that surpasses that of human-only teams. Because of this advantage, HFT firms quickly became major players in their respective markets. In the process, they generated enormous profits by engaging in legal but arguably unscrupulous trading activities, made possible only by the high speed of their systems. Then, in a move apparently unrelated to what the HFTs were up to, the New York Stock Exchange decided to install a new communication antenna on the roof of its data center. Available to any member who paid the requisite hefty fee, the antenna would provide a half-microsecond improvement in transaction time by eliminating 260&#xa0;m of fiber optic cable from the transmission line. This was exactly the sort of time difference the HFTs had been exploiting through their proprietary technology, and now their advantage was threatened.</p>
<p>As a prelude to explaining what ensued, MacKenzie revisits an insurrection that took place in the English community of St. Albans in the late 14th century [<xref ref-type="bibr" rid="B8">8</xref>]. As part of a general wave of uprisings against feudalism, townspeople invaded the local Benedictine monastery and, after freeing people held in the monastery&#x2019;s prison, entered the abbot&#x2019;s parlor, methodically smashed its stone-paved floor, and carried pieces of it away with them. This seemingly random act was in fact retaliation: 50 years earlier, a previous abbot had confiscated the townspeople&#x2019;s millstones and used them to pave the parlor floor, the motive being to achieve a monastic monopoly over grain-milling and extract the consequent fees. The townsfolk never forgot it, and the episode exemplifies a key point MacKenzie wants to emphasize: even seemingly minor changes in available technology are not neutral but are usually bound up in power relations with long-lasting effects.</p>
<p>Back in 21st-century New York, the new antenna plan had similar consequences, drawing in multiple institutional spheres&#x2014;which MacKenzie refers to as &#x201c;ecologies.&#x201d; Eventually, the Securities and Exchange Commission, a local zoning board, residents of the town where the data center is located, the Stock Exchange itself, and others found themselves in conflict over something that had seemed like a simple technology decision: eliminating 260&#xa0;m of fiber. The eventual solution once again exemplifies the ways in which a material consideration can be waylaid by issues of power: as of 2020, it had been decided to reinsert the half-microsecond delay by adding a coil of cable to the transmission line, thereby returning everything to the status quo ante.</p>
<p>MacKenzie&#x2019;s point is generalizable. Just as biological populations compete for habitat and resources, different social actors behaving collectively will interact to create an observed distribution of functions (tasks that need to be executed for the maintenance of order) and habitats within and between ecosystems. Interactions will define actors and the nature of their tasks; what gets done, and who does it, are not rigidly defined by pre-existing functions [<xref ref-type="bibr" rid="B46">46</xref>]. Instead, turf battles for resources and legitimacy dynamically shape the things actors do and don&#x2019;t do, in a manner that social scientists call &#x201c;co-constitutive&#x201d; and that other disciplines might term &#x201c;emergent.&#x201d; Squabbling over a length of fiber optic cable, or expropriating a paving stone, is inexplicable outside of the specific social, political, and economic context that makes it highly meaningful.</p>
<p>The rapid growth of A-HMT-S capabilities, and governmental attempts to control that process, form another part of this story of ecosystems squaring off against one another. Whether unfettered development is encouraged or restrained is a function of interactions among the affected ecosystems. Lethal autonomous weapons systems (LAWS) provide a good example [<xref ref-type="bibr" rid="B2">2</xref>]. They will proliferate in a society whose other ecosystems invest in and legitimize their development, but will be suppressed in any society where the state reins in the military deployment of A-HMT-S.</p>
<p>The above examples show that when A-HMT-S is deployed it can trigger social effects across multiple domains. In the labor market, it can result in job creation, job elimination, or both. In the political domain, it can produce a crisis among regulators and legislators. Pfeffer addresses such broader implications in a study of the impact of AI on the economy and workers&#x2019; well-being [<xref ref-type="bibr" rid="B47">47</xref>]. He points out that the introduction of A-HMT-S can have detrimental effects on workers by eliminating jobs and by forcing some workers, many of whom already experience stagnant wages and job precarity, to switch occupational categories. Low fertility, government budget deficits, and runaway debt in many highly industrialized societies mean that public policy interventions to attenuate the negative labor market impacts of A-HMT-S are unlikely. A-HMT-S can be used to promote human well-being, but Pfeffer observes that they are as likely to be used in ways that exacerbate economic inequities [<xref ref-type="bibr" rid="B47">47</xref>]. If workers come to regard A-HMT-S as a tool to make themselves redundant, computational reliabilism will probably not help them trust it.</p>
<p>The expanding use of A-HMT-S will also force revisions of school curricula, similar to the way basic computer skills became a key subject in the final decades of the 20th century [<xref ref-type="bibr" rid="B48">48</xref>]. One can envisage a future in which students are required to learn how to work with A-HMT-S to optimize learning. The ecosystemic perspective helps us understand the complex nature of systems interacting with their environments; it enables us to see that what seems external to the systems themselves is in fact constitutive of their functions. Adjacent ecosystems regulate, offer incentives and resources, call for accountability, and do many other things that can influence the success of A-HMT-S.</p>
<p>In terms of its effects on human activity, A-HMT-S is more than the automation or translation of tasks formerly performed by humans. It leads to the emergence of new tasks to address the challenges that it and other ecosystems present to each other as they each seek to thrive in the world they must share. In the course of building explainable systems, A-HMT-S must also explain itself to any audience whose activities could be upended by it. At first glance, it may seem strange that Pfeffer&#x2019;s paper on the effects of AI presents data on fertility, national deficits, and debt, but the ecosystemic perspective motivates exactly such a focus on the nexus of multiple spheres [<xref ref-type="bibr" rid="B47">47</xref>].</p>
</sec>
<sec id="s5">
<title>5 How technological systems can breach trust</title>
<p>Prior to the development of A-HMT-S, there were many systems made up of human operators and non-autonomous and non-intelligent machines and tools: vehicles, missile systems, nuclear power plants, and so on. I referred earlier to these non-intelligent tools as &#x201c;mundane systems&#x201d; in contrast to A-HMT-S. Mundane systems have a track record of breaching the trust of their users and the general public. The way they fail illuminates the kind of trust issues that A-HMT-S may face going forward.</p>
<sec id="s5-1">
<title>5.1 Mundane system trust erosion: Three brief examples</title>
<p>Drunk driving: Car accidents caused by drunk drivers, and the public discourse surrounding them, remind us that the accepted narrative of interdependence between driver, car, and environment is only one of several potential ways to constellate the relevant elements. Typically, when an accident happens the drunk driver is designated as the &#x201c;cause&#x201d; and becomes the target of moral opprobrium. Alternative accounts are possible but rarely accepted in what Gusfield calls the public drama of social problems [<xref ref-type="bibr" rid="B49">49</xref>]. The lack of public transportation to venues that serve alcohol, or the mingling of cars and pedestrians on the same thoroughfares, could be conducive to accidents caused by drunk driving, and yet poor urban planning is rarely singled out as a cause. Car manufacturers are not held accountable for building vehicles that can kill regardless of the mental state of the operator. The underlying assumption regarding the interdependence of the driver, the car, and the streets is that the driver should be a morally upstanding individual who exercises prudence and is capable of controlling their own behavior. Accidents caused by sober but incompetent drivers indicate that the association between behavior and morality reflects the choice of a particular perspective.</p>
<p>Titan II missile explosion: In 1980, a Titan II intercontinental ballistic missile at a missile complex in Damascus, Arkansas, was damaged when a worker accidentally dropped a wrench socket down its silo during a routine check of the oxidizer tank pressure, causing a fuel leak [<xref ref-type="bibr" rid="B50">50</xref>]. The fuel exploded the following day, resulting in one death and multiple injuries. The interdependence of humans and non-intelligent machines can go awry without any moral failure by the humans. The coexistence of the worker, the socket, and the vulnerable tank surface led to the explosion.</p>
<p>Fukushima Daiichi Nuclear Power Plant failure: When the Great East Japan Earthquake struck in 2011, the resulting tsunami hit the Fukushima Daiichi Nuclear Power Plant and its reactor cooling system failed. This led to reactor meltdowns, explosions, and the atmospheric release of radioactive material [<xref ref-type="bibr" rid="B51">51</xref>]. A nuclear plant is an example of a mundane system: even though the plant uses multiple machines and robots, they are not autonomous or intelligent. In the case of the Fukushima Daiichi Nuclear Power Plant, it turned out that TEPCO, the plant operator, and other related organizations had underestimated the risk of losing reactor cooling after a tsunami. Some seismologists familiar with the region&#x2019;s earthquake and tsunami history had warned that a cooling system failure due to a major tsunami was possible, but those warnings were not heeded [<xref ref-type="bibr" rid="B52">52</xref>]. The interdependence between humans and the plant was disrupted not by a gap intrinsic to their relationships&#x2014;both humans and the plant were executing the tasks assigned to them&#x2014;but by TEPCO management&#x2019;s decision years earlier to ignore evidence of a serious environmental risk.</p>
<p>As these cases illustrate, the interdependence of elements in mundane systems can be eroded by various factors, and the misplacement of trust may become evident only ex post. Drunk drivers should not be trusted to drive safely, and yet there is currently no scalable way to prevent them from getting behind the wheel. The missile fuel tank was not designed to withstand the damage caused by a falling wrench socket, and it was never anticipated that a worker might drop one inside the silo. The Fukushima Daiichi Power Plant was supposed to have been built on safe ground, and the risk of earthquake and tsunami was believed to be manageable because the scientists who had warned of potential damage to the cooling system were treated as an untrusted minority.</p>
<p>As systems composed of human and non-human actors, operating among other groups and systems with their own idiosyncrasies, A-HMT-S could fail in the same ways mundane systems do: lack of fail-safe mechanisms, human error, poor coordination between actors. However, they can also fail in ways unique to them because they combine two types of intelligence: human and machine. Some further examples will illustrate this.</p>
</sec>
<sec id="s5-2">
<title>5.2 Two cases of failure in systems that are &#x201c;A-HMT-S-adjacent&#x201d;</title>
<p>Boeing 737 Max: Two crashes of this Boeing model were caused by some pilots&#x2019; inability to interact correctly with software that had been implemented to compensate for certain stall conditions [<xref ref-type="bibr" rid="B11">11</xref>, <xref ref-type="bibr" rid="B12">12</xref>]. Optimistically named the Maneuvering Characteristics Augmentation System (MCAS), the software conflicted with pilots&#x2019; judgment and the behavioral habits they had acquired over years of flying previous 737s. A 737 Max without MCAS tends to nose upward in flight because its large engines are mounted high on the wing. A nose-up condition can trigger a stall, which is dangerous for any aircraft. MCAS identifies certain conditions under which it automatically forces the nose downward. In the two accidents, pilots who did not know why the plane was suddenly dipping its nose reacted incorrectly and set in motion a sequence of events that led to tragedy.</p>
<p>But why place the engines so high? More efficient engines have a larger diameter than less efficient ones, and to prevent them from scraping against the ground they had to be positioned higher on the wing than the engines on earlier 737s. The higher placement compensates for the plane&#x2019;s short landing gear struts, a design decision made in the 1960s to make the 737 cargo bay accessible at smaller airports that lacked a full complement of loading equipment, and one that was never revisited over the following decades. A long chain of design and performance decisions, and several hundred deaths, arguably resulted from that single criterion. It also meant that redesigning the wing and engine was not possible without many other changes that would have turned the 737 into a completely new plane, requiring a lengthy and costly certification process involving multiple regulatory agencies. Once Boeing decided to &#x201c;re-engine&#x201d; the 737, a software fix was the only option to compensate for the awkward aerodynamics of the high-mounted engines. Boeing vigorously lobbied regulators to allow the design changes without fully sharing details with airlines or pilots [<xref ref-type="bibr" rid="B11">11</xref>]. Pilots were not informed about the existence, much less the operation, of MCAS; in the case of the two fallen planes, they had not received simulator training to work with the software.</p>
<p>The Boeing 737 Max can be regarded as a precursor to full-fledged A-HMT-S: humans are on the loop rather than in the loop [<xref ref-type="bibr" rid="B2">2</xref>]. When they are not given the authority to intervene after software makes a faulty move, or when they are unsure how to react to a machine decision, the entire system can fail catastrophically.</p>
<p>ShotSpotter: ShotSpotter uses specially designed microphones, AI, and human analysts to detect and geolocate gunshots. It claims to offer precision policing solutions that detect crimes and protect lives. In May 2020, based on evidence from this gunfire detection system, a Chicago man named Michael Williams was accused of shooting a neighbor. Forensic reports prepared by ShotSpotter employees were presented as establishing his culpability. After he had been in jail for nearly a year, a judge decided the evidence against him was too weak and the case was dismissed. Williams claims he was giving a ride to the victim when that person was shot by someone else [<xref ref-type="bibr" rid="B53">53</xref>].</p>
<p>As is the case with human interactions, human-machine systems must earn the trust of those with whom they interact. The ShotSpotter case and the 737 Max disasters suggest that such systems on the road to A-HMT-S may deserve nothing more than a skeptical and provisional assessment of trustworthiness. Trust in mundane systems and trust in A-HMT-S are both examples of trust in abstract systems, which is always potentially fraught with suspicion and competing claims [<xref ref-type="bibr" rid="B29">29</xref>]. What is distinct about trust in A-HMT-S granted by outside actors such as the media and the political system is that it involves trust in decisions made by autonomous and intelligent machines [<xref ref-type="bibr" rid="B1">1</xref>, <xref ref-type="bibr" rid="B2">2</xref>, <xref ref-type="bibr" rid="B4">4</xref>, <xref ref-type="bibr" rid="B7">7</xref>, <xref ref-type="bibr" rid="B39">39</xref>]. When high-stakes decisions such as making a criminal accusation or flying an airplane are made by A-HMT-S and then turn out to be wrong, trust will naturally erode.</p>
<p>But A-HMT-S are not solely responsible for their ability to achieve societal trust. Other ecosystems can enhance or suppress the likelihood of it. For example, Muehlematter and Vokinger suggest that one way to improve public trust in artificial intelligence- and machine learning-based medical devices is to increase transparency regarding their regulation and approval. Currently, there is an unexplained gap in the timing of approval for devices commonly approved in both the United States and Europe [<xref ref-type="bibr" rid="B54">54</xref>].</p>
<p>A breach in trust could also set off what Alexander called the &#x201c;societalization&#x201d; of A-HMT-S [<xref ref-type="bibr" rid="B55">55</xref>]. Societalization happens when long-enduring problems cease to be internal to a given ecosystem (in the usage we employed earlier) and are redefined as a general crisis in the public sphere. Media play the role of agenda-setter with increased and detailed coverage [<xref ref-type="bibr" rid="B55">55</xref>]. Investigative reporting of dramatic cases cracks them open for public discourse and denunciation. The societalization process may trigger regulatory intervention, but that will depend on whether politicians perceive that what is at stake is aligned with their own interests: another example of different ecosystems interacting at the boundary of their respective domains [<xref ref-type="bibr" rid="B46">46</xref>].</p>
<p>The 737 Max disasters and the erroneous prosecution based on ShotSpotter data foreshadow what the societalization of A-HMT-S might look like. General public trust in A-HMT-S will have to be actively produced and continuously maintained if A-HMT-S is to achieve the hoped-for synergy of humans and autonomous machines. The current backlash against documented instances of biased algorithms shows the consequences of failing to secure such trust [<xref ref-type="bibr" rid="B39">39</xref>, <xref ref-type="bibr" rid="B56">56</xref>&#x2013;<xref ref-type="bibr" rid="B58">58</xref>]. In 2020, a computer algorithm was used to determine grades for the General Certificate of Secondary Education and A-level qualifications in the United Kingdom when exams were cancelled due to the COVID-19 pandemic. The algorithm was found to disproportionately and systematically suppress the grades of students from disadvantaged backgrounds because it used the historical grade distribution at the school level to weight the grades of individual students [<xref ref-type="bibr" rid="B59">59</xref>]. Faced with a nationwide controversy, the government eventually replaced the algorithmically generated grades with alternative grades that incorporated teachers&#x2019; assessments. The emergent A-HMT-S deservedly failed to earn the trust of the public.</p>
<p>This section has focused on challenges involved in building trust in A-HMT-S, using cases that revealed design or deployment gaps. Of course, there are also cases in which human and non-human actors successfully achieve fully collaborative participation. In some such cases, non-human actors acquire their own agency, equivalent to that of human actors, and cease to be mere assistants to the human actors [<xref ref-type="bibr" rid="B2">2</xref>, <xref ref-type="bibr" rid="B4">4</xref>].</p>
</sec>
</sec>
<sec id="s6">
<title>6 Conclusion</title>
<p>This paper reviewed the social scientific literature that illuminates our understanding of issues regarding trust in A-HMT-S. In research on AI and trust, establishing trust is often presented as a matter of algorithmic transparency above all [<xref ref-type="bibr" rid="B39">39</xref>]. Since A-HMT-S can inadvertently incorporate existing forms of inequality and discrimination, improving algorithmic transparency is certainly a key challenge. At the same time, the present review offers a broader context. The taken-for-granted nature of interpersonal trust among humans suggests some of the ground that human-machine systems will have to cover in order to display trustworthiness, and to achieve and maintain relationships of trust [<xref ref-type="bibr" rid="B8">8</xref>, <xref ref-type="bibr" rid="B23">23</xref>, <xref ref-type="bibr" rid="B24">24</xref>]. Anthropomorphizing interfaces and developing explainable AI are attempts to achieve trust within the ecosystem of A-HMT-S. But those things alone will probably not be enough to curtail skepticism on the part of people outside that ecosystem. Skepticism is not a Luddite reaction. Rather, it is a predictable caution about the effects that A-HMT-S can have on the well-being of those whose lives and livelihoods may be touched by them [<xref ref-type="bibr" rid="B47">47</xref>, <xref ref-type="bibr" rid="B59">59</xref>]. A-HMT-S researchers and developers&#x2019; engagement with the labor market, academia, mass media, and other domains will contribute importantly to the goal of securing trust in technologies that are not fully explicable and yet lead to highly consequential outcomes.</p>
</sec>
</body>
<back>
<sec id="s7">
<title>Author contributions</title>
<p>MA is solely responsible for the entire contents of the article.</p>
</sec>
<ack>
<p>I wish to thank Gerald Lombardi and William Lawless, and two independent reviewers for their insights and helpful comments.</p>
</ack>
<sec sec-type="COI-statement" id="s8">
<title>Conflict of interest</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
<p>The author declares a past co-authorship with the handling editor WL.</p>
<p>The handling editor declared a past co-authorship with one of the authors MA.</p>
</sec>
<sec sec-type="disclaimer" id="s9">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<label>1.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lawless</surname>
<given-names>WF</given-names>
</name>
</person-group>. <article-title>Toward a physics of interdependence for autonomous human-machine systems: The case of the Uber fatal accident, 2018</article-title>. <source>Front Phys</source> (<year>2022</year>) <volume>10</volume>:<fpage>879171</fpage>. <pub-id pub-id-type="doi">10.3389/fphy.2022.879171</pub-id> </citation>
</ref>
<ref id="B2">
<label>2.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Lawless</surname>
<given-names>WF</given-names>
</name>
<name>
<surname>Mittu</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Sofge</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Shortell</surname>
<given-names>T</given-names>
</name>
<name>
<surname>McDermott</surname>
<given-names>TA</given-names>
</name>
</person-group>. <article-title>Introduction to &#x201c;systems engineering and artificial intelligence&#x201d; and the chapters</article-title>. In: <person-group person-group-type="editor">
<name>
<surname>Lawless</surname>
<given-names>WF</given-names>
</name>
<name>
<surname>Mittu</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Sofge</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Shortell</surname>
<given-names>T</given-names>
</name>
<name>
<surname>McDermott</surname>
<given-names>TA</given-names>
</name>
</person-group>, editors. <source>Systems engineering and artificial intelligence [internet]</source>. <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name> (<year>2021</year>). </citation>
</ref>
<ref id="B3">
<label>3.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Frantz</surname>
<given-names>R</given-names>
</name>
</person-group>. <article-title>Herbert Simon: Artificial intelligence as a framework for understanding intuition</article-title>. <source>J Econ Psychol</source> (<year>2003</year>) <volume>24</volume>(<issue>2</issue>):<fpage>265</fpage>&#x2013;<lpage>77</lpage>. <pub-id pub-id-type="doi">10.1016/s0167-4870(02)00207-6</pub-id> </citation>
</ref>
<ref id="B4">
<label>4.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Cummings</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Schurr</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Naber</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Charlie</surname>
<given-names>SD</given-names>
</name>
</person-group>. <article-title>Recognizing artificial intelligence: The key to unlocking human AI teams</article-title>. In: <person-group person-group-type="editor">
<name>
<surname>Lawless</surname>
<given-names>WF</given-names>
</name>
<name>
<surname>Mittu</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Sofge</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Shortell</surname>
<given-names>T</given-names>
</name>
<name>
<surname>McDermott</surname>
<given-names>TA</given-names>
</name>
</person-group>, editors. <source>Systems engineering and artificial intelligence [internet]</source>. <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name> (<year>2021</year>). </citation>
</ref>
<ref id="B5">
<label>5.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Gleick</surname>
<given-names>J</given-names>
</name>
</person-group>. <source>The information: A history, a theory, a flood</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Vintage Books</publisher-name> (<year>2012</year>). p. <fpage>526</fpage>. </citation>
</ref>
<ref id="B6">
<label>6.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Fischer</surname>
<given-names>JE</given-names>
</name>
<name>
<surname>Greenhalgh</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Ramchurn</surname>
<given-names>SD</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Jennings</surname>
<given-names>NR</given-names>
</name>
</person-group>. <article-title>Social implications of agent-based planning support for human teams</article-title>. <conf-name>International Conference on Collaboration Technologies and Systems</conf-name> (<year>2014</year>). p. <fpage>310</fpage>. </citation>
</ref>
<ref id="B7">
<label>7.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>MK</given-names>
</name>
</person-group>. <article-title>Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management</article-title>. <source>Big Data Soc</source> (<year>2018</year>) <volume>5</volume>(<issue>1</issue>):<fpage>205395171875668</fpage>. <pub-id pub-id-type="doi">10.1177/2053951718756684</pub-id> </citation>
</ref>
<ref id="B8">
<label>8.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>MacKenzie</surname>
<given-names>D</given-names>
</name>
</person-group>. <source>Trading at the speed of light: How ultrafast algorithms are transforming financial markets</source>. <publisher-loc>Princeton, NJ</publisher-loc>: <publisher-name>Princeton University Press</publisher-name> (<year>2021</year>). p. <fpage>290</fpage>. </citation>
</ref>
<ref id="B9">
<label>9.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dur&#xe1;n</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Jongsma</surname>
<given-names>KR</given-names>
</name>
</person-group>. <article-title>Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI</article-title>. <source>J Med Ethics</source> (<year>2021</year>) <volume>47</volume>(<issue>5</issue>):<fpage>329</fpage>. <pub-id pub-id-type="doi">10.1136/medethics-2020-106820</pub-id> </citation>
</ref>
<ref id="B10">
<label>10.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Panagiotopoulos</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Dimitrakopoulos</surname>
<given-names>G</given-names>
</name>
</person-group>. <article-title>An empirical investigation on consumers&#x2019; intentions towards autonomous driving</article-title>. <source>Transportation Res C: Emerging Tech</source> (<year>2018</year>) <volume>95</volume>:<fpage>773</fpage>&#x2013;<lpage>84</lpage>. <pub-id pub-id-type="doi">10.1016/j.trc.2018.08.013</pub-id> </citation>
</ref>
<ref id="B11">
<label>11.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Robison</surname>
<given-names>P</given-names>
</name>
</person-group>. <source>Flying blind: The 737 MAX tragedy and the fall of Boeing</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Doubleday</publisher-name> (<year>2021</year>). p. <fpage>336</fpage>. </citation>
</ref>
<ref id="B12">
<label>12.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mongan</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Kohli</surname>
<given-names>M</given-names>
</name>
</person-group>. <article-title>Artificial intelligence and human life: Five lessons for radiology from the 737 MAX disasters</article-title>. <source>Radiol Artif Intelligence</source> (<year>2020</year>) <volume>2</volume>(<issue>2</issue>):<fpage>e190111</fpage>. <pub-id pub-id-type="doi">10.1148/ryai.2020190111</pub-id> </citation>
</ref>
<ref id="B13">
<label>13.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ameen</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Tarhini</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Reppel</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Anand</surname>
<given-names>A</given-names>
</name>
</person-group>. <article-title>Customer experiences in the age of artificial intelligence</article-title>. <source>Comput Hum Behav</source> (<year>2021</year>) <volume>114</volume>:<fpage>106548</fpage>. <pub-id pub-id-type="doi">10.1016/j.chb.2020.106548</pub-id> </citation>
</ref>
<ref id="B14">
<label>14.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>Z</given-names>
</name>
</person-group>. <article-title>Sociological perspectives on artificial intelligence: A typological reading</article-title>. <source>Sociol Compass</source> (<year>2021</year>) <volume>15</volume>(<issue>3</issue>):<fpage>e12851</fpage>. <pub-id pub-id-type="doi">10.1111/soc4.12851</pub-id> </citation>
</ref>
<ref id="B15">
<label>15.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rahwan</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Cebrian</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Obradovich</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Bongard</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Bonnefon</surname>
<given-names>JF</given-names>
</name>
<name>
<surname>Breazeal</surname>
<given-names>C</given-names>
</name>
</person-group>. <article-title>Machine behaviour</article-title>. <source>Nature</source> (<year>2019</year>) <volume>568</volume>(<issue>7753</issue>):<fpage>477</fpage>&#x2013;<lpage>86</lpage>. <pub-id pub-id-type="doi">10.1038/s41586-019-1138-y</pub-id> </citation>
</ref>
<ref id="B16">
<label>16.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jones</surname>
<given-names>GR</given-names>
</name>
<name>
<surname>George</surname>
<given-names>JM</given-names>
</name>
</person-group>. <article-title>The experience and evolution of trust: Implications for cooperation and teamwork</article-title>. <source>Acad Manage Rev</source> (<year>1998</year>) <volume>23</volume>(<issue>3</issue>):<fpage>531</fpage>&#x2013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.5465/amr.1998.926625</pub-id> </citation>
</ref>
<ref id="B17">
<label>17.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Beniger</surname>
<given-names>JR</given-names>
</name>
</person-group>. <source>The control revolution: Technological and economic origins of the information society</source>. <publisher-loc>Cambridge, Mass</publisher-loc>: <publisher-name>Harvard University Press</publisher-name> (<year>1986</year>). p. <fpage>508</fpage>. </citation>
</ref>
<ref id="B18">
<label>18.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schilke</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Reimann</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Cook</surname>
<given-names>KS</given-names>
</name>
</person-group>. <article-title>Trust in social relations</article-title>. <source>Annu Rev Sociol</source> (<year>2021</year>) <volume>47</volume>(<issue>1</issue>):<fpage>239</fpage>&#x2013;<lpage>59</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-soc-082120-082850</pub-id> </citation>
</ref>
<ref id="B19">
<label>19.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mayer</surname>
<given-names>RC</given-names>
</name>
<name>
<surname>Davis</surname>
<given-names>JH</given-names>
</name>
<name>
<surname>Schoorman</surname>
<given-names>FD</given-names>
</name>
</person-group>. <article-title>An integrative model of organizational trust</article-title>. <source>Acad Manage Rev</source> (<year>1995</year>) <volume>20</volume>(<issue>3</issue>):<fpage>709</fpage>&#x2013;<lpage>34</lpage>. <pub-id pub-id-type="doi">10.5465/amr.1995.9508080335</pub-id> </citation>
</ref>
<ref id="B20">
<label>20.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>JD</given-names>
</name>
<name>
<surname>See</surname>
<given-names>KA</given-names>
</name>
</person-group>. <article-title>Trust in automation: Designing for appropriate reliance</article-title>. <source>Hum Factors</source> (<year>2004</year>) <volume>46</volume>(<issue>1</issue>):<fpage>50</fpage>&#x2013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.1518/hfes.46.1.50.30392</pub-id> </citation>
</ref>
<ref id="B21">
<label>21.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Robinette</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Howard</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Wagner</surname>
<given-names>AR</given-names>
</name>
</person-group>. <article-title>Effect of robot performance on human&#x2013;robot trust in time-critical situations</article-title>. <source>IEEE Trans Hum Mach Syst</source> (<year>2017</year>) <volume>47</volume>(<issue>4</issue>):<fpage>425</fpage>&#x2013;<lpage>36</lpage>. <pub-id pub-id-type="doi">10.1109/thms.2017.2648849</pub-id> </citation>
</ref>
<ref id="B22">
<label>22.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Simmel</surname>
<given-names>G</given-names>
</name>
</person-group>. <source>The philosophy of money</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Routledge</publisher-name> (<year>2004</year>). p. <fpage>616</fpage>. </citation>
</ref>
<ref id="B23">
<label>23.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Goffman</surname>
<given-names>E</given-names>
</name>
</person-group>. <source>The presentation of self in everyday life</source>. <publisher-loc>Garden City, New York</publisher-loc>: <publisher-name>Doubleday &#x26; Company</publisher-name> (<year>1959</year>). p. <fpage>259</fpage>. </citation>
</ref>
<ref id="B24">
<label>24.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Garfinkel</surname>
<given-names>H</given-names>
</name>
</person-group>. <source>Studies in ethnomethodology</source>. <publisher-loc>Cambridge, UK</publisher-loc>: <publisher-name>Polity</publisher-name> (<year>1991</year>). p. <fpage>304</fpage>. </citation>
</ref>
<ref id="B25">
<label>25.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ward</surname>
<given-names>PR</given-names>
</name>
<name>
<surname>Mamerow</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>SB</given-names>
</name>
</person-group>. <article-title>Interpersonal trust across six Asia-Pacific countries: Testing and extending the &#x2018;high trust society&#x2019; and &#x2018;low trust society&#x2019; theory</article-title>. <source>PLoS ONE</source> (<year>2014</year>) <volume>9</volume>(<issue>4</issue>):<fpage>e95555</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0095555</pub-id> </citation>
</ref>
<ref id="B26">
<label>26.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Dickie</surname>
<given-names>J</given-names>
</name>
</person-group>. <source>Cosa nostra: A history of the Sicilian mafia</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Hodder &#x26; Stoughton</publisher-name> (<year>2004</year>). p. <fpage>483</fpage>. </citation>
</ref>
<ref id="B27">
<label>27.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Gambetta</surname>
<given-names>D</given-names>
</name>
</person-group>. <source>The Sicilian mafia: The business of private protection</source>. <publisher-loc>Cambridge, Mass</publisher-loc>: <publisher-name>Harvard University Press</publisher-name> (<year>1993</year>). p. <fpage>335</fpage>. </citation>
</ref>
<ref id="B28">
<label>28.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Axelrod</surname>
<given-names>RM</given-names>
</name>
</person-group>. <source>The evolution of cooperation</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Basic Books</publisher-name> (<year>1984</year>). p. <fpage>241</fpage>. </citation>
</ref>
<ref id="B29">
<label>29.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Giddens</surname>
<given-names>A</given-names>
</name>
</person-group>. <source>Modernity and self-identity: Self and society in the late modern age</source>. <publisher-loc>Stanford</publisher-loc>: <publisher-name>Stanford University Press</publisher-name> (<year>1991</year>). p. <fpage>256</fpage>. </citation>
</ref>
<ref id="B30">
<label>30.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Zuboff</surname>
<given-names>S</given-names>
</name>
</person-group>. <source>The age of the smart machine: the future of work and power</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Basic Books</publisher-name> (<year>1988</year>). p. <fpage>468</fpage>. </citation>
</ref>
<ref id="B31">
<label>31.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Weber</surname>
<given-names>M</given-names>
</name>
</person-group>. <source>Economy and society</source>. <publisher-loc>Cambridge, Mass</publisher-loc>: <publisher-name>Harvard University Press</publisher-name> (<year>2019</year>). p. <fpage>504</fpage>. </citation>
</ref>
<ref id="B32">
<label>32.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hengstler</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Enkel</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Duelli</surname>
<given-names>S</given-names>
</name>
</person-group>. <article-title>Applied artificial intelligence and trust&#x2014;the case of autonomous vehicles and medical assistance devices</article-title>. <source>Technol Forecast Soc Change</source> (<year>2016</year>) <volume>105</volume>:<fpage>105</fpage>&#x2013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1016/j.techfore.2015.12.014</pub-id> </citation>
</ref>
<ref id="B33">
<label>33.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Latour</surname>
<given-names>B</given-names>
</name>
</person-group>. <source>Reassembling the social: An introduction to actor-network-theory</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Oxford University Press</publisher-name> (<year>2005</year>). p. <fpage>301</fpage>. </citation>
</ref>
<ref id="B34">
<label>34.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Grint</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Woolgar</surname>
<given-names>S</given-names>
</name>
</person-group>. <source>The machine at work: Technology, work and organization</source>. <publisher-loc>Cambridge, UK</publisher-loc>: <publisher-name>Blackwell Publishers</publisher-name> (<year>1997</year>). p. <fpage>199</fpage>. </citation>
</ref>
<ref id="B35">
<label>35.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shestakofsky</surname>
<given-names>B</given-names>
</name>
</person-group>. <article-title>Working algorithms: Software automation and the future of work</article-title>. <source>Work Occup</source> (<year>2017</year>) <volume>44</volume>(<issue>4</issue>):<fpage>376</fpage>&#x2013;<lpage>423</lpage>. <pub-id pub-id-type="doi">10.1177/0730888417726119</pub-id> </citation>
</ref>
<ref id="B36">
<label>36.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jarrahi</surname>
<given-names>MH</given-names>
</name>
</person-group>. <article-title>Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making</article-title>. <source>Bus Horiz</source> (<year>2018</year>) <volume>61</volume>(<issue>4</issue>):<fpage>577</fpage>&#x2013;<lpage>86</lpage>. <pub-id pub-id-type="doi">10.1016/j.bushor.2018.03.007</pub-id> </citation>
</ref>
<ref id="B37">
<label>37.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Parasuraman</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Manzey</surname>
<given-names>DH</given-names>
</name>
</person-group>. <article-title>Complacency and bias in human use of automation: An attentional integration</article-title>. <source>Hum Factors</source> (<year>2010</year>) <volume>52</volume>(<issue>3</issue>):<fpage>381</fpage>&#x2013;<lpage>410</lpage>. <pub-id pub-id-type="doi">10.1177/0018720810376055</pub-id> </citation>
</ref>
<ref id="B38">
<label>38.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Salem</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Lakatos</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Amirabdollahian</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Dautenhahn</surname>
<given-names>K</given-names>
</name>
</person-group>. <article-title>Would you trust a (faulty) robot? Effects of error, task type and personality on human-robot cooperation and trust</article-title>. In: <conf-name>2015 10th ACM/IEEE International Conference on Human-Robot Interaction</conf-name> (<year>2015</year>). p. <fpage>1</fpage>&#x2013;<lpage>8</lpage>. </citation>
</ref>
<ref id="B39">
<label>39.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burrell</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>How the machine &#x2018;thinks&#x2019;: Understanding opacity in machine learning algorithms</article-title>. <source>Big Data Soc</source> (<year>2016</year>) <volume>3</volume>(<issue>1</issue>):<fpage>205395171562251</fpage>. <pub-id pub-id-type="doi">10.1177/2053951715622512</pub-id> </citation>
</ref>
<ref id="B40">
<label>40.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gunning</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Aha</surname>
<given-names>D</given-names>
</name>
</person-group>. <article-title>DARPA&#x2019;s explainable artificial intelligence (XAI) program</article-title>. <source>AI Mag</source> (<year>2019</year>) <volume>40</volume>(<issue>2</issue>):<fpage>44</fpage>&#x2013;<lpage>58</lpage>. <pub-id pub-id-type="doi">10.1609/aimag.v40i2.2850</pub-id> </citation>
</ref>
<ref id="B41">
<label>41.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ullman</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Malle</surname>
<given-names>BF</given-names>
</name>
</person-group>. <article-title>Human-Robot trust: Just a button press away</article-title>. In: <conf-name>Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction [Internet]</conf-name>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name> (<year>2017</year>). </citation>
</ref>
<ref id="B42">
<label>42.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zlotowski</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Sumioka</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Nishio</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Glas</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Bartneck</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Ishiguro</surname>
<given-names>H</given-names>
</name>
</person-group>. <article-title>Persistence of the uncanny valley: The influence of repeated interactions and a robot&#x2019;s attitude on its perception</article-title>. <source>Front Psychol</source> (<year>2015</year>) <volume>6</volume>:<fpage>883</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2015.00883</pub-id> </citation>
</ref>
<ref id="B43">
<label>43.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Simon</surname>
<given-names>H</given-names>
</name>
</person-group>. <article-title>Theories of bounded rationality</article-title>. In: <source>Models of bounded rationality: Behavioral economics and business organization</source>. <publisher-loc>Cambridge, Mass</publisher-loc>: <publisher-name>MIT Press</publisher-name> (<year>1982</year>). p. <fpage>408</fpage>&#x2013;<lpage>23</lpage>. </citation>
</ref>
<ref id="B44">
<label>44.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Simon</surname>
<given-names>H</given-names>
</name>
</person-group>. <source>Administrative behavior: A study of decision-making processes in administrative organizations</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Free Press</publisher-name> (<year>1997</year>). p. <fpage>368</fpage>. </citation>
</ref>
<ref id="B45">
<label>45.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rodrigues</surname>
<given-names>R</given-names>
</name>
</person-group>. <article-title>Legal and human rights issues of AI: Gaps, challenges and vulnerabilities</article-title>. <source>J Responsible Tech</source> (<year>2020</year>) <volume>4</volume>:<fpage>100005</fpage>. <pub-id pub-id-type="doi">10.1016/j.jrt.2020.100005</pub-id> </citation>
</ref>
<ref id="B46">
<label>46.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abbott</surname>
<given-names>A</given-names>
</name>
</person-group>. <article-title>Linked ecologies: States and universities as environments for professions</article-title>. <source>Sociol Theor</source> (<year>2005</year>) <volume>23</volume>(<issue>3</issue>):<fpage>245</fpage>&#x2013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.1111/j.0735-2751.2005.00253.x</pub-id> </citation>
</ref>
<ref id="B47">
<label>47.</label>
<citation citation-type="web">
<person-group person-group-type="author">
<name>
<surname>Pfeffer</surname>
<given-names>J</given-names>
</name>
</person-group>. <article-title>The role of the general manager in the new economy: Can we save people from technology dysfunctions?</article-title> (<year>2018</year>). <comment>[Internet] 2018 [cited May 22, 2022] Stanford Graduate School of Business Working Paper No. 3714. Available from: <ext-link ext-link-type="uri" xlink:href="https://www.gsb.stanford.edu/faculty-research/working-papers/role-general-manager-new-economy-can-we-save-people-technology">https://www.gsb.stanford.edu/faculty-research/working-papers/role-general-manager-new-economy-can-we-save-people-technology</ext-link>
</comment>. </citation>
</ref>
<ref id="B48">
<label>48.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rafalow</surname>
<given-names>MH</given-names>
</name>
</person-group>. <article-title>Disciplining play: Digital youth culture as capital at school</article-title>. <source>Am J Sociol</source> (<year>2018</year>) <volume>123</volume>(<issue>5</issue>):<fpage>1416</fpage>&#x2013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1086/695766</pub-id> </citation>
</ref>
<ref id="B49">
<label>49.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Gusfield</surname>
<given-names>JR</given-names>
</name>
</person-group>. <source>The culture of public problems: Drinking-driving and the symbolic order</source>. <publisher-loc>Chicago</publisher-loc>: <publisher-name>University of Chicago Press</publisher-name> (<year>1984</year>). p. <fpage>278</fpage>. </citation>
</ref>
<ref id="B50">
<label>50.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Schlosser</surname>
<given-names>E</given-names>
</name>
</person-group>. <source>Command and control: Nuclear weapons, the Damascus accident, and the illusion of safety</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>The Penguin Press</publisher-name> (<year>2013</year>). p. <fpage>632</fpage>. </citation>
</ref>
<ref id="B51">
<label>51.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Whitton</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Parry</surname>
<given-names>IM</given-names>
</name>
<name>
<surname>Akiyoshi</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Lawless</surname>
<given-names>W</given-names>
</name>
</person-group>. <article-title>Conceptualizing a social sustainability framework for energy infrastructure decisions</article-title>. <source>Energy Res Soc Sci</source> (<year>2015</year>) <volume>8</volume>:<fpage>127</fpage>&#x2013;<lpage>38</lpage>. <pub-id pub-id-type="doi">10.1016/j.erss.2015.05.010</pub-id> </citation>
</ref>
<ref id="B52">
<label>52.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ishibashi</surname>
<given-names>K</given-names>
</name>
</person-group>. <article-title>Genpatsu shinsai: Hametsu wo sakeru tame ni [Nuclear-earthquake disaster: Averting catastrophe]</article-title>. <source>Kagaku</source> (<year>1997</year>) <volume>67</volume>(<issue>10</issue>):<fpage>720</fpage>&#x2013;<lpage>4</lpage>. </citation>
</ref>
<ref id="B53">
<label>53.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Stanley</surname>
<given-names>J</given-names>
</name>
</person-group>. <source>Four problems with the ShotSpotter gunshot detection system</source>. <comment>News &#x26; Commentary [Internet]</comment>. <publisher-loc>New York</publisher-loc>: <publisher-name>American Civil Liberties Union</publisher-name> (<year>2021</year>). </citation>
</ref>
<ref id="B54">
<label>54.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Muehlematter</surname>
<given-names>UJ</given-names>
</name>
<name>
<surname>Daniore</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Vokinger</surname>
<given-names>KN</given-names>
</name>
</person-group>. <article-title>Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015&#x2013;20): A comparative analysis</article-title>. <source>Lancet Digit Health</source> (<year>2021</year>) <volume>3</volume>(<issue>3</issue>):<fpage>e195</fpage>&#x2013;<lpage>203</lpage>. <pub-id pub-id-type="doi">10.1016/s2589-7500(20)30292-2</pub-id> </citation>
</ref>
<ref id="B55">
<label>55.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alexander</surname>
<given-names>JC</given-names>
</name>
</person-group>. <article-title>The societalization of social problems: Church pedophilia, phone hacking, and the financial crisis</article-title>. <source>Am Sociol Rev</source> (<year>2018</year>) <volume>83</volume>(<issue>6</issue>):<fpage>1049</fpage>&#x2013;<lpage>78</lpage>. <pub-id pub-id-type="doi">10.1177/0003122418803376</pub-id> </citation>
</ref>
<ref id="B56">
<label>56.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>K&#xf6;chling</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Wehner</surname>
<given-names>MC</given-names>
</name>
</person-group>. <article-title>Discriminated by an algorithm: A systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development</article-title>. <source>Bus Res</source> (<year>2020</year>) <volume>13</volume>(<issue>3</issue>):<fpage>795</fpage>&#x2013;<lpage>848</lpage>. <pub-id pub-id-type="doi">10.1007/s40685-020-00134-w</pub-id> </citation>
</ref>
<ref id="B57">
<label>57.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>O&#x2019;Neil</surname>
<given-names>C</given-names>
</name>
</person-group>. <source>Weapons of math destruction: How big data increases inequality and threatens democracy</source>. <comment>Reprint edition</comment>. <publisher-loc>New York</publisher-loc>: <publisher-name>Crown</publisher-name> (<year>2016</year>). p. <fpage>288</fpage>. </citation>
</ref>
<ref id="B58">
<label>58.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Nowotny</surname>
<given-names>H</given-names>
</name>
</person-group>. <source>In AI we trust: Power, illusion and control of predictive algorithms</source>. <publisher-loc>Cambridge, UK</publisher-loc>: <publisher-name>Polity</publisher-name> (<year>2021</year>). p. <fpage>190</fpage>. </citation>
</ref>
<ref id="B59">
<label>59.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Waller</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Waller</surname>
<given-names>P</given-names>
</name>
</person-group>. <source>Why predictive algorithms are so risky for public sector bodies</source>. <comment>[Internet]</comment>. <publisher-loc>Rochester, NY</publisher-loc>: <publisher-name>Social Science Research Network</publisher-name> (<year>2020</year>). </citation>
</ref>
</ref-list>
</back>
</article>