<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="brief-report" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Robot. AI</journal-id>
<journal-title>Frontiers in Robotics and AI</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Robot. AI</abbrev-journal-title>
<issn pub-type="epub">2296-9144</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">1235017</article-id>
<article-id pub-id-type="doi">10.3389/frobt.2023.1235017</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Robotics and AI</subject>
<subj-group>
<subject>Brief Research Report</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Anthropomorphic framing and failure comprehensibility influence different facets of trust towards industrial robots</article-title>
<alt-title alt-title-type="left-running-head">Roesler</alt-title>
<alt-title alt-title-type="right-running-head">
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frobt.2023.1235017">10.3389/frobt.2023.1235017</ext-link>
</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Roesler</surname>
<given-names>Eileen</given-names>
</name>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/873146/overview"/>
</contrib>
</contrib-group>
<aff>
<institution>Department of Psychology</institution>, <institution>George Mason University</institution>, <addr-line>Fairfax</addr-line>, <addr-line>VA</addr-line>, <country>United States</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/2032449/overview">Peter Thorvald</ext-link>, University of Sk&#xf6;vde, Sweden</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/2354728/overview">Keith Case</ext-link>, Loughborough University, United Kingdom</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/534940/overview">Umer Asgher</ext-link>, National University of Sciences and Technology (NUST), Pakistan</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Eileen Roesler, <email>eroesle@gmu.edu</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>07</day>
<month>09</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>10</volume>
<elocation-id>1235017</elocation-id>
<history>
<date date-type="received">
<day>05</day>
<month>06</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>28</day>
<month>08</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2023 Roesler.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Roesler</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>
<bold>Introduction:</bold> Utilizing anthropomorphic features in industrial robots is a prevalent strategy aimed at enhancing their perception as collaborative team partners and promoting increased tolerance for failures. Nevertheless, recent research highlights potential drawbacks of this approach. It is still largely unknown how anthropomorphic framing influences the dynamics of trust, especially in the context of different failure experiences.</p>
<p>
<bold>Method:</bold> The current laboratory study aimed to close this research gap. To this end, fifty-one participants interacted with a robot that was either anthropomorphically or technically framed. In addition, each robot produced either a comprehensible or an incomprehensible failure.</p>
<p>
<bold>Results:</bold> The analysis revealed no differences in general trust towards the technically and anthropomorphically framed robot. Nevertheless, the anthropomorphic robot was perceived as more transparent than the technical robot. Furthermore, the robot&#x2019;s purpose was perceived as more positive after experiencing a comprehensible failure.</p>
<p>
<bold>Discussion:</bold> The perceived higher transparency of anthropomorphically framed robots might be a double-edged sword, as actual transparency did not differ between conditions. In general, the results show that it is essential to consider trust multi-dimensionally, as a uni-dimensional approach, which is often focused on performance, might overshadow important facets of trust such as transparency and purpose.</p>
</abstract>
<kwd-group>
<kwd>human-robot interaction</kwd>
<kwd>trust</kwd>
<kwd>multi-dimensional trust</kwd>
<kwd>anthropomorphism</kwd>
<kwd>failure experience</kwd>
</kwd-group>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Human-Robot Interaction</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>Industrial robots are increasingly working hand in hand with their human coworkers. Hand in hand can be meant literally here, as close collaboration requires physical and temporal proximity (<xref ref-type="bibr" rid="B12">Onnasch and Roesler, 2021</xref>). For efficient collaboration, humans have to trust the robotic interaction partner (<xref ref-type="bibr" rid="B4">Hancock et al., 2011</xref>; <xref ref-type="bibr" rid="B22">Sheridan, 2016</xref>). While human-robot trust research is still an evolving field, trust has been studied extensively in human-automation and human-human interaction, both fields that are strongly related to human-robot interaction (HRI) (<xref ref-type="bibr" rid="B8">Lewis et al., 2018</xref>). Most theoretical models of trust in automation as well as trust in humans consider trust as multi-dimensional. For trust in automation, for instance, <xref ref-type="bibr" rid="B7">Lee and See (2004)</xref> describe performance, purpose, and process as separate dimensions of trust. Even though a transferability of these dimensions to human-robot trust is assumed (<xref ref-type="bibr" rid="B8">Lewis et al., 2018</xref>), recent research has relied on single items of trust (e.g., <xref ref-type="bibr" rid="B19">Salem et al., 2015</xref>; <xref ref-type="bibr" rid="B21">Sarkar et al., 2017</xref>; <xref ref-type="bibr" rid="B17">Roesler et al., 2020</xref>; <xref ref-type="bibr" rid="B11">Onnasch and Hildebrandt, 2021</xref>) or uni-dimensional trust questionnaires (e.g., <xref ref-type="bibr" rid="B20">Sanders et al., 2019</xref>; <xref ref-type="bibr" rid="B5">Kopp et al., 2022</xref>). These approaches cannot capture different dimensions and thus contribute little to a more detailed understanding of the underlying determinants of trust and trust dynamics in interaction with robots.</p>
<p>The multi-dimensional trust-in-automation questionnaire (MTQ), originally proposed by <xref ref-type="bibr" rid="B24">Wiczorek (2011)</xref> and translated, adapted, and validated by <xref ref-type="bibr" rid="B16">Roesler et al. (2022a)</xref>, might also be used for investigating trust in HRI. It is theoretically based on the concept of <xref ref-type="bibr" rid="B7">Lee and See (2004)</xref> and assesses the dimensions performance, utility, purpose, and transparency. This allows for a more fine-grained assessment of trust in order to gain a better understanding of which trust dimensions are affected by a given characteristic of a robot. Factors on the part of the robot that influence trust can be classified as performance- and attribute-based characteristics (<xref ref-type="bibr" rid="B4">Hancock et al., 2011</xref>). In particular, performance-based factors such as reliability currently constitute the largest influence on perceived trust in HRI. However, actual reliability is rarely correctly weighted in the formation of trust (<xref ref-type="bibr" rid="B14">Rieger et al., 2022</xref>). One decisive factor for this discrepancy could be the type of error experienced in the interaction (<xref ref-type="bibr" rid="B9">Madhavan et al., 2006</xref>). In particular, obvious failures made by a robot might dramatically reduce trust as expectations are violated (<xref ref-type="bibr" rid="B9">Madhavan et al., 2006</xref>). Based on this <italic>easy-error hypothesis</italic> in human-automation interaction, we hypothesized a comparable pattern in HRI. Thus, we assumed that comprehensible failures, which might happen to humans as well, are more forgivable than incomprehensible failures.</p>
<p>This effect could even be enhanced by one of the most popular design features in HRI&#x2014;the application of anthropomorphic characteristics (<xref ref-type="bibr" rid="B19">Salem et al., 2015</xref>; <xref ref-type="bibr" rid="B15">Roesler et al., 2021</xref>). Anthropomorphism by design refers to the incorporation of human-like qualities and characteristics into the design and behavior of robots (<xref ref-type="bibr" rid="B2">Fischer, 2021</xref>). Anthropomorphic design extends beyond mere robotic appearance, encompassing elements such as communication, movement dynamics, and contextual integration (<xref ref-type="bibr" rid="B12">Onnasch and Roesler, 2021</xref>). These factors collectively contribute to shaping the perceived anthropomorphism of a robot. Even something as subtle as an anthropomorphic framing of a robot can serve as a trigger that activates human-human interaction schemes (<xref ref-type="bibr" rid="B13">Onnasch and Roesler, 2019</xref>; <xref ref-type="bibr" rid="B5">Kopp et al., 2022</xref>). Due to the activation of humanlike expectations, failures that might have happened to a human as well [i.e., comprehensible failures (<xref ref-type="bibr" rid="B9">Madhavan et al., 2006</xref>)] could lead to a less pronounced trust decrease for the anthropomorphically compared to the technically framed robot.</p>
<p>In addition to this presumed positive effect, anthropomorphism also comes with potential pitfalls, especially in industrial HRI. In this application domain, anthropomorphism can undermine the perceived tool-like character of the robot, which can result in lower trust and perceived reliability (<xref ref-type="bibr" rid="B17">Roesler et al., 2020</xref>; <xref ref-type="bibr" rid="B11">Onnasch and Hildebrandt, 2021</xref>). Results regarding anthropomorphic framing in task-related interactions are currently mixed (<xref ref-type="bibr" rid="B13">Onnasch and Roesler, 2019</xref>; <xref ref-type="bibr" rid="B17">Roesler et al., 2020</xref>; <xref ref-type="bibr" rid="B5">Kopp et al., 2022</xref>). Whereas studies that combined anthropomorphic framing and appearance in industrial HRI found negative effects (<xref ref-type="bibr" rid="B13">Onnasch and Roesler, 2019</xref>; <xref ref-type="bibr" rid="B17">Roesler et al., 2020</xref>), another study that investigated anthropomorphic framing without exposure to an industrial robot found a positive effect on trust (<xref ref-type="bibr" rid="B5">Kopp et al., 2022</xref>). However, this was only the case if the anthropomorphic framing was combined with a cooperativeness framing (<xref ref-type="bibr" rid="B5">Kopp et al., 2022</xref>). As participants in the present study were exposed to an actual robot and no additional framing regarding cooperativeness was given, it might be assumed that a possible mismatch of appearance, context, and framing reduces trust (<xref ref-type="bibr" rid="B3">Goetz et al., 2003</xref>; <xref ref-type="bibr" rid="B18">Roesler et al., 2022b</xref>). Thus, we hypothesized that anthropomorphic framing of an industrial robot leads to lower initial and learned trust compared to technical framing.</p>
<p>To investigate the joint effects of failure comprehensibility and anthropomorphic framing, we conducted a laboratory experiment. Participants worked with an industrial robot on a collaborative task. The robot received either an anthropomorphic or a technical framing, based on the human-likeness framings used by <xref ref-type="bibr" rid="B5">Kopp et al. (2022)</xref>. The dynamics of trust were investigated by measuring trust initially before the collaboration started, after a period of perfectly reliable robotic performance, and after the experience of a failure, which was either comprehensible or incomprehensible.</p>
</sec>
<sec sec-type="methods" id="s2">
<title>2 Methods</title>
<p>The experiment was preregistered via the Open Science Framework (OSF) (<ext-link ext-link-type="uri" xlink:href="https://osf.io/nvmqk">https://osf.io/nvmqk</ext-link>) and approved by the local ethics committee. The collected data can also be accessed via the OSF: <ext-link ext-link-type="uri" xlink:href="https://osf.io/2vzxj/">https://osf.io/2vzxj/</ext-link>.</p>
<sec id="s2-1">
<title>2.1 Participants</title>
<p>The sample consisted of 51 participants (<italic>M</italic>
<sub>age</sub> &#x3d; 26.94; <italic>SD</italic>
<sub>age</sub> &#x3d; 7.72) who were recruited via the participant pool of the local university and online postings. Of these participants, 50.98% were female, 47.06% male, and 1.96% non-binary. Participants signed consent forms at the beginning of the experiment and received five Euros as compensation at the end. Due to time constraints of the project, we were unable to reach the intended and preregistered sample size. Hence, it is crucial to consider the issue of limited statistical power.</p>
</sec>
<sec id="s2-2">
<title>2.2 Task and materials</title>
<p>The aim of the human-robot collaboration was to solve a four-disk version of the Tower of Hanoi multiple times together with the industrial robot <italic>Panda</italic> (<xref ref-type="fig" rid="F1">Figure 1</xref>). In this mathematical game, a stack of disks has to be moved from the leftmost to the rightmost peg in the fewest possible moves, carrying only one disk at a time and never placing a larger disk on a smaller one. The tower was situated in front of the robot, vis-&#xe0;-vis the participant. The required movement sequences of the robot were preprogrammed and proceeded in the following chronology. First, the robot moved toward one peg as a sign to remove the top disk from this peg. Subsequently, the robot moved toward another peg as a prompt to place the previously picked disk there. Afterward, the robot moved back to the resting position to start the next sequence. The participant&#x2019;s task was to move the disks by exactly following the robot&#x2019;s directives to solve the Tower of Hanoi in an optimal sequence. Moreover, participants had to monitor the robot&#x2019;s behavior by comparing the steps shown by the robot with an optimal procedure. They received a printed copy of the precise instructions for the Tower of Hanoi, as can be seen on the table in <xref ref-type="fig" rid="F1">Figure 1</xref>. Whenever the robot deviated from the optimal procedure, the participants needed to intervene by pushing a (mock-up) emergency button.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Photograph from a participant&#x2019;s perspective of the shared human-robot workspace (&#xa9; W. Richter received via <ext-link ext-link-type="uri" xlink:href="https://www.tu.berlin/themen/campus-leben/roboter-mit-fehlern">https://www.tu.berlin/themen/campus-leben/roboter-mit-fehlern</ext-link>).</p>
</caption>
<graphic xlink:href="frobt-10-1235017-g001.tif"/>
</fig>
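The optimal procedure against which participants monitored the robot is the classic recursive Tower of Hanoi solution. As a purely illustrative sketch (not part of the study materials), the optimal move sequence for the four-disk version can be generated as follows:

```python
def hanoi_moves(n, source, target, auxiliary):
    """Return the optimal (shortest) move sequence for an n-disk Tower of Hanoi.

    Each move is a (from_peg, to_peg) pair; the optimum takes 2**n - 1 moves.
    """
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then restack.
    return (hanoi_moves(n - 1, source, auxiliary, target)
            + [(source, target)]
            + hanoi_moves(n - 1, auxiliary, target, source))

# The four-disk version used in the study requires 15 moves
# from the leftmost to the rightmost peg.
moves = hanoi_moves(4, "left", "right", "middle")
print(len(moves))  # 15
```

Any deviation of the robot's prompts from this sequence constituted a failure that participants had to detect.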
</sec>
<sec id="s2-3">
<title>2.3 Dependent variables</title>
<p>Single items were used to assess general trust (<italic>How much do you trust the robot?</italic>) and reliability (<italic>How reliable is the robot?</italic>), both on a scale from 0 to 100. In addition, the MTQ with four subscales (i.e., performance, utility, purpose, transparency) was assessed via 16 items (e.g., <italic>The way the system works is clear to me.</italic>) on a four-point Likert scale from <italic>disagree</italic> to <italic>agree</italic> (<xref ref-type="bibr" rid="B24">Wiczorek, 2011</xref>; <xref ref-type="bibr" rid="B16">Roesler et al., 2022a</xref>). Both the German and English versions of the questionnaire can be accessed through the OSF via <ext-link ext-link-type="uri" xlink:href="https://osf.io/56cwx/">https://osf.io/56cwx/</ext-link>.</p>
<p>To prevent confounding effects of participants&#x2019; interindividual differences, we included two control variables. First, the disposition to trust technology was assessed (<xref ref-type="bibr" rid="B6">Lankton et al., 2015</xref>). Second, we asked participants to fill in a five-item short version of the Interindividual Differences in Anthropomorphism Questionnaire (<xref ref-type="bibr" rid="B23">Waytz et al., 2010</xref>). The short version comprised solely items that directly addressed technological aspects (e.g., <italic>To what extent does technology&#x2014;devices and machines for manufacturing, entertainment, and productive processes (e.g., cars, computers, television sets)&#x2014;have intentions?</italic>).</p>
<p>To test whether the manipulation of anthropomorphism via framing was successful, we incorporated a self-constructed questionnaire with ten items that addressed aspects of the anthropomorphic context (e.g., the character, task, and preferences of the robot). All items were rated on a 0%&#x2013;100% human-likeness scale. The manipulation of failure comprehensibility was checked by asking the participants to rate on a five-point Likert scale whether they themselves could have committed the failure (<xref ref-type="bibr" rid="B17">Roesler et al., 2020</xref>).</p>
</sec>
<sec id="s2-4">
<title>2.4 Procedure</title>
<p>All participants were randomly assigned to one of the four conditions and received corresponding written instructions including the framing of the robot. After filling out the initial questionnaire comprising the single items of trust and perceived reliability, participants were informed that they would be working together with the robot for three blocks, each including three Towers of Hanoi. After the first fault-free block, the single items of trust and perceived reliability were assessed again. In the second block, either a comprehensible failure (i.e., indicating the wrong position of a disk without violating the rules) or an incomprehensible failure (i.e., indicating the wrong position of a disk while breaking the rule of never putting a larger disk on a smaller one) occurred. After the failure experience, participants needed to push the (mock-up) emergency button. This was done to ensure that all participants realized the failure. Subsequently, the single items of trust and perceived reliability, the MTQ, sociodemographics, control variables, and manipulation checks were measured. Afterwards, all participants were debriefed and received the five-Euro compensation. The entire experiment lasted approximately 35 min.</p>
</sec>
<sec id="s2-5">
<title>2.5 Design</title>
<p>The study used a 2 &#xd7; 2 &#xd7; 3 mixed design with the two between-factors robot framing (anthropomorphic vs technical) and failure comprehensibility (low vs high) and the within-factor experience (initial vs pre-failure vs post-failure).</p>
<p>The different robot framing conditions were implemented via written instructions (<xref ref-type="bibr" rid="B5">Kopp et al., 2022</xref>). In the anthropomorphic conditions, the robot was framed as a colleague named Paul with humanlike characteristics. In contrast, in the technical conditions, the framing characterized the robot as a tool with some technical specifications and the model name PR-5. The framings can also be accessed via the OSF (<ext-link ext-link-type="uri" xlink:href="https://osf.io/3xgcp">https://osf.io/3xgcp</ext-link>). The failures were represented by wrong instructions on the part of the robot, and comprehensibility was manipulated via the obviousness of the failure. In the incomprehensible conditions, the robot suggested moving a bigger disk onto a smaller one, which is forbidden by the general rules of the Tower of Hanoi. In the comprehensible conditions, the robot suggested a wrong position for a disk without breaking a general rule.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>3 Results</title>
<sec id="s3-1">
<title>3.1 Control variables</title>
<p>First, individual differences in attitudes toward technology and the tendency to anthropomorphize were compared between the four conditions using one-way ANOVAs. The analyses revealed no significant differences between the four groups in the disposition to trust technology (<italic>F</italic>(3, 47) &#x3d; 1.25; <italic>p</italic> &#x3d; .303) or the tendency to anthropomorphize (<italic>F</italic>(3, 47) &#x3d; 2.48; <italic>p</italic> &#x3d; .072).</p>
</sec>
<sec id="s3-2">
<title>3.2 Manipulation check</title>
<p>To investigate whether the manipulations were successful, independent t-tests were conducted. Surprisingly, the anthropomorphically framed robot was not perceived as significantly more anthropomorphic on the self-constructed scale than the technically framed one (<italic>t</italic>(49) &#x3d; 0.34; <italic>p</italic> &#x3d; .732). Moreover, the comprehensible and incomprehensible failures did not differ in their rated understandability (<italic>t</italic>(49) &#x3d; &#x2212;0.96; <italic>p</italic> &#x3d; .341).</p>
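For illustration only, a manipulation check of this kind can be computed with an independent two-sample t-test; the data below are simulated and hypothetical, not the study's ratings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2023)
# Hypothetical human-likeness ratings (0-100 scale); NOT the study's data
anthropomorphic = rng.normal(45, 20, 26)  # n = 26
technical = rng.normal(43, 20, 25)        # n = 25

# Independent two-sample t-test; df = n1 + n2 - 2 = 49, matching the df reported above
t, p = stats.ttest_ind(anthropomorphic, technical)
print(f"t(49) = {t:.2f}, p = {p:.3f}")
```

With group means this close relative to their spread, such a test will typically be non-significant, mirroring the pattern of the reported manipulation checks.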
</sec>
<sec id="s3-3">
<title>3.3 Initial trust</title>
<p>Initial trust and perceived reliability were compared between the framing conditions via independent t-tests. The analyses revealed neither a difference in general trust (<italic>t</italic>(49) &#x3d; &#x2212;0.63; <italic>p</italic> &#x3d; .529) nor in perceived reliability (<italic>t</italic>(49) &#x3d; 1.48; <italic>p</italic> &#x3d; .145).</p>
</sec>
<sec id="s3-4">
<title>3.4 Learned trust</title>
<p>General trust and perceived reliability were analyzed via 2 &#xd7; 2 &#xd7; 2 mixed ANOVAs with the between-factors framing (anthropomorphic vs technical) and failure comprehensibility (low vs high) as well as the within-factor failure experience (pre- vs post-failure). The analysis of trust revealed only a significant main effect of failure experience (<italic>F</italic>(1, 47) &#x3d; 40.73; <italic>p</italic> &#x3c; .001), with higher trust before (<italic>M</italic> &#x3d; 84.75; <italic>SD</italic> &#x3d; 17.90) compared to after the failure experience (<italic>M</italic> &#x3d; 64.31; <italic>SD</italic> &#x3d; 24.65). No further main or interaction effects were revealed (all <italic>ps</italic> &#x3e; .068). A comparable pattern of results emerged for perceived reliability. Again, a significant main effect of failure experience was found (<italic>F</italic>(1, 47) &#x3d; 71.15; <italic>p</italic> &#x3c; .001). Participants perceived the robot as significantly more reliable prior to the failure experience (<italic>M</italic> &#x3d; 93.51; <italic>SD</italic> &#x3d; 8.94) than after it (<italic>M</italic> &#x3d; 66.16; <italic>SD</italic> &#x3d; 23.65). No further effects were revealed (all <italic>ps</italic> &#x3e; .349).</p>
<p>As the MTQ was measured only after the failure experience, 2 &#xd7; 2 between-factors ANOVAs with the factors framing (anthropomorphic vs technical) and failure comprehensibility (low vs high) were used. Neither the analysis of the performance scale nor that of the utility scale revealed any significant effects (all <italic>ps</italic> &#x3e; .132). However, the analysis of the purpose scale showed a significant main effect of failure comprehensibility (<italic>F</italic>(1, 47) &#x3d; 6.20; <italic>p</italic> &#x3d; .016), depicted in <xref ref-type="fig" rid="F2">Figure 2</xref> (left). Incomprehensible failures (<italic>M</italic> &#x3d; 3.05; <italic>SD</italic> &#x3d; 0.54) received significantly lower scores on this scale than comprehensible failures (<italic>M</italic> &#x3d; 3.38; <italic>SD</italic> &#x3d; 0.35). Moreover, the analysis of the transparency scale revealed a significant main effect of robot framing (<italic>F</italic>(1, 47) &#x3d; 7.08; <italic>p</italic> &#x3d; .011), as can be seen in <xref ref-type="fig" rid="F2">Figure 2</xref> (right). The anthropomorphically framed robot (<italic>M</italic> &#x3d; 3.02; <italic>SD</italic> &#x3d; 0.52) was perceived as significantly more transparent than the technically framed one (<italic>M</italic> &#x3d; 2.59; <italic>SD</italic> &#x3d; 0.62). No further significant effects were revealed for the purpose and transparency scales (all <italic>ps</italic> &#x3e; .161).</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>Means, standard errors and exact values of each participant for the type of failure concerning purpose (left) and the framing concerning transparency (right).</p>
</caption>
<graphic xlink:href="frobt-10-1235017-g002.tif"/>
</fig>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>4 Discussion</title>
<p>The purpose of the present study was to examine the joint effects of anthropomorphic robot framing and the experience of more or less comprehensible failures on human trust in a realistic industrial human-robot collaboration. Based on previous research in task-related HRI (<xref ref-type="bibr" rid="B13">Onnasch and Roesler, 2019</xref>; <xref ref-type="bibr" rid="B17">Roesler et al., 2020</xref>; <xref ref-type="bibr" rid="B11">Onnasch and Hildebrandt, 2021</xref>), it was assumed that anthropomorphic framing would lead to lower trust and perceived reliability than a technical framing. The present results were not consistent with this claim, as no significant differences in initial and learned trust or perceived reliability were revealed. This might be explained by the interplay of framing and appearance. Earlier studies in industrial HRI manipulated framing and appearance together (<xref ref-type="bibr" rid="B17">Roesler et al., 2020</xref>; <xref ref-type="bibr" rid="B11">Onnasch and Hildebrandt, 2021</xref>). The comparison to the current results could indicate that the negative effect of decorative anthropomorphism in industrial HRI might be mainly attributable to appearance rather than to framing. In addition, recent research by <xref ref-type="bibr" rid="B5">Kopp et al. (2022)</xref> showed a positive effect of anthropomorphic framing on trust in industrial HRI if the relation is perceived as cooperative. Even though it often remains unclear if and why people perceive the relation to an industrial robot as cooperative or competitive (<xref ref-type="bibr" rid="B10">Oliveira et al., 2018</xref>), our interaction scenario was designed in a cooperative way. This might explain why anthropomorphic framing influenced at least one facet of trust&#x2014;transparency.</p>
<p>As anthropomorphism is assumed to activate well-known human-human interaction scripts, knowledge about the otherwise largely unknown novel technology is elicited (<xref ref-type="bibr" rid="B1">Epley et al., 2007</xref>). The imputation of human-like functions and behaviors can thus reduce uncertainty and, in this case, increase perceived transparency. Of course, this is a double-edged sword, as perceived transparency does not correspond to actual transparency in this case. The illusion of higher transparency might even lead to unintended side effects, such as a wrong mental model of the robot. For future research, it would be important to consolidate the current findings by further examining the effect of anthropomorphic framing on transparency. However, the general effectiveness of framing with regard to human-robot trust should be interpreted with caution, as no significant results were revealed for general trust or the other subscales of the MTQ. This pattern of results is consistent with a current meta-analysis showing no significant effect of context anthropomorphism for subjective as well as objective outcomes (<xref ref-type="bibr" rid="B15">Roesler et al., 2021</xref>). However, the meta-analysis has shed light on a notable research gap concerning anthropomorphic context, which has received comparably less attention than the effectiveness of robot appearance. The findings of this study, coupled with insights from the previous work of <xref ref-type="bibr" rid="B5">Kopp et al. (2022)</xref>, tentatively suggest a potential effectiveness of anthropomorphic framing for industrial HRI with regard to trust. The previous and current results underscore the necessity of further empirical investigation of possible benefits of anthropomorphic framing in industrial HRI.</p>
<p>Therefore, it might not be surprising that no interaction effect of framing and failure comprehensibility was found. A possible effect might have been masked by the rather non-salient manipulations of both anthropomorphism and failure comprehensibility. This assumption is further supported by the non-significant manipulation checks for both variables. Nonetheless, the comprehensibility of failures did significantly influence the perceived purpose of the robot. Purpose refers to motives, benevolence, and intentions (<xref ref-type="bibr" rid="B7">Lee and See, 2004</xref>) and not to the performance of the interaction partner. This leads to the assumption that the number and type of failures affect different facets of trust.</p>
<p>The finding that anthropomorphic framing and failure comprehensibility can affect specific dimensions of trust but not general trust shows the importance of integrating multi-dimensional approaches into the investigation of trust in HRI. Uni-dimensional trust measures most commonly relate to performance aspects (<xref ref-type="bibr" rid="B18">Roesler et al., 2022b</xref>). Even though performance attributes of a robot are among the most important determinants of trust, they are by far not the only ones (<xref ref-type="bibr" rid="B4">Hancock et al., 2011</xref>). Therefore, it is highly relevant to also include trust facets that go beyond performance. Thus, future research should adopt a multi-dimensional view of trust, particularly with novel embodied technologies like robots.</p>
<p>Although the generality of the current results must be established by future research, especially with bigger sample sizes to investigate the joint effect of both factors, the present study has provided support for the notion that uni-dimensional trust measurements might overshadow important facets of trust. Not only did anthropomorphic framing lead to higher perceived transparency than technical framing, but comprehensible failures also led to a more positively perceived purpose of the robot than incomprehensible failures. Furthermore, this research opens up multiple avenues for future work to investigate the different dimensions of trust in more detail.</p>
</sec>
</body>
<back>
<sec sec-type="data-availability" id="s5">
<title>Data availability statement</title>
<p>The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: <ext-link ext-link-type="uri" xlink:href="https://osf.io/2vzxj/">https://osf.io/2vzxj/</ext-link>.</p>
</sec>
<sec id="s6">
<title>Ethics statement</title>
<p>The studies involving humans were approved by Ethics Board of the Institute of Psychology and Ergonomics of the TU Berlin. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="s7">
<title>Author contributions</title>
<p>The author confirms responsibility for the following: study conception and design, data analysis and interpretation of results, and manuscript preparation.</p>
</sec>
<sec id="s8">
<title>Funding</title>
<p>This research was funded by the Federal Ministry of Education and Research (BMBF) and the state of Berlin under the Excellence Strategy of the Federal Government and the L&#xe4;nder in the context of the X-Student Research Group &#x201c;Team Member or Tool&#x2014;Anthropomorphism and Error Experience in Human-Robot Interaction.&#x201d;</p>
</sec>
<ack>
<p>Many thanks to all members of the funded X-Student Research Group: Jana Appel, Fiona Feldhus, Ella Heinz, Samira Kunz, Marie-Elisabeth Makohl, and Alexander Werk, for their support in data collection. I would also like to express appreciation to Marie-Elisabeth Makohl for her contributions to science communication within the project. Furthermore, I would like to thank Tobias Kopp for his valuable assistance in providing the necessary framings.</p>
</ack>
<sec sec-type="COI-statement" id="s9">
<title>Conflict of interest</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Epley</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Waytz</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Cacioppo</surname>
<given-names>J. T.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>On seeing human: a three-factor theory of anthropomorphism</article-title>. <source>Psychol. Rev.</source> <volume>114</volume>, <fpage>864</fpage>&#x2013;<lpage>886</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295x.114.4.864</pub-id>
</citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fischer</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Tracking anthropomorphizing behavior in human-robot interaction</article-title>. <source>ACM Trans. Human-Robot Interact. (THRI)</source> <volume>11</volume>, <fpage>1</fpage>&#x2013;<lpage>28</lpage>. <pub-id pub-id-type="doi">10.1145/3442677</pub-id>
</citation>
</ref>
<ref id="B3">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Goetz</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Kiesler</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Powers</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2003</year>). &#x201c;<article-title>Matching robot appearance and behavior to tasks to improve human-robot cooperation</article-title>,&#x201d; in <conf-name>The 12th IEEE International Workshop on Robot and Human Interactive Communication, 2003. Proceedings. ROMAN 2003</conf-name>, <conf-loc>Millbrae, CA, USA</conf-loc>, <conf-date>02-02 November 2003</conf-date> (<publisher-name>IEEE</publisher-name>), <fpage>55</fpage>&#x2013;<lpage>60</lpage>.</citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hancock</surname>
<given-names>P. A.</given-names>
</name>
<name>
<surname>Billings</surname>
<given-names>D. R.</given-names>
</name>
<name>
<surname>Schaefer</surname>
<given-names>K. E.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>J. Y. C.</given-names>
</name>
<name>
<surname>de Visser</surname>
<given-names>E. J.</given-names>
</name>
<name>
<surname>Parasuraman</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2011</year>). <article-title>A meta-analysis of factors affecting trust in human-robot interaction</article-title>. <source>Hum. Factors J. Hum. Factors Ergonomics Soc.</source> <volume>53</volume>, <fpage>517</fpage>&#x2013;<lpage>527</lpage>. <pub-id pub-id-type="doi">10.1177/0018720811417254</pub-id>
</citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kopp</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Baumgartner</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kinkel</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>&#x201c;It&#x2019;s not paul, it&#x2019;s a robot&#x201d;: the impact of linguistic framing and the evolution of trust and distrust in a collaborative robot during a human-robot interaction</article-title>. <source>SSRN Electron. J.</source> <volume>178</volume>, <fpage>1</fpage>&#x2013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.2139/ssrn.4113811</pub-id>
</citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lankton</surname>
<given-names>N. K.</given-names>
</name>
<name>
<surname>McKnight</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Tripp</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Technology, humanness, and trust: rethinking trust in technology</article-title>. <source>J. Assoc. Inf. Syst.</source> <volume>16</volume>, <fpage>1</fpage>. <pub-id pub-id-type="doi">10.17705/1jais.00411</pub-id>
</citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>See</surname>
<given-names>K. A.</given-names>
</name>
</person-group> (<year>2004</year>). <article-title>Trust in automation: designing for appropriate reliance</article-title>. <source>Hum. factors</source> <volume>46</volume>, <fpage>50</fpage>&#x2013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.1518/hfes.46.1.50_30392</pub-id>
</citation>
</ref>
<ref id="B8">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Lewis</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sycara</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Walker</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>The role of trust in human-robot interaction</article-title>,&#x201d; in <source>Foundations of trusted autonomy</source>. Editors <person-group person-group-type="editor">
<name>
<surname>Abbass</surname>
<given-names>H. A.</given-names>
</name>
<name>
<surname>Scholz</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Reid</surname>
<given-names>D. J.</given-names>
</name>
</person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>135</fpage>&#x2013;<lpage>159</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-64816-3_8</pub-id>
</citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Madhavan</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Wiegmann</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Lacson</surname>
<given-names>F. C.</given-names>
</name>
</person-group> (<year>2006</year>). <article-title>Automation failures on tasks easily performed by operators undermine trust in automated aids</article-title>. <source>Hum. Factors</source> <volume>48</volume>, <fpage>241</fpage>&#x2013;<lpage>256</lpage>. <pub-id pub-id-type="doi">10.1518/001872006777724408</pub-id>
</citation>
</ref>
<ref id="B10">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Oliveira</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Arriaga</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Alves-Oliveira</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Correia</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Petisca</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Paiva</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Friends or foes? socioemotional support and gaze behaviors in mixed groups of humans and robots</article-title>,&#x201d; in <conf-name>Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI)</conf-name>, <conf-loc>Chicago, IL, USA</conf-loc>, <conf-date>05-08 March 2018</conf-date>. <fpage>279</fpage>&#x2013;<lpage>288</lpage>. <pub-id pub-id-type="doi">10.1145/3171221.3171272</pub-id>
</citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Onnasch</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Hildebrandt</surname>
<given-names>C. L.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Impact of anthropomorphic robot design on trust and attention in industrial human-robot interaction</article-title>. <source>J. Hum.-Robot Interact.</source> <volume>11</volume>, <fpage>1</fpage>&#x2013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1145/3472224</pub-id>
</citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Onnasch</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Roesler</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>A taxonomy to structure and analyze human&#x2013;robot interaction</article-title>. <source>Int. J. Soc. Robotics</source> <volume>13</volume>, <fpage>833</fpage>&#x2013;<lpage>849</lpage>. <pub-id pub-id-type="doi">10.1007/s12369-020-00666-5</pub-id>
</citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Onnasch</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Roesler</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Anthropomorphizing robots: the effect of framing in human-robot collaboration</article-title>. <source>Proc. Hum. Factors Ergonomics Soc. Annu. Meet.</source> <volume>63</volume>, <fpage>1311</fpage>&#x2013;<lpage>1315</lpage>. <pub-id pub-id-type="doi">10.1177/1071181319631209</pub-id>
</citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rieger</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Roesler</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Manzey</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2022</year>). <article-title>Challenging presumed technological superiority when working with (artificial) colleagues</article-title>. <source>Sci. Rep.</source> <volume>12</volume>. <pub-id pub-id-type="doi">10.1038/s41598-022-07808-x</pub-id>
</citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roesler</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Manzey</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Onnasch</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction</article-title>. <source>Sci. Robotics</source> <volume>6</volume>, <fpage>eabj5425</fpage>. <pub-id pub-id-type="doi">10.1126/scirobotics.abj5425</pub-id>
</citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roesler</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Naendrup-Poell</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Manzey</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Onnasch</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2022a</year>). <article-title>Why context matters: the influence of application domain on preferred degree of anthropomorphism and gender attribution in human-robot interaction</article-title>. <source>Int. J. Soc. Robotics</source> <volume>14</volume>, <fpage>1155</fpage>&#x2013;<lpage>1166</lpage>. <pub-id pub-id-type="doi">10.1007/s12369-021-00860-z</pub-id>
</citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roesler</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Onnasch</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Majer</surname>
<given-names>J. I.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>The effect of anthropomorphism and failure comprehensibility on human-robot trust</article-title>. <source>Proc. Hum. Factors Ergonomics Soc. Annu. Meet.</source> <volume>64</volume>, <fpage>107</fpage>&#x2013;<lpage>111</lpage>. <pub-id pub-id-type="doi">10.1177/1071181320641028</pub-id>
</citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roesler</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Rieger</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Manzey</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2022b</year>). <article-title>Trust towards human vs. automated agents: using a multidimensional trust questionnaire to assess the role of performance, utility, purpose, and transparency</article-title>. <source>Proc. Hum. Factors Ergonomics Soc. Annu. Meet.</source> <volume>66</volume>, <fpage>2047</fpage>&#x2013;<lpage>2051</lpage>. <pub-id pub-id-type="doi">10.1177/1071181322661065</pub-id>
</citation>
</ref>
<ref id="B19">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Salem</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lakatos</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Amirabdollahian</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Dautenhahn</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2015</year>). &#x201c;<article-title>Would you trust a (faulty) robot?</article-title> <article-title>effects of error, task type and personality on human-robot cooperation and trust</article-title>,&#x201d; in <conf-name>2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI)</conf-name>, <conf-loc>Portland, OR, USA</conf-loc>, <conf-date>02-05 March 2015</conf-date>. <pub-id pub-id-type="doi">10.1145/2696454.2696497</pub-id>
</citation>
</ref>
<ref id="B20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sanders</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kaplan</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Koch</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Schwartz</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hancock</surname>
<given-names>P. A.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>The relationship between trust and use choice in human-robot interaction</article-title>. <source>Hum. Factors J. Hum. Factors Ergonomics Soc.</source> <volume>61</volume>, <fpage>614</fpage>&#x2013;<lpage>626</lpage>. <pub-id pub-id-type="doi">10.1177/0018720818816838</pub-id>
</citation>
</ref>
<ref id="B21">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Sarkar</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Araiza-Illan</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Eder</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2017</year>). <source>Effects of faults, experience, and personality on trust in a robot co-worker</source>.</citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sheridan</surname>
<given-names>T. B.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Human-robot interaction: status and challenges</article-title>. <source>Hum. Factors J. Hum. Factors Ergonomics Soc.</source> <volume>58</volume>, <fpage>525</fpage>&#x2013;<lpage>532</lpage>. <pub-id pub-id-type="doi">10.1177/0018720816644364</pub-id>
</citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Waytz</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Cacioppo</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Epley</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Who sees human? the stability and importance of individual differences in anthropomorphism</article-title>. <source>Perspect. Psychol. Sci.</source> <volume>5</volume>, <fpage>219</fpage>&#x2013;<lpage>232</lpage>. <pub-id pub-id-type="doi">10.1177/1745691610369336</pub-id>
</citation>
</ref>
<ref id="B24">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Wiczorek</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2011</year>). &#x201c;<article-title>Entwicklung und evaluation eines mehrdimensionalen fragebogens zur messung von vertrauen in technische systeme</article-title>,&#x201d; in <source>Reflexionen und Visionen der Mensch-Maschine-Interaktion&#x2013;Aus der Vergangenheit lernen, Zukunft gestalten</source>, <volume>9</volume>, <fpage>621</fpage>&#x2013;<lpage>626</lpage>.</citation>
</ref>
</ref-list>
</back>
</article>