
ORIGINAL RESEARCH article

Front. Robot. AI, 04 June 2021
Sec. Human-Robot Interaction
Volume 8 - 2021 | https://doi.org/10.3389/frobt.2021.644529

An Immersive Investment Game to Study Human-Robot Trust

Sebastian Zörner*, Emy Arts, Brenda Vasiljevic, Ankit Srivastava, Florian Schmalzl, Glareh Mir, Kavish Bhatia, Erik Strahl, Annika Peters, Tayfun Alpay and Stefan Wermter
  • Knowledge Technology Group, Department of Informatics, Universität Hamburg, Hamburg, Germany

As robots become more advanced and capable, developing trust is an important factor of human-robot interaction and cooperation. However, as multiple environmental and social factors can influence trust, it is important to develop more elaborate scenarios and methods to measure human-robot trust. A widely used measurement of trust in social science is the investment game. In this study, we propose a scaled-up, immersive, science fiction Human-Robot Interaction (HRI) scenario that builds upon the investment game and adapts it to measure human-robot trust, while fostering intrinsic motivation for human-robot collaboration. For this purpose, we utilize two Neuro-Inspired COmpanion (NICO) robots and a projected scenery. We investigate the applicability of our space mission experiment design to measure trust and the impact of non-verbal communication. We observe a correlation of 0.43 (p=0.02) between self-assessed trust and trust measured from the game, and a positive impact of non-verbal communication on trust (p=0.0008) and robot perception for anthropomorphism (p=0.007) and animacy (p=0.00002). We conclude that our scenario is an appropriate method to measure trust in human-robot interaction and also to study how non-verbal communication influences a human's trust in robots.

1 Introduction

As robot capabilities become more and more sophisticated, we want robots not only to solve increasingly complex tasks independently but also to ultimately aid humans in their day-to-day lives. Moreover, such social robots should act in a way that is reliable, transparent, and builds trust in their capabilities as well as their intentions (Felzmann et al., 2019). As soon as humans and robots autonomously work in a team on collaborative tasks, trust becomes essential for effective human-robot interaction (Casper and Murphy, 2003). This shows the need for a deeper understanding of what makes us willing to cooperate with robots and which factors enhance or destroy trust during interactions.

We approach this topic by adopting the investment game by Berg et al. (1995), a widely used experiment to measure trust in human-human collaboration. In the investment game, trust is measured as the amount of money a person is willing to give to an anonymous counterpart in the prospect of a future profit. While others have used it in an HRI setting, some report limitations and differences when applying it to human-robot collaboration (which we elaborate on in Section 2). We, therefore, adapt the original investment game toward a persuasive HRI cooperative scenario by scaling up both the robotic agent and the environment. By scaling up we refer to the progression toward a human-like interaction: a realistic cooperative scenario as opposed to an abstract exchange of money. We do this by introducing a plausible currency for both humans and robotic agents, along with a weighted choice between two trustees, and by removing the participant's ability to make choices based on domain knowledge. The result is an HRI scenario, concealed as a futuristic, immersive spaceship adventure, containing multiple rounds of the investment game in which participants develop intrinsic motivation to collaborate with robots.

In this scenario, we utilize two Neuro-Inspired COmpanion (NICO) humanoid robots by Kerzel et al. (2020) to advise the participant, who acts as a spaceship commander. A voice-controlled artificial intelligence system, which we refer to as "Wendigo", guides the participant through the experiment, while a large curved projector screen with an interactive video feed simulates the inside of the ship's cockpit (see Figure 1). The setup is fully autonomous, with automatic speech recognition, visual detection, and dialogue management implemented as ROS (Quigley et al., 2009) services, yet allows the experimenter to intervene when necessary. During the scenario, participants encounter four similar challenges (a malfunctioning navigation system, approaching asteroids, engine failures, and leaks in the cooling system): after the problem is announced by the ship AI, the robotic advisers propose two diverging solutions. Subsequently, the participants are asked to make a choice by distributing the ship's energy resources between the two robots and themselves, which we evaluate as a quantitative measurement of trust.


FIGURE 1. The experimental setup. On the table there are three compartments, with the one in the front (closest to the participant) containing the total amount of 7 energy cells to distribute.

The immersive setup allows controlling the emergence, destruction, and reconstruction of trust in the robotic companions throughout the game. To improve the robots' image and to ensure a more human-like interaction, we add non-verbal cues to our robots, such as eye gaze toward the participants, facial expressions, and gestures (see Section 3.2.3 for details). Such features of non-verbal communication (NVC), generally defined as "unspoken dialogue" (Burgoon et al., 2016), have previously been shown to account for over 60% of the meaning in communication for human interactions (Saunderson and Nejat, 2019), as they allow us to communicate mental states such as thoughts and feelings (Ambady and Weisbuch, 2010). They are also thought to play an important role in human-robot interaction, as implicit robotic non-verbal communication improves the efficiency and transparency of the interaction, leading to better cooperation between human subjects and robots (Breazeal et al., 2005).

As non-verbal communication is essential to both human-human and human-robot trust (DeSteno et al., 2012; Burgoon et al., 2016), we strive to measure the effect of NVCs in our HRI scenario to assess how well it simulates a natural interaction. Therefore, we utilize our novel investment game scenario to investigate two research questions related to both evaluating trust as well as the impact of NVCs on trust:

1. Does our variant of the investment game provide a reliable measurement for human-robot trust?

2. Does non-verbal communication (NVC) affect human-robot trust positively?

After surveying the latest research on measuring trust in human-robot interaction and its shortcomings (Section 2), we describe our approach (Section 3) and introduce an empirical study to evaluate our hypotheses (Section 4). We discuss the results as well as the limitations of this study (Section 5) and conclude our findings (Section 6) with an outlook on further research.

2 Related Work

2.1 Trust and the Investment Game

One of the biggest challenges in human-robot interaction is to develop a more natural relationship with robots. Previous research shows that people refrain from accepting, tolerating, and using robotic agents in everyday tasks, mainly because robots still appear like intruders (Zanatto, 2019). A survey by the institute DemoSCOPE (N=1007) has found that while 50% would accept information from a robot, only 16% would be willing to work in a team with one ([Dataset] Statista, 2019). A considerable portion of the general population still fears robots and artificial intelligence, caused by a range of concerns about the negative impact on interpersonal relationships and potential job displacement (Liang and Lee, 2017; Gherheş, 2018).

This raises the question of what could aid in easing humans into collaboration with a robot. As robots become more advanced and take on greater responsibility in social domains such as the education sector (Kubilinskiene et al., 2017; Hameed et al., 2018; Neumann, 2019) or the healthcare industry (Mukai et al., 2010; Logan et al., 2019), humans need to be able to trust them. Whereas human-human trust has been extensively studied, human-robot trust poses new and complex research challenges. According to Rempel et al. (1985), in human-human trust we can distinguish cognitive trust - the willingness to rely on another person's competence and reliability - from affective trust - the confidence that the other person's actions are intrinsically motivated.

In both cases, the prediction and predictability of behavior are fundamental (Wortham and Theodorou, 2017). Constructs such as emotional empathy, shared attention, and mental perspective-taking are essential to understand, recognize, and predict human behavior, as well as to adhere to people's expectations of appropriate behavior given the circumstances (Breazeal et al., 2008). This behavioral prediction carries over to the assessment of human-robot trust (Wortham and Theodorou, 2017), as humans build a mental model and thus anthropomorphize the machine. During a first encounter, humans tend to apply social norms to robots just as they do to humans (Rai and Diermeier, 2015). Cognitive trust is measured by assessing the robot's performance, and affective trust by assessing a robot's motives. Prominent factors that influence cognitive trust in a robot are its task performance and characteristics (Hancock et al., 2011; Bernotat et al., 2019), the timing and magnitude of errors (Rossi et al., 2017a; Rossi et al., 2017b), and even physical appearance such as a gender-specific body shape (Bernotat et al., 2019). In contrast, however, stands the "uncanny valley" phenomenon: when a robot exhibits aesthetic characteristics too similar to a human, this can negatively impact trust (Mathur and Reichling, 2016).

To quantitatively measure human-human trust, previous work relies heavily on the investment game (also referred to as the trust game) (Berg et al., 1995), an economic experiment derived from game theory. Berg et al. introduced the investment game in 1995, where a subject (the trustor) invests money in a counterpart (the trustee). At the beginning of the experiment, the trustor is provided with a monetary resource amount r. They can then anonymously decide which fraction p of their monetary resource r they want to give to the trustee. This fraction is then multiplied by a predetermined factor to incentivize investment. The receiving person (trustee) is free to keep the whole of the increased amount or can opt to send a fraction q of the received sum back to the trustor, thereby reciprocating. Trust then is quantitatively measured as the amount of money invested by the trustor in the trustee.
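To make these mechanics concrete, the following minimal Python sketch computes the payoffs of a single round; the multiplication factor k=3 is an illustrative assumption based on common implementations of the game, not a value stated here.

    def investment_round(r, p, q, k=3):
        """One round of the investment game (Berg et al., 1995).

        r: the trustor's initial endowment
        p: fraction of r that the trustor sends to the trustee (the trust measure)
        q: fraction of the multiplied amount that the trustee sends back
        k: multiplication factor applied to the invested amount (assumed here)
        """
        invested = p * r              # amount sent, interpreted as trust
        received = k * invested       # amount the trustee obtains
        returned = q * received       # amount reciprocated to the trustor
        trustor_payoff = r - invested + returned
        trustee_payoff = received - returned
        return trustor_payoff, trustee_payoff

    # Example: endowment of 10, half invested, 40% of the tripled amount returned
    print(investment_round(r=10, p=0.5, q=0.4))  # (11.0, 9.0)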

As the investment game has been established to measure trust between humans, some researchers have also used it to empirically measure trust between humans and robots, with varying degrees of success. While most studies kept the original setup, some extended the environment toward a virtual reality setup (Hale et al., 2018), settings with multiple robots (George et al., 2018; Zanatto et al., 2020), or switched the roles so that the human becomes the trustee dependent on the robot's willingness to invest (Schniter et al., 2020). Other variants such as the Give-Some Game slightly change the rules toward an economic analogue of the prisoner's dilemma (DeSteno et al., 2010; DeSteno et al., 2012). In the original investment game, interaction between the trustor and trustee is intentionally prohibited. Designed as a double-blind procedure, neither the participant nor the experimenter knows which trustor is matched to which trustee. A different approach by Glaeser et al. (2000) specifically encourages participants to get to know each other before the experiment, instead of the double-blind procedure originally proposed, thereby opening up possibilities to study the influence of social interaction on trust.

As previously mentioned, in every social interaction involving trust, predictability is essential. This predictability is where non-verbal communication (NVC) plays a major role (DeSteno et al., 2012): various studies show supportive evidence that implicit robotic non-verbal communication improves the efficiency and transparency of interaction (Breazeal et al., 2005) and report increased measures of trustworthiness when displaying non-verbal cues. Haring et al. (2013) measured the impact of proximity (physical distance) and the character of the subject (trustor) on trust. DeSteno et al. (2012) demonstrate that the accuracy of judging the trustworthiness of robotic partners is heightened when the trustee displays non-verbal cues while holding voice constant. Robotic arm gestures have been shown to reinforce anthropomorphism, liveliness, and sympathy (Salem et al., 2011; Salem et al., 2013) - regardless of gesture congruency (Saunderson and Nejat, 2019). In fact, a lack of social cues in a robot may cause participants to engage in unwanted testing behavior, trying to outwit the machine (Mota et al., 2016).

Considerable research has studied non-verbal communication via the investment game in human-agent interaction (Duffy, 2008; Haring et al., 2013; Mota et al., 2016; Hale et al., 2018; Zanatto, 2019; Zanatto et al., 2020). However, only a few of these studies have used robots that can be considered anthropomorphic and humanoid, which leaves doubt as to whether the trust measured is comparable to human-human trust. How much people invest in the investment game may in fact reflect a mixture of generalized trust (a stable individual characteristic) and specific trust toward the trustee (Hale et al., 2018), suggesting a different scenario setup to measure specific trust separately. It also remains questionable to what extent humans perceive money as a valuable currency for robotic agents.

To the best of our knowledge, there has not yet been any research definitively confirming whether the investment game is indeed suitable for measuring human-robot trust. While it is a valid, established trust measuring experiment, the original version lacks certain features to make it suitable for a human-robot interaction scenario: a plausible currency for both humans as well as robotic agents and a human-like interaction without the possibility to make choices based on domain knowledge. The current work addresses this gap and aims to create a scenario that provides these features under which trust in robots can be built and destroyed, in order to clearly measure the correlation between the trust experienced by a human, and the trust that is displayed in the trust game.

2.2 Study Design in the Context of Game Design

To keep participants engaged and immersed in a study that is built around a game or scenario with gamification elements, it is important to consider generally established guidelines for the design of game mechanics and the overall gameplay. In game design, the Mechanics-Dynamics-Aesthetics (MDA) framework (Hunicke et al., 2004) is often used to break down a player's gameplay experience into three components: the formal rules of a game (mechanics), how they react to player input (dynamics), and the player's emotional experience of the game (aesthetics). From a design perspective, a game's mechanics determine its dynamics, which generate the aesthetics experienced by the player.

Consequently, careful design of game mechanics is critical in eliciting specific responses from the player. According to Fabricatore (2007), minimizing the learning time required to master core game mechanics is an essential guideline for successful design. This is particularly important in user studies where the amount of time spent in the game is limited. Additional important guidelines are limiting the number of core mechanics, making them simple to learn, and keeping them relevant throughout most of the game.

For the purpose of collecting data from a scientific study, it is desirable to limit the possibilities of experiencing different narratives and events between different players to be able to infer that different gameplay experiences are solely a result of different subjective experiences. This can be particularly important to control for confounding variables in small to medium-scale sample sizes (Saint-Mont, 2015). At the same time, the player’s choice has to feel meaningful such that their actions have consequences (Stang, 2019). Therefore, the ideal game design requires a balance between a player’s need to influence the game’s environment and a study designer’s need to limit the set of game states and player actions for the purpose of drawing conclusions.

One important approach for achieving this balance is to provide an illusion of choice (Fendt et al., 2012) within a set of predetermined outcomes that are nevertheless dependent on the user’s actions. The success of this approach is tied to the well-studied illusion of control, first described by Langer (1975) as people’s tendency to overestimate their ability to control outside events.

Another important factor for player engagement is the reward design (Jakobsson et al., 2011). According to Wang and Sun (2011), well-designed reward systems offer positive experiences: a balance between challenge and skill, clear goals, and immediate feedback. Clear goals and immediate feedback are especially important for comparability to the original investment game in this case, as these are shared characteristics. Reward is a primary driver of how the player progresses through the game and of how resources are shared in multi-agent games. It is often tied to a currency or item, whose perceived value derives from its impact on the reward or the advantage it provides for progressing in the game.

These aspects, i.e., the chosen reward system, the set of available actions, perceived control over choices, and easy-to-follow rules, can contribute to the overall immersion that a player feels. Immersion plays a key role in the design of our experiment as it fosters a more natural human-robot interaction. Murray (1997) defines immersion as a metaphorical term derived from the physical experience of being submerged in water: "the sensation of being surrounded by a completely other reality […] that takes over all of our attention, our whole perceptual apparatus." Such a cognitive state of involvement can span multiple forms of media such as digital games, films, books, or pen-and-paper role-playing games (Cairns et al., 2014). Massively multiplayer online role-playing games (MMORPGs) in fantasy settings are known to immerse the player, as players can engage in real-time communication, role-play, and character customization (Peterson, 2010).

Slater and Wilbur (1997) and Cummings and Bailenson (2016), however, distinguish presence - the subjective psychological experience of “being there” - from immersion as an objective characteristic of a technology: Slater and Wilbur (1997) propose to assess immersion as a system’s ability to create a vivid illusion of reality to the senses of a human participant. Presence then is the state of submerged consciousness that may be induced by immersion. By looking at immersion as a property of the (virtual) environment one can measure its influencing factors. Cummings and Bailenson (2016) summarize that immersion can be achieved by:

1. high-fidelity simulations through multiple sensory modalities

2. mapping a participant’s physical actions to their virtual counterparts

3. removing the participant from the external world through self-contained plots and narratives.

Such properties then let participants become psychologically engaged in the virtual task at hand rather than having to deal with the input mechanisms themselves (Cummings and Bailenson, 2016).

In our experiment, we provide a high-fidelity simulation through visual and auditory sensory modalities by the use of curved screen projections, dry ice fog upon entrance, and surround sound audio. We map the participant's physical actions to their virtual counterparts by providing a tangible currency consisting of cubes that are physically moved to represent energy distribution. Lastly, the participant is removed from the external world through a self-contained plot and narrative drawn from science fiction.

Science fiction is used to further enhance immersion as it is known to have a positive impact on engagement (Mubin et al., 2016). The more immersive the system, the more likely individuals feel present in the environment, thereby letting the virtual setting dominate over physical reality in determining their responses (Cummings and Bailenson, 2016). An example would be a jump scare reaction during a horror movie, or when being ambushed while playing a first-person shooter.

Put differently: the greater the degree of immersion, the greater the chance that participants will behave as they do in similar circumstances of everyday reality (Slater and Wilbur, 1997). This concept of presence as realism however has two aspects that need to be distinguished: social and perceptual realism. According to Lombard and Ditton (1997), social realism is the extent to which a media portrayal is plausible in that it reflects events that could occur. As an example, characters and events in animated series may reflect high social realism but - because they are not “photorealistic” - low perceptual realism. A scene from a science fiction program, on the other hand, may be low in social realism but high in perceptual realism, i.e. although the events portrayed are unlikely, objects and characters in the program look and sound as one would expect if they did in fact exist (Lombard et al., 2009).

We strive to minimize social realism to prevent participants from drawing on past experience, while retaining high perceptual realism to psychologically engage them in the virtual task.

3 HRI Scenario Design

3.1 An Immersive Extension of the Investment Game

We base our study design around a variant of the investment game, in which two robotic counsellors compete for investments from the human participant. However, in contrast to previous competitive variants (Hale et al., 2018), our design allows the human participant to allocate their investment proportionally between the two robots and themselves.

Motivated by the goal of avoiding prior experience in the game as an influence on player investments, we deliberately exaggerate the design of our game scenario: in our space mission, the participants assume the role of the commander of a spaceship tasked with delivering critical cargo to a distant planet. For this mission, they are accompanied by two robotic officers. Throughout their journey through outer space, the crew encounters challenges such as asteroid fields and ship malfunctions that require immediate intervention and collaborative solutions. The robotic officers counsel the participant by individually proposing solutions, and the participant proportionally decides on their preferred action by moving energy cubes into the respective compartments. However, the two robots' advice is designed to be incomprehensible technical jargon, leaving the participant no choice but to base their decision on their subjective impression of each officer's persona.

We hypothesize that by allocating energy resources, the participant effectively invests in the robotic officers' trustworthiness. This scenario setup entails two important requirements: i) making the participant reliant on the robots' expertise to foster cooperation, and ii) ensuring that the invested currency and investment outcome have an inherent value to both the participant and the robots. We achieve the former by designing a challenging scenario setting of a space journey: all participants will have negligible expertise regarding space travel. Thus, the robotic officers, introduced as specifically designed to advise on interstellar travel, will be perceived as more knowledgeable in the subject matter. In combination, this should prevent participants from making decisions based on their previous experiences, leaving them primarily reliant on the robots' advice.

To achieve the second requirement, we employ a currency that is considered valuable for both the human trustor and the robotic trustee, to create intrinsic motivation to distribute the currency. As we anticipate that participants do not perceive money as a valuable currency for robotic agents, we adopt a fictional currency of energy cells, represented by cubes. From the perspective of game design, the value of items is often determined by their aesthetics and functionality (Ho, 2014), i.e. their usefulness to progress within the game. Therefore, we use cubes that visually fit into the given science fiction setting and tie their value to the ability to invest in the robots’ choices. Consequently, these energy cells have a value to the player as they function as a resource that can provide the ship’s engine with the extra power to reach the destination planet faster. At the same time, the robotic officers require such energy to execute their solutions to ensure safety during the journey. To ascertain that players feel the impact of their choices and investments, the ship AI gives feedback at the end of each round, explaining the consequences of the taken actions for the crew.

A comparison between the original investment game and our immersive extension in terms of defining features can be seen in Table 1. The original investment game uses the same monetary currency for both the investment and the return, which forms the basis for an exchange of benefits and characterizes the reciprocity of the game's interaction (Sandoval et al., 2016). In our case, rather than a return of the invested currency, we provide a different benefit that is tied to the game progression: a reduction of the mission time, which brings participants closer to their goal. A successful distribution causes the presented emergency to be resolved by the robot that received most of the currency. As such, a participant's distribution of the energy cells is followed by feedback from the ship AI with regard to whether the robots invested in were successful or not in executing their proposed strategies. This forms the basis for reward within our scenario, as the return on investment is provided by the robots executing their problem-solving strategies. We aim to resolve the challenges that i) the participant could perceive a real-world currency as less "useful" for the robots than for themselves, and ii) the energy cubes may be perceived as not valuable enough for the participant to make a meaningful investment choice. Therefore, we tie the reward to the game progression brought about by the robots.


TABLE 1. Comparison of features from the original investment game and our immersive version.

Each resolved emergency reduces the delivery time of the cargo, progresses the game, and rewards the player. An unsuccessful distribution of the energy cells results in the loss of the invested currency, comparable to the original investment game. This loss increases the delivery time, since only energy that is not invested in either robot's solution speeds up the ship. By giving the energy cells a functional value for both the participant and the robots, and by providing a return on the investment, we create a currency that is perceived as valuable to both trustor and trustee.

Lastly, participants can proportionally choose how much they invest, i.e., they can freely distribute their energy cells between both robots and themselves. However, as 7 cells are provided in total, they are unable to distribute all energy cells evenly among the 3 options (officer A, officer B, ship engine), effectively forcing them to voice a preference.

These three aspects - i) the inability of the participants to make choices based on prior existing domain knowledge, ii) a shared currency between human trustor and robot trustee, and iii) the weighted choice between two agents - allow us to go beyond an anonymous exchange of money while maintaining the structure of the investment game, and meet the requirements for a suitable human-robot interaction scenario.

3.2 Experimental Setup

One of the main goals of our scenario design is to achieve an immersive and enjoyable experience for the participants. Besides concealing our research question, our scenario needs to establish enough involvement to allow trust-building toward the robots. For this purpose, we developed a fully autonomous system that only requires intervention in case of larger technical failures or misunderstandings, which most likely would then result in a cancellation of the experiment run. A schematic of our experimental setup can be seen in Figure 2. The participant (P) is seated in the cockpit of the ship (depicted by the interactive video feed screen [S]), containing the two robots (R1 and R2) and a table with three compartments containing the total of seven energy cubes (E). Separated by a curtain, the experimenter (X) and operator (O) monitor the experiment, to intervene only in case of technical difficulties. Otherwise, the system acts through a state machine, implemented in Python using the SMACH1 state management library. The state machine orchestrates and synchronizes several ROS (Quigley et al., 2009) services built on top of the following components:


FIGURE 2. Schematic of the experiment setup from above: The participant (P) sits at the cockpit table, with the two robots (R1, R2) opposite on each side. Behind the robots, the curved screen (S) displays the virtual interior. In the middle of the table, three heptagonal compartments depict where energy cubes (E) can be placed. The top view camera (C1) tracks the energy cube allocation, while two additional cameras (C2) allow to monitor the participant during the experiment. A microphone (M) and loudspeakers (L) allow for voice interactivity and auditory immersion. Behind a privacy curtain, the experimenter (X) keeps additional notes, while an operator (O) monitors the experiment to intervene in case of technical difficulties.

3.2.1 The Environment

For our environment setup we utilized the multi-sensory Virtual Reality lab of the Knowledge Technology group at the Universität Hamburg (Bauer et al., 2012). The participant is seated at a small table in the center of a half-spherical screen canvas with a diameter of 2.6 m and a height of 2.2 m. On the table, in front of the player, there are three heptagonal compartment areas containing in total seven plastic cubes, as can be seen in Figures 1, 3. A condenser microphone is located in the middle of the table for speech recognition. Next to the microphone lies a laminated sheet with possible questions that can be asked of the robots during the game. Four Optoma GT 750 4k projectors aimed at the canvas in front of the participant display still images as well as video feeds, simulating the inside view of a spaceship cockpit.


FIGURE 3. Top view of the commanding table. One energy cell is assigned to each robot and the participant keeps five.

The canvas shows the journey through the galaxy by displaying transition videos between scenes and provides visual feedback such as warnings in case of emergency situations. We use multiple surround loudspeakers installed behind the canvas for the ship AI's voice and special sound effects such as ambient music, engine noise, and alarm sounds. Turquoise ambient lighting and dry ice fog create an atmospheric environment throughout the game, while red lights are used occasionally to indicate the emergency encounters.

3.2.2 The Robots

The two robot officers, non-descriptively named 732-A and 732-B, are located at 45° and 135°, respectively, from the circle origin, at maximum angular distance from each other and the participant. We chose their names to be as neutral and unrelated to any prior experience of the participants as possible.

We utilize NICO (Neuro-Inspired COmpanion) (Kerzel et al., 2020, 2017), an open-source social robotics platform for humanoid robots (see Figure 4) designed by the Knowledge Technology group at the Universität Hamburg. NICO is a child-sized humanoid robot with a range of programmable human-like sensory and motor capabilities, accessible and customisable through the Robot Operating System (ROS) (Quigley et al., 2009), and designed in particular for social interaction. It has 10 degrees of freedom in the torso (head and arms) and 22 degrees of freedom in the under-actuated hands (8 motors with additional joints for the fingers), which allows for fine-grained gestures and body language.


FIGURE 4. The Neuro-Inspired COmpanion (NICO).

NICO is also capable of displaying a range of programmable facial expressions through LED matrices in its eyebrows and mouth. The utterance of spoken messages is enabled via an Embodied Dialogue System, integrated with loudspeakers in the robotic torsos to produce enhanced speech.

3.2.3 Non-Verbal Communication

As elaborated in Section 2, non-verbal communication (NVC) plays a key role in human-human trust (DeSteno et al., 2012). For our investigation of the effect of non-verbal communication on human-robot trust we equip both robotic officers with sets of non-verbal cues, one set more elaborate than the other.

These more elaborate cues include: adapting the gaze direction via head movements toward the participant and the other robot, four different facial expressions (happiness, sadness, surprise, anger), as well as gestures toward the participant such as pointing, saluting, or beat gestures. Such facial expressions and body movements have been shown to improve the transparency of the interaction and to reinforce the spoken word (Breazeal et al., 2008).

The other robot adheres to a minimal set of neutral movements to keep the illusion of life (Mota et al., 2016), such as looking down at the allocated energy cells and turning their head toward the speaker. We alternate the condition between participants in order to control for potential biases.
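As an illustration, the two cue sets and the counterbalancing described above could be organized as follows; this is a simplified sketch with hypothetical cue names, not the actual control code of the robots.

    # Hypothetical cue names derived from the description above; the commands
    # actually sent to the NICO robots are not shown here.
    NVC_CUES = {
        "gaze": ["look_at_participant", "look_at_other_robot"],
        "facial_expressions": ["happiness", "sadness", "surprise", "anger"],
        "gestures": ["point_at_participant", "salute", "beat_gesture"],
    }
    MNVC_CUES = {  # minimal set that keeps the illusion of life
        "gaze": ["look_at_allocated_cells", "turn_head_to_speaker"],
        "facial_expressions": [],
        "gestures": [],
    }

    def assign_condition(participant_id, robots=("732-A", "732-B")):
        """Alternate which robot receives the elaborate cue set between
        consecutive participants, counterbalancing the NVC/MNVC conditions."""
        nvc_robot = robots[participant_id % 2]
        mnvc_robot = robots[(participant_id + 1) % 2]
        return {nvc_robot: NVC_CUES, mnvc_robot: MNVC_CUES}

    print(assign_condition(0))  # 732-A shows the elaborate cues for participant 0
    print(assign_condition(1))  # 732-B shows them for participant 1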

3.2.4 The Vision System

To support the full autonomy of the system, we developed an automatic object detection system. It handles the energy cell counting during allocation and confirms that the robots' compartments are empty before proceeding to the next scene.

On the table, in front of the participant, are three heptagonal compartments holding the energy cells. All compartments have seven quadratic markers on which the energy cells must be placed for successful allocation. An RGB camera is mounted above the commanding table near the ceiling to count and track energy cube allocation and de-allocation from the robot compartments. A picture of the commanding table taken by this camera can be seen in Figure 3.

Upon a request from the dialogue manager state machine, the object detection algorithm processes an image taken from the RGB camera mounted above the commanding table using the OpenCV library (Bradski, 2000) to detect the number of energy cells allocated to each heptagonal compartment. The allocation distribution is sent back to the dialogue manager via a ROS service response. Two additional cameras are used by the experimenter and operator to observe the participant and monitor the experiment flow. Using a camera mounted behind the participant, the operator checks the movements of the robots for technical faults; with the other camera, placed on top of the canvas, the experimenter observes the participant's expressions and movements for possible difficulties.
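A simplified sketch of how such marker-based counting can be implemented with OpenCV is shown below; the region coordinates and the HSV color range are invented placeholders, not the calibration values used in the study.

    import cv2
    import numpy as np

    def count_cubes(image_bgr, marker_rois, lower_hsv, upper_hsv, min_fill=0.4):
        """Count energy cubes placed on the markers of one compartment.

        image_bgr:    top-view camera frame in BGR format
        marker_rois:  list of (x, y, w, h) regions, one per marker position
        lower_hsv, upper_hsv: HSV color range of the cubes (placeholder values)
        min_fill:     fraction of in-range pixels needed to count a marker as occupied
        """
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
        occupied = 0
        for (x, y, w, h) in marker_rois:
            roi = mask[y:y + h, x:x + w]
            if cv2.countNonZero(roi) / float(w * h) >= min_fill:
                occupied += 1
        return occupied

    # Usage sketch with made-up marker coordinates and color range:
    frame = cv2.imread("top_view.png")
    rois_robot_a = [(100 + i * 50, 80, 40, 40) for i in range(7)]
    cells_robot_a = count_cubes(frame, rois_robot_a,
                                lower_hsv=np.array([90, 80, 80]),
                                upper_hsv=np.array([130, 255, 255]))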

3.2.5 The Speech Systems

Interactive dialogue via spoken words is a cornerstone to enable natural human-like human-robot interaction (Spiliotopoulos et al., 2001; Kulyukin, 2006). We, therefore, built the spaceship AI named Wendigo as a closed dialogue manager utilizing the SMACH state management library, the Automatic Speech Recognition system DOCKS2 developed by Twiefel et al. (2014), and the Amazon Polly2 Speech Synthesis service.

The participants can directly interact with Wendigo and the robotic officers via a microphone located in the middle of the commanding table. The dialogue is restricted: participants can only pick questions from a predefined list and confirm that they are ready to continue with the experiment. Both NICO robot officers exhibit the same voice persona, emitted by the loudspeakers embedded in their torsos, allowing for natural sound-source localisation.
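Since the participant's utterances are limited to a closed set, the dialogue manager only needs to map a recognized transcript onto the closest allowed phrase. The sketch below illustrates one way to do this with fuzzy string matching; the phrase list is invented for illustration and is not the study's actual question list.

    import difflib

    # Invented stand-ins for the predefined questions and the confirmation phrase.
    ALLOWED_UTTERANCES = [
        "wendigo i am done",
        "officers what is your solution",
        "officers how certain are you about your solution",
    ]

    def match_utterance(transcript, cutoff=0.6):
        """Map an ASR transcript onto the closest allowed utterance.

        Returns the matched phrase, or None if nothing is close enough, in which
        case the dialogue manager can ask the participant to repeat themselves.
        """
        normalized = transcript.lower().strip().rstrip("!.?,")
        matches = difflib.get_close_matches(normalized, ALLOWED_UTTERANCES,
                                            n=1, cutoff=cutoff)
        return matches[0] if matches else None

    print(match_utterance("Wendigo, I am done!"))  # 'wendigo i am done'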

3.3 Protocol and Game Scenes

As formulated in Section 3.2, we strive to automate the experiment procedure as much as possible to limit variability and experimenter bias. In the remaining human interventions, the experimenter, therefore, follows a scripted protocol (all detailed lines can be inspected in the full experiment protocol publicly available at3): The participants are welcomed and brought to the anteroom, where they are asked to fill out the consent and data privacy forms as well as a pre-experiment questionnaire.

This questionnaire asks standard demographic questions such as age, sex, former experience with robots and computers, and general attitude toward robots. We include the 30-item Big Five Inventory-2 Short Form questionnaire (Soto and John, 2017) to assess the Big Five personality domains, which measure individual differences in people's characteristic patterns of thinking, feeling, and behaving (Goldberg and Kilkowski, 1985). Participants rate each item statement on a 5-point Likert scale ranging from "disagree strongly" to "agree strongly". We choose the shortened forms to minimize assessment time and respondent fatigue while retaining much of the full Big Five measure's reliability and validity. Moreover, we measure general risk-taking tendencies via the Risk Propensity Scale (RPS) by Meertens and Lion (2008), as well as self-reported trust propensity using the 4-item form by Schoorman et al. (1996). These scales use 5-point Likert-type items anchored by "agree" and "disagree".
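The Likert-based instruments above are scored by averaging the item responses per scale, with reverse-keyed items flipped beforehand. The sketch below illustrates such scoring; the item-to-domain assignment and the reverse-keyed items are invented placeholders rather than the published BFI-2-S scoring key.

    def score_scale(responses, items, reverse_items=(), scale_max=5):
        """Average 5-point Likert responses for one scale.

        responses:     dict mapping item id -> numeric response (1 to scale_max)
        items:         item ids belonging to the scale
        reverse_items: subset of items that are reverse-keyed
        """
        values = []
        for item in items:
            value = responses[item]
            if item in reverse_items:
                value = (scale_max + 1) - value  # flip a 1..scale_max response
            values.append(value)
        return sum(values) / float(len(values))

    # Hypothetical responses to six items of one Big Five domain:
    responses = {1: 4, 2: 2, 3: 5, 4: 1, 5: 3, 6: 4}
    print(score_scale(responses, items=[1, 2, 3, 4, 5, 6], reverse_items=[2, 4]))  # ~4.17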

After the participant completes the pre-experiment questionnaire, the experimenter guides them toward the experiment room with the half-spherical canvas, depicted as the spaceship cockpit. Upon entering the cockpit, the experiment context is set and immersion is fostered by the screen depicting the outside view of a space cargo hangar, dimmed lights, dry ice, as well as the experimenter from now on addressing the participant as "commander". Following the scripted introductory narrative, the experimenter instructs the participants on the space mission task and their goal as commander to deliver important cargo safely and quickly, and makes them aware of the two robotic officers who accompany them on their journey. The participants are encouraged to familiarize themselves with the cockpit environment, the energy cells, the allocation compartments, and the list of possible questions that can be asked of the robots during the game. The experimenter also elaborates on the meaning and impact of the energy cubes and demonstrates by way of example how they can be distributed. The experimenter asks for any remaining questions, then steps back out of the experiment room behind a curtain before the trial scene 0 begins.

In this scene, the voice-controlled artificial intelligence system Wendigo and the robotic officers introduce themselves, then conduct an introductory round of the cube allocation, which is concealed as a system check. This trial round serves to acquaint the participants with the experiment procedure and reveal possible misunderstandings. It familiarizes them with the ship’s visual and auditory feedback mechanics and accustoms them to the delay between voice input and feedback response. The trial round furthermore allows the operator behind the curtain to possibly re-calibrate the microphone sensitivity without breaking immersion.

After trial scene 0, the experimenter briefly enters the cockpit again to answer any remaining questions before the start of the actual experiment. At this point, we consider the participants to be informed about the game mechanics, prepared for the upcoming tasks, and motivated to achieve the game's objective, following the mental model they have formed of the game.

Figure 5 depicts the overall course of the experiment narrative: every participant passes through the same scripted events, followed by the same type of feedback (neutral, negative, or positive). While the specific feedback lines are adjusted to the individual allocation choices, the resulting feedback characteristic is always predetermined for each round to ensure comparability between different participants’ interactions. In each scene, the participant goes through the following steps (as visualized in Figure 6):

1. Wendigo draws attention to the challenge at hand (Scene 1: malfunctioning navigation system, Scene 2: interfering asteroids, Scene 3: entering the atmosphere, Scene 4: leaking cooling system).

2. Both robotic officers advertise their solution for which they require energy cells.

3. The participant can ask a question from the list of predefined options, to which the robotic officers reply one after another and in a randomized order.

4. The participant is asked to distribute the energy cells as they see fit and to say "Wendigo, I am done!" when they are finished.

5. Wendigo provides feedback on the decision outcome (Scene 1: neutral, Scene 2: negative, Scenes 3 and 4: positive).

6. After the participant places all energy cells back into their own compartment, the state machine autonomously transitions to the next scene.


FIGURE 5. General course of the experiment. Each scene is followed by a feedback statement with predetermined characteristic (neutral, negative or positive).


FIGURE 6. Each of the four scenes follows the same structure: the participant is presented with an emergency for which the robots suggest different solutions. The player can engage in a conversation with both robots to determine their investment. Depending on the round being played, the player's investments lead to robot actions with either positive or negative consequences, resolving the emergency and transitioning to the next scene.

Note that after each cube allocation, we employ rich visual and auditory feedback (see step 5) in terms of ambient light and spoken response lines disguised as status reports, such as "Unsuccessful. Ship damaged. The breach has been closed but the life support system is damaged." as an example of negative feedback. By design, the feedback for the investment decision in the second scene is portrayed as unsuccessful regardless of how the energy cubes were distributed, while the other scenes yield neutral (Scene 1) or positive (Scenes 3 and 4) feedback. This control of the narrative, regardless of the participant's concrete decision, enables us to reproducibly observe the effects of building and destroying trust.
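For illustration, the scene flow and the predetermined feedback schedule could be orchestrated with SMACH states roughly as follows; this is a simplified sketch with placeholder helper functions, not the study's actual state machine.

    import smach

    # Predetermined feedback characteristic per scene (see Figure 5).
    FEEDBACK_SCHEDULE = {1: "neutral", 2: "negative", 3: "positive", 4: "positive"}

    # Placeholder helpers standing in for the actual ROS services
    # (speech synthesis, dialogue management, vision-based cube counting).
    def announce_emergency(scene_id):
        print("Wendigo announces the emergency of scene", scene_id)

    def robots_propose_solutions(scene_id):
        print("Both officers advertise their solutions")

    def wait_for_cube_allocation():
        return {"732-A": 3, "732-B": 2, "engine": 2}  # dummy allocation

    def give_feedback(characteristic, allocation):
        print("Feedback (%s) for allocation %s" % (characteristic, allocation))

    class Scene(smach.State):
        """One emergency scene: announcement, advice, allocation, feedback."""

        def __init__(self, scene_id):
            smach.State.__init__(self, outcomes=["next_scene", "mission_complete"])
            self.scene_id = scene_id

        def execute(self, userdata):
            announce_emergency(self.scene_id)
            robots_propose_solutions(self.scene_id)
            allocation = wait_for_cube_allocation()
            give_feedback(FEEDBACK_SCHEDULE[self.scene_id], allocation)
            return "mission_complete" if self.scene_id == 4 else "next_scene"

    sm = smach.StateMachine(outcomes=["mission_complete"])
    with sm:
        for i in range(1, 5):
            next_state = "SCENE_%d" % (i + 1) if i < 4 else "mission_complete"
            smach.StateMachine.add("SCENE_%d" % i, Scene(i),
                                   transitions={"next_scene": next_state,
                                                "mission_complete": "mission_complete"})
    sm.execute()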

During the experiment, the experimenter behind the curtain, observing via the extra camera view (provided by the cameras indicated as C2 in Figure 2), takes free-form observation notes about the progression of the experiment, as well as any noteworthy occurrence that could invalidate the participant's data. After the final scene, the experimenter steps back in, congratulates the participant on a successful mission, and escorts them back into the anteroom. The participant is then provided with the post-study questionnaire, which asks them to evaluate their perception of the experiment and their impression of each robot.

To rate the participants' impression of the robots, we employ the Godspeed questionnaire (Bartneck et al., 2008), a standardized measurement tool using semantic differential scales on five key concepts in human-robot interaction: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. We omit questions related to perceived safety, since there is no physical interaction between the participants and the robots and distance is kept throughout the experiment. The post-study questionnaire furthermore asks the participant to rate the trustworthiness (Bernotat et al., 2019) and performance of each robot. Inspired by Bernotat et al. (2017), we adapt seven items on the measurement of cognitive trust (grouped into "content" and "speech" clusters), and six items on affective trust (grouped into "cooperation" and "sociability" clusters) (Johnson and Grayson, 2005). Lastly, the participants are asked to choose which robot they preferred as an assistant, and to provide additional feedback about shortcomings, immersion, and their overall experience during the experiment.

4 Results

The study was conducted over two consecutive weeks at the end of February 2020 on the campus of the computer science department of Universität Hamburg. It was advertised via flyers and word of mouth to people with at least some experience and familiarity with computers and robots, who were comfortable with participating in a science fiction game and could understand and speak English fairly well. In the following sections, we start by discussing general population statistics and the overall perception of the robots. We then proceed to evaluate whether our scenario is a valid augmentation of the investment game. For this, we introduce two metrics derived from the energy cube allocation to compare trust measurements between two conditions. Lastly, we report the results of the trust measurements and the effect of non-verbal communication (NVC) on trust.

4.1 Population Statistics

Our study was conducted with 53 participants, of whom 45 finished the experiment successfully. For 8 participants the experiment was started but had to be aborted because of technical issues such as robot actuator overloading, language barriers, or a misunderstanding of the game rules. All following statistics, therefore, apply to the 45 participants who completed the experiment without complications. Our participants' mean age (M=26.8, SD=7.0) lies in the range of young adults, with 95% between the ages of 19 and 34. 60% of them identified as male, 38% as female, and 2% made no statement. All of the participants were familiar with computers and 51% of them had programming experience. While 29% of the participants had previously worked with robots as developers, 42% had never interacted with a robot prior to the experiment.

We compared our participants to the general German population of a similar age group with results obtained from other studies (Lang et al., 2011). The comparison was conducted with a Welch's t-test for independent samples on descriptive statistics with a significance level of 0.01. Based on the results of the personality questionnaire (Section 3), the participants had average scores for extroversion (M=4.68, SD=1.30), agreeableness (M=5.22, SD=1.03), and neuroticism (M=3.62, SD=1.78). However, they scored below average in conscientiousness (M=4.79, SD=0.99) and above average in openness (M=5.50, SD=1.04) compared to the general German population of a similar age group (Lang et al., 2011). We refrained from assessing the detailed facet-level trait properties of the Big Five domains, following the authors' recommendation for sample sizes below 400 (Soto and John, 2017).
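A Welch's t-test on descriptive statistics can be computed directly from the reported means, standard deviations, and sample sizes, for instance with SciPy; the reference-population values below are placeholders, not the figures from Lang et al. (2011).

    from scipy import stats

    # Welch's t-test for independent samples computed from summary statistics.
    # Sample values are taken from the openness scores reported above; the
    # population mean/SD/N are invented placeholders.
    result = stats.ttest_ind_from_stats(
        mean1=5.50, std1=1.04, nobs1=45,     # openness in our sample
        mean2=5.10, std2=1.10, nobs2=1000,   # hypothetical population reference
        equal_var=False)                     # unequal variances -> Welch's test
    print(result.statistic, result.pvalue)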

The trust and risk propensity questionnaires showed that our participants were less prone to take risks (M=4.05, SD=1.32) than the general population (Meertens and Lion, 2008), yet more prone to trust (Mayer and Davis, 1999) (M=2.93, SD=0.61). We used the cognitive trust items described in Section 3.3 as a rating of the robot's performance to compare our population to other findings compiled by Esterwood and Robert (2020): we found a very strong correlation between the cognitive and affective trust items (r=0.82, p=7.2e-23), confirming that cognitive and affective trust go hand in hand.

We can confirm the finding by Sehili et al. (2014) of a positive relationship between neuroticism and an anthropomorphic perception of the robot (r=0.22, p=0.035). In contrast to Looije et al. (2010), we cannot confirm any relationship between the self-reported conscientiousness of a participant and the perceived sociability of a robot. We moreover cannot confirm a significant relationship between perceived anthropomorphism and robotic performance as Powers and Kiesler (2006) did; however, similar to Broadbent et al. (2013), we find a moderate relationship between perceived anthropomorphism and affective trust (r=0.2, p=0.057).

4.2 Metrics and Grouping Criteria

We now introduce two metrics specific to our scenario that allow us to quantify differences in the trust placed in the two robots.

4.2.1 Allocation Metric

Measures the investment displayed via energy cells allocated to each single robot. The allocation metric is calculated as

A(R) = (cubes(R2) - cubes(R1)) / (cubes(R2) + cubes(R1))    (1)

where cubes(R) stands for the energy cells allocated to one of the robots R ∈ {R1, R2}. A(R) < 0 indicates a preference for R1 and A(R) > 0 a preference for R2, while |A(R)| indicates the magnitude of the difference.

4.2.2 Relative Trust Metric

Measures the trust expressed in each robot according to the post-experiment questionnaire. Relative trust is calculated as

T(R) = trust(R2) - trust(R1)    (2)

where trust(R) is the value obtained from the trustworthiness Likert items in the post-interaction questionnaire, normalized to lie within [0,1]. As before, T(R) > 0 indicates a preference for R2 and T(R) < 0 a preference for R1, while |T(R)| indicates the magnitude of the difference.
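The two metrics translate directly into code; a minimal sketch, assuming per-robot cube counts and normalized questionnaire trust values as inputs (the guard for the case of zero invested cubes is our own addition):

    def allocation_metric(cubes_r1, cubes_r2):
        """Eq. 1: negative values indicate a preference for R1, positive for R2."""
        total = cubes_r1 + cubes_r2
        if total == 0:
            return 0.0  # no cubes invested in either robot (added guard)
        return (cubes_r2 - cubes_r1) / float(total)

    def relative_trust(trust_r1, trust_r2):
        """Eq. 2: trust values are normalized questionnaire scores in [0, 1]."""
        return trust_r2 - trust_r1

    print(allocation_metric(1, 4))    # 0.6   -> preference for R2
    print(relative_trust(0.8, 0.55))  # -0.25 -> higher self-reported trust in R1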

Inspecting both the Allocation Metric and the Relative Trust metric over consecutive scenes, we now segment the participants into two groups:

4.2.3 The Alternating-Minimum Investment Group (N = 16)

During the exploratory data analysis, two outstanding gameplay patterns were observed. These two patterns are defined by specific behavior throughout the game; participants who showed either one or both of these behaviors were grouped together:

Minimum Investment Behavior: This behavior reflects a lack of engagement in the game. Participants who allocated fewer than 10 energy cells in total throughout the four scenes - less than one-third of the available cubes - were considered disengaged; three participants met this criterion.

Alternating Investment Behavior: The energy cell allocation results indicated that some participants changed their minds about which robot they trusted more throughout the game. A group of 14 participants changed their mind at every scene, alternating between allocating more energy cells to one robot, to the other, or an equal amount to both. These alternating participants did not particularly trust or prefer one robot over the other to invest in throughout the game.

Figure 7 highlights these two behaviors in the context of the number of preference changes and amount of cubes invested throughout the game. The group of participants showing either of those behaviors is further referred to as the alternating-minimum investment group and consists of 16 participants. Further analysis of the alternating-minimum investment group showed that there is no link between these patterns and one specific robot, nor the NVC variable. As such, this behavior did not depend on the content of speech or appearance of either of the robots.
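A simplified reconstruction of these grouping criteria in code, assuming one (cubes for R1, cubes for R2) pair per scene as input; the thresholds follow the description above, but this is not the authors' analysis code:

    def classify_participant(allocations, min_total=10):
        """Classify gameplay behavior from per-scene cube allocations.

        allocations: list of (cubes_r1, cubes_r2) tuples, one per scene.
        Returns 'alternating-minimum' if the participant shows minimum-investment
        or alternating behavior, and 'main' otherwise.
        """
        total_invested = sum(r1 + r2 for r1, r2 in allocations)
        minimum_investment = total_invested < min_total

        # Per-scene preference: -1 (R1), 0 (equal), +1 (R2)
        prefs = [(r2 > r1) - (r2 < r1) for r1, r2 in allocations]
        alternating = all(prefs[i] != prefs[i + 1] for i in range(len(prefs) - 1))

        return "alternating-minimum" if (minimum_investment or alternating) else "main"

    print(classify_participant([(3, 1), (1, 3), (2, 2), (0, 4)]))  # 'alternating-minimum'
    print(classify_participant([(1, 3), (2, 4), (1, 3), (2, 5)]))  # 'main'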


FIGURE 7. Occurrence of alternating behavior (A) and minimum investment behavior (B), highlighted in red in the distribution of relevant gameplay metrics (amount of preference changes and total cubes allocated, shown in steps of 4 cubes).

4.2.4 The Main Group (N = 29)

This is the group of participants who did not show either of the two aforementioned behaviors: the majority of the participants. With a Mann-Whitney U test for independent samples, we found that these participants had no notable differences from the alternating-minimum investment group with regard to risk and trust propensity. They did, however, obtain a lower score in neuroticism (p=0.024) in the personality questionnaire than the alternating-minimum investment group.

4.3 Transferability of the Investment Game

The aim of our study is to verify that our scaled-up version of the investment game can be used to measure trust in HRI. The results were evaluated separately for the main group (N=29) and the alternating-minimum investment group (N=16). For this, the coherence between measured trust and self-assessed trust was evaluated by means of Spearman's correlation test on the previously introduced metrics: the allocation metric represents the measured trust and the relative trust metric represents the self-assessed trust.

A statistically significant correlation can be observed for the main group (correlation=0.43, p=0.02), but not for the alternating-minimum investment group (correlation=0.24, p=0.37). A comparison between both groups can be seen in Figure 8. In the standard human-human investment game, the amount of money invested by the trustor represents the trust in the trustee. As such, the observed correlation supports the hypothesis that our variation of the investment game between human and robot works much like the investment game between two humans.
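The correlation itself is a standard Spearman rank correlation between the two metrics per participant, as sketched below; the arrays contain invented example values, not the study data.

    from scipy import stats

    # Spearman rank correlation between the allocation metric (game-based trust)
    # and the relative trust metric (questionnaire-based trust).
    allocation_values = [0.60, -0.20, 0.33, 0.00, 0.71, -0.50, 0.20, 0.45]
    relative_trust_values = [0.30, -0.10, 0.20, 0.05, 0.50, -0.30, 0.10, 0.25]

    rho, p_value = stats.spearmanr(allocation_values, relative_trust_values)
    print("correlation=%.2f, p=%.3f" % (rho, p_value))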


FIGURE 8. Correlation of relative trust and allocation metric for the two participant groups: Main group (A) and Alternating-Minimum Investment group (B).

The fact that alternating-minimum investment behavior was also found in a simpler setting (Mota et al., 2016), and that there was no relationship between the alternating behavior of the participants and the robot characteristics, shows that the setting had no impact on the effectiveness of the trust game. This supports our hypothesis that our scaled-up version of the investment game can indeed be used as a measure of trust.

4.4 Impact of Non-Verbal Communication on the Perception of the Robot

After ensuring that it is indeed possible to measure trust in human-robot interaction with our scaled-up version of the investment game, we further look into the impact of NVC on trust in the robots as well as on other robot characteristics. As mentioned previously, NVC plays a significant role not only in human interaction but also in the efficiency and transparency of the interaction between humans and robots (Casper and Murphy, 2003). In our case, we find that these non-verbal cues indeed had an impact on the trust in the robot as well as on its perceived anthropomorphism and animacy.

We analyze the main group (N=29), which did not show alternating-minimum investment behavior and for which it has been established that the game does measure trust. For this main group, the non-verbal communication of the robot had an impact on the number of energy cells received. This impact was observed in the first scene, the only scene in which the participant had not yet experienced any disappointment related to either of the robots but had already gotten to know them. In this scene, the robot that showed non-verbal communication obtained a significantly higher amount of energy cells than the other. The one-sided Wilcoxon test for independent samples between the distributions of energy cells for the robot with NVC and the robot with minimal NVC (MNVC) confirmed this (p=0.0008).

Independent of the gameplay choices, for all participants (N=45) the robot showing NVC seemed more human-like and animated. As can be seen in Figure 9, the Godspeed values for anthropomorphism (p=0.008) and animacy (p=0.00001) are significantly distinct when comparing the NVC/MNVC conditions with a Mann-Whitney U test, whereas this is not the case for likeability (p=0.23) and intelligence (p=0.24). The observed values for anthropomorphism support our hypothesis that the NVC robot invokes more trust, which is consistent with the findings of similar studies. Wortham and Theodorou (2017) state that the perceived anthropomorphism of a robot increases the trust in the robot, especially for non-specialists, as the human needs to form a mental model of the robot in order to trust it. Furthermore, an increase in NVC leads to an increase in motion, which subsequently leads to more perceived animacy (Parisi and Schlesinger, 2002).
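Both comparisons are rank-based tests on independent samples and can be reproduced, for instance, with SciPy's Mann-Whitney U implementation (the rank-sum equivalent of the Wilcoxon test for independent samples mentioned above); all values below are invented examples, not the study data.

    from scipy import stats

    # One-sided comparison: energy cells allocated in Scene 1 to the NVC robot
    # vs the MNVC robot (example values).
    nvc_cells = [4, 3, 5, 4, 3, 4, 5, 2]
    mnvc_cells = [1, 2, 1, 2, 3, 1, 0, 2]
    u_stat, p_one_sided = stats.mannwhitneyu(nvc_cells, mnvc_cells,
                                             alternative="greater")

    # Two-sided comparison: Godspeed anthropomorphism ratings per condition.
    nvc_anthro = [3.4, 3.8, 4.0, 3.2, 3.6]
    mnvc_anthro = [2.8, 3.0, 3.1, 2.6, 3.3]
    u2_stat, p_two_sided = stats.mannwhitneyu(nvc_anthro, mnvc_anthro,
                                              alternative="two-sided")
    print(p_one_sided, p_two_sided)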


FIGURE 9. Effect of non-verbal communication (NVC) and minimal non-verbal communication (MNVC) on Godspeed items.

However, likeability does not seem to be affected by the use of NVC, potentially because the quantity and type of gestures used for non-verbal communication vary with culture (DeVito et al., 2000). Thus, the degree to which a robot moves does not necessarily influence its likeability, as this is a personal preference that varies across participants. Consistent with previous research (Deshmukh et al., 2018), there are no perceived differences in intelligence either.

Our results show a correlation between trust measured by the investment game and the self-reported trust from the questionnaire. This provides evidence that the scaled-up investment game can be used as a tool for measuring human-robot trust; it can therefore be applied in future experiments to study how variables that differ between robots (such as NVC) affect how trustworthy humans perceive them to be. We anticipate that this serves as a positive example of extending socioeconomic experiments to a human-robot social interaction setting.

5 Discussion and Future Work

Our experiment revolves around three main characteristics: the weighted choice between two agents, the participants' inability to make choices based on prior domain knowledge, and the additional incentive for interaction between the trustor and the trustee. As long as these characteristics are maintained, we believe our game design can be adapted to various situations and environments where trust and NVC play a role. Such environments include, but are not limited to, workplaces and public service settings.

Overall, our results show that our variant of the investment game provides a reliable measure of human-robot trust and that non-verbal communication positively affects it. However, there are some points of discussion, which we address in the following subsections.

5.1 Science Fiction and Immersion

In our study, we chose a futuristic environment since most people know robots from media and science fiction stories (Horstmann and Krämer, 2019). While we hypothesize that this is not a limiting factor for replicating our study, this should be subject to further research. It is essential to note that participants likely acted following a mental model, playing the role of a character in a game built around a fictional narrative (see Section 3.1 for a summary or footnote 4 for the full narrative). As such, our presented results should be interpreted within this context. For example, as a byproduct of high immersion, we cannot exclude that some participants engaged so strongly in role-playing their alter ego that their observed behavior started to differ from their usual self. Consequently, generalizability from contained game studies to real-world settings is an additional open question that is subject to academic debate and research, even in standard trust games with minimal role-play (Levitt and List, 2007; Johnson and Mislin, 2011). Furthermore, we argue that our investigated NVCs and trust factors are likely experienced on a more intuitive level and are therefore difficult to "fake" when role-playing, given a certain degree of independence from the actual decision-making process in our game.

5.2 Gameplay Behavior

We found two different gameplay behaviors that define the two groups on which results were compared: the main group (N=29) and the alternating-minimum investment group (N=16). The alternating-minimum investment group (described in Section 4.2.3) either alternated their investment or invested little in the robots, which suggests low engagement with the game. There was no significant trust correlation for the alternating-minimum investment group, whereas the main group showed a significant correlation. As 16 participants is quite a high number, we hypothesize that participants in the alternating-minimum investment group may have alternated their strategies to infer the research question or simply to test the system, similar to what Mota et al. (2016) experienced, possibly because the experiment was advertised in a computer science department. This group also showed higher scores for neuroticism than the main group in our personality test. Some participants may not have liked the experimental setup or may not have felt immersed enough to participate fully. However, this does not reflect the game's general perception, since most participants stated in the post-interview that they felt immersed and motivated to win the game.
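
As a rough illustration of how such gameplay behavior can be identified from the logged investment sequences, the following sketch flags participants who strictly alternate or consistently invest very little. The alternation check and the minimum-investment threshold are assumptions for illustration; the actual classification rule is the one described in Section 4.2.3.

from typing import Sequence

def is_alternating(investments: Sequence[int]) -> bool:
    """True if the invested amount goes up and down in strict alternation."""
    diffs = [b - a for a, b in zip(investments, investments[1:])]
    return all(d != 0 for d in diffs) and all(d1 * d2 < 0 for d1, d2 in zip(diffs, diffs[1:]))

def is_minimum_investor(investments: Sequence[int], threshold: int = 1) -> bool:
    """True if the participant never invests more than a (hypothetical) threshold."""
    return max(investments) <= threshold

def gameplay_group(investments: Sequence[int]) -> str:
    if is_alternating(investments) or is_minimum_investor(investments):
        return "alternating-minimum"
    return "main"

print(gameplay_group([5, 1, 5, 1]))  # -> alternating-minimum
print(gameplay_group([3, 4, 4, 5]))  # -> main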

Mota et al. (2016) observed that when humans need to judge a robot's trustworthiness, they draw on past social experiences with humans or try to build social experience with the robot. Due to insufficient shared social cues between humans and robots, humans are mostly incapable of determining a robot's trustworthiness based on past experiences. The alternating and minimum investment behavior we observed could indicate insufficient social experience, thus preventing the establishment of trust. However, further research is necessary to study the particular motivations behind this behavior.

In our experiment, almost half of all 45 participants had never interacted with a robot before. We fostered building social experience with the robots by having participants ask them one question before each of the four rounds of cube allocation. Participants in the alternating-minimum group may have needed more rounds to build social experience reliably. From this perspective, adding more rounds to the game could lead to the behavior regularizing over time. Future work might investigate the optimal number of rounds, balancing the trade-off between the experiment's length and the number of collected data points.

For the three participants who showed non-engaging behavior (see Section 4.2.3), this behavior could result from misunderstanding the rules of the game or the relative worth of the energy cubes, or from a general aversion to decision-making or to the presented scenario. The non-engaging behavior may also be an attempt to delay decision-making until enough social experience has been built between the participant and the robots.

5.3 Improvements for Future Studies

While we observed and measured trust through the players' investments, we suggest weighing the following points in future studies. Clearer and more detailed results could likely be obtained with a longer experiment, a larger participant pool, and a revised scenario that mitigates some of this experiment's limitations.

Since our robots functioned fully autonomously, the natural language interface sometimes malfunctioned due to usage or technical errors, potentially prolonging the time until feedback. Participants who had to repeat themselves, some multiple times, may have experienced a break in immersion. Although our post-interviews did not reflect it, we cannot rule out that some participants felt frustrated by a bumpy interaction. Future experiments could investigate the effect of simplified design choices on our measurements, for example, by substituting our autonomous setup with a wizard-of-oz design for timely interaction. The processing time of the many components of the experimental setup sometimes led to slight delays between user action and robot reaction, which could also have broken immersion and caused frustration.

Our study is limited to the NICO robots. We encountered some technical limitations, including a restricted range of facial expressions and human-like movements. Moreover, NICO has a childlike appearance. It is unclear how the perceived robot age affects human perception of honesty and reliability, even though we introduced the NICOs as specialists in the complex field of space exploration.

It is important to note that we merely compared non-verbal communication (NVC) against minimal non-verbal communication (MNVC). There is currently no widely established baseline or notion of minimal NVC, and the impact of our interpretation and subsequent design choices on the participants is an open question. Our study showed that the mere presence of NVC has a positive impact on both the trust in the robot and its perceived characteristics. Future studies should investigate where the boundaries between minimal and excessive NVC lie. As both robots showed at least a baseline of non-verbal cues, the difference between the two conditions may have been diminished. Future studies may also investigate how different gestures affect trust, as there is no clear consensus on which social cues translate to "reliable" or "unreliable", and no obvious way to categorize these cues.

6 Conclusion

We provided an elaborate HRI scenario that models the building of trust more closely to human relationships than the original investment game does. Our experimental setup includes social interaction, non-verbal communication, a shared goal, and intrinsic motivation, thereby allowing participants to collaborate with robots more realistically than in the original investment game while still measuring trust reliably. The environmental variables that our scenario and its life-like agents add naturally reflect the many internal and external factors that influence human trust and the way different levels of trust affect human behavior in different contexts; in this way, the scenario models aspects of human-robot trust that the original trust game does not cover.

We found a correlation between the self-assessed trust and the trust measured from the game for the majority of participants (main group). These same participants allocated more energy cells to the robot with non-verbal communication (NVC) in the first scene of the game. We were therefore able to replicate the positive effect of non-verbal communication on trust and robot perception. The Godspeed (Bartneck et al., 2008) values for anthropomorphism and animacy were increased by NVC for all participants.

Future research should investigate the observed gameplay behaviors and could explore the effects of using different robots in this setup. Moreover, a similar setup can serve in future studies as a platform for studying trust and other potential factors that influence it in realistic scenarios, preserving the complex dynamics of building, breaking, and maintaining trust with life-like agents in complex real-world situations. It allows an in-depth analysis of trust without losing the interplay between the internal and external factors that shape the human ability to trust others, be they humans or robots.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

The studies involving human participants were reviewed and approved by the Universität Hamburg Ethics Committee. The participants provided their written informed consent to participate in this study.

Author Contributions

SZ and EA have contributed to the conception, writing and organization of the manuscript. BV, AS, FS, GM, and KB contributed to writing the first draft. SZ, EA, BV, AS, FS, GM, and KB have implemented the system, conducted, and evaluated the study. EA and GM performed the statistical analysis, SZ aided in interpreting the results. ES has aided in the technical realization of the presented study. ES, AP, and TA have supervised the project. AP and TA have helped in editing and revising the paper. SW has contributed to manuscript reading and revision and approved the submitted version.

Funding

The authors gratefully acknowledge partial support from the German Research Foundation (DFG) under Project CML (TRR-169).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This article builds upon Arts et al. (2020), presented at HAI 2020/8th International Conference on Human-Agent Interaction, 10–13th November 2020 — Sydney, Australia, and was invited by the Frontiers HRI Topic Editors for a 30% extension of the original contribution. The study would not have been possible without Ahmed Abdelghany, Connor Gäde, Vadym Gryshchuk, Matthew Ng, Shahd Safarani, Nilesh Vijayrania, Christoper Glenn Wulur, and Sophia Zell.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frobt.2021.644529/full#supplementary-material

Footnotes

1http://wiki.ros.org/smach (accessed 2021-03-10)

2https://aws.amazon.com/polly/ (accessed 2021-03-10)

3https://www.frontiersin.org/articles/10.3389/frobt.2021.644529/full#supplementary-material

4https://gist.github.com/SZoerner/12cefe9ca612b4ae57385b9ea47bf999

References

Ambady, N., and Weisbuch, M. (2010). Nonverbal Behavior. Handbook Soc. Psychol. 1, 464–487. doi:10.1002/9780470561119.socpsy001013

Arts, E., Zörner, S., Bhatia, K., Mir, G., Schmalzl, F., Srivastava, A., et al. (2020). "Exploring Human-Robot Trust through the Investment Game: An Immersive Space Mission Scenario," in Proceedings of the 8th International Conference on Human-Agent Interaction, 121–130.

Bartneck, C., Kulić, D., Croft, E., and Zoghbi, S. (2008). Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. Int. J. Soc. Robotics 1, 71–81. doi:10.1007/s12369-008-0001-3

Bauer, J., Dávila-Chacón, J., Strahl, E., and Wermter, S. (2012). "Smoke and Mirrors — Virtual Realities for Sensor Fusion Experiments in Biomimetic Robotics," in 2012 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI) (IEEE), 114–119. doi:10.1109/MFI.2012.6343022

Berg, J., Dickhaut, J., and McCabe, K. (1995). Trust, Reciprocity, and Social History. Games Econ. Behav. 10, 122–142. doi:10.1006/game.1995.1027

Bernotat, J., Eyssel, F., and Sachse, J. (2017). "Shape It - the Influence of Robot Body Shape on Gender Perception in Robots," in International Conference on Social Robotics (Springer), 75–84. doi:10.1007/978-3-319-70022-9_8

Bernotat, J., Eyssel, F., and Sachse, J. (2019). The (Fe)male Robot: How Robot Body Shape Impacts First Impressions and Trust towards Robots. Int. J. Soc. Robotics 11, 1–13. doi:10.1007/s12369-019-00562-7

Bradski, G. (2000). The OpenCV Library. Dr. Dobb's J. Softw. Tools 25, 120–125.

Breazeal, C., Kidd, C. D., Thomaz, A. L., Hoffman, G., and Berlin, M. (2005). "Effects of Nonverbal Communication on Efficiency and Robustness in Human-Robot Teamwork," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE), 708–713. doi:10.1109/IROS.2005.1545011

Breazeal, C., Takanishi, A., and Kobayashi, T. (2008). Social Robots that Interact with People. Berlin, Heidelberg: Springer Berlin Heidelberg, 1349–1369. doi:10.1007/978-3-540-30301-5_59

Broadbent, E., Kumar, V., Li, X., Sollers, J., Stafford, R. Q., MacDonald, B. A., et al. (2013). Robots with Display Screens: A Robot with a More Humanlike Face Display Is Perceived to Have More Mind and a Better Personality. PLoS One 8, e72589. doi:10.1371/journal.pone.0072589

Burgoon, J. K., Guerrero, L. K., and Floyd, K. (2016). Nonverbal Communication. Routledge. doi:10.4324/9781315663425

Cairns, P., Cox, A., and Nordin, A. I. (2014). Immersion in Digital Games: Review of Gaming Experience Research. Handbook of Digital Games 1, 767.

Casper, J., and Murphy, R. R. (2003). Human-Robot Interactions during the Robot-Assisted Urban Search and Rescue Response at the World Trade Center. IEEE Trans. Syst. Man. Cybern. B 33, 367–385. doi:10.1109/tsmcb.2003.811794

Cummings, J. J., and Bailenson, J. N. (2016). How Immersive Is Enough? A Meta-Analysis of the Effect of Immersive Technology on User Presence. Media Psychol. 19, 272–309. doi:10.1080/15213269.2015.1015740

[Dataset] Statista (2019). Roboter Wie Pepper Übernehmen Immer Mehr Tätigkeiten in Unserem Alltag. Available at: https://de.statista.com/statistik/daten/studie/1005815/umfrage/akzeptanz-von-roboter-dienstleistungen-in-der-schweiz (accessed 2021-03-10).

Deshmukh, A., Craenen, B., Vinciarelli, A., and Foster, M. E. (2018). "Shaping Robot Gestures to Shape Users' Perception," in Proceedings of the 6th International Conference on Human-Agent Interaction (HAI '18) (New York, NY, USA: Association for Computing Machinery), 293–300. doi:10.1145/3284432.3284445

DeSteno, D., Bartlett, M. Y., Baumann, J., Williams, L. A., and Dickens, L. (2010). Gratitude as Moral Sentiment: Emotion-Guided Cooperation in Economic Exchange. Emotion 10, 289–293. doi:10.1037/a0017883

DeSteno, D., Breazeal, C., Frank, R. H., Pizarro, D., Baumann, J., Dickens, L., et al. (2012). Detecting the Trustworthiness of Novel Partners in Economic Exchange. Psychol. Sci. 23, 1549–1556. doi:10.1177/0956797612448793

DeVito, J. A., O'Rourke, S., and O'Neill, L. (2000). Human Communication. New York: Longman. doi:10.21236/ada377245

Duffy, J. (2011). Trust in Second Life. Southern Econ. J. 78, 53–62. doi:10.4284/0038-4038-78.1.53

Esterwood, C., and Robert, L. P. (2020). "Personality in Healthcare Human Robot Interaction (H-HRI): A Literature Review and Brief Critique," in Proceedings of the 8th International Conference on Human-Agent Interaction, 10–13. doi:10.1145/3406499.3415075

Fabricatore, C. (2007). Gameplay and Game Mechanics: A Key to Quality in Videogames. ENLACES (MINEDUC Chile) - OECD Expert Meeting on Videogames and Education.

Felzmann, H., Fosch-Villaronga, E., Lutz, C., and Tamo-Larrieux, A. (2019). Robots and Transparency: The Multiple Dimensions of Transparency in the Context of Robot Technologies. IEEE Robot. Automat. Mag. 26, 71–78. doi:10.1109/mra.2019.2904644

Fendt, M. W., Harrison, B., Ware, S. G., Cardona-Rivera, R. E., and Roberts, D. L. (2012). "Achieving the Illusion of Agency," in International Conference on Interactive Digital Storytelling (Springer), 114–125. doi:10.1007/978-3-642-34851-8_11

George, C., Eiband, M., Hufnagel, M., and Hussmann, H. (2018). "Trusting Strangers in Immersive Virtual Reality," in Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion, 1–2.

Gherheş, V. (2018). Why Are We Afraid of Artificial Intelligence (AI)? Eur. Rev. Appl. Sociol. 11, 6–15. doi:10.1515/eras-2018-0006

Glaeser, E. L., Laibson, D. I., Scheinkman, J. A., and Soutter, C. L. (2000). Measuring Trust. Q. J. Econ. 115, 811–846. doi:10.1162/003355300554926

Goldberg, L. R., and Kilkowski, J. M. (1985). The Prediction of Semantic Consistency in Self-Descriptions: Characteristics of Persons and of Terms that Affect the Consistency of Responses to Synonym and Antonym Pairs. J. Personal. Soc. Psychol. 48, 82–98. doi:10.1037/0022-3514.48.1.82

Hale, J., Payne, M. E., Taylor, K. M., Paoletti, D., and De C Hamilton, A. F. (2018). The Virtual Maze: A Behavioural Tool for Measuring Trust. Q. J. Exp. Psychol. 71, 989–1008. doi:10.1080/17470218.2017.1307865

Hameed, I. A., Strazdins, G., Hatlemark, H. A. M., Jakobsen, I. S., and Damdam, J. O. (2018). Robots that Can Mix Serious with Fun, 595–604. doi:10.1007/978-3-319-74690-6_58

Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., and Parasuraman, R. (2011). A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. Hum. Factors 53, 517–527. doi:10.1177/0018720811417254

Haring, K. S., Matsumoto, Y., and Watanabe, K. (2013). "How Do People Perceive and Trust a Lifelike Robot?," in Proceedings of the World Congress on Engineering and Computer Science, Vol. 1.

Ho, A. (2014). The Value of Being Powerful or Beautiful in Games - How Game Design Affects the Value of Virtual Items. Comput. Game J. 3, 54–61. doi:10.1007/bf03392357

Horstmann, A. C., and Krämer, N. C. (2019). Great Expectations? Relation of Previous Experiences with Social Robots in Real Life or in the Media and Expectancies Based on Qualitative and Quantitative Assessment. Front. Psychol. 10, 939. doi:10.3389/fpsyg.2019.00939

Hunicke, R., Leblanc, M., and Zubek, R. (2004). "MDA: A Formal Approach to Game Design and Game Research," in Proceedings of the Challenges in Games AI Workshop, Nineteenth National Conference on Artificial Intelligence (AAAI Press), 1–5.

Jakobsson, M., Sotamaa, O., Moore, C., Begy, J., Consalvo, M., Gazzard, A., et al. (2011). Game Reward Systems. Game Stud. 11.

Johnson, D., and Grayson, K. (2005). Cognitive and Affective Trust in Service Relationships. J. Business Res. 58, 500–507. doi:10.1016/s0148-2963(03)00140-1

Johnson, N. D., and Mislin, A. A. (2011). Trust Games: A Meta-Analysis. J. Econ. Psychol. 32, 865–889. doi:10.1016/j.joep.2011.05.007

Kerzel, M., Pekarek-Rosin, T., Strahl, E., Heinrich, S., and Wermter, S. (2020). Teaching NICO How to Grasp: An Empirical Study on Crossmodal Social Interaction as a Key Factor for Robots Learning from Humans. Front. Neurorobot. 14, 28. doi:10.3389/fnbot.2020.00028

Kerzel, M., Strahl, E., Magg, S., Navarro-Guerrero, N., Heinrich, S., and Wermter, S. (2017). "NICO - Neuro-Inspired Companion: A Developmental Humanoid Robot Platform for Multimodal Interaction," in 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (IEEE), 113–120.

Kubilinskiene, S., Zilinskiene, I., Dagiene, V., and Sinkevičius, V. (2017). Applying Robotics in School Education: A Systematic Review. Bjmc 5, 50–69. doi:10.22364/bjmc.2017.5.1.04

Kulyukin, V. A. (2006). "On Natural Language Dialogue with Assistive Robots," in Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 164–171. doi:10.1145/1121241.1121270

Lang, F. R., John, D., Lüdtke, O., Schupp, J., and Wagner, G. G. (2011). Short Assessment of the Big Five: Robust across Survey Methods except Telephone Interviewing. Behav. Res. 43, 548–567. doi:10.3758/s13428-011-0066-z

Langer, E. J. (1975). The Illusion of Control. J. Personal. Soc. Psychol. 32, 311–328. doi:10.1037/0022-3514.32.2.311

Levitt, S. D., and List, J. A. (2007). Viewpoint: On the Generalizability of Lab Behaviour to the Field. Can. J. Economics/Revue canadienne d'économique 40, 347–370. doi:10.1111/j.1365-2966.2007.00412.x

Liang, Y., and Lee, S. A. (2017). Fear of Autonomous Robots and Artificial Intelligence: Evidence from National Representative Data with Probability Sampling. Int. J. Soc. Robotics 9, 379–384. doi:10.1007/s12369-017-0401-3

Logan, D. E., Breazeal, C., Goodwin, M. S., Jeong, S., O'Connell, B., Smith-Freedman, D., et al. (2019). Social Robots for Hospitalized Children. Pediatrics 144, e20181511. doi:10.1542/peds.2018-1511

Lombard, M., and Ditton, T. (1997). At the Heart of It All: The Concept of Presence. J. Computer-Mediated Commun. 3, JCMC321.

Lombard, M., Ditton, T. B., and Weinstein, L. (2009). "Measuring Presence: The Temple Presence Inventory," in Proceedings of the 12th Annual International Workshop on Presence, 1–15.

Looije, R., Neerincx, M. A., and Cnossen, F. (2010). Persuasive Robotic Assistant for Health Self-Management of Older Adults: Design and Evaluation of Social Behaviors. Int. J. Human-Computer Stud. 68, 386–397. doi:10.1016/j.ijhcs.2009.08.007

Mathur, M. B., and Reichling, D. B. (2016). Navigating a Social World with Robot Partners: A Quantitative Cartography of the Uncanny Valley. Cognition 146, 22–32. doi:10.1016/j.cognition.2015.09.008

Mayer, R. C., and Davis, J. H. (1999). The Effect of the Performance Appraisal System on Trust for Management: A Field Quasi-Experiment. J. Appl. Psychol. 84, 123–136. doi:10.1037/0021-9010.84.1.123

Meertens, R. M., and Lion, R. (2008). Measuring an Individual's Tendency to Take Risks: The Risk Propensity Scale. J. Appl. Soc. Psychol. 38, 1506–1520. doi:10.1111/j.1559-1816.2008.00357.x

Mota, R. C. R., Rea, D. J., Le Tran, A., Young, J. E., Sharlin, E., and Sousa, M. C. (2016). "Playing the 'Trust Game' with Robots: Social Strategies and Experiences," in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (New York, NY: IEEE), 519–524.

Mubin, O., Obaid, M., Jordan, P., Alves-Oliveira, P., Eriksson, T., Barendregt, W., Sjolle, D., Fjeld, M., Simoff, S., and Billinghurst, M. (2016). "Towards an Agenda for Sci-Fi Inspired HCI Research," in Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology (New York, NY, USA: Association for Computing Machinery). doi:10.1145/3001773.3001786

Mukai, T., Hirano, S., Nakashima, H., Kato, Y., Sakaida, Y., Guo, S., et al. (2010). "Development of a Nursing-Care Assistant Robot RIBA that Can Lift a Human in Its Arms," in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5996–6001. doi:10.1109/IROS.2010.5651735

Murray, J. (1997). Hamlet on the Holodeck: The Future of Narrative in Cyberspace. MIT Press.

Neumann, M. M. (2019). Social Robots and Young Children's Early Language and Literacy Learning. Early Child. Educ. J. 48, 157–170. doi:10.1007/s10643-019-00997-7

Parisi, D., and Schlesinger, M. (2002). Artificial Life and Piaget. Cogn. Develop. 17, 1301–1321. doi:10.1016/s0885-2014(02)00119-3

Peterson, M. (2010). Massively Multiplayer Online Role-Playing Games as Arenas for Second Language Learning. Comp. Assist. Lang. Learn. 23, 429–439. doi:10.1080/09588221.2010.520673

Powers, A., and Kiesler, S. (2006). "The Advisor Robot: Tracing People's Mental Model from a Robot's Physical Attributes," in Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, 218–225.

Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., et al. (2009). "ROS: An Open-Source Robot Operating System," in ICRA Workshop on Open Source Software, Vol. 3, Kobe, Japan (IEEE), 5.

Rai, T. S., and Diermeier, D. (2015). Corporations Are Cyborgs: Organizations Elicit Anger but Not Sympathy when They Can Think but Cannot Feel. Organizational Behav. Hum. Decis. Process. 126, 18–26. doi:10.1016/j.obhdp.2014.10.001

Rempel, J. K., Holmes, J. G., and Zanna, M. P. (1985). Trust in Close Relationships. J. Personal. Soc. Psychol. 49, 95–112. doi:10.1037/0022-3514.49.1.95

Rossi, A., Dautenhahn, K., Koay, K. L., and Saunders, J. (2017a). "Investigating Human Perceptions of Trust in Robots for Safe HRI in Home Environments," in Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 375–376. doi:10.1145/3029798.3034822

Rossi, A., Dautenhahn, K., Koay, K. L., and Walters, M. L. (2017b). "How the Timing and Magnitude of Robot Errors Influence Peoples' Trust of Robots in an Emergency Scenario," in Social Robotics (Cham: Springer International Publishing), 42–52. doi:10.1007/978-3-319-70022-9_5

Saint-Mont, U. (2015). Randomization Does Not Help Much, Comparability Does. PLOS ONE 10, e0132102–24. doi:10.1371/journal.pone.0132102

Salem, M., Eyssel, F., Rohlfing, K., Kopp, S., and Joublin, F. (2013). To Err Is Human(-like): Effects of Robot Gesture on Perceived Anthropomorphism and Likability. Int. J. Soc. Robotics 5, 313–323. doi:10.1007/s12369-013-0196-9

Salem, M., Rohlfing, K., Kopp, S., and Joublin, F. (2011). "A Friendly Gesture: Investigating the Effect of Multimodal Robot Behavior in Human-Robot Interaction," in 2011 RO-MAN (IEEE), 247–252. doi:10.1109/ROMAN.2011.6005285

Sandoval, E. B., Brandstetter, J., and Bartneck, C. (2016). "Can a Robot Bribe a Human? The Measurement of the Negative Side of Reciprocity in Human Robot Interaction," in 11th ACM/IEEE International Conference on Human-Robot Interaction (IEEE), 117–124. doi:10.1109/HRI.2016.7451742

Saunderson, S., and Nejat, G. (2019). How Robots Influence Humans: A Survey of Nonverbal Communication in Social Human-Robot Interaction. Int. J. Soc. Robotics 11, 575–608. doi:10.1007/s12369-019-00523-0

Schniter, E., Shields, T. W., and Sznycer, D. (2020). Trust in Humans and Robots: Economically Similar but Emotionally Different. J. Econ. Psychol. 78, 102253. doi:10.1016/j.joep.2020.102253

Schoorman, F. D., Mayer, R. C., and Davis, J. H. (1996). Organizational Trust: Philosophical Perspectives and Conceptual Definitions. Acad. Manage. Rev. 21, 337–340.

Sehili, M., Yang, F., Leynaert, V., and Devillers, L. (2014). "A Corpus of Social Interaction between Nao and Elderly People," in 5th International Workshop on Emotion, Social Signals, Sentiment & Linked Open Data (LREC), Reykjavik, Iceland. doi:10.1145/2666499.2666502

Slater, M., and Wilbur, S. (1997). A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments. Presence: Teleoperators & Virtual Environments 6, 603–616. doi:10.1162/pres.1997.6.6.603

Soto, C. J., and John, O. P. (2017). Short and Extra-Short Forms of the Big Five Inventory-2: The BFI-2-S and BFI-2-XS. J. Res. Personal. 68, 69–81. doi:10.1016/j.jrp.2017.02.004

Spiliotopoulos, D., Androutsopoulos, I., and Spyropoulos, C. D. (2001). "Human-Robot Interaction Based on Spoken Natural Language Dialogue," in Proceedings of the European Workshop on Service and Humanoid Robots, 25–27.

Stang, S. (2019). "This Action Will Have Consequences": Interactivity and Player Agency. Game Stud. 19.

Twiefel, J., Baumann, T., Heinrich, S., and Wermter, S. (2014). "Improving Domain-Independent Cloud-Based Speech Recognition with Domain-Dependent Phonetic Post-Processing," in Proceedings of the AAAI Conference on Artificial Intelligence, 28.

Wang, H., and Sun, C.-T. (2011). Game Reward Systems: Gaming Experiences and Social Meanings. DiGRA Conf. 114.

Wortham, R. H., and Theodorou, A. (2017). Robot Transparency, Trust and Utility. Connect. Sci. 29, 242–248. doi:10.1080/09540091.2017.1313816

Zanatto, D., Patacchiola, M., Goslin, J., Thill, S., and Cangelosi, A. (2020). "Do Humans Imitate Robots?," in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI '20) (New York, NY, USA: Association for Computing Machinery), 449–457. doi:10.1145/3319502.3374776

Zanatto, D. (2019). When Do We Cooperate with Robots? Ph.D. thesis, University of Plymouth.

Keywords: human-robot interaction, investment game, non-verbal communication, science fiction, human-robot trust

Citation: Zörner S, Arts E, Vasiljevic B, Srivastava A, Schmalzl F, Mir G, Bhatia K, Strahl E, Peters A, Alpay T and Wermter S (2021) An Immersive Investment Game to Study Human-Robot Trust. Front. Robot. AI 8:644529. doi: 10.3389/frobt.2021.644529

Received: 21 December 2020; Accepted: 28 April 2021;
Published: 04 June 2021.

Edited by:

Mohammad Obaid, Chalmers University of Technology, Sweden

Reviewed by:

Francesco Rea, Italian Institute of Technology (IIT), Italy
Michael Heron, Chalmers University of Technology, Sweden

Copyright © 2021 Zörner, Arts, Vasiljevic, Srivastava, Schmalzl, Mir, Bhatia, Strahl, Peters, Alpay and Wermter. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Sebastian Zörner, sebastian.zoerner@informatik.uni-hamburg.de

These authors have contributed equally to this work and share first authorship
