AUTHOR=Lin Jinchao, Panganiban April Rose, Matthews Gerald, Gibbins Katey, Ankeney Emily, See Carlie, Bailey Rachel, Long Michael TITLE=Trust in the Danger Zone: Individual Differences in Confidence in Robot Threat Assessments JOURNAL=Frontiers in Psychology VOLUME=13 YEAR=2022 URL=https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2022.601523 DOI=10.3389/fpsyg.2022.601523 ISSN=1664-1078 ABSTRACT=Effective human-robot teaming (HRT) increasingly requires humans to work with intelligent, autonomous machines. However, novel features of intelligent autonomous systems, such as social agency and incomprehensibility, may influence the human’s trust in the machine. The human operator’s mental model of machine functioning may be critical for trust. People may regard an intelligent machine partner as either an advanced tool or a human-like teammate. A new instrument, the Robot Threat Assessment (RoTA), identifies systematic individual differences in the person’s propensity to apply these two contrasting mental models. This article reports a study that explored the role of individual differences in the mental model in a simulated environment. Participants (N=118) were paired with an intelligent robot tasked with making threat assessments in an urban setting. Half the sample received transparency information referring to physics-based analysis, whereas the remainder received information on analysis of psychological cues, such as facial expression. The former type of analysis was considered likely to activate an advanced-tool mental model, the latter a teammate mental model. We also manipulated situational danger cues present in the simulated environment. Participants rated their trust in the robot’s decision, as well as threat and anxiety, for each of 24 urban scenes. They also completed the RoTA and additional individual-difference measures.
Findings showed that trust assessments reflected the degree of congruence between the robot’s decision and situational danger cues, consistent with participants acting as Bayesian decision-makers. Several scales, including the RoTA, were more predictive of trust when the robot was making psychology-based decisions, implying that trust reflected individual differences in the mental model of the robot as a teammate. These findings suggest scope for designing training that uncovers and mitigates individuals’ biases towards intelligent machines.