Event Abstract

Assessing human reaction to a virtual agent’s facial feedback in a simple Q&A setting

  • Drexel University, Computer Science, United States

We explore the interaction between humans and Embodied Virtual Agents (EVAs), a specific type of intelligent agent. We designed an EVA that acts as an assistant in a general-knowledge Question and Answer (Q&A) task, giving the user feedback about each of the possible answers through its facial expressions. Two types of agent are used in this study: cooperative and uncooperative. The research question we explore is: “Would a human’s trust towards an agent be affected by their previous interaction with another agent?” The four hypotheses are presented in Table 1.

To determine the difficulty of the questions, we ran a preliminary online survey in which we asked 187 participants to answer 25 questions without interacting with any agent. We then asked them to watch a video of the agent performing six different facial expressions and to rate each expression’s positivity/negativity and intensity (Slightly-Positive, Moderately-Positive, Highly-Positive, Slightly-Negative, Moderately-Negative, Highly-Negative).

The main system consists of a Q&A page with the agent’s face visible on the same screen. Participants answer two sets of 50 multiple-choice questions, and while hovering over each answer they see facial feedback from the agent. The first 30 questions are easy to medium in difficulty, so that participants have a good level of certainty about the correctness of the answers and can judge the agent’s behavior as cooperative or uncooperative. The last 20 questions are difficult, requiring participants to rely more on the agent. Figure 1 shows the system interface. The facial expressions and some random movements (e.g., blinking, moving the head, looking sideways) are based on previous work investigating how different facial expressions create positive or negative impressions of a face (Rehm & André, 2005; Elkins & Derrick, 2013; Hyde, Carter, Kiesler & Hodgins, 2016). Figure 2 shows the six facial expressions on both agents.

This is a between-subjects study: each participant completes only one of the four experimental conditions. The independent variable is the agent’s behavior pattern over two sessions, with four levels depending on whether the agent is cooperative or uncooperative in each session: 1) cooperative first, uncooperative second (CN); 2) uncooperative first, cooperative second (NC); 3) both cooperative (CC); and 4) both uncooperative (NN). The dependent variables are the time to answer each individual question, the time to answer each set of 50 questions in a condition, the percentage of correct answers in each set, and the degree of trust towards the agent. Trust towards the agent is assessed with the questionnaire introduced by Jian et al. (Jian, Bisantz & Drury, 2000). In addition, we collect subjective self-report data through an in-person interview after all conditions are completed.

Condition 1 (Cooperative agent): In this scenario, the agent is cooperative and helpful to the study participant. However, to better resemble a real-world scenario, there is some uncertainty and variance in the agent’s behavior, since an agent may not always have the necessary knowledge for every question. Thus, for 80% of the questions the agent shows the Highly-Positive facial expression for the correct answer and random facial expressions for the wrong answers; for the other 20%, the Highly-Positive facial expression is assigned to a random answer. Users choose their answer by clicking on it in a grid of nine potential answers. Within the grid, the agent shows Moderately-Positive feedback for two of the answers, no feedback for two others, and each of the four remaining expression types (Slightly-Positive, Slightly-Negative, Moderately-Negative, Highly-Negative) for one of the four remaining answers.
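To make this feedback-assignment logic concrete, the following is a minimal Python sketch. It is an illustration under our own hypothetical naming, not the study's actual implementation; the cooperation probability is 0.8 for the cooperative agent and, as described in Condition 2 below, 0.2 for the uncooperative one.

    import random

    def assign_feedback(answers, correct, p_cooperate):
        # Map each of the nine grid answers to a facial expression (or None
        # for "no feedback"). p_cooperate is 0.8 for the cooperative agent
        # and 0.2 for the uncooperative agent; all names are hypothetical.
        if random.random() < p_cooperate:
            target = correct                 # highlight the true answer
        else:
            target = random.choice(answers)  # highlight a random answer
        rest = [a for a in answers if a != target]
        random.shuffle(rest)
        feedback = {target: "Highly-Positive"}
        for a in rest[:2]:                   # two answers: Moderately-Positive
            feedback[a] = "Moderately-Positive"
        for a in rest[2:4]:                  # two answers: no feedback
            feedback[a] = None
        remaining = ["Slightly-Positive", "Slightly-Negative",
                     "Moderately-Negative", "Highly-Negative"]
        for a, expression in zip(rest[4:], remaining):
            feedback[a] = expression         # one each for the last four
        return feedback

For example, assign_feedback(list(range(9)), correct=3, p_cooperate=0.8) returns a dictionary mapping each answer index to an expression name, with None marking the two answers that receive no feedback.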
Condition 2 (Uncooperative agent): The only difference in this condition is that the agent shows the Highly-Positive feedback for the correct answer only 20% of the time.

Participants

We asked 187 participants to fill in the preliminary survey. There are at least eight participants in the ongoing pilot study and 40 participants in the main study. The participants are young adults, between 18 and 30 years old, of all gender identities and national/cultural backgrounds.

Discussion and Future Work

The purpose of the study is to see how interaction with one EVA can affect the user’s perception of another. It is also of interest to see how users collaborate with cooperative and uncooperative agents: do they ignore the agent, collaborate with it, or rely on it completely? We are likewise interested in whether the agent’s behavior affects the type of collaboration the user chooses. In future experiments, we want to observe whether an agent can effectively guide, or deceive, the user towards a certain answer. Additionally, we would like to observe changes in participants’ mental state by measuring changes in the oxygenation of brain tissue with a functional near-infrared spectroscopy (fNIRS) brain-sensing device. Specifically, we would like to monitor participants’ cognitive workload (changes in oxygenation in the prefrontal cortex) during their interaction with the agent, and to investigate whether a reliable agent can in fact decrease cognitive workload and lead to better performance, and vice versa. In the next steps of this project, we will build a virtual agent that can adapt to a person’s mental state. Previous studies have tried to imitate the user’s facial expression (Chowanda, Blanchfield, Flintham & Valstar, 2014) or to express negative or positive facial feedback based on the user’s mental state (Aranyi, Pecune, Charles, Pelachaud & Cavazza, 2016), but the effect of such feedback on a person’s performance during a task has not been studied. Potential systems that could employ such agents to improve user-system interaction include Intelligent Tutoring Systems (Liu, Walker & Solovey, 2017) and Creativity Support Systems (Chan et al., 2017). In addition, the results of this study could open a new window on interaction between humans and virtual agents in mission-critical systems that require fast and efficient human-computer collaboration.
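For concreteness, the dependent variables described in the study design could be aggregated from trial logs along the lines of the sketch below; the record fields are hypothetical, and the trust measure comes separately from the Jian et al. (2000) questionnaire rather than from these logs.

    from dataclasses import dataclass

    @dataclass
    class Trial:
        # One answered question (hypothetical log record).
        question_id: int
        response_time_s: float  # seconds from question display to click
        correct: bool           # whether the chosen answer was right

    def summarize_set(trials):
        # Aggregate one 50-question set into three dependent variables:
        # per-question times, total time, and percentage of correct answers.
        per_question = {t.question_id: t.response_time_s for t in trials}
        total_time = sum(per_question.values())
        percent_correct = 100.0 * sum(t.correct for t in trials) / len(trials)
        return {"time_per_question_s": per_question,
                "total_time_s": total_time,
                "percent_correct": percent_correct}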

Figure 1. The system interface.
Figure 2. The six facial expressions on both agents.

Acknowledgements

We would like to thank Weidi Tang and Juan Garcia Lopez, who helped us develop the interface.

References

Aranyi, G., Pecune, F., Charles, F., Pelachaud, C., & Cavazza, M. (2016). Affective interaction with a virtual character through an fNIRS brain-computer interface. Frontiers in Computational Neuroscience, 10, 70.

Chan, J., Siangliulue, P., Qori McDonald, D., Liu, R., Moradinezhad, R., Aman, S., ... & Dow, S. P. (2017, June). Semantically Far Inspirations Considered Harmful?: Accounting for Cognitive States in Collaborative Ideation. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition (pp. 93-105). ACM.

Chowanda, A., Blanchfield, P., Flintham, M., & Valstar, M. (2014, August). ERiSA: Building emotionally realistic social game-agents companions. In International Conference on Intelligent Virtual Agents (pp. 134-143). Springer, Cham.

Elkins, A. C., & Derrick, D. C. (2013). The sound of trust: Voice as a measurement of trust during interactions with embodied conversational agents. Group Decision and Negotiation, 22(5), 897-913.

Hyde, J., Carter, E. J., Kiesler, S., & Hodgins, J. K. (2016). Evaluating animated characters: facial motion magnitude influences personality perceptions. ACM Transactions on Applied Perception (TAP), 13(2), 8.

Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53-71.

Liu, R., Walker, E., & Solovey, E. (2017, July). Toward neuroadaptive personal learning environments. In The First Biannual Neuroadaptive Technology Conference (p. 59).

Rehm, M., & André, E. (2005, July). Catch me if you can: exploring lying agents in social settings. In Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems (pp. 937-944). ACM.

Keywords: Human Computer Interaction (HCI), Trust, Virtual agents, Brain computer interfaces (BCI), Facial Expression

Conference: 2nd International Neuroergonomics Conference, Philadelphia, PA, United States, 27 Jun - 29 Jun, 2018.

Presentation Type: Oral Presentation

Topic: Neuroergonomics

Citation: Moradinezhad R and Solovey E (2019). Assessing human reaction to a virtual agent’s facial feedback in a simple Q&A setting. Conference Abstract: 2nd International Neuroergonomics Conference. doi: 10.3389/conf.fnhum.2018.227.00011


Received: 11 Apr 2018; Published Online: 27 Sep 2019.

* Correspondence: Mr. Reza Moradinezhad, Drexel University, Computer Science, Philadelphia, 19104, United States, reza.moradinezhad@drexel.edu