AUTHOR=Konishi Shunta, Kuwata Masaki, Matsumoto Yoshio, Yoshikawa Yuichiro, Takata Keiji, Haraguchi Hideyuki, Kudo Azusa, Ishiguro Hiroshi, Kumazaki Hirokazu TITLE=Self-administered questionnaires enhance emotion estimation of individuals with autism spectrum disorders in a robotic interview setting JOURNAL=Frontiers in Psychiatry VOLUME=15 YEAR=2024 URL=https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2024.1249000 DOI=10.3389/fpsyt.2024.1249000 ISSN=1664-0640 ABSTRACT=Background

Robots offer many unique opportunities for helping individuals with autism spectrum disorders (ASD). Determining the optimal motion of robots when interacting with individuals with ASD is important for achieving more natural human-robot interaction and for exploiting the full potential of robotic interventions. Most prior studies have used supervised machine learning (ML) on user behavioral data to enable robots to perceive affective states (i.e., arousal and valence) and engagement. It has previously been suggested that incorporating personal demographic information is important when developing an automated system to perceive the affective states and engagement of individuals with ASD. In this study, we hypothesized that incorporating self-administered questionnaire data would contribute to the automated estimation of affective state and engagement when individuals with ASD are interviewed by an android robot; such a system would support the implementation of long-term interventions and help maintain participants' motivation.

Methods

Participants sat across a table from an android robot that played the role of the interviewer, and each participant underwent a mock job interview. Twenty-five participants with ASD (22 males, 3 females; mean chronological age = 22.8 years; mean IQ = 94.04) completed the experiment. We collected multimodal data (i.e., audio, motion, gaze, and self-administered questionnaire data) to train a model to classify the affective state and engagement of individuals with ASD when interviewed by the android robot. We demonstrated the technical feasibility of using ML to enable robot perception of the affect and engagement of individuals with ASD based on these multimodal data.
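
As a rough illustration of this kind of pipeline (the abstract does not specify the authors' classifier or feature extraction), the following Python sketch assumes hypothetical per-segment feature blocks for each modality, a generic scikit-learn classifier, and expert-coded binary labels:

```python
# Minimal sketch only, not the authors' implementation: feature blocks,
# dimensions, label definition, and the choice of classifier are all assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_segments = 250  # hypothetical number of labeled interview segments

# Hypothetical per-segment feature blocks (dimensions are placeholders).
audio = rng.normal(size=(n_segments, 20))          # e.g., prosodic features
motion = rng.normal(size=(n_segments, 15))         # e.g., body-movement features
gaze = rng.normal(size=(n_segments, 10))           # e.g., gaze-direction features
questionnaire = rng.normal(size=(n_segments, 5))   # self-administered questionnaire scores

# Expert-coded binary label for one target state (e.g., high vs. low arousal).
labels = rng.integers(0, 2, size=n_segments)

# Concatenate the modalities into one feature matrix and score a classifier
# by cross-validated AUC against the expert coding.
features = np.hstack([audio, motion, gaze, questionnaire])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, features, labels, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUC vs. expert coding: {auc:.3f}")
```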

Results

For arousal and engagement, the area under the curve (AUC) values for agreement between the model estimates and expert coding were relatively high. Overall, the AUC values for arousal, valence, and engagement improved when self-administered questionnaire data were included in the classification.
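
As a brief aside on the metric, AUC summarizes how well a model's predicted scores separate the two expert-coded classes. A minimal sketch of the kind of comparison reported here, using entirely hypothetical scores for models trained with and without questionnaire data, is:

```python
# Hypothetical values for illustration only; the actual AUCs are in the article.
from sklearn.metrics import roc_auc_score

expert_codes = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # expert-coded binary state per segment
scores_without_questionnaire = [0.70, 0.60, 0.40, 0.55, 0.50, 0.30, 0.65, 0.45, 0.35, 0.20]
scores_with_questionnaire    = [0.85, 0.40, 0.60, 0.70, 0.35, 0.25, 0.80, 0.30, 0.38, 0.15]

print("AUC without questionnaire data:", roc_auc_score(expert_codes, scores_without_questionnaire))
print("AUC with questionnaire data:   ", roc_auc_score(expert_codes, scores_with_questionnaire))
```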

Discussion

These findings support the hypothesis that assessing self-administered questionnaire data contributes to the development of automated estimation of an individual's affective state and engagement. Given the benefit of including self-administered questionnaire data, future studies should confirm whether long-term robotic interventions based on the proposed emotion-estimation method can maintain participants' motivation.