ORIGINAL RESEARCH article

Front. Psychiatry, 24 January 2022
Sec. Autism

Group-Based Online Job Interview Training Program Using Virtual Robot for Individuals With Autism Spectrum Disorders

Hirokazu Kumazaki1,2,3*, Yuichiro Yoshikawa4, Taro Muramatsu3, Hideyuki Haraguchi1, Hiroko Fujisato1, Kazuki Sakai4, Yoshio Matsumoto2,5, Hiroshi Ishiguro4, Tomiki Sumiyoshi1 and Masaru Mimura3
  • 1Department of Preventive Intervention for Psychiatric Disorders, National Center of Neurology and Psychiatry, National Institute of Mental Health, Tokyo, Japan
  • 2Department of Clinical Research on Social Recognition and Memory, Research Center for Child Mental Development, Kanazawa University, Ishikawa, Japan
  • 3Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
  • 4Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, Osaka, Japan
  • 5Service Robotics Research Group, Intelligent Systems Institute, National Institute of Advanced Industrial Science and Technology, Ibaraki, Japan

The rapid expansion of online job interviews during the COVID-19 pandemic is expected to continue after the pandemic subsides. These interviews pose a significant barrier for individuals with autism spectrum disorders (ASD). There is little evidence-based training for online job interviews for individuals with ASD, and new training programs are needed. To facilitate online job interview skill acquisition for individuals with ASD, we developed a group-based online job interview training program using a virtual robot (GOT). In the GOT, the interviewer and interviewee are projected as virtual robots on a screen. Five participants formed a group and took the roles of interviewee, interviewer, and evaluator, performing all roles in random order. Each session consisted of a first job interview session, a feedback session, and a second job interview session, and each participant completed 25 sessions. Before and after the GOT, the participants underwent a mock online job interview with a professional human interviewer (MOH) to evaluate the effect of the GOT. In total, 15 individuals with ASD took part in the study. The GOT improved self-confidence, motivation, understanding of others' perspectives, verbal competence, non-verbal competence, and interview performance scores. There was also a significant increase in the recognition of the importance of the interviewers' and evaluators' points of view after the second MOH compared with the first MOH. Using a virtual reality (VR) robot and learning the importance of interview skills by experiencing other perspectives (i.e., the viewpoints of the interviewer and evaluator) may have sustained participants' motivation and enabled greater self-confidence. Given these promising results, further studies with larger, more diverse samples of individuals with ASD and a longitudinal design are warranted to draw definitive conclusions regarding the efficacy of VR robots for mock online job interview training.

Introduction

The coronavirus disease 2019 (COVID-19) pandemic has slowed activity worldwide, but it has not deterred companies from looking for new hires. Online job interviews, though not new, are a useful tool for continuing to hire while avoiding the threat of COVID-19. Indeed, many companies have turned to online job interviews as they have been compelled to cancel in-person meetings amid the spread of the new coronavirus. These interviews also reduce transportation costs, accelerate the interview process, and allow hiring managers to interview non-local candidates. Some employers, such as academic institutions, are likely to continue offering applicants the option of a virtual interview if they are unable to travel after the pandemic has subsided.

Job interviews are a significant barrier for individuals with autism spectrum disorders (ASD) (1, 2). These individuals struggle with verbal communication and with conveying job-relevant interview content, and they have low self-confidence in their ability to perform in a job interview (3). In addition, they experience difficulties with non-verbal communication, which is directly connected to poor performance during job interviews (2). Online job interviews do not differ from in-person job interviews in the key points required for success. Conveying job-relevant interview content is indispensable, and being confident in front of the screen is also important. Verbal and non-verbal communication still occur during online job interviews. Given these factors, it is unsurprising that individuals with ASD struggle with online as well as in-person job interviews. As with in-person job interviews, applicants are required to prepare adequately for online job interviews. However, there is little research into the use of online job interviews and little evidence-based training for online job interviews for individuals with ASD, and new training programs are needed.

There is concern that individuals with ASD cannot benefit from interventions using real online job interview situations. In our previous experiment, we found that interpersonal tension is high during online interviews for individuals with ASD and that they have low motivation to work in a real online job interview setting. The accumulated intervention literature suggests that social communication training is effective for individuals with high motivation (4–6). Previous studies have demonstrated that many individuals with ASD show motivation and an aptitude for using technology (6–9). These individuals sometimes have a preference for virtual agents (10–12). There is a growing body of research related to the application of virtual worlds, a technology that allows users to practice context-based social and adaptive skills (2, 13–15). In these environments, the user assumes the role of an agent and can safely rehearse initiations and responses. Virtual reality (VR) training offers trainees several advantages, including (1) active participation rather than passive observation, (2) a unique training experience, and (3) low cost and easy access.

A previous study that used a realistic virtual human reported that individuals with ASD looked less frequently at agent peers in an interview setting using VR while talking compared to controls (16). Our preliminary study confirmed that many individuals with ASD are afraid of realistic virtual humans and avoid gazing at these agents because of their realism. When designing objects for use by individuals with ASD, researchers often subscribe to the notion that “simpler is better”; that is, they gravitate toward simple, mechanical objects (8, 17–20). Given these factors, we assume that using a simple virtual agent for training may be adequate for individuals with ASD. Research on virtual exposure with clinical samples indicates that even simple virtual agents can induce a significant increase in anxiety and can be effective for phobic people compared to speaking in front of an empty virtual seminar room (10).

When considering interventions using VR for individuals with ASD, it is important to consider how the eyes of agents are designed because individuals with ASD pay less attention to eyes than individuals with typical development (21). Increasing eye contact is widely acknowledged as an important and promising treatment for individuals with ASD (22, 23). To create useful online job interview training for these individuals, it is important for them to look at the eyes of agents during training. If individuals with ASD can practice eye contact with virtual agents, they may overcome their fear of the gaze of an interviewer and experience decreased anxiety in interview settings.

More importantly, challenges to interview performance are believed to occur because individuals with ASD are impaired in their ability to recognize others' perspectives (24); that is, they are impaired in what has been labeled “theory of mind” (ToM) (25). Therefore, they are unable to understand the effect of their behavior on others (2), which is associated with their low motivation to improve their interview skills (3). In designing an intervention to help individuals with ASD improve their interview skills, it is important not only to teach the interview skills required for such an interaction, but also to improve their understanding of the importance of interview skills. Therefore, interventions from a ToM perspective are needed.

In an effort to facilitate online job interview skill acquisition for individuals with ASD, we developed a group-based online job interview training program using a virtual robot (GOT). In the GOT, five individuals with ASD were assigned to a group consisting of one interviewee, one interviewer, and three evaluators, and the participants performed all roles in random order. The interviewer and interviewee were projected as virtual robots on the screen. The robots can show a range of simplified expressions that are less complex than those of a real human face, and the careful design of the eyes and the multiple degrees of freedom (DoFs) dedicated to controlling the field of vision contribute to rich gaze expressions. By performing not only the role of the interviewee but also the roles of the interviewer and evaluator, we expected that the participants could learn others' perspectives (i.e., the perspectives of the interviewer and evaluator). Thus, we considered our system likely to be effective for online job interview training for individuals with ASD. The present study was carried out to investigate the effectiveness of the GOT.

Materials and Methods

Participants

The present study was approved by the ethics committee of Kanazawa University. There was no conflict of interest involved in this study. Participants were recruited using flyers that explained the content of the experiment. After receiving a complete explanation of the study, all participants and their guardians agreed to participate. Written informed consent was obtained from the individuals and/or the minors' legal guardians for the publication of any potentially identifiable images or data included in this article, and all participants provided written informed consent. The inclusion criteria were as follows: (1) having a diagnosis of ASD based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) from the supervising study psychiatrist, (2) being male, (3) being aged 18–24 years, (4) being unemployed and actively seeking employment, and (5) not taking medication. We excluded female participants because we wanted to avoid fraternization among the participants; additionally, it is more difficult to recruit female participants because ASD is more prevalent in men than in women. At the time of enrollment, the diagnoses of all participants were confirmed by a psychiatrist with more than 15 years of experience in ASD using standardized criteria taken from the Diagnostic Interview for Social and Communication Disorders (DISCO) (26). The DISCO has been reported to have good psychometric properties (27). To exclude other psychiatric diagnoses, the Mini-International Neuropsychiatric Interview (MINI) (28) was administered.

All participants completed the Autism Spectrum Quotient-Japanese version (AQ-J) (29), which is used in the evaluation of ASD-specific behaviors and symptoms. The AQ-J is a short questionnaire with five subscales (social skills, attention switching, attention to detail, imagination, and communication). Previous work with the AQ-J has been replicated across cultures (30) and ages (31), and the AQ is sensitive to the broader autism phenotype (32). In this study, we did not set a cut-off according to the AQ-J score; we used only the DSM-5 and the DISCO to diagnose ASD and to judge whether to include participants in our study.

Full-scale IQ scores were obtained by either the Wechsler Intelligence Scale for Children-Fourth Edition or the Wechsler Adult Intelligence Scale-Third Edition.

The severity of social anxiety symptoms was measured using the Liebowitz Social Anxiety Scale (LSAS) (33). This clinician-administered scale consists of 24 items, including 13 items that describe performance situations and 11 items that describe social interaction situations. Each item is rated separately for “fear” and “avoidance” using a four-point categorical scale. According to receiver operating characteristic curve analyses, an LSAS score of 30 corresponds to minimal symptoms and is the best cutoff value for distinguishing individuals with and without social anxiety disorder (34).

The ADHD Rating Scale (ADHD-RS) (35) contains 18 items related to inattentive and hyperactive-impulsive symptoms, scored on a four-point scale (0 = never, 1 = sometimes, 2 = often, 3 = very often), and assesses symptom severity over the past week. The total score was computed as the sum of the scores of all 18 items.
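As a concrete illustration of how these two scales are totaled, the short Python sketch below sums hypothetical LSAS fear/avoidance ratings and ADHD-RS item scores and checks the LSAS cutoff of 30; the ratings and helper names are invented for illustration and are not study data.

```python
# Illustrative scoring sketch for the two symptom scales described above
# (hypothetical ratings, not study data).

LSAS_CUTOFF = 30  # suggested cutoff for distinguishing social anxiety disorder (34)

def lsas_total(fear, avoidance):
    """LSAS total: sum of fear and avoidance ratings (each 0-3) over 24 items."""
    assert len(fear) == len(avoidance) == 24
    return sum(fear) + sum(avoidance)

def adhd_rs_total(items):
    """ADHD-RS total: sum of the 18 item scores (each 0-3, past week)."""
    assert len(items) == 18 and all(0 <= s <= 3 for s in items)
    return sum(items)

# Example with made-up ratings for one participant
fear = [1, 0, 2] * 8                 # 24 fear ratings
avoidance = [0, 1, 1] * 8            # 24 avoidance ratings
adhd_items = [1, 0, 2, 1, 0, 1] * 3  # 18 ADHD-RS item scores

lsas = lsas_total(fear, avoidance)
print("LSAS:", lsas, "(above cutoff)" if lsas >= LSAS_CUTOFF else "(below cutoff)")
print("ADHD-RS:", adhd_rs_total(adhd_items))
```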

Apparatus

The virtual robot used in this study was a virtual version of a humanoid robot called CommU (36–38) (Figure 1; Vstone Co., Ltd.). CommU has 14 DoFs as follows: waist (2), left shoulder (2), right shoulder (2), neck (3), eyes (3), eyelids (1), and lips (1). The careful design of the eyes and the multiple DoFs dedicated to controlling gaze contribute to its rich gaze and facial expressions. The face of the CommU can show a range of simplified expressions that are less complex than those of a real human face, and its small and cute appearance is expected to help prevent fearfulness among individuals with ASD. In this experiment, two virtual robots were remotely teleoperated by two participants (i.e., the interviewee and the interviewer).
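To make the joint layout concrete, the sketch below encodes the 14 DoFs listed above as a simple configuration table and checks the total; the joint names, the interpretation of the three eye DoFs, and the gaze-command format are our own assumptions, intended only to show how a teleoperation client might address the gaze joints.

```python
# Hypothetical encoding of the virtual CommU's 14 degrees of freedom (DoFs)
# as listed in the text; joint names and the command format are illustrative.

COMMU_DOFS = {
    "waist": 2,
    "left_shoulder": 2,
    "right_shoulder": 2,
    "neck": 3,
    "eyes": 3,
    "eyelids": 1,
    "lips": 1,
}

assert sum(COMMU_DOFS.values()) == 14  # matches the 14 DoFs reported for CommU

def gaze_command(yaw_deg: float, pitch_deg: float, vergence_deg: float) -> dict:
    """Build an illustrative command for the three eye DoFs (assumed yaw, pitch, vergence)."""
    return {"joint_group": "eyes", "angles_deg": [yaw_deg, pitch_deg, vergence_deg]}

# e.g., direct the avatar's gaze slightly toward the interlocutor's face
print(gaze_command(5.0, -2.0, 0.5))
```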

Figure 1. Virtual version of the CommU.

On the screen of the interviewee's laptop computer, the avatar operated by the interviewee was shown from behind while the avatar operated by the interviewer was shown from the front, so that the two avatars faced each other. On the screens of the interviewer's and evaluators' laptop computers, the avatar operated by the interviewer was shown from behind while the avatar operated by the interviewee was shown from the front, again so that the avatars faced each other. The users' voices were captured by headsets connected to their laptop computers and transmitted to their interlocutor and the evaluators. The captured voices were also used to produce the lip movements of the avatars, and the arm movements of the avatars were designed to resemble human hand gestures during speech, so that the participants could feel as if the voice were produced by the speaker's avatar.
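The paper does not specify how the captured voice drives the avatars, so the following sketch illustrates one plausible mapping under our own assumptions: each audio frame is reduced to an RMS energy value, and that value gates a lip-opening parameter and a small speaking gesture. The frame length, threshold, and gain are hypothetical.

```python
import math

# Hypothetical mapping from captured voice energy to avatar motion,
# illustrating the voice-driven lip/arm animation described in the text.

FRAME_MS = 20            # assumed audio frame length
ENERGY_THRESHOLD = 0.02  # assumed voice-activity threshold (normalized RMS)

def frame_rms(samples):
    """Root-mean-square energy of one audio frame (samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def animate_frame(samples, t_ms):
    """Return illustrative lip-opening (0-1) and arm-swing angle for one frame."""
    energy = frame_rms(samples)
    speaking = energy > ENERGY_THRESHOLD
    lip_opening = min(1.0, energy * 20.0) if speaking else 0.0
    # small periodic arm swing only while the operator is speaking
    arm_angle_deg = 10.0 * math.sin(2 * math.pi * t_ms / 1000.0) if speaking else 0.0
    return lip_opening, arm_angle_deg

# Example: a silent frame vs. a (synthetic) voiced frame at 16 kHz
quiet = [0.0] * 320
voiced = [0.05 * math.sin(2 * math.pi * 200 * i / 16000) for i in range(320)]
print(animate_frame(quiet, 0), animate_frame(voiced, 20))
```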

Procedures

All participants took part in the GOT. In the GOT, five participants were grouped together and took the roles of interviewee, interviewer, and evaluator. Each session consisted of a first job interview session, a feedback session, and a second job interview session. A screen was placed in front of each participant (Figure 2). In the first and second job interview sessions, the interviewer and interviewee were projected as virtual robots on the screen and played their respective roles in a mock online job interview (Figure 3). The evaluators were not projected on the screen and observed the mock online job interview on their screens. The participants were placed in individual rooms and were presented with similar but different images: the interviewee was shown the image of the interviewer from the viewpoint of the interviewee, whereas the interviewer and evaluators were shown the image of the interviewee from the viewpoint of the interviewer.

Figure 2. Example of participants participating in a mock online job interview.

Figure 3. The scene of a mock online job interview by virtual robots.

In the first job interview session, the interviewees were initially given a document containing recruitment information from which they could select a job, including a data entry clerk, supermarket shelf stocker, custodian, restaurant kitchen assistant, nursing assistant, and paper delivery person. The interviewer asked questions based on prepared lists (see Supplementary Material) and the interviewee answered in a mock online job interview setting.

In the feedback session, no images of virtual robots were presented on the screen. The interviewer and evaluators rated the performance of the interviewee using a prepared scoring sheet that included items on appropriate word use, appropriate question responses, speaking calmly, being sincere, being enthusiastic, appropriate eye contact, natural facial expressions, gesturing naturally, appropriate speaking speed, responding with appropriate timing, and appropriate vocal volume, and then discussed their ratings with each other in the online setting. They then provided feedback to the interviewee based on their assessment.

Finally, in the second interview session, the interviewee took part in another mock online job interview using the virtual robots, as in the first setup. The approximate duration of each session was 50 min (i.e., 10 min for the first job interview session, 30 min for the feedback session, and 10 min for the second job interview session). The participants completed five sessions per week so that each participant performed every role. The GOT continued for 5 weeks (for a total of 25 sessions).
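The exact rotation procedure is not described beyond "random order," so the sketch below generates one hypothetical week of role assignments in which each of the five participants is interviewee exactly once, a different participant is interviewer, and the remaining three act as evaluators; the pairing rule (shifting the shuffled order by one) is our own assumption.

```python
import random

# Hypothetical weekly schedule generator for the GOT role rotation.
# Assumption: in each of the five weekly sessions one participant is the
# interviewee, one is the interviewer, and the other three are evaluators,
# so that everyone serves as interviewee (and interviewer) once per week.

def weekly_schedule(participants, seed=None):
    rng = random.Random(seed)
    interviewees = participants[:]
    rng.shuffle(interviewees)
    # rotate the shuffled order by one position to pick interviewers,
    # guaranteeing nobody interviews themselves
    interviewers = interviewees[1:] + interviewees[:1]
    schedule = []
    for session, (ee, er) in enumerate(zip(interviewees, interviewers), start=1):
        evaluators = [p for p in participants if p not in (ee, er)]
        schedule.append({"session": session, "interviewee": ee,
                         "interviewer": er, "evaluators": evaluators})
    return schedule

for s in weekly_schedule(["P1", "P2", "P3", "P4", "P5"], seed=0):
    print(s)
```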

Before and after the GOT, the participants underwent a mock online job interview with a professional human interviewer (MOH) to evaluate the effect of the GOT.

Subjective Evaluation

After the first MOH (i.e., the MOH before the GOT) and the second MOH (i.e., the MOH after the GOT), we asked the participants to complete a questionnaire about their self-confidence in their interview performance. Items were rated on a seven-point Likert scale (ranging from 1 = "not at all comfortable" to 7 = "very comfortable"). We also asked the participants to respond on a five-point Likert scale (1–5) to the question "How motivated are you to learn how to perform in an online job interview?", with responses ranging from 1 (I am not motivated to learn how to perform in online job interviews at all) to 5 (I am very motivated to learn how to perform in online job interviews). In addition, to assess the extent to which each participant in the position of the interviewee understood that their perspective differed from that of the interviewer or evaluator, we asked them to rate, on a scale of 1–5, the extent to which they understood the point of view of the interviewer, with 1 meaning "I cannot understand the interviewer's point of view" and 5 meaning "I can understand the interviewer's point of view perfectly."

Objective Evaluation

Two trainers independently rated the interview performance in the first and second MOH by watching video recordings of the sessions. The interviewees were scored using a seven-point Likert scale on verbal competence (i.e., appropriate word use and appropriate question responses), non-verbal competence (i.e., speaking calmly, being sincere, being enthusiastic, appropriate eye contact, natural facial expressions, gesturing naturally, appropriate speaking speed, responding with appropriate timing, and appropriate vocal volume), and interview performance (i.e., sharing things positively, sounding honest, sounding interested in the job, and establishing overall rapport). The ratings ranged from 1 (very poor) to 7 (excellent). Before the experiment, both raters underwent approximately 5 h of training on scoring the interviews by watching videos of interview scenes. Both raters were blinded to the filming date (i.e., first or second MOH). The primary and secondary raters attained a moderate degree of reliability [intraclass correlation coefficient (ICC) = 0.67] on the interview performance score, and the score used in this study was the average of the two raters' scores. After the intervention, we asked the participants' supporters (their trainers or job coaches) the following question: "Did the participants learn to understand the point of view of the interviewer after the intervention?"
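For readers who wish to reproduce the reliability calculation, the sketch below shows one common way to compute a consistency-type ICC for two raters, ICC(3,1) from a two-way ANOVA decomposition, and to average the two raters' scores; the ratings are made up, and the paper does not state which ICC form was used, so this is only an illustration.

```python
import numpy as np

# Illustrative ICC and rater-averaging sketch (made-up ratings, not study data).
# ratings: rows = interview videos (targets), columns = the two raters.
ratings = np.array([
    [4.0, 5.0],
    [5.0, 5.0],
    [3.0, 4.0],
    [6.0, 6.0],
    [4.0, 3.0],
    [5.0, 6.0],
])

def icc_consistency(x):
    """ICC(3,1): two-way mixed, consistency, single rater (Shrout & Fleiss)."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between targets
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

print("ICC(3,1):", round(icc_consistency(ratings), 2))
print("averaged scores:", ratings.mean(axis=1))  # per-video score = mean of the two raters
```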

Statistical Analysis

We performed the statistical analyses using SPSS version 24.0 (IBM, Armonk, NY, USA). Descriptive statistics of the sample were calculated. Differences in age, IQ, AQ-J, LSAS, and ADHD-RS scores between the groups were analyzed using an independent-samples t-test. A paired t-test was performed to evaluate the improvement in the subjective evaluations (i.e., self-confidence, motivation, and understanding the point of view of interviewers) and the objective evaluations (i.e., appropriate word use, appropriate question responses, speaking calmly, being sincere, being enthusiastic, appropriate eye contact, natural facial expressions, gesturing naturally, appropriate speaking speed, responding with appropriate timing, and appropriate vocal volume) between the first and second MOH. Pearson product-moment correlation coefficients were used to explore the relationships between the demographic data (i.e., IQ, AQ-J, LSAS, and ADHD-RS) and the improvement in the subjective and objective evaluations. We employed an alpha level of 0.05 for these analyses.
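Although the analyses were run in SPSS, the paired t-test and Pearson correlation can be reproduced with standard scientific Python routines, as in the sketch below; the pre/post scores and the demographic variable are simulated and purely illustrative.

```python
import numpy as np
from scipy import stats

# Illustrative re-creation of the analysis pipeline with simulated data
# (the actual analyses were performed in SPSS version 24.0).
rng = np.random.default_rng(0)

pre = rng.normal(4.0, 1.0, size=15)         # e.g., a rating at the first MOH
post = pre + rng.normal(0.6, 0.5, size=15)  # e.g., the same rating at the second MOH
iq = rng.normal(100, 15, size=15)           # e.g., a demographic variable

# Paired t-test comparing the first and second MOH (alpha = 0.05, as in the paper)
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

# Pearson correlation between a demographic variable and the improvement score
improvement = post - pre
r, p_r = stats.pearsonr(iq, improvement)
print(f"Pearson r = {r:.2f}, p = {p_r:.3f}")
```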

Results

Feasibility and Participation

In total, 15 individuals with ASD took part in the study (see Table 1 for participant details). All participants completed the trial procedures without technological challenges or participant distress that would lead to session termination. We carefully observed participant performance and confirmed that all participants were concentrating during the trials and highly motivated from the start to the finish of the experiment.

Table 1. Descriptive statistics of participants.

Primary Analyses

All participants fully participated in the GOT and seemed to be focused on and engaged in the mock online job interview training using robots.

There was a significant increase in self-confidence after the second MOH compared to after the first MOH (3.27 vs. 2.27, p = 0.002) and in the motivation to learn how to perform online job interviews after the second MOH compared to after the first MOH (3.20 vs. 1.87, p < 0.001). There was also a significant increase in the recognition of the importance of the point of view of interviewers and evaluators after the second MOH compared to after the first MOH (3.47 vs. 1.87, p < 0.001). Details are described in Table 2.

Table 2. Means and standard errors of the mean of subjective evaluations at the first and second MOH.

There was a significant increase between the first and second MOH in appropriate word use (4.53 vs. 5.37, p = 0.006), appropriate question response (4.20 vs. 5.07, p = 0.013), speaking calmly (4.20 vs. 4.70, p < 0.030), being sincere (4.67 vs. 5.47, p < 0.001), being enthusiastic (4.50 vs. 5.37, p < 0.001), natural facial expressions (4.07 vs. 4.43, p = 0.044), gesturing naturally (4.07 vs. 4.63, p = 0.035), appropriate speaking speed (3.70 vs. 4.30, p = 0.012), responding with appropriate timing (3.80 vs. 4.60, p = 0.001), appropriate vocal volume (4.40 vs. 5.10, p = 0.001), sharing things positively (4.70 vs. 5.20, p = 0.002), sounding honest (5.10 vs. 5.70, p < 0.001), sounding interested in the job (4.23 vs. 5.50, p = 0.002), and establishing overall rapport (4.17 vs. 4.87, p < 0.001). Details are described in Table 3.

Table 3. Means and standard errors of the mean of objective evaluations at the first and second MOH.

In the semi-structured interview, the participants' supporters reported that all of the participants seemed to have learned to better understand the point of view of the interviewer.

There were no significant relationships between the demographic data (i.e., IQ, AQ-J, LSAS, and ADHD-RS) and the improvement in either the subjective evaluations (i.e., self-confidence, motivation, and understanding the point of view of interviewers) or the objective evaluations (i.e., appropriate word use, appropriate question responses, speaking calmly, being sincere, being enthusiastic, appropriate eye contact, natural facial expressions, gesturing naturally, appropriate speaking speed, responding with appropriate timing, and appropriate vocal volume).

Discussion

In the current study, we assessed the efficacy of the GOT, a group-based online job interview training program using virtual robots. The completion rate suggests that participants who received the GOT were able to continue participating in the program without losing motivation. Using a VR robot and learning the importance of interview skills by experiencing other perspectives (i.e., the viewpoints of the interviewer and evaluator) may have sustained their motivation and enabled greater self-confidence. The GOT helped improve various online job interview skills (i.e., verbal competence, non-verbal competence, and interview performance). Interestingly, this occurred in the absence of specific interview skill training by experts (i.e., the improvement was based only on practice and feedback by the participants with ASD themselves).

In this program, acting as the interviewer using the VR CommU had many advantages compared to conversing face to face in an online setting. Specifically, when conversing face to face, sensory overstimulation from the human interviewer is a serious problem for individuals with ASD, and it interferes with the processing of social signals (39). Furthermore, the technology behind the VR CommU might increase the user's enthusiasm for and concentration on the program. In addition, acting as the interviewee facing the VR CommU may have the advantage of decreasing interpersonal anxiety and promoting intrinsic motivation for the mock online job interview.

In this study, participants performed not only as interviewees but also as interviewers and evaluators, which contributed to their understanding of the perspective of the evaluator. That is, our system may help alleviate the ToM deficit of individuals with ASD in online job interview scenarios. These mechanisms might have enriched the participants' understanding and motivation, led to improvements in their self-confidence and contributed to improvements in various job interview skills.

A previous study suggested that one possible strategy to improve eye contact is not to force individuals with ASD to look others in the eyes but to habituate them to looking into the eyes gradually (40). Yoshikawa et al. (41) reported that interventions using robots that habituate individuals with ASD to making eye contact with robots have the potential to improve eye contact in real interview situations. In this study, by contrast, the GOT did not improve appropriate eye contact. The study by Yoshikawa et al. (41) differs from ours in that it used a real robot, which is potentially a more powerful avenue for enhancing skills, whereas this study used a VR robot. In addition, Yoshikawa et al. (41) used an android robot, which is easier to generalize to real-world settings than simple robots (19, 38). Moreover, in this study no experts supervised the appropriate use of eye contact, which may also have prevented the GOT from improving it. Future work using a VR android robot, more realistic virtual systems, and expert supervision may further leverage the specific benefits of VR robots.

Several limitations of our study should be acknowledged. First, the number of participants was relatively small. In addition, all participants were male and of Japanese ethnicity, making the results less generalizable. In Japan, the type of work available differs according to gender; for women, the question "Are you able to carry heavy things?" would have been unsuitable. To undertake mock online job interview training for women, we would need to change parts of the standard script of the mock online job interview. Future studies involving larger samples that include female participants are needed to provide more meaningful data regarding the potential use of our system. Second, this was not a controlled study, as we did not have a sham training group for comparison with the online virtual robot training. At the time of this experiment, the Japanese government had declared a state of emergency due to the spread of COVID-19, so we could not ask participants to take part in control settings. Given that preparing for online job interviews is a pressing issue for individuals with ASD, we conducted this pilot study even without a control group; nevertheless, its results provide preliminary evidence supporting the utility of the GOT. Although the first and second interview questions differed in order to develop the theme more deeply, it is possible that the improvement between the two interviews resulted from personal growth rather than the intervention. In addition, it is difficult to ascertain whether the improvement was due to the particular characteristics of the virtual program (i.e., specialized eye movements, simple features) or simply to repeated rehearsal of interviews and observation of other participants. Therefore, a multiple-baseline design would be more appropriate in future studies. The ultimate goal of the program is to enhance communication skills in daily life and to make participants more competitive when searching for employment or volunteer positions. To implement our system, an employment support facility is required. Given that the cost of providing care for individuals with ASD is very high (42), supporting these individuals in securing competitive employment is of great economic importance. To examine whether our program can attain this goal, future studies involving employment support facilities with long-term longitudinal designs are needed. Finally, we could not investigate whether the effects of our program translate to actual real-world job interviews; future studies are needed to determine this.

This is the first study to evaluate the effect of mock online job interview training using VR robots. Our program improved self-confidence, motivation, the understanding of others' perspectives, verbal competence, non-verbal competence, and interview performance scores in individuals with ASD. Given the promising results of this study and to draw definitive conclusions regarding the efficacy of VR robots for mock online job interview training, further studies involving individuals with ASD in larger, more diverse samples using a longitudinal design are warranted.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

The studies involving human participants were reviewed and approved by the Ethics Committee of Kanazawa University. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individuals and/or minors' legal guardian for the publication of any potentially identifiable images or data included in this article.

Author Contributions

HK designed the study, conducted the experiment, carried out the statistical analyses, analyzed and interpreted the data, and drafted the manuscript. YY, TM, HH, HF, KS, YM, HI, TS, and MM conceived of the study, participated in its design, assisted with data collection and scoring of behavioral measures, analyzed and interpreted the data, were involved in drafting the manuscript, and revised it critically for important intellectual content. MM was involved in approving the final version to be published. All authors read and approved the final manuscript.

Funding

This work was supported in part by Grants-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (20K20857, 20H00101) and JST, Moonshot R&D Grant Number JPMJMS2011.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

We sincerely thank the participants and all the families who participated in this study.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyt.2021.704564/full#supplementary-material

References

1. Higgins KK, Koch LC, Boughfman EM, Vierstra C. School-to-work transition and Asperger Syndrome. Work. (2008) 31:291–8.

2. Strickland DC, Coles CD, Southern LB. JobTIPS: a transition to employment program for individuals with autism spectrum disorders. J Autism Dev Disord. (2013) 43:2472–83. doi: 10.1007/s10803-013-1800-4

3. Kumazaki H, Muramatsu T, Yoshikawa Y, Corbett BA, Matsumoto Y, Higashida H, et al. Job interview training targeting nonverbal communication using an android robot for individuals with autism spectrum disorder. Autism. (2019) 23:1586–95. doi: 10.1177/1362361319827134

4. Bradshaw J, Koegel LK, Koegel RL. Improving functional language and social motivation with a parent-mediated intervention for toddlers with autism spectrum disorder. J Autism Dev Disord. (2017) 47:2443–58. doi: 10.1007/s10803-017-3155-8

5. Poon KK, Watson LR, Baranek GT, Poe MD. To What extent do joint attention, imitation, and object play behaviors in infancy predict later communication and intellectual functioning in ASD? J Autism Dev Disord. (2011) 42:1064–74. doi: 10.1007/s10803-011-1349-z

6. Warren ZE, Zheng Z, Swanson AR, Bekele E, Zhang L, Crittendon JA, et al. Can robotic interaction improve joint attention skills? J Autism Dev Disord. (2015) 45:3726–34. doi: 10.1007/s10803-013-1918-4

7. Diehl JJ, Schmitt LM, Villano M, Crowell CR. The clinical use of robots for individuals with autism spectrum disorders: a critical review. Res Autism Spectr Disord. (2012) 6:249–62. doi: 10.1016/j.rasd.2011.05.006

8. Robins B, Dautenhahn K, Dubowski J. Does appearance matter in the interaction of children with autism with a humanoid robot? Interact Stud. (2006) 7:509–42. doi: 10.1075/is.7.3.16rob

9. Saadatzi MN, Pennington RC, Welch KC, Graham JH. Small-group technology-assisted instruction: virtual teacher and robot peer for individuals with autism spectrum disorder. J Autism Dev Disord. (2018) 48:3816–30. doi: 10.1007/s10803-018-3654-2

10. Kumazaki H, Muramatsu T, Kobayashi K, Watanabe T, Terada K, Higashida H, et al. Feasibility of autism-focused public speech training using a simple virtual audience for autism spectrum disorder. Psychiatry Clin Neurosci. (2019) 74:124–31. doi: 10.1111/pcn.12949

11. Milne M, Luerssen MH, Lewis TW, Leibbrandt RE, Powers DMW. Development of a virtual agent based social tutor for children with autism spectrum disorders. In: 2010 International Joint Conference on Neural Networks (IJCNN). Barcelona (2010). p. 18–23. doi: 10.1109/IJCNN.2010.5596584

12. Ali MR, Razavi SZ, Langevin R, Al Mamun A, Kane B, Rawassizadeh R, et al. A virtual conversational agent for teens with autism spectrum disorder. In: Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents. Vol. 2. Scotland (2020). p. 1–8. doi: 10.1145/3383652.3423900

13. Kandalaft MR, Didehbani N, Krawczyk DC, Allen TT, Chapman SB. Virtual reality social cognition training for young adults with high-functioning autism. J Autism Dev Disord. (2013) 43:34–44. doi: 10.1007/s10803-012-1544-6

14. Lahiri U, Bekele E, Dohrmann E, Warren Z, Sarkar N. A physiologically informed virtual reality based social communication system for individuals with autism. J Autism Dev Disord. (2015) 45:919–31. doi: 10.1007/s10803-014-2240-5

15. Smith MJ, Ginger EJ, Wright K, Wright MA, Taylor JL, Humm LB, et al. Virtual reality job interview training in adults with autism spectrum disorder. J Autism Dev Disord. (2014) 44:2450–63. doi: 10.1007/s10803-014-2113-y

16. Jarrold W, Mundy P, Gwaltney M, Bailenson J, Hatt N, McIntyre N, et al. Social attention in a virtual public speaking task in higher functioning children with autism. Autism Res. (2013) 6:393–410. doi: 10.1002/aur.1302

17. Costa S, Lehmann H, Dautenhahn K, Robins B, Soares F. Using a humanoid robot to elicit body awareness and appropriate physical interaction in children with autism. Int J Soc Robot. (2014) 7:265–78. doi: 10.1007/s12369-014-0250-2

18. Iacono I, Lehmann H, Marti P, Robins B, Dautenhahn K. Robots as social mediators for children with autism – a preliminary analysis comparing two different robotic platforms. In: 2011 IEEE International Conference on Development and Learning (ICDL). Vol. 11. Frankfurt am Main (2011). p. 24–7. doi: 10.1109/DEVLRN.2011.6037322

19. Ricks DJ, Colton MB. Trends and considerations in robot-assisted autism therapy. In: 2010 IEEE International Conference on Robotics and Automation. Anchorage, AK (2010). p. 4354–9. doi: 10.1109/ROBOT.2010.5509327

20. Robins B, Dautenhahn K, Dickerson P. From isolation to communication: a case study evaluation of robot assisted play for children with autism with a minimally expressive humanoid robot. In: 2009 Second International Conferences on Advances in Computer-Human Interactions. Cancun (2009) p. 205–11. doi: 10.1109/ACHI.2009.32

21. Pelphrey KA, Sasson NJ, Reznick JS, Paul G, Goldman BD, Piven J. Visual scanning of faces in autism. J Autism Dev Disord. (2002) 32:249–61. doi: 10.1023/A:1016374617369

22. Cook JL, Rapp JT, Mann KR, McHugh C, Burji C, Nuta R, et al. Practitioner model for increasing eye contact in children with autism. Behav Modif. (2017) 41:382–404. doi: 10.1177/0145445516689323

23. Ninci J, Lang R, Davenport K, Lee A, Garner J, Moore M, et al. An analysis of the generalization and maintenance of eye contact taught during play. Dev Neurorehabil. (2013) 16:301–7. doi: 10.3109/17518423.2012.730557

24. Baron-Cohen S, Tager-Flusberg H, Cohen DJ. Understanding Other Minds: Perspectives from Developmental Cognitive Neuroscience. Oxford: Oxford University Press (2000).

25. Baron-Cohen S, Leslie AM, Frith U. Does the autistic child have a “theory of mind”? Cognition. (1985) 21:37–46. doi: 10.1016/0010-0277(85)90022-8

26. Leekam SR, Libby SJ, Wing L, Gould J, Taylor C. The diagnostic interview for social and communication disorders: algorithms for ICD-10 childhood autism and Wing and Gould autistic spectrum disorder. J Child Psychol Psychiatry. (2002) 43:327–42. doi: 10.1111/1469-7610.00024

27. Wing L, Leekam SR, Libby SJ, Gould J, Larcombe M. The diagnostic interview for social and communication disorders: background, inter-rater reliability and clinical use. J Child Psychol Psychiatry. (2002) 43:307–25. doi: 10.1111/1469-7610.00023

28. Lecrubier Y, Sheehan DV, Weiller E, Amorim P, Bonora I, Sheehan KH, et al. The Mini International Neuropsychiatric Interview (MINI). A short diagnostic structured interview: reliability and validity according to the CIDI. Eur Psychiatry. (1997) 12:224–31. doi: 10.1016/S0924-9338(97)83296-8

29. Wakabayashi A, Tojo Y, Baron-Cohen S, Wheelwright S. [The Autism-Spectrum Quotient (AQ) Japanese version: evidence from high-functioning clinical group and normal adults]. Shinrigaku Kenkyu. (2004) 75:78–84. doi: 10.4992/jjpsy.75.78

30. Wakabayashi A, Baron-Cohen S, Uchiyama T, Yoshida Y, Tojo Y, Kuroda M, et al. The autism-spectrum quotient (AQ) children's version in Japan: a cross-cultural comparison. J Autism Dev Disord. (2007) 37:491–500. doi: 10.1007/s10803-006-0181-3

31. Auyeung B, Baron-Cohen S, Wheelwright S, Allison C. The autism spectrum quotient: children's version (AQ-Child). J Autism Dev Disord. (2008) 38:1230–40. doi: 10.1007/s10803-007-0504-z

32. Wheelwright S, Auyeung B, Allison C, Baron-Cohen S. Defining the broader, medium and narrow autism phenotype among parents using the Autism Spectrum Quotient (AQ). Mol Autism. (2010) 1:10. doi: 10.1186/2040-2392-1-10

33. Liebowitz MR. Social phobia. Mod Probl Pharmacopsychiatry. (1987) 22:141–73. doi: 10.1159/000414022

34. Mennin DS, Fresco DM, Heimberg RG, Schneier FR, Davies SO, Liebowitz MR. Screening for social anxiety disorder in the clinical setting: using the Liebowitz Social Anxiety Scale. J Anxiety Disord. (2002) 16:661–73. doi: 10.1016/S0887-6185(02)00134-2

35. DuPaul GJ, Reid R, Anastopoulos AD, Lambert MC, Watkins MW, Power TJ. Parent and teacher ratings of attention-deficit/hyperactivity disorder symptoms: factor structure and normative data. Psychol Assess. (2016) 28:214–25. doi: 10.1037/pas0000166

36. Kumazaki H, Yoshikawa Y, Yoshimura Y, Ikeda T, Hasegawa C, Saito DN, et al. The impact of robotic intervention on joint attention in children with autism spectrum disorders. Mol Autism. (2018) 9:1. doi: 10.1186/s13229-018-0230-8

37. Shimaya J, Yoshikawa Y, Matsumoto Y, Kumazaki H, Ishiguro H, Mimura M, et al. Advantages of indirect conversation via a desktop humanoid robot: case study on daily life guidance for adolescents with autism spectrum disorders. In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). New York, NY (2016). p. 831–6. doi: 10.1109/ROMAN.2016.7745215

38. Kumazaki H, Muramatsu T, Yoshikawa Y, Matsumoto Y, Ishiguro H, Kikuchi M, et al. Optimal robot for intervention for individuals with autism spectrum disorders. Psychiatry Clin Neurosci. (2020) 74:581–6. doi: 10.1111/pcn.13132

39. Hurtig T, Kuusikko S, Mattila M-L, Haapsamo H, Ebeling H, Jussila K, et al. Multi-informant reports of psychiatric symptoms among high-functioning adolescents with Asperger syndrome or autism. Autism. (2009) 13:583–98. doi: 10.1177/1362361309335719

40. Hadjikhani N, Åsberg Johnels J, Zürcher NR, Lassalle A, Guillon Q, Hippolyte L, et al. Look me in the eyes: constraining gaze in the eye-region provokes abnormally high subcortical activation in autism. Sci Rep. (2017) 7:1. doi: 10.1038/s41598-017-03378-5

41. Yoshikawa Y, Kumazaki H, Matsumoto Y, Miyao M, Kikuchi M, Ishiguro H. Relaxing gaze aversion of adolescents with autism spectrum disorder in consecutive conversations with human and android robot—a preliminary study. Front Psychiatry. (2019) 10:370. doi: 10.3389/fpsyt.2019.00370

42. Buescher AVS, Cidav Z, Knapp M, Mandell DS. Costs of autism spectrum disorders in the United Kingdom and the United States. JAMA Pediatr. (2014) 168:8. doi: 10.1001/jamapediatrics.2014.210

Keywords: autism spectrum disorders, COVID-19, online job interview, virtual robot, other's perspective

Citation: Kumazaki H, Yoshikawa Y, Muramatsu T, Haraguchi H, Fujisato H, Sakai K, Matsumoto Y, Ishiguro H, Sumiyoshi T and Mimura M (2022) Group-Based Online Job Interview Training Program Using Virtual Robot for Individuals With Autism Spectrum Disorders. Front. Psychiatry 12:704564. doi: 10.3389/fpsyt.2021.704564

Received: 03 May 2021; Accepted: 29 December 2021;
Published: 24 January 2022.

Edited by:

Jeffrey C. Glennon, University College Dublin, Ireland

Reviewed by:

Yurgos Politis, Central European University, Austria
Darren William Roddy, Trinity College Dublin, Ireland

Copyright © 2022 Kumazaki, Yoshikawa, Muramatsu, Haraguchi, Fujisato, Sakai, Matsumoto, Ishiguro, Sumiyoshi and Mimura. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Hirokazu Kumazaki, kumazaki@tiara.ocn.ne.jp
