
Original Research Article

Front. Neurorobot. | doi: 10.3389/fnbot.2021.598895

Speech Driven Gaze in a Face-to-Face Interaction

Provisionally accepted. The final, formatted version of the article will be published soon.

  • 1Middle East Technical University, Turkey
  • 2University of Cambridge, United Kingdom

Gaze and language are major pillars of multimodal communication. Gaze is a non-verbal mechanism that conveys crucial social signals in face-to-face conversation. However, compared to language, gaze has been less studied as a communication modality. The purpose of the present study is twofold: (i) to investigate gaze direction (i.e., aversion and face gaze) and its relation to speech in face-to-face interaction, and (ii) to propose a computational model for multimodal communication that predicts gaze direction from high-level speech features. Twenty-eight pairs of participants took part in data collection. The experimental setting was a mock job interview, and eye movements were recorded for both participants. The speech data were annotated according to the ISO 24617-2 Standard for Dialogue Act Annotation, as well as with manual tags based on previous social gaze studies. A comparative analysis was conducted with Convolutional Neural Network (CNN) models employing two specific architectures, VGGNet and ResNet. The results showed that the frequency and duration of gaze differ significantly depending on the role of the participant. Moreover, the ResNet models achieved higher than 70% accuracy in predicting gaze direction.
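The abstract does not give implementation details, so the following is only an illustrative sketch, not the authors' model: a minimal ResNet-style 1-D CNN in PyTorch that maps a window of high-level speech features (e.g., encoded dialogue-act tags over time) to a binary gaze-direction label (face gaze vs. aversion). The feature dimension, window length, and the names GazeFromSpeechNet and ResidualBlock1d are assumptions made for this example.

# Hypothetical sketch, not the published model: ResNet-style speech-to-gaze classifier.
import torch
import torch.nn as nn

class ResidualBlock1d(nn.Module):
    # Two 1-D convolutions with a skip connection, the core ResNet idea.
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual (skip) connection

class GazeFromSpeechNet(nn.Module):
    # Predicts gaze direction (aversion vs. face gaze) from a speech-feature sequence.
    def __init__(self, n_features: int = 32, n_classes: int = 2):
        super().__init__()
        self.stem = nn.Conv1d(n_features, 64, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(ResidualBlock1d(64), ResidualBlock1d(64))
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):             # x: (batch, n_features, time_steps)
        h = self.blocks(self.stem(x))
        h = self.pool(h).squeeze(-1)  # (batch, 64)
        return self.head(h)           # gaze-direction logits

# Usage example: a batch of 8 windows, 32 speech features, 50 time steps.
model = GazeFromSpeechNet()
logits = model(torch.randn(8, 32, 50))
print(logits.shape)  # torch.Size([8, 2])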

Keywords: face-to-face interaction, gaze analysis, deep learning, speech annotation, multimodal communication

Received: 25 Aug 2020; Accepted: 25 Jan 2021.

Copyright: © 2021 Arslan Aydın, Kalkan and Acarturk. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Cengiz Acarturk, Middle East Technical University, Ankara, Turkey, acarturk@metu.edu.tr