Event Abstract

Classification of Task Type and Reaction Time of Operator in Simulated Multiple Robot Tele-Exploration

  • 1 University at Buffalo, United States

Motivation and Objective: Tele-operated robots are widely used in a variety of applications, including space exploration, underwater operations, surgical robotics, and mining. These systems generally demand a high level of perception and decision-making from the operator. As the number of remote agents interacting with a human operator increases, the operator's performance can degrade, reducing the overall efficiency of the team; operator performance is therefore a key aspect of tele-operation (Lathan & Tracey, 2002; Oboe & Fiorini, 1997). Mental workload is the cognitive factor most commonly used to predict performance (Berka et al., 2007). However, workload estimation via physiological measures (such as eye tracking and electroencephalography, EEG) has been shown to be task dependent (Bailey & Iqbal, 2008; Iqbal, Adamczyk, Zheng, & Bailey, 2005). Hence, to estimate operator performance properly, it is necessary to identify the task type and the associated workload (visual, motor, auditory, etc.). This study therefore investigates the classification of the operator's task type and reaction time in a series of tele-operation tasks, using brain activity and eye movements as features. The operator's task type is classified into three categories: Visual Search (VS), Gross Motor Control (GMC), and Fine Motor Control (FMC). Using multi-modal physiological features, visual search and motor control tasks can be distinguished with 90.21% accuracy, and gross from fine motor control with 70.3% accuracy. We also classify the reaction time of operators in target detection, which has been shown to be an indirect measure of performance and workload (Makishita & Matsunaga, 2008). Our method yields a correct classification rate of 68.4% in distinguishing slow reaction times from fast ones. Moreover, we demonstrate that this rate can be improved when the individual differences of subjects are taken into account.

Method: The experimental study was conducted in a human-in-the-loop simulation environment developed in V-REP®, in which the operator had to identify targets using camera feedback from two drones. A GUI developed in MATLAB allowed operators to interact with the drones through a 4-axis joystick, video streams switchable between the front and top view of each drone, and a rough map of the environment. The experiment consisted of three task types. The first mission required identifying geometrical objects by visually scanning the drones' video streams while both drones autonomously tracked a predefined trajectory. In the second mission, the first drone was switched to manual control while the second drone flew autonomously; operators were asked to maneuver the first drone along a provided path and simultaneously detect targets. In the first stage, the drones flew outdoors without any obstacles, so this stage captured gross motor control (GMC) combined with visual scanning. In the second stage, the operators were required to explore the inside of abandoned buildings with the first drone, demanding fine motor control (FMC). The second mission therefore captured two levels of motor-control difficulty, GMC and FMC, alongside visual search. After IRB approval, 22 subjects (6 females and 16 males) aged 23 to 37 years (Mean = 26.8, SD = 3.7) participated in the study. During the experiment, the brain activity of each subject was recorded via 9 channels of a wireless brain-computer interface (B-Alert X10), and an Eye Tribe eye tracker recorded eye movements at a sampling rate of 30 Hz. Details of the experimental study and data recording can be found in (Memar & Esfahani, 2018). The physiological measures extracted from the eye tracker and the brain activity are listed in Table 1.
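Table 1 is not reproduced in this text-only version. As an illustration only, the sketch below shows how per-epoch features of the general kind described (EEG band powers per channel plus pupil size) could be computed for the 5-s windows around each target mentioned in the Results. It is not the authors' code: the band definitions, the 256 Hz EEG sampling rate, and the centering of the window on target onset are assumptions.

```python
import numpy as np
from scipy.signal import welch

EEG_FS = 256          # assumed B-Alert X10 EEG sampling rate; check device config
EYE_FS = 30           # Eye Tribe sampling rate, as stated in the abstract
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands

def eeg_band_powers(epoch):
    """Mean band power per EEG channel for a (channels x samples) epoch."""
    freqs, psd = welch(epoch, fs=EEG_FS, nperseg=EEG_FS, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))   # one value per channel
    return np.concatenate(feats)                   # 9 channels x 3 bands = 27 features

def pupil_feature(pupil):
    """Mean pupil size (PS) over the epoch, ignoring dropped samples."""
    return np.array([np.nanmean(pupil)])

def extract_epoch(eeg, pupil, onset_s, win_s=5.0):
    """Cut a 5-s window around a target onset (centering assumed) and
    build one multi-modal feature vector."""
    half = win_s / 2
    e0, e1 = int((onset_s - half) * EEG_FS), int((onset_s + half) * EEG_FS)
    p0, p1 = int((onset_s - half) * EYE_FS), int((onset_s + half) * EYE_FS)
    return np.concatenate([eeg_band_powers(eeg[:, e0:e1]),
                           pupil_feature(pupil[p0:p1])])
```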
Results and Discussion: Epochs of 5 s were extracted around each presented target and labeled according to the task type. Combining the data of all subjects, a linear support vector machine was trained and evaluated with five-fold cross-validation over different feature sets. A high accuracy of 90% was achieved in classifying VS versus MC (Figure 1a) using eye and brain features in combination with pupil size (PS). PS is thus shown to be a significant indicator of visual workload that distinguishes VS from MC tasks. When combined with PS, EEG features improved classification accuracy more than the remaining eye features did, suggesting that brain features provide a source of information complementary to the visual workload captured by PS. In separating gross from fine motor control, an average accuracy of 70% was achieved using all features. For reaction-time classification, epochs in the fastest quartile were labeled 'fast' and those in the slowest quartile 'slow'; the two middle quartiles were excluded. Classification with eye features alone yielded 50% accuracy, close to chance, indicating that the overt aspects of attention captured by eye features are not indicative of reaction time. The accuracy is slightly higher when all features are used (Figure 1b). The effect of a subject's level of expertise on reaction-time classification was also studied; experts and novices were identified using two metrics, detection rate (above 75%) and false-alarm rate (below 35%, Figure 2a). Here as well, eye features had a negative effect on the classification rate, whereas brain features (capturing covert aspects of attention) had a strong positive effect (Figure 2b).
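The following is a minimal sketch, not the authors' code, of the analysis described above: quartile-based labeling of reaction times and a linear SVM evaluated with five-fold cross-validation. scikit-learn is assumed, and the feature-standardization step is an added assumption rather than a detail from the abstract.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def rt_labels(rt):
    """Label fastest-quartile epochs 'fast' (1) and slowest-quartile 'slow' (0);
    return labels plus a mask that excludes the two middle quartiles."""
    q1, q3 = np.percentile(rt, [25, 75])
    keep = (rt <= q1) | (rt >= q3)
    y = (rt <= q1).astype(int)          # 1 = fast reaction, 0 = slow reaction
    return y[keep], keep

def classify(X, y, folds=5):
    """Linear SVM with five-fold cross-validation, as in the abstract;
    scaling is an assumed preprocessing step."""
    clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
    return cross_val_score(clf, X, y, cv=folds).mean()

# Hypothetical usage, with X as an (n_epochs x n_features) matrix:
# acc_task = classify(X, task_labels)       # e.g., VS vs. MC
# y_rt, keep = rt_labels(reaction_times)
# acc_rt = classify(X[keep], y_rt)          # fast vs. slow reaction time
```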

Figure 1
Figure 2
Figure 3

References

Bailey, B. P., & Iqbal, S. T. (2008). Understanding changes in mental workload during execution of goal-directed tasks and its application for interruption management. ACM Transactions on Computer-Human Interaction (TOCHI), 14(4), 21.

Berka, C., Levendowski, D. J., Lumicao, M. N., Yau, A., Davis, G., Zivkovic, V. T., … Craven, P. L. (2007). EEG correlates of task engagement and mental workload in vigilance, learning, and memory tasks. Aviation, Space, and Environmental Medicine, 78(5), B231–B244.

Iqbal, S. T., Adamczyk, P. D., Zheng, X. S., & Bailey, B. P. (2005). Towards an index of opportunity: understanding changes in mental workload during task execution. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 311–320).

Lathan, C. E., & Tracey, M. (2002). The effects of operator spatial perception and sensory feedback on human-robot teleoperation performance. Presence: Teleoperators and Virtual Environments, 11(4), 368–377.

Makishita, H., & Matsunaga, K. (2008). Differences of drivers’ reaction times according to age and mental workload. Accident Analysis & Prevention, 40(2), 567–575.

Memar, A. H., & Esfahani, E. T. (2018). Physiological Measures for Human Performance Analysis in Human-Robot Teamwork: Case of Tele-Exploration. IEEE Access, 6, 3694–3705. http://doi.org/10.1109/ACCESS.2018.2790838

Oboe, R., & Fiorini, P. (1997). Issues on Internet-based teleoperation. IFAC Proceedings Volumes, 30(20), 591–597.

Keywords: human performance, brain activity, eye movements, reaction time, teleoperation

Conference: 2nd International Neuroergonomics Conference, Philadelphia, PA, United States, 27 Jun - 29 Jun, 2018.

Presentation Type: Oral Presentation

Topic: Neuroergonomics

Citation: Manjunatha H, Memar A and Esfahani E (2019). Classification of Task Type and Reaction Time of Operator in Simulated Multiple Robot Tele-Exploration. Conference Abstract: 2nd International Neuroergonomics Conference. doi: 10.3389/conf.fnhum.2018.227.00015

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 11 Apr 2018; Published Online: 27 Sep 2019.

* Correspondence: Dr. Ehsan Esfahani, University at Buffalo, Buffalo, United States, ehsanesf@buffalo.edu