Edited by: Loredana Zollo, Campus Bio-Medico University, Italy
Reviewed by: Baojun Chen, Sant’Anna School of Advanced Studies, Italy; Jose De Jesus Rubio, National Polytechnic Institute (Mexico), Mexico
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Recent developments in non-muscular human–robot interfaces (HRIs) and shared control strategies have shown potential for controlling assistive robotic arms by people with no residual movement or muscular activity in the upper limbs. However, most non-muscular HRIs only produce discrete-valued commands, resulting in non-intuitive and less effective control of dexterous assistive robotic arms. Furthermore, in the shared control strategies of such applications, control usually switches between the user commands and the robot autonomy commands; previous user studies have found that this characteristic yields a reduced sense of agency as well as frustration for the user. In this study, we first propose an intuitive and easy-to-learn-and-use hybrid HRI combining a brain–machine interface (BMI) and a gaze-tracking interface. With the proposed hybrid gaze-BMI, continuous modulation of the movement speed via the motor intention occurs seamlessly and simultaneously with unconstrained movement direction control via the gaze signals. We then propose a shared control paradigm that combines the user input and the robot autonomy at all times, with the combination dynamically regulated. The proposed hybrid gaze-BMI and shared control paradigm were validated in a robotic arm reaching task performed by healthy subjects. All users were able to employ the hybrid gaze-BMI to move the end-effector sequentially to reach the target across the horizontal plane while avoiding collisions with obstacles. The shared control paradigm maintained as much volitional control as possible while providing assistance for the most difficult parts of the task. The presented semi-autonomous robotic system yielded continuous, smooth, and collision-free motion trajectories for the end-effector approaching the target. Compared to a system without assistance from the robot autonomy, it significantly reduced the rate of failure as well as the time and effort spent by the user to complete the tasks.
Assistive robotic systems have demonstrated high potential in enabling people with upper limb physical disabilities, such as traumatic spinal cord injuries (SCI), amyotrophic lateral sclerosis (ALS), and tetraplegic patients, to achieve greater independence and thereby increase quality of life (
To provide an HRI for individuals with severe upper extremity impairment, brain signals and gaze signals have been extensively explored through brain–machine interfaces (BMIs) and gaze trackers, respectively. With the advent of invasive BMI technology, invasively recorded brain signals have facilitated successful manipulation of dexterous robotic arms (
Different EEG paradigm-based BMIs have been employed to control the dexterous robotic arm. Since a sufficient number of discrete user commands can, in theory, be inferred with the steady-state visual evoked potential (SSVEP)-based BMI or the P300-based BMI, many studies have utilized such BMIs to control the robotic arm (
We envision continuous-valued velocity control signals to be advantageous for controlling the dexterous assistive robotic arm, as a user could intuitively exert volitional control over the end-effector's velocity, resulting in relatively smooth changes in position over time. Though recent studies show that the continuous-valued velocity of the upper limb can be decoded from EEG signals with regression models via an MI paradigm (
Thereby, intuitive and easy-to-use interfaces that produce continuous-valued outputs while demanding less training are strongly desired. The gaze-tracking system may shed some light on building such an interface. In fact, gaze constitutes an intuitive input for continuous-valued positions in 2D control tasks (e.g., moving a cursor freely on a computer screen) without extensive training. However, one of the main limitations of gaze tracking is that the input may be free of intent: the user may stare at a location on the computer screen without intending to select it. To this end, the hybrid gaze-BMI has been proposed to predict the web user's click intention (
Along with the efforts of current studies to design intuitive and easy-to-use interfaces for motor-impaired people to interact with assistive robots, endeavors have also been made toward devising human–robot coordination strategies catering to specific applications, such as robotic arms and wheelchairs. In general, the noisy, non-stationary, and low-dimensional characteristics of the control signals hinder current interfaces from reliably issuing commands in real time for applications that require high precision and safety. To increase usability and reduce the cognitive burden on the user, shared control strategies have been commonly adopted by adding autonomous supportive behavior to the system. According to the exact specification of how control is shared between the user and the autonomy, the existing interface-based shared control paradigms for human–robot interaction can be generally divided into two lines.
One line of the paradigms triggers a fully pre-specified autonomous takeover when a specific mental state, e.g., the motor imagery (MI) state or a response state to a visual stimulus, is detected by BMI (
Another line of paradigms enables users to have more control with high-level user commands (e.g., movement directions of the end-effector or the wheelchair, etc.), while fully relying on the autonomy to generate the precise and safe low-level reactive robot motions (e.g., target approaching, collision avoidance, etc.). Researchers have exploited EEG signals for indoor navigation for a telepresence robot (
In this work, we present a semi-autonomous assistive robotic system that could potentially be used by severely motor-impaired people for deliberate tasks involving reaching a target and avoiding obstacles. With this system, the user constantly utilizes his/her gaze and EEG signals to freely and intuitively direct the movement of the robotic limb end-effector while receiving dynamic assistance from the robot autonomy. Our contribution is twofold. (1) In addition to the discrete target selection mode of the previous hybrid gaze-BMI, we extend such hybrid interfaces toward a new mode for asynchronously providing continuous-valued velocity commands, by which the user retains continuous motion control of the end-effector. The proposed new mode constitutes an intuitive and easy-to-learn-and-use input, where the continuous modulation of the movement speed via the motor intention is seamlessly simultaneous with the unconstrained movement direction control via the gaze signals. (2) In contrast to previous shared control strategies for non-invasively driven assistive robots, where the control authority switches discretely between the user and the autonomy, our shared control paradigm combines the user input and the autonomy at all times, with the combination dynamically regulated; this is made possible by the continuous-valued velocity control via the new HRI. The paradigm is devised in this manner to maintain as much volitional control as possible while providing assistance for the most difficult parts of the task. Although the idea of shared control is not new, the present study is, to our knowledge, the first application of shared control to an assistive robotic arm driven by a non-invasive hybrid gaze-BMI producing continuous-valued velocity commands. The experiments were performed by a number of able-bodied volunteers, and the results show that the new HRI-driven semi-autonomous assistive robotic system allows for a continuous, smooth, and collision-free motion trajectory for the end-effector approaching the target, significantly reducing the rate of failure as well as the time and effort spent by the user to complete the tasks.
The experimental setup used in this study is depicted in
The overview of the experimental setup.
Specifically, the reach-and-grasp task was divided into three stages. In stage 1, the user specified his/her intended target for the assistive robotic system, using the hybrid gaze-BMI operating in a discrete selection mode (refer to sub-section "Two operation modes of the hybrid gaze-BMI control"). Upon observing the virtual rectangle appearing around the target, the user knew that the position of the target had been successfully communicated to the assistive robotic system. Subsequently, the system automatically switched the hybrid gaze-BMI into a continuous-velocity control mode (refer to sub-section "Two operation modes of the hybrid gaze-BMI control"). In stage 2, the user employed the hybrid gaze-BMI to move the end-effector sequentially to reach the target across the horizontal plane parallel to the table while avoiding collisions with obstacles. Once the end-effector entered a pre-specified zone right above the target object (defined by a virtual cylindrical region centered above the target with a radius of 5 mm), it was forced to halt and hover over the target. In stage 3, the system executed a pre-programmed procedure, i.e., the end-effector moved down, adjusted its gripper orientation according to the orientation of the target in the workspace, and grasped the object.
The design of the first two sequential stages took advantage of the natural visuomotor coordination behavior of human beings. Specifically, when a human decides to pick up an object, he/she usually first looks at the object and then performs the hand reaching under visual guidance. Moreover, following the suggestion in
The block diagram of the proposed semi-autonomous robotic system. The dotted arrows denote the information flows only for Stage 1, and the dot-dashed arrow represents the information flow only for Stage 3.
Hybrid Gaze-BMI, which combines gaze tracking and BMI. It firstly operates in a discrete selection mode for inputting the user’s intended target location in stage 1, and is then automatically switched to operate in a continuous-velocity control mode for inputting the user’s velocity commands to move the robotic arm end-effector horizontally toward the target in stage 2;
Camera and Graphical User Interface (GUI), which provide the live scene of the robotic arm workspace for the normal and enhanced visual feedback, as well as the coordinate transformation from the camera coordinate system to the robot coordinate system, for all three stages. The Computer Vision module implements the object segmentation and the object orientation identification for the target in stage 1 and stage 3, respectively;
Shared Controller, which fuses the user commands from the hybrid gaze-BMI and the robot autonomy commands to form a new one, for directing the end-effector toward the target horizontally while avoiding obstacles in stage 2;
Actuated System and Control, where the resulting end-effector commands are converted into reaching and grasping motions with a 5-DoF robotic arm.
The details about the individual modules of the system and the flow of information between them are described below.
For gaze tracking, a consumer-level desktop eye tracker, EyeX (Tobii AB Inc., Sweden), was employed. It did not require continuous recalibration and allowed moderate head movements. The eye tracker was mounted at the bottom of the host PC monitor; it detected the user's pupils and projected them onto the screen, i.e., the outputs of the eye-tracker sensor system were the user's gaze locations on the screen. The raw gaze data were transmitted to the computer via USB 3.0 at a sampling rate of 60 Hz. Since human eyes naturally make many involuntary movements, including rolling, microsaccades, and blinking, the gaze signals acquired from the EyeX system were smoothed. Specifically, a 10-point moving-average filter was utilized to cancel out minor gaze fluctuations while leaving the response to fast movements as unchanged as possible. The filtered gaze points were then fed to the shared control script every 30 ms.
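As a rough illustration of this filtering step (a minimal sketch, not the authors' implementation; the class and variable names are ours), a 10-point moving average over the 60 Hz gaze stream can be written as:

```python
# Minimal sketch of the 10-point moving-average gaze filter described above.
# Assumes raw gaze samples arrive at 60 Hz as (x, y) screen coordinates.
from collections import deque

class GazeSmoother:
    def __init__(self, window: int = 10):
        self.buffer = deque(maxlen=window)  # keeps only the last `window` samples

    def update(self, x: float, y: float) -> tuple[float, float]:
        """Add a raw gaze sample and return the moving-average estimate."""
        self.buffer.append((x, y))
        n = len(self.buffer)
        mx = sum(p[0] for p in self.buffer) / n
        my = sum(p[1] for p in self.buffer) / n
        return mx, my

# Example: feed raw 60 Hz samples; forward the smoothed point every 30 ms.
smoother = GazeSmoother(window=10)
smoothed = smoother.update(512.3, 388.7)
```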
Given the final goal of developing affordable and usable assistive technology, a low-cost commercial EEG acquisition headset, Emotiv EPOC+ (Emotiv Systems Inc., United States), was used to record the EEG signals. This device provides 14 EEG channels (AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4) placed according to the 10–20 system. The EEG signals were communicated to the host PC via Bluetooth at a sampling rate of 128 Hz.
In this study, we used the OpenViBE toolbox for the offline calibration of a 2-class BMI classification model and for the online detection of the MI state. During the offline calibration phase, the EEG signals for the motor imagery state and the rest state were recorded. Afterward, the segmented signals were bandpass-filtered between 8 and 30 Hz with a 5th-order Butterworth temporal filter. Subsequently, the spatial filtering method commonly adopted for feature extraction in MI-based BMIs, i.e., the common spatial pattern (CSP), was applied to the signals to find the projection directions that maximize the variance for one class while minimizing it for the other. The logarithms of the normalized power of the spatially projected signals were ultimately employed as the input features of a Bayesian linear discriminant analysis (LDA) classifier. The Bayesian LDA classifier assumed that the training data of the two classes follow multivariate normal distributions with different mean vectors but the same covariance matrix, estimated with the equations below:
$$p(\mathbf{f}\mid c_i)=\frac{1}{(2\pi)^{d/2}\lvert\boldsymbol{\Sigma}\rvert^{1/2}}\exp\!\Big(-\tfrac{1}{2}(\mathbf{f}-\boldsymbol{\mu}_i)^{\top}\boldsymbol{\Sigma}^{-1}(\mathbf{f}-\boldsymbol{\mu}_i)\Big),\qquad \boldsymbol{\mu}_i=\frac{1}{N_i}\sum_{\mathbf{f}\in c_i}\mathbf{f},\qquad \boldsymbol{\Sigma}=\frac{1}{N}\sum_{i=1}^{2}\sum_{\mathbf{f}\in c_i}(\mathbf{f}-\boldsymbol{\mu}_i)(\mathbf{f}-\boldsymbol{\mu}_i)^{\top},$$

where $\mathbf{f}$ denotes the $d$-dimensional feature vector, $c_i$ ($i=1,2$) the motor imagery and rest classes, $\boldsymbol{\mu}_i$ and $N_i$ the mean vector and the number of training samples of class $c_i$, $N=N_1+N_2$ the total number of training samples, and $\boldsymbol{\Sigma}$ the shared covariance matrix. The posterior probability of the motor imagery state given a feature vector then follows from Bayes' rule:

$$P(c_1\mid\mathbf{f})=\frac{P(c_1)\,p(\mathbf{f}\mid c_1)}{P(c_1)\,p(\mathbf{f}\mid c_1)+P(c_2)\,p(\mathbf{f}\mid c_2)}.$$
During the online phase, a 1 s-long sliding window with a step of 125 ms was used to update the feature values, and the updated posterior probability of the MI state was then delivered to the shared control script on the host PC through the VRPN protocol of the OpenViBE toolbox.
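To make the pipeline concrete, the following is a hedged numpy/scipy/scikit-learn sketch of the same processing chain (band-pass, CSP, normalized log-power features, LDA). It is our approximation, not the OpenViBE implementation: scikit-learn's LDA is the classical rather than Bayesian variant, and all array shapes and names are assumptions.

```python
# Sketch of the calibration pipeline: band-pass -> CSP -> log-power -> LDA.
# Trials are arrays of shape (n_trials, n_channels, n_samples) at 128 Hz.
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 128
B, A = butter(5, [8 / (FS / 2), 30 / (FS / 2)], btype="band")  # 8-30 Hz, 5th order

def bandpass(trials):
    return filtfilt(B, A, trials, axis=-1)

def fit_csp(class1, class2, n_pairs=3):
    """CSP projection matrix from two (n_trials, n_ch, n_samp) trial arrays."""
    cov = lambda X: np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
    C1, C2 = cov(class1), cov(class2)
    # Generalized eigendecomposition of (C1, C1 + C2): the extreme eigenvectors
    # maximize the variance of one class while minimizing that of the other.
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T                      # (2*n_pairs, n_channels)

def log_var_features(trials, W):
    Z = np.einsum("fc,ncs->nfs", W, trials)      # spatially projected signals
    var = Z.var(axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))  # normalized log-power

# Offline calibration (X_mi, X_rest: recorded MI / rest trials):
#   W = fit_csp(bandpass(X_mi), bandpass(X_rest))
#   F = np.vstack([log_var_features(bandpass(X_mi), W),
#                  log_var_features(bandpass(X_rest), W)])
#   y = np.r_[np.ones(len(X_mi)), np.zeros(len(X_rest))]
#   lda = LinearDiscriminantAnalysis().fit(F, y)
# Online use (1 s sliding window, 125 ms step):
#   window = eeg_buffer[:, -FS:]                 # last second of 14-channel EEG
#   p_mi = lda.predict_proba(log_var_features(bandpass(window[None]), W))[0, 1]
```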
In this work, the hybrid gaze-BMI operated in two modes. In stage 1, as in
A USB camera with a resolution of 1,280 × 720 pixels was placed on top of the setup to capture live video of the robot workspace. It streamed the horizontal view in the robot coordinate system to the host PC via USB 2.0, and the video was displayed on the monitor within the GUI. The user closed the loop by viewing the video feedback and directing the end-effector accordingly.
In order to select the target and control the movement direction of the end-effector, the system needed to know the position of the gaze points from the GUI in robot coordinates. An autonomous algorithm was implemented to build the mapping from the gaze coordinates on the screen to the robot coordinates. For this purpose, in the calibration phase (executed only once), four points were selected on the screen with known coordinates in the robot arm frame of reference. The identification of the perspective transformation was then accomplished with the four-point getPerspectiveTransform procedure from the OpenCV toolbox. Such a calibration is illustrated in
An illustration of mapping the camera’s coordinates to the robotic arm’s coordinates.
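A minimal sketch of this one-off calibration and the subsequent gaze-to-robot mapping, assuming OpenCV's getPerspectiveTransform/perspectiveTransform pair; the four screen points and their robot-frame counterparts are illustrative placeholders:

```python
# One-off calibration: estimate the screen-to-robot homography from 4 points,
# then map every smoothed gaze point into the robot frame.
import cv2
import numpy as np

# Four screen points (pixels) and the same points in robot coordinates (mm).
screen_pts = np.float32([[100, 80], [1180, 80], [1180, 640], [100, 640]])
robot_pts = np.float32([[0, 0], [300, 0], [300, 200], [0, 200]])

H = cv2.getPerspectiveTransform(screen_pts, robot_pts)  # 3x3 homography

def gaze_to_robot(gx: float, gy: float) -> tuple[float, float]:
    """Map a (smoothed) gaze point on the screen into the robot frame."""
    p = np.float32([[[gx, gy]]])            # shape (1, 1, 2) as OpenCV expects
    rx, ry = cv2.perspectiveTransform(p, H)[0, 0]
    return float(rx), float(ry)

print(gaze_to_robot(640.0, 360.0))
```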
In stage 1, we provided the AR feedback to the user through the GUI to indicate the successful target object selection. Namely, as in our previous work (
The selected target highlighted with a virtual rectangle frame surrounding it.
In stage 2, since this paper mainly focuses on the shared control method, the locations of the obstacles in the workspace were static and known to the system; a depth sensor was therefore not used to detect them.
In stage 3, to grasp the target (i.e., the cuboid) automatically, the orientation of the target had to be communicated to the robot system for adjusting its gripper pose. To this end, the orientation of the target was estimated by performing the geometric fitting of rectangles with a smoothly constrained Kalman filter (
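The smoothly constrained Kalman filter itself is beyond the scope of a short sketch; as a simpler illustrative stand-in (not the paper's method), the in-plane orientation of a segmented cuboid can be recovered with OpenCV's minAreaRect:

```python
# Illustrative alternative to the paper's Kalman-filter rectangle fitting:
# recover a cuboid's in-plane orientation from a binary segmentation mask.
import cv2
import numpy as np

def target_orientation(mask: np.ndarray) -> float:
    """Return the in-plane rotation angle (degrees) of the largest blob
    in a uint8 binary mask, as given by the minimum-area bounding rectangle."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    (_, _), (_, _), angle = cv2.minAreaRect(largest)  # center, size, angle
    return angle  # used to set the gripper yaw before the automatic grasp
```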
The devised shared control paradigm consisted of a movement-speed shared controller and a movement-direction shared controller. In these shared controllers, the commands from the user and from the robot autonomy were dynamically blended in order to generate the final velocity control command for the end-effector, which was sent to the robotic arm control system. The final velocity control command is written below:
$$\mathbf{v}_{\mathrm{cmd}} = v_{\mathrm{blend}}\,\hat{\mathbf{d}}_{\mathrm{blend}},\qquad v_{\mathrm{blend}}=(1-\alpha)\,v_{\mathrm{user}}+\alpha\,v_{\mathrm{robot}},\qquad \hat{\mathbf{d}}_{\mathrm{blend}}=\frac{(1-\beta)\,\hat{\mathbf{d}}_{\mathrm{user}}+\beta\,\hat{\mathbf{d}}_{\mathrm{robot}}}{\big\|(1-\beta)\,\hat{\mathbf{d}}_{\mathrm{user}}+\beta\,\hat{\mathbf{d}}_{\mathrm{robot}}\big\|},$$

where $v_{\mathrm{user}}$ and $\hat{\mathbf{d}}_{\mathrm{user}}$ are the speed and unit-direction commands from the hybrid gaze-BMI, $v_{\mathrm{robot}}$ and $\hat{\mathbf{d}}_{\mathrm{robot}}$ are the corresponding commands from the robot autonomy, and $\alpha,\beta\in[0,1]$ are the arbitration factors of the speed and direction shared controllers, respectively, defined below.
To achieve continuous control of the speed of the robotic arm end-effector, the movement speed was modulated by the instantaneous strength of the user's dominant-arm motor imagery state, constantly detected by the BMI. Specifically, the speed of the end-effector was set to be proportional to the posterior probability assigned to the motor imagery state as follows:
$$v_{\mathrm{user}} = P(\mathrm{MI}\mid\mathbf{f})\cdot v_{\max},$$

where $P(\mathrm{MI}\mid\mathbf{f})$ is the posterior probability assigned to the motor imagery state by the Bayesian LDA classifier and $v_{\max}$ is the maximum speed allowed for the end-effector.
To further develop the human–robot blending of the movement speed commands, two issues had to be addressed: one was to devise the assistance command provided by the robot autonomy; the other was the design of the arbitration scheme. Prior work indicated that users subjectively preferred the assistance when it led to more efficient task completion (
Here, α represents the dynamic arbitration factor, defining the amount of assistance provided by the robot autonomy. It was calculated using a sigmoid function to enable smooth and continuous blending between the user and robot autonomy commands:
$$\alpha = \frac{1}{1 + e^{-a\,(x - x_0)}},$$

where $x$ is the input variable driving the arbitration, and $a$ and $x_0$ are the parameters controlling the steepness and the midpoint of the sigmoid transition, respectively.
The distribution of the arbitration factor α.
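Putting the two pieces together, a hedged sketch of the speed-side blending follows; the sigmoid parameters and the arbitration input are illustrative values of our own, as the paper's exact settings are not preserved in this text:

```python
# Sketch of the speed shared controller: the blended speed moves smoothly
# from the user's BMI-modulated speed toward the autonomy's speed as the
# arbitration input x grows past the sigmoid midpoint.
import math

def arbitration_alpha(x: float, a: float = 10.0, x0: float = 0.5) -> float:
    """Sigmoid arbitration factor in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-a * (x - x0)))

def blended_speed(p_mi: float, v_max: float, v_robot: float, x: float) -> float:
    v_user = p_mi * v_max                  # speed proportional to MI posterior
    alpha = arbitration_alpha(x)
    return (1.0 - alpha) * v_user + alpha * v_robot
```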
For the direction control of the end-effector, a unit direction vector pointing from the end-effector to the user's gaze position was derived as the user-specified movement direction command, as shown in
The principle for the shared control in direction.
$$\hat{\mathbf{d}}_{\mathrm{blend}}=\frac{(1-\beta)\,\hat{\mathbf{d}}_{\mathrm{user}}+\beta\,\hat{\mathbf{d}}_{\mathrm{robot}}}{\big\|(1-\beta)\,\hat{\mathbf{d}}_{\mathrm{user}}+\beta\,\hat{\mathbf{d}}_{\mathrm{robot}}\big\|},$$

where $\hat{\mathbf{d}}_{\mathrm{user}}$ is the unit direction vector from the end-effector to the user's gaze position, $\hat{\mathbf{d}}_{\mathrm{robot}}$ is the direction command from the robot autonomy, and $\beta$ is the arbitration factor, computed with a sigmoid function analogous to that of $\alpha$.
The distribution of the arbitration factor β.
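Analogously to the speed side, a minimal sketch of the direction blending; the autonomy's direction vector (e.g., an obstacle-repulsion/target-attraction term) is our assumption, since the paper does not spell out its construction here:

```python
# Sketch of the direction shared controller: the user's unit vector
# (end-effector -> gaze point) is blended with an autonomy direction by the
# factor beta and re-normalized, mirroring the speed-side arbitration above.
import numpy as np

def blended_direction(ee_pos, gaze_pos, d_robot, beta: float):
    d_user = np.asarray(gaze_pos, float) - np.asarray(ee_pos, float)
    d_user /= np.linalg.norm(d_user)              # user's unit direction command
    d = (1.0 - beta) * d_user + beta * np.asarray(d_robot, float)
    return d / np.linalg.norm(d)                  # final unit direction
```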
A proof-of-concept implementation of the proposed semi-autonomous robotic system was carried out using a 5-DoF robotic arm (Dobot Arm, Shenzhen Yuejiang Technology Co Inc., China). The robotic arm control system could automatically determine the joint motion commands from the specified 3D positions of the end-effector using inverse kinematics. Developers could also specify the orientation of the gripper in order to grasp an object with a certain orientation in the workspace. The robotic arm control system communicated with the host PC through Bluetooth, receiving the input from the shared controller and sending the state parameters of the robotic arm to the host PC every 100 ms.
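For completeness, an illustrative control loop at the reported 100 ms update rate, integrating the blended velocity command into Cartesian setpoints; `robot.move_to`, `robot.end_effector_xy`, and `robot.reached_target` are hypothetical stand-ins, as the Dobot API is not described in the paper:

```python
# Illustrative outer loop tying the pieces together: the blended velocity
# command is integrated into a Cartesian setpoint for the arm's
# inverse-kinematics controller at the 100 ms state-report interval.
import time

DT = 0.1  # the arm reports state (and receives commands) every 100 ms

def control_loop(robot, get_speed, get_direction):
    x, y = robot.end_effector_xy()          # hypothetical state query
    while not robot.reached_target():
        v = get_speed()                     # blended speed (speed controller)
        dx, dy = get_direction()            # blended unit direction
        x, y = x + v * dx * DT, y + v * dy * DT
        robot.move_to(x, y)                 # hypothetical Cartesian setpoint
        time.sleep(DT)
```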
Ten participants (eight males and two females, 25.2 ± 0.8 years old) were recruited from the campus to perform the object manipulation tasks using the proposed HRI-driven semi-autonomous robotic system. The study was approved by the Ethics Committee of Southeast University. Written informed consent was obtained from each subject.
The task was the three-stage reach-and-grasp task introduced in detail at the beginning of section "Materials and Methods." The user first selected the target in stage 1 and then reached the target horizontally while avoiding obstacles in stage 2, using the hybrid gaze-BMI operating in the discrete selection mode and the continuous velocity control mode in these two stages, respectively. In stage 3, the end-effector automatically moved down, adjusted its gripper orientation, and grasped the target.
Operative tests were preceded by a calibration session for both the eye-tracker and the BMI.
Firstly, the built-in calibration procedure of the Tobii EyeX eye tracker was performed. It lasted less than 1 min for each subject, during which the user gazed sequentially at seven calibration dots that appeared on the monitor.
Secondly, the BMI decoding model was trained for each subject with the offline calibration procedure described in sub-section "Brain-machine Interface." Specifically, for the recording of the motor imagery state, the user had to focus on observing the end-effector's predefined motion in the horizontal plane through the GUI while imagining pushing the end-effector with his/her dominant arm. For the rest state, the robotic arm did not move, and the user was asked to relax and avoid moving. The training session for each subject was composed of a randomly sorted sequence of 40 trials, 20 for the hand motor imagery task and 20 for the relaxation task. The execution of each task lasted 4 s, and consecutive tasks were separated by an interval lasting randomly from 1 to 3 s, during which the subject could relax his/her concentration. Each task was triggered through visual cues developed with the OpenViBE toolbox and displayed in the GUI. The data acquired during the training session were used to build the 2-class BMI decoding model composed of CSP and Bayesian LDA. The duration of the BMI calibration usually did not exceed 5 min. Since it was difficult to report the testing performance of a BMI decoder built with all the training data in our experimental setting, we instead report the fivefold cross-validation (CV) decoding performance, which to some extent reflects the performance of the decoder built with all the training data. In the fivefold CV, when the posterior probability of the MI state exceeded 0.6, the mental state was classified as MI; otherwise it was determined to be the rest state.
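A short sketch of this thresholded cross-validation, under the assumptions of the earlier pipeline sketch (CSP log-power features `F` and labels `y` from the calibration session):

```python
# Fivefold CV with the 0.6 posterior threshold described above: a window is
# labeled MI only when p(MI) exceeds 0.6, otherwise it is labeled rest.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict

def cv_accuracy(F: np.ndarray, y: np.ndarray) -> float:
    proba = cross_val_predict(LinearDiscriminantAnalysis(), F, y,
                              cv=5, method="predict_proba")
    pred = (proba[:, 1] > 0.6).astype(int)   # MI iff posterior > 0.6
    return float(np.mean(pred == y))
```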
Before the formal online evaluation, the online BMI decoding model was obtained by training on all the data from the offline calibration session mentioned above. Subsequently, a rehearsal phase was launched to familiarize each user with the hybrid HRI-based robotic arm control; it lasted less than 5 min for each of the 10 subjects. By the end of this phase, most users could deliberately specify the intended target in stage 1, and could constantly fixate their gaze on any point on the screen to specify the desired movement direction for the end-effector while simultaneously regulating the strength of their MI state to modify the end-effector's speed in stage 2.
The main focus of this study was to apply the blending-based shared control to the robotic arm reaching driven by the proposed continuous-velocity hybrid gaze-BMI (i.e., stage 2). Therefore, to evaluate the effectiveness of the proposed shared control paradigms for such an interface, reaching tasks with and without shared control were conducted. Specifically, each subject executed 40 reaching trials with the following four types of control paradigms:
There were 10 trials executed with
To evaluate the effectiveness of the proposed direction shared controller, the successful reaching rate (SRR) and the end-effector trajectory length (EETL) were acquired on the trials with (i.e.,
To evaluate the effectiveness of the proposed speed shared controller, the completion time (CT) was obtained on the trials applied with (i.e.,
A
The fivefold cross-validation classification accuracy of the BMI for each subject is shown in
The fivefold cross-validation BMI classification performance for each subject.
Subject | MI class accuracy (%) | Rest class accuracy (%) | Average accuracy (%)
S1 | 81.6 | 82.9 | 82.3
S2 | 79.1 | 86.7 | 82.9
S3 | 90.2 | 76.5 | 83.4
S4 | 79.1 | 77.6 | 78.3
S5 | 93.1 | 81.9 | 87.5
S6 | 88.8 | 91.8 | 90.3
S7 | 70.4 | 72.7 | 71.6
S8 | 73.5 | 89.3 | 81.4
S9 | 81.2 | 84.2 | 82.7
S10 | 80.1 | 80.9 | 80.5
Mean ± STD | 81.7 ± 6.8 | 82.5 ± 5.6 | 82.1 ± 4.8
The SRRs for the 10 subjects are listed in
Number of trials with collisions and successful reaching rate in the experiments for the four control paradigms.
Subject | Trials with collisions (four paradigms) | Successful trials (four paradigms) | SRR, % (four paradigms)
S1 | 0 / 0 / 4 / 5 | 10 / 10 / 6 / 5 | 100 / 100 / 60 / 50
S2 | 0 / 0 / 3 / 2 | 10 / 10 / 7 / 8 | 100 / 100 / 70 / 80
S3 | 0 / 0 / 4 / 3 | 10 / 10 / 6 / 7 | 100 / 100 / 60 / 70
S4 | 0 / 0 / 4 / 3 | 10 / 10 / 6 / 7 | 100 / 100 / 60 / 70
S5 | 0 / 0 / 5 / 5 | 10 / 10 / 5 / 5 | 100 / 100 / 50 / 50
S6 | 0 / 0 / 4 / 4 | 10 / 10 / 6 / 6 | 100 / 100 / 60 / 60
S7 | 0 / 0 / 5 / 6 | 10 / 10 / 5 / 4 | 100 / 100 / 50 / 40
S8 | 0 / 0 / 2 / 2 | 10 / 10 / 8 / 8 | 100 / 100 / 80 / 80
S9 | 0 / 0 / 0 / 1 | 10 / 10 / 10 / 9 | 100 / 100 / 100 / 90
S10 | 0 / 0 / 2 / 3 | 10 / 10 / 8 / 7 | 100 / 100 / 80 / 70
Mean | 0 / 0 / 3.3 / 3.4 | 10 / 10 / 6.7 / 6.6 | 100 / 100 / 67 / 66
The pairwise statistical comparisons yielded p = 0.0037, 0.0037, 0.0009, and 0.0009.
The EETLs for each subject and across subjects are presented in
The EETLs for each subject and across subjects (the across-subjects performance difference with statistical significance is marked by “*”,
In
Trajectories of the robotic arm end-effector in the horizontal plane during the reaching task with or without the direction shared controller (subject 6).
The evolving control weight for the robot autonomy in the direction shared controller during the 9th trial executed with
The CTs for each subject and across subjects (the across-subjects performance difference with statistical significance is marked by “*”,
Recall that
The evolving posterior probability values calculated by the BMI and the evolving speeds of the end-effector with
For illustrating the dynamic speed compensation process above,
The arbitration factor for the robot autonomy from the speed shared controller in a normalized time scale.
To potentially assist individuals suffering from severe motor impairments of the upper limbs, the development of effective user control of dexterous robotic assistive manipulators requires intuitive and easy-to-learn-and-use interfaces that produce continuous-valued inputs. In the past decades, invasive BMI approaches have achieved relatively accurate and continuous control of a robot with up to 10 DoFs. However, the surgical risks associated with current invasive BMIs may outweigh the advantages of effective robotic arm control. The proposed non-invasive hybrid gaze-BMI may provide an alternative solution that diminishes medical risks, at the cost of reduced control accuracy and a smaller number of controlled DoFs.
In general, the proposed hybrid gaze-BMI operating in the continuous-velocity mode is intuitive and easy-to-learn-and-use. The gaze-tracking system can be calibrated and proficiently driven by a user with no previous experience within 1 min, while maintaining sufficient precision in specifying the movement direction for the end-effector intuitively. Since the input from pure gaze-tracking might be free of intent, it is complemented with an intentional continuous-valued speed input from the 2-class BMI, whose calibration usually did not exceed 5 min. After a short familiarization, all the users could constantly input the movement direction for the end-effector with the gaze-tracking modality, while simultaneously regulating the speed of the end-effector with the BMI modality.
Many studies have utilized the BMI to direct assistive robots and wheelchairs for a potential population of patients who suffer from severe upper limb impairment. Compared with their adopted synchronous or asynchronous BMIs, which can only produce discrete-valued velocity commands for the assistive devices, the combination of gaze tracking and BMI in our work provides users with a flexible HRI for volitionally moving the end-effector continuously and freely on a horizontal plane. Such continuous-valued velocity (movement direction and speed) control means not only that the end-effector can follow natural movement paths determined by the user in real time (rather than following predefined ones,
According to the online experimental results in section "Results," on average 34% of the trials failed to prevent the end-effector from colliding with the obstacles with the
Furthermore, with the proposed shared control paradigms during the horizontal reaching task, the control of the end-effector always reflected the simultaneous input from both the user and the robot autonomy through a dynamic linear blending (arbitration). In this way, the paradigms allow the user to directly control the majority of the movement while smoothly increasing the assistance from the robot autonomy during the parts of the task that are most difficult for the user (e.g., collision avoidance, target approaching), reaching a balance between the user's perceived control and the reliable assistance provided by the robot autonomy. By contrast, in the existing studies that apply shared control strategies to robotic arm, wheelchair, and mobile robot systems driven by non-invasive HRIs, the user commands and the robot autonomy commands switch either at the beginning of the reaching task (i.e., the robot autonomy takes over to finish the remaining routine, triggered by the user) (
This work has presented a proof-of-concept implementation of new shared control paradigms that could potentially help to better integrate the robot autonomy into assistive robotic arm manipulation applications while keeping the user in control with a novel HRI as much as possible. In particular, the focus of the current study was set primarily on the horizontal reaching task, since strategies for maintaining the user in control to the largest extent during other operations (e.g., grasping, lifting, etc.) were presented in our previous paper (
The currently implemented hybrid gaze-BMI is just one of the many system components that will be improved in future developments. One future work will be devoted to re-implementing the detection of the mental state related to motor imagery using a medical-grade EEG acquisition system. Another future study will extend the current 2D gaze tracking into a 3D one with a wearable eye tracker, as in
To extend the proposed proof-of-concept semi-autonomous robotic system for performing tasks in realistic environments, the currently used stereo-camera will be replaced with depth sensors. Besides, an advanced computer vision module will be employed to provide more effective object perception and modeling for the robot.
The shared control paradigms in the current study were designed based on the environmental context only, and the same paradigms were applied to each participant throughout the task. In the future, personalized shared control paradigms will be developed, where the paradigms adapt to the user's evolving capability and needs given not only the environmental context but also the state of the user. This may allow users to operate intelligent assistive devices in their day-to-day lives and for extended periods of time.
This paper presents a semi-autonomous robotic system for performing the reach-and-grasp task. In particular, we propose a new control paradigm for the robotic arm reaching task, where the robot autonomy is dynamically blended with the gaze-BMI control from the user. Meanwhile, the hybrid gaze-BMI constitutes an intuitive and effective input through which the user can continuously control the robotic arm end-effector, moving it freely in a 2D workspace with an adjustable speed proportional to the strength of the user's motion intention. Furthermore, the presented shared control paradigm allows the user to directly control the majority of the movement while smoothly increasing the assistance from the robot autonomy during the parts of the task that are most difficult for the user (e.g., collision avoidance, target approaching, etc.), reaching a balance between the user's perceived control and the reliable assistance provided by the robot autonomy. The experimental results demonstrate that the proposed semi-autonomous robotic system yielded a continuous, smooth, and collision-free motion trajectory for the end-effector approaching the target. Compared to the system without assistance from the robot autonomy, it significantly reduces the rate of failure as well as the time and effort spent by the user to complete the tasks.
The datasets generated for this study are available on request to the corresponding author.
The studies involving human participants were reviewed and approved by the Ethics Committee of Southeast University. The patients/participants provided their written informed consent to participate in this study.
HZ and YW designed the study. YS and XH set up the experiment platform. BX and YW performed the experiment. HZ, YS, and YW analyzed the data and wrote the manuscript. AS, HL, and PW were involved in critical revision of the manuscript. All authors read and approved the final manuscript.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The authors would like to thank all the volunteers who participated in the experiments.