
ORIGINAL RESEARCH article

Front. Bioeng. Biotechnol., 02 October 2023
Sec. Biosensors and Biomolecular Electronics
This article is part of the Research Topic Intelligent Neural Interface for Healthcare and Rehabilitation.

A rehabilitation robot control framework with adaptation of training tasks and robotic assistance

Jiajun Xu1*, Kaizhen Huang1, Tianyi Zhang1, Kai Cao1, Aihong Ji1, Linsen Xu2 and Youfu Li3
  • 1College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
  • 2College of Mechanical and Electrical Engineering, Hohai University, Changzhou, China
  • 3Department of Mechanical Engineering, City University of Hong Kong, Kowloon, Hong Kong SAR, China

Robot-assisted rehabilitation has exhibited great potential to enhance the motor function of physically and neurologically impaired patients. State-of-the-art control strategies usually allow the rehabilitation robot to track the training task trajectory along with the impaired limb, and the robotic motion can be regulated through physical human-robot interaction (pHRI) for comfortable support and an appropriate assistance level. However, it is hardly possible, especially for patients with severe motor disabilities, to continuously exert force to guide the robot to complete the prescribed training task. Conversely, reduced task difficulty cannot facilitate stimulating patients' potential movement capabilities. Moreover, challenging more difficult tasks with minimal robotic assistance is usually ignored when subjects show improved performance. In this paper, a control framework is proposed to simultaneously adjust both the training task and robotic assistance according to the subjects' performance, which can be estimated from the users' electromyography signals. Concretely, a trajectory deformation algorithm is developed to generate smooth and compliant task motion while responding to pHRI. An assist-as-needed (AAN) controller along with a feedback gain modification algorithm is designed to promote patients' active participation according to individual performance variance on completing the training task. The proposed control framework is validated using a lower extremity rehabilitation robot through experiments. The experimental results demonstrate that the control scheme can optimize the robotic assistance to complete the subject-adaptive training task with high efficiency.

1 Introduction

Due to the rapidly increasing number of physically and neurologically impaired patients around the world, rehabilitation robots have been developed to assist in the therapeutic training of impaired limbs, which improves rehabilitation efficiency and saves human labor through highly autonomous assistance (Xu et al., 2020a; Xu et al., 2020b). The control strategy of a rehabilitation robot significantly influences rehabilitation efficacy. In most clinical cases, physiotherapists feed the task trajectories into the robot controller before rehabilitation starts. However, patients can only modify the robot's current trajectory through physical human-robot interaction (pHRI) without affecting the future task trajectory. Particularly for patients with severe impairment, it is hardly possible to continuously exert adequate force to change the robot's movement trajectory for a period, and their recovery, comfort, and safety cannot be guaranteed accordingly. Therefore, online adaptation of desired trajectories to patients' performance is indeed necessary. Dynamic movement primitives (DMPs) (Schaal, 2006) and central pattern generators (CPGs) (Sproewitz et al., 2008) are two common tools for trajectory generation, and they have been applied in rehabilitation robotics research (Luo et al., 2018; Yuan et al., 2020). In (Xu et al., 2023), coupled cooperative primitives are formulated, where pHRI is expressed as a first-order impedance model and used as a modulation term in the DMP; however, tuning the parameters of such an impedance model is itself problematic, as discussed below. The human-robot interaction energy is combined with adaptive CPG dynamics to plan gaits for exoskeletons (Sharifi et al., 2021); however, this introduces many uncertain parameters, the resolution of which is time-consuming. In addition, it has been found that the robot's desired trajectory can be deformed in response to subject actions (Lasota and Shah, 2015), but it is unclear which deformed trajectory is optimal. The robot's future desired trajectory can be modified, and the optimal solution of the trajectory deformation can be derived by detecting the human-robot interaction force (Losey and O'Malley, 2018). Similarly, trajectory deformation has been applied to robotic rehabilitation, where a position controller is adopted to track the deformed trajectory, ignoring the rehabilitation effectiveness of assist-as-needed (AAN) training (Zhou et al., 2021). Apart from movement trajectory planning, users' voluntary participation should be stimulated by adjusting the robotic assistance.

Additionally, passive control is usually employed to drive impaired limbs to move along the predefined task trajectory, where the active participation of patients cannot be encouraged to stimulate motor function recovery (Hogan et al., 2006). To overcome this problem, the AAN control strategy is introduced to adapt the robotic assistance to the patients' performance (Marchal-Crespo and Reinkensmeyer, 2009). An impedance/admittance control scheme is a common solution for addressing the physical relationship between humans and robots. An impedance controller based on a virtual tunnel regulates the robotic assistance while responding to the tracking error between the current trajectory and the desired trajectory (Krebs et al., 2003). A torque tracking impedance controller is proposed for lower limb rehabilitation robotics to generate assistance while ensuring acceptable trajectory deviation (Shen et al., 2018). An admittance control incorporating electromyography (EMG) signals is developed to improve human-robot synchronization (Zhuang et al., 2019). Both impedance and admittance control can regulate the relationship between the trajectory deviation and the interaction effect by tuning the inertia-damping-stiffness parameters. In fact, different patients, or even the same patient at different rehabilitation stages, can exhibit varying motor capabilities, so accurate determination of these parameters is essential to realize AAN training for different subjects and tasks. Furthermore, in practical rehabilitation, dynamic human forces and uncertain external disturbances often occur and cannot be measured intuitively or accurately. Both inappropriate impedance/admittance parameters and unknown dynamic interactive environments can lead to unstable and oscillating robot behaviors, which may decrease motion smoothness and even threaten human safety (Ferraguti et al., 2019). Moreover, current AAN controllers are designed to provide only the necessary robotic assistance to complete the prescribed training task, which is not suitable for encouraging patients to challenge themselves with more difficult tasks and stimulate their potential motor capabilities for improved rehabilitation efficacy.

In this article, a control framework is proposed for the simultaneous adaptation of training tasks and robotic assistance according to patient performance. The main contributions of this article are listed as follows.

(1) A trajectory deformation algorithm is developed to plan the robot’s desired trajectory as the high-level controller, where the continuity, smoothness, and compliance of the robotic motion are achieved in response to the subject’s biological performance.

(2) An AAN control strategy with a feedback gain modification algorithm is employed to regulate the robotic assistance as the low-level controller. The controller is designed to encourage active participation by learning the patient’s residual motor capabilities and accurately tracking the deformed trajectory.

(3) Both the training task and robotic assistance adaptation algorithms are integrated into a framework to realize human-in-the-loop optimization, and this control framework is validated using a lower extremity rehabilitation robot.

The remainder of the article is organized as follows. The biological signal processing is described in Section 2. The trajectory generation is presented in Section 3, and the subject-adaptive AAN controller is explained in Section 4. The proposed control framework is verified through experiments in Section 5. Finally, this study is concluded in Section 6.

The schematic view of the proposed control framework is presented in Figure 1, and the detailed explanation is elaborated in the following sections.

FIGURE 1. Control framework of simultaneous adaptation of training tasks and robotic assistance.

2 Biological signal processing

For rehabilitation, robotic motions should be regulated by adapting to the user's muscle strength, which can be derived from the human skin surface EMG signals. In this section, the EMG-driven musculoskeletal model for torque estimation is presented. EMG sensors (ETS FreeEMG300) are used to measure EMG signals, and the electrodes are attached to the relevant skin surface. Specifically, in the application of lower extremity rehabilitation, six electrodes are attached to the gluteus maximus, semimembranosus, biceps femoris, iliopsoas, sartorius, and rectus femoris for hip flexion/extension; six electrodes are attached to the rectus femoris, vastus medialis, vastus lateralis, biceps femoris, semimembranosus, and semitendinosus for knee flexion/extension. The raw EMG signals are sampled at 1,024 Hz, bandpass filtered from 10 Hz to 500 Hz, and then notch filtered at 50 Hz to remove noise. The muscle activation is calculated by the neural activation function as

$a(u) = \dfrac{e^{Au/R} - 1}{e^{A} - 1}$    (1)

where u is the post-processed EMG value, R is the maximum voluntary isometric contraction, and A<0 is a nonlinear shape factor, defining the curvature of the function.
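
To make the processing chain concrete, a minimal Python sketch of the EMG conditioning and activation step is given below. The filter settings follow the text; the envelope extraction (rectification plus low-pass smoothing) and the example value of the shape factor A are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy import signal

FS = 1024.0  # sampling rate [Hz], as stated in the text

def preprocess_emg(raw: np.ndarray) -> np.ndarray:
    """Band-pass (10-500 Hz), notch (50 Hz), rectify, and smooth one EMG channel."""
    sos_bp = signal.butter(4, [10, 500], btype="bandpass", fs=FS, output="sos")
    x = signal.sosfiltfilt(sos_bp, raw)
    b, a = signal.iirnotch(50.0, Q=30.0, fs=FS)            # mains interference
    x = signal.filtfilt(b, a, x)
    sos_env = signal.butter(2, 6.0, btype="low", fs=FS, output="sos")
    return signal.sosfiltfilt(sos_env, np.abs(x))           # linear envelope u

def muscle_activation(u: np.ndarray, R: float, A: float = -2.0) -> np.ndarray:
    """Eq. 1: a(u) = (exp(A*u/R) - 1) / (exp(A) - 1), with A < 0."""
    return (np.exp(A * u / R) - 1.0) / (np.exp(A) - 1.0)
```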

Subsequently, a Hill-type muscle model is constructed to describe the relationship between muscle activation and muscle force (Yao et al., 2018). The force produced by the muscle-tendon unit F^{mt} is given by the following set of equations:

$F^{mt} = F^{m}\cos\phi = \alpha\left(F^{pe} + F^{ce}\right)\cos\phi$    (2)
$l^{mt} = l^{t} + l^{m}\cos\phi$    (3)

where F^{mt}, F^{m}, F^{ce}, and F^{pe} represent the force generated by the muscle-tendon unit, the muscle fiber force, the contractile element force, and the passive element force, respectively; l^{mt}, l^{t}, and l^{m} are the lengths of the muscle-tendon unit, the tendon, and the muscle fiber, respectively; α is a scaling factor; and φ is the pennation angle, which is given by

$\phi = \arcsin\left(\dfrac{l_{0}^{m}\sin\phi_{0}}{l^{m}}\right)$    (4)

where φ_0 is the optimal pennation angle and l_0^m is the optimal muscle fiber length. The scale factor α will be used in the optimization process. The human joint torque is produced by the coupled action of both the agonistic and antagonistic muscles, that is,

$\hat{\tau}_{h} = \sum_{i=1}^{J}\tau_{i}^{agonist} - \sum_{j=1}^{L}\tau_{j}^{antagonist}$    (5)

where τ_i = F_i^m r_i^m and τ_j = F_j^m r_j^m denote the torques exerted by the agonistic and antagonistic muscles, respectively. F_i^m and F_j^m are the muscle-tendon forces, and r_i^m and r_j^m are the corresponding muscle moment arms, which can be estimated from the muscle-tendon length l^{mt} and the joint angle q as r^m = ∂l^{mt}/∂q. The parameters J and L denote the number of agonistic and antagonistic muscles acting on the joint, respectively.
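
The torque estimate of Equations 2–5 can be sketched in a few lines of Python, assuming the muscle-tendon lengths and moment arms are exported from AMS and the contraction model of Yao et al. (2018) is available behind the placeholder callables F_ce and F_pe, which are assumptions for illustration.

```python
import numpy as np

def pennation_angle(l_m, l0_m, phi0):
    """Eq. 4: phi = arcsin(l0_m * sin(phi0) / l_m), clipped for numerical safety."""
    return np.arcsin(np.clip(l0_m * np.sin(phi0) / l_m, -1.0, 1.0))

def muscle_tendon_force(a, l_m, l0_m, phi0, alpha, F_ce, F_pe):
    """Eq. 2: F_mt = alpha * (F_pe + F_ce) * cos(phi).

    F_ce(a, l_m) and F_pe(l_m) are placeholder callables standing in for the
    contractile- and passive-element forces of the contraction model.
    """
    phi = pennation_angle(l_m, l0_m, phi0)
    return alpha * (F_pe(l_m) + F_ce(a, l_m)) * np.cos(phi)

def joint_torque(F_ago, r_ago, F_ant, r_ant):
    """Eq. 5: sum of agonist torques minus sum of antagonist torques."""
    return np.dot(F_ago, r_ago) - np.dot(F_ant, r_ant)
```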

In the proposed EMG-driven musculoskeletal model, it is essential to determine the model parameters, i.e., the shape factor A and the scaling factor α. The human joint torque during the exercise could be obtained directly via the AnyBody Modeling System (AMS) (Damsgaard et al., 2006); however, the torque generated by AMS is neither continuous nor fully realistic. Alternatively, the real joint torque could be measured through calibration experiments with EMG signals, which are complex and time-consuming. Therefore, an optimization is performed to adjust the EMG-driven musculoskeletal model parameters so as to minimize the difference between the torque estimated from the EMG signals τ̂_h and the torque computed by AMS τ_AMS.

The optimization procedure is illustrated in Figure 2. The processed EMG signals are converted to muscle activation using (1), which includes the uncertain shape factor A. The muscle contraction model (Yao et al., 2018) is then used to calculate the muscle-tendon force and the muscle torque through Equations 2–5, where the muscle-tendon length l^{mt} and the moment arm r^m are obtained from AMS. In parallel, the recorded joint motion is loaded into AMS to obtain the joint torque τ_AMS. The optimization aims to minimize the difference between τ̂_h and τ_AMS. The parameters to be optimized are collected as p = [A, α]^T, and the optimization problem is defined as (6).

$\min_{p} J(p) = \dfrac{1}{N}\sum_{i=1}^{N}\left(\hat{\tau}_{h}^{i} - \tau_{AMS}^{i}\right)^{2}$    (6)

where N is the number of samples. The Broyden–Fletcher–Goldfarb–Shanno algorithm (Peña et al., 2019), together with a penalty barrier algorithm, is employed to find the optimal parameters.
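
A possible calibration sketch is shown below, using SciPy's L-BFGS-B routine as a stand-in for the BFGS plus penalty-barrier scheme cited above; the bound handling, initial guess, and the helper torque_model are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def calibrate(tau_ams, u, R, torque_model):
    """Fit p = [A, alpha] by minimizing Eq. 6.

    torque_model(u, R, A, alpha) is an assumed helper returning the EMG-driven
    joint torque for the recorded activation input u.
    """
    def cost(p):
        A, alpha = p
        tau_hat = torque_model(u, R, A, alpha)
        return np.mean((tau_hat - tau_ams) ** 2)            # Eq. 6

    res = minimize(cost, x0=np.array([-1.0, 1.0]), method="L-BFGS-B",
                   bounds=[(-3.0, -0.01), (0.1, 10.0)])     # keep A < 0, alpha > 0
    return res.x
```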

FIGURE 2. Optimization procedure of pHRI estimated from biological signals.

3 Trajectory adaptation

Prior to operating the rehabilitation robot, the training task needs to be predetermined by feeding the reference trajectory (task trajectory) into the robot controller, which is usually the natural gait trajectory of healthy subjects. The patient is then encouraged to complete the task with robotic assistance. Once the reference trajectory is preset and fed into the robot controller, however, it is not reasonable to maintain the task difficulty invariant throughout the rehabilitation procedure. In order to ensure the smoothness and compliance of the robotic motion, the robot’s desired trajectory should be modified intuitively and continuously in response to the human force (Losey and O’Malley, 2018). In this regard, the physical human-robot interaction (pHRI) should alter not only the robot’s current state but also its future behavior. In this section, a trajectory deformation algorithm is studied to explore the pHRI influence on the training task, and the modification is made on the original reference trajectory, generating the desired trajectory for the further controller design.

The reference trajectory (predetermined before the training) is defined as q_d^*, and the desired trajectory (altered during the training) is defined as q_d. When the human-robot interaction torque τ_h is exerted on the robotic joints at time t_i, the reference trajectory q_d^* starts to deform; the trajectory deformation ends at time t_f, so the duration of the deformation is p = t_f − t_i. Moreover, the deformed trajectory between t_i and t_f can be evenly divided into an arbitrary number of waypoints, with a time interval δ between consecutive waypoints. Concisely, the deformation process can be expressed as q_d^*(t) → q_d(t), where t ∈ [t_i, t_f], and a diagram of the trajectory deformation is presented in Figure 3. It can be seen that both the magnitude and direction of the human-robot interaction torque influence the shape of the deformed trajectory. The larger the force exerted on the robot, the greater the deviation between the original and deformed trajectories; conversely, a smaller human force results in a smaller trajectory change. Besides, reversing the direction of the human force reverses the deformation direction. Therefore, both the amplitude and direction of the trajectory deformation should be taken into consideration when designing the trajectory adaptation method.

FIGURE 3. Diagram of the trajectory deformation.

Apparently, the deformation of the reference trajectory may follow different curves to shape q_d, so there are many possible trajectory deformations. Over the time interval [t_i, t_f], Γ_d(s, t) is defined as the deformation curve function, in which s is the deformation factor; changing the value of s yields trajectories of different shapes. When s = 0, the segment of the desired trajectory between times t_i and t_f is represented as

$\gamma_d(t) = \Gamma_d(0, t), \quad t \in [t_i, t_f]$
$\gamma_d = \left[q_d^*(t_i),\, q_d^*(t_i+\delta),\, \ldots,\, q_d^*(t_f-\delta),\, q_d^*(t_f)\right]$    (7)

where γ_d(t) = q_d^*(t) when t ∈ [t_i, t_f], and γ_d is not defined outside this time interval. All other values of s refer to deformations of γ_d. As shown in Figure 3, in the time interval [t_i, t_f], the original trajectory γ_d(t) can be deformed to Γ_d(s_1, t) and Γ_d(s_2, t).

When the subject starts to exert force at time t_i, the robot's desired trajectory is changed from γ_d to the deformed trajectory γ̃_d (obtained at s = 1), which is defined as

$\tilde{\gamma}_d(t) = \Gamma_d(s, t)$
$\tilde{\gamma}_d = \left[\Gamma_d(s, t_i),\, \Gamma_d(s, t_i+\delta),\, \ldots,\, \Gamma_d(s, t_f-\delta),\, \Gamma_d(s, t_f)\right]$    (8)

Once γ̃_d is determined, the robot's desired trajectory is updated as q_d(t) = γ̃_d(t). After time t_f, the robot follows its reference trajectory q_d^* again.

Comparing (7) with (8), the vital factor that causes the trajectory deformation can be formulated as a vector field function Φ(t), which linearizes the dependency of Γ_d(s, t) on the deformation factor s, i.e.,

$\Gamma_d(s, t) = \gamma_d(t) + s\Phi(t)$    (9)

The vector field function Φ(t) is distributed along γ_d and satisfies Φ(t) = ∂Γ_d(s, t)/∂s, t ∈ [t_i, t_f]; in particular, when s = 1, γ̃_d = γ_d + Φ(t). The determination of Φ(t) should ensure the continuity, smoothness, and compliance of the deformed trajectory. Specifically, the transitions from q_d^*(t_i) to q_d(t_i) and from q_d(t_f) to q_d^*(t_f) should be as continuous as possible. Also, a minimum-jerk model (Li et al., 2017) is utilized to generate a smooth trajectory profile to guarantee patients' comfort and security. The vector field function Φ(t) is highly correlated with the human-robot interaction torque τ_h, and a cost function is designed and minimized to optimize the deformed trajectory for high compliance. The detailed derivation of the vector field function is presented in Appendix 1, and the resultant formulation is obtained as

$\Phi(t) = \mu\delta H\beta\tau_h(t_i)$    (10)

where

$H = \dfrac{G}{(p+\delta)\lVert G\rVert}$    (11)
$G = \left(I - (A^{T}A)^{-1}B^{T}\left(B(A^{T}A)^{-1}B^{T}\right)^{-1}B\right)(A^{T}A)^{-1}$    (12)

The matrix I ∈ R^{N×N} in (12) is an identity matrix, where N denotes the number of waypoints. The determination of the matrices A and B is introduced in Appendix 1. The matrix H, formulated in (11), influences the shape of Φ. The parameter β ∈ R^N in (10) is the prediction vector of the interaction torque, i.e., the future interaction torque is modeled as βτ_h(t_i), with τ_h(t_i) being the interaction torque applied at time t_i. The direction of the interaction torque is included in the prediction vector, so the proposed algorithm can address both the magnitude and direction of the trajectory deformation. Even so, when the human force direction changes, the deformed trajectory should be recalculated with an updated prediction vector for delicate modulation of the training task. Besides, in robot-assisted rehabilitation, the duration of pHRI is relatively long because the patient mostly tries to participate actively to guide the robot, which differs from the setting in (Losey and O'Malley, 2018). The parameter μ in (10) denotes the assistance level, and it can be tuned to arbitrate between the human and the robot. When μ increases, the induced trajectory deformations arbitrate toward the human, which means that smaller input forces cause larger deformations, and vice versa. Herein, the assistance level is formulated as μ = τ_h/τ̂_h, where τ̂_h is the expected human joint torque required to complete the task trajectory in the absence of robotic assistance, which can be obtained from Section 2.

Combining (9), (10), (11), and (12), the relationship between the vector field function and the interaction torque is clarified, and the deformed trajectory is thus obtained as

$\tilde{\gamma}_d = \gamma_d + \mu\delta H\beta\tau_h(t_i)$    (13)

After γ̃_d is derived from (13), q_d is updated to include γ̃_d, and the process iterates at the next trajectory deformation when pHRI occurs again.
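
A minimal sketch of one deformation update based on (13) is given below, assuming the shaping matrix H has been precomputed as in (11)–(12) and that the interaction torque is sampled at the deformation onset t_i; the variable names and the form of the assistance level are illustrative.

```python
import numpy as np

def deform_segment(gamma_d, H, beta, tau_h_ti, tau_h_expected, delta):
    """Eq. 13: deformed waypoints for one pHRI event starting at t_i.

    gamma_d : (N,) waypoint segment of the reference trajectory
    H       : (N, N) shaping matrix from Eqs. 11-12
    beta    : (N,) prediction vector of the interaction torque
    """
    mu = abs(tau_h_ti) / max(abs(tau_h_expected), 1e-6)     # assistance level (illustrative form)
    return gamma_d + mu * delta * H @ (beta * tau_h_ti)

# Usage sketch: overwrite the next N waypoints of q_d, then fall back to q_d_star after t_f.
# q_d[i0:i0 + N] = deform_segment(q_d_star[i0:i0 + N], H, beta, tau_h_now, tau_h_hat, 0.01)
```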

4 Assist-as-needed controller

Based on the abovementioned trajectory generation scheme, an actuation controller needs to be designed to track the desired trajectory. More importantly, in response to various motor capabilities of different patients, a subject-adaptive controller is required to realize AAN training. In this section, an AAN control strategy along with a feedback gain modification algorithm is proposed to complete the training task motion and provide the minimum required assistance to encourage patients’ active engagement.

The robot dynamics in joint space can be presented as follows whilst considering pHRI.

$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) + f(\dot{q}) + \tau_{dis} = \tau_{act} + \tau_{h}$    (14)

where q ∈ R^n (n denotes the number of robotic joints) is the vector of robotic joint positions, and accordingly, q̇ and q̈ denote the joint velocities and accelerations, respectively. M(q) ∈ R^{n×n} is the inertia matrix, C(q, q̇) ∈ R^{n×n} is the centripetal and Coriolis matrix, G(q) ∈ R^n is the gravity torque, f(q̇) ∈ R^n is the friction, τ_dis ∈ R^n is the external disturbance, τ_act ∈ R^n is the robotic joint torque generated by the actuators, and τ_h ∈ R^n is the human-robot interaction torque.

Although the robot dynamics have been modeled as (14), it is impossible to accurately formulate the disturbances that may decrease the compliance of robotic motion and the safety of pHRI. The total disturbance τ_d includes the estimation error of the human torque (τ_h − τ̂_h), the external disturbance τ_dis, and unmodeled dynamics. Thus, the dynamics (14) can be rewritten as

$\hat{M}(q)\ddot{q} + \hat{C}(q,\dot{q})\dot{q} + \hat{G}(q) + \hat{f}(\dot{q}) = \hat{\tau}_{h} + \tau_{act} + \tau_{d}$    (15)

where M̂(q), Ĉ(q, q̇), Ĝ(q), and f̂(q̇) are the estimates of M(q), C(q, q̇), G(q), and f(q̇), respectively, and τ_d ∈ R^n denotes the total disturbance. The position tracking error with respect to the desired trajectory is defined as q̃ = q − q_d, and the sliding variable is defined as

$e = \dot{\tilde{q}} + \Lambda\tilde{q}$    (16)

where Λ is a constant. In order to help subjects complete the desired tasks while providing the minimum required assistance, the AAN controller for the robotic actuation can be presented as

$\tau_{act} = \hat{M}\dot{e} + \hat{C}e + \hat{G} + \hat{f} - \hat{\tau}_{h} - K_{D}e$    (17)

where M̂, Ĉ, Ĝ, and f̂ are the estimates of M(q), C(q, q̇), G(q), and f(q̇), respectively, and K_D ∈ R^{n×n} is a positive-definite feedback gain. The selection of K_D will be elaborated in the subsequent feedback gain modification algorithm.
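
The structure of the control law (16)–(17) can be sketched for one control step as follows; the dynamic model terms and the EMG-based torque estimate are assumed to be available as callables, and the error acceleration is neglected in this illustration. This is a sketch of the controller structure, not the authors' implementation.

```python
import numpy as np

def aan_torque(q, dq, q_d, dq_d, Lambda, K_D,
               M_hat, C_hat, G_hat, f_hat, tau_h_hat):
    """One step of the AAN law: tau_act per Eq. 17 with the sliding variable of Eq. 16."""
    q_tilde = q - q_d                    # position tracking error
    dq_tilde = dq - dq_d                 # velocity tracking error
    e = dq_tilde + Lambda * q_tilde      # sliding variable (Eq. 16), Lambda a positive constant
    de = np.zeros_like(e)                # assumption: de/dt neglected in this sketch
    return (M_hat(q) @ de + C_hat(q, dq) @ e + G_hat(q) + f_hat(dq)
            - tau_h_hat - K_D @ e)       # Eq. 17
```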

Through the stability analysis in Appendix 1, we can conclude that the proposed control system yields a tracking error with uniformly ultimately bounded stability. The ultimate bound on the tracking error e can be expressed as B_e, and its formulation is derived in the following. Should the model estimation errors M̃ = M − M̂ and C̃ = C − Ĉ and the total disturbance τ_d all vanish, the appended analysis shows that e → 0 and the system is globally asymptotically stable. Inequality (42) in Appendix 1 establishes that the trajectory tracking error e is uniformly bounded and, more importantly, that this bound can be explicitly calculated. The Lyapunov function can be bounded as α_1(‖e‖) ≤ V ≤ α_2(‖e‖), where α_1 and α_2 are comparison functions defined below. Then, the ultimate bound B_e on the tracking error e can be defined as

$B_{e} = \alpha_{1}^{-1}\left(\alpha_{2}(z_{l})\right)$    (18)

where z_l is the limiting term that satisfies $\dot{V} < 0$ for all $\lVert e\rVert \ge z_l > 0$.

Since the inertia matrix M is positive-definite and bounded, the subsequent inequality can be derived as

$\dfrac{1}{2}\underline{M}\lVert e\rVert^{2} \le V \le \dfrac{1}{2}\overline{M}\lVert e\rVert^{2}$    (19)

where $\underline{M}$ and $\overline{M}$ are the minimal and maximal eigenvalues of the inertia matrix M, respectively. It should be noticed that the left and right sides of (19) correspond to α_1(‖e‖) and α_2(‖e‖), respectively. By adopting the right side of (41) in Appendix 1 as the limiting term z_l and substituting (19) into (18), the bound on the tracking error can be calculated as

$B_{e} = \sqrt{\dfrac{\overline{M}\,\lVert\tilde{M}\dot{e} + \tilde{C}e - \tau_{d}\rVert^{2}}{\underline{M}\,\theta^{2}\underline{K}_{D}^{2}}}$    (20)
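
For completeness, the substitution that yields (20) can be written out explicitly, using the comparison functions from (19) and the limiting term from (41); this is a brief sketch under the reconstruction above:

$\alpha_1(\lVert e\rVert) = \tfrac{1}{2}\underline{M}\lVert e\rVert^{2}, \quad \alpha_2(\lVert e\rVert) = \tfrac{1}{2}\overline{M}\lVert e\rVert^{2}, \quad z_l = \dfrac{\lVert\tilde{M}\dot{e} + \tilde{C}e - \tau_d\rVert}{\theta\,\underline{K}_D}$

$B_e = \alpha_1^{-1}\left(\alpha_2(z_l)\right) = \sqrt{\dfrac{\overline{M}}{\underline{M}}}\; z_l = \sqrt{\dfrac{\overline{M}\,\lVert\tilde{M}\dot{e} + \tilde{C}e - \tau_d\rVert^{2}}{\underline{M}\,\theta^{2}\underline{K}_D^{2}}}$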

Noticeably, the feedback gain K_D is included in the formulation of the bound B_e, which means that the bound on the allowable trajectory tracking error can be manipulated by directly varying the value of K_D, and the amount of robotic assistance can consequently be adjusted. Although an adequately large value of K_D results in a minimal bound on the tracking error, a perfect tracking effect is not desirable for stimulating patients' potential motor capabilities (Pehlivan et al., 2015). An appropriate allowable tracking error can facilitate improved rehabilitation efficacy, especially when aiming to promote the patient's active participation. Increasing or decreasing the value of K_D is suitable in the following practical situations.

Situation 1: When the patient with severe motor disability has difficulty in completing the training task or learning a motion, increasing the value of KD leads to reduced allowable tracking error, and larger robotic actuation is provided for assistance.

Situation 2: When the patient attempts to stimulate muscle strength to challenge themselves with more difficult tasks, decreasing the value of KD leads to increased allowable tracking error, and smaller robotic actuation is provided to spare more space for the patient’s effort.

Therefore, the selection of the feedback gain K_D plays an important role in addressing the trade-off between accurate trajectory tracking and sufficient participation encouragement. To solve this problem, a feedback gain modification algorithm is put forward to help patients complete, and even be challenged by, the task according to their residual motor capabilities and motion intention.

A parameter e* is introduced to define the maximum allowable trajectory tracking error. The average tracking error in a certain task is recorded as e_i with the feedback gain K_{D,i}, which will be updated as K_{D,i+1} in the next task based on the patient's performance. The performance metric is the human-robot interaction torque, which evaluates voluntary movement ability. The muscle activation in the current task can be normalized as u_i = τ_{h,i}/τ̂_{h,i}, where τ_{h,i} denotes the average interaction torque in the current task, and τ̂_{h,i} denotes the human joint torque required to complete the task in the absence of robotic assistance. Similarly, the human's performance in the prior task can be expressed as u_{i−1} = τ_{h,i−1}/τ̂_{h,i−1}. The comparison between the current and previous performance indicates the variance tendency of the subject's motor capability. Concretely, if u_i < u_{i−1}, the patient shows a downward tendency in muscle strength stimulation, and the future feedback gain K_{D,i+1} should be increased to meet Situation 1; otherwise, if u_i > u_{i−1}, the patient shows an upward tendency in rehabilitation efficacy, and the future feedback gain K_{D,i+1} should be decreased to meet Situation 2. The feedback gain is updated at the end of each task trajectory according to the following law

$K_{D,i+1} = (1 + \varpi_{i})K_{D,i}$    (21)

where ϖ_i is the change rate and satisfies −1 < ϖ_i < 1. Specifically, ϖ_i ∈ (−1, 0) means decreasing the future feedback gain with respect to the current one, whereas ϖ_i ∈ (0, 1) means increasing it. The change rate is formulated as

$\varpi_{i} = \dfrac{\lvert u_{i-1} - u_{i}\rvert}{\tau_{h,i}}\exp\left(\lvert u_{i} - u_{i-1}\rvert\right)\mathrm{sign}(u_{i-1} - u_{i})\,\varpi_{nom}$    (22)

where ϖ_nom is the nominal change rate, predetermined as a constant that limits the maximal tracking error to less than e*. The sign of ϖ_i depends on the variance tendency of the subject's motor capability. For instance, if u_i is larger than u_{i−1}, then ϖ_i ∈ (−1, 0): the algorithm assumes that the subject has the potential to exhibit better performance in the next task, and the feedback gain decreases to allow a larger error bound. Conversely, if u_i is smaller than u_{i−1}, then ϖ_i ∈ (0, 1): the algorithm assumes that the subject failed to complete the current task with improved voluntary muscle strength, and the feedback gain increases to provide more assistance in the next task. The magnitude of ϖ_i is decided by both the maximum tracking error and the performance variance.
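
A small sketch of the between-task update (21)–(22) is given below; the clipping that keeps the change rate strictly inside (−1, 1) is an added safeguard rather than part of the stated formulation, and the variable names are illustrative.

```python
import numpy as np

def update_gain(K_D, u_prev, u_curr, tau_h_curr, w_nom):
    """Eqs. 21-22: adapt the feedback gain between consecutive tasks."""
    w = (abs(u_prev - u_curr) / max(tau_h_curr, 1e-6)
         * np.exp(abs(u_curr - u_prev))
         * np.sign(u_prev - u_curr) * w_nom)                # Eq. 22
    w = float(np.clip(w, -0.99, 0.99))                      # safeguard: keep |w| < 1
    return (1.0 + w) * K_D                                  # Eq. 21
```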

Combining the AAN controller (17) with the feedback gain modification algorithm (21) and (22), the control framework can provide a highly efficient and autonomous training strategy for robot-assisted rehabilitation.

5 Experiments

In order to validate the proposed control framework, the lower extremity rehabilitation robot mentioned in Xu et al. (2019), Xu et al. (2021) was utilized to conduct a series of experiments. The experiments were carried out on three healthy subjects. All subjects were informed of the detailed operation procedures and potential risks and signed consent forms before participation. The experiments were approved by the ethics committee of Hefei Institutes of Physical Science, Chinese Academy of Sciences (approval number: IRB-2019-0018). Two DOFs of the robot, including hip flexion/extension and knee flexion/extension, were involved in the training. Before operation, the reference trajectories were prescribed by physiotherapists to ensure rhythmic and comfortable training motion, and they were then fed into the robot controller. During the training, the subjects were asked to track the reference trajectories with the assistance of the rehabilitation robot actuation. Once the interaction force exerted by the subjects was detected by sensors, the reference trajectory was deformed to generate another optimal desired trajectory, and the robot was controlled to cooperate with the subject to complete the modified task motion. It should be noted that the subjects were not allowed to voluntarily move in the opposite direction from the task trajectory for accurate calculation of the deformed trajectory and safety guarantee.

Three groups of experiments were performed for the three subjects, and three different reference trajectories were configured. The time interval between consecutive waypoints was set at δ = 0.01 s, and the prediction vector of the interaction torque was set at β = 1. The vector field function was updated four times per walking cycle, for hip flexion, hip extension, knee flexion, and knee extension, and the computation efficiency was adequate to ensure instantaneous and accurate trajectory deformation. The amount and variance of pHRI differed across the three subjects due to their individual motor capabilities and motion intentions. The experimental results are shown in Figures 4–6. Subfigures (A) and (B) show the trajectory deformation and tracking of the hip and knee joints, respectively. The variance of the interaction torque at the hip and knee joints is illustrated in subfigure (C), and the robotic actuation torque is presented in subfigure (D). The experimental results indicate that the proposed trajectory generator can continuously produce a smooth and optimal desired trajectory once the subject exerts force on the robot. When the interaction torque disappears, the desired trajectory gradually converges back to the predetermined reference trajectory. In this regard, shared control between the robot's desired trajectory and the human's voluntary effort is realized. Additionally, based on the proposed AAN controller, the actual trajectory output from the robot actuation tracks the desired trajectory well.

FIGURE 4. Experimental results of subject 1. (A) Trajectory deformation and tracking of the hip joint. (B) Trajectory deformation and tracking of the knee joint. (C) Interaction torque at the hip and knee joints. (D) Actuation torque at the hip and knee joints.

FIGURE 5. Experimental results of subject 2. (A) Trajectory deformation and tracking of the hip joint. (B) Trajectory deformation and tracking of the knee joint. (C) Interaction torque at the hip and knee joints. (D) Actuation torque at the hip and knee joints.

FIGURE 6. Experimental results of subject 3. (A) Trajectory deformation and tracking of the hip joint. (B) Trajectory deformation and tracking of the knee joint. (C) Interaction torque at the hip and knee joints. (D) Actuation torque at the hip and knee joints.

In order to exhibit the control performance more intuitively, a quantitative evaluation with three metrics was conducted. In terms of trajectory smoothness, the dimensionless squared jerk (DSJ) (Hogan and Sternad, 2009) was adopted; its definition is presented in (23). A smaller DSJ value indicates a smoother movement trajectory. For the compliance assessment, the energy per unit distance (EPUD) (Lee et al., 2018) was selected, as defined in (24). When improved compliance is achieved, the subject can drive the robot with less interaction torque, so a smaller EPUD value indicates higher robot compliance. The root mean square error (RMSE) defined in (25) was utilized to reveal the position error between the desired trajectory and the actual trajectory. A smaller RMSE value indicates a better tracking effect. The three metrics are formulated as follows.

$DSJ = \dfrac{\left(t_{b} - t_{a}\right)^{5}\int_{t_{a}}^{t_{b}}\dddot{q}(t)^{2}\,dt}{q_{max}^{2}}$    (23)
$EPUD = \dfrac{\sum_{j=1}^{N}\tau_{h}(t_{j})\,d(t_{j})}{\sum_{j=1}^{N}d(t_{j})}$    (24)
$RMSE = \sqrt{\dfrac{1}{N}\sum_{j=1}^{N}\left(q(t_{j}) - q_{d}(t_{j})\right)^{2}}$    (25)

In (23), t_a and t_b are the start and end times of the trajectory, q_max is the maximum amplitude of the trajectory, and the integrand is the squared third time derivative (jerk) of the trajectory. In (24), j = 1, 2, …, N is the sample index, τ_h(t_j) is the human-robot interaction torque at time t_j, and d(t_j) is the deviation between the reference trajectory and the desired trajectory at time t_j. In (25), q(t_j) and q_d(t_j) are the actual and desired trajectories at time t_j, respectively.
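
The three metrics can be computed on uniformly sampled trajectories as sketched below; the numerical third derivative via np.gradient and the use of absolute values in the EPUD sums are assumptions made for this illustration.

```python
import numpy as np

def dsj(q, dt):
    """Eq. 23: dimensionless squared jerk of a uniformly sampled trajectory."""
    jerk = np.gradient(np.gradient(np.gradient(q, dt), dt), dt)
    duration = dt * (len(q) - 1)
    return np.sum(jerk ** 2) * dt * duration ** 5 / np.max(np.abs(q)) ** 2

def epud(tau_h, d):
    """Eq. 24: energy per unit distance over the sampled deviations d(t_j)."""
    return np.sum(np.abs(tau_h) * np.abs(d)) / max(np.sum(np.abs(d)), 1e-9)

def rmse(q, q_d):
    """Eq. 25: root mean square tracking error."""
    return np.sqrt(np.mean((np.asarray(q) - np.asarray(q_d)) ** 2))
```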

Additionally, to better manifest the advantage of the proposed control framework, comparison experiments were performed with an admittance controller without the trajectory deformation and feedback gain modification algorithms (Li et al., 2017). The admittance control was implemented with the same robot and subjects, and the admittance parameters were regulated in response to the subjects' biological actions. The trajectory deformation and tracking performance of both control systems were evaluated with the abovementioned three metrics. Moreover, in order to evaluate rehabilitation efficacy, the muscle activation improvement was normalized and recorded, and the Fugl-Meyer assessment (FMA) was also deployed for clinical evaluation. A higher normalized EMG value and FMA score indicate rehabilitation improvement. Each trial was conducted three times for accuracy, and the mean values of these metrics are recorded in Table 1. The robot motion compliance and movement smoothness of the hip and knee joint trajectories generated by the trajectory deformation algorithm are much better than those obtained with the admittance control. Furthermore, the tracking performance under the feedback gain algorithm proved more satisfactory compared to the admittance control. The enhancement of muscle strength and clinical assessment scores is evident compared with the performance without the proposed control framework. Overall, the comparison results show that the control framework can effectively help the patient learn to move along the proper trajectory, and the training becomes more challenging and brings better rehabilitation efficacy.

TABLE 1. Comparison results.

Next, the feedback gain modification algorithm for the AAN controller was experimentally examined during the optimized training task. The subjects were required to voluntarily exert forces on the robot, and the feedback gain K_D was adapted according to the subjects' performance, producing subject-adaptive robotic assistance. At the end of each training task, questionnaires were completed to identify whether the current task felt easier or more difficult than the previous task. The questionnaire responses only served to assess the subjects' subjective perception without affecting the robot controller. After that, the subsequent training task was started immediately. The tracking performance of the AAN controller, the variance of the feedback gain K_D, and the human-robot interaction torque were measured in real time and are depicted in Figure 7. It can be seen from the figure that the feedback gain and tracking error are functions of the interaction torque. The experimental results demonstrate that the feedback gain K_D responds correctly to the subjects' motor capabilities. When subjects complete the task with more active involvement, K_D decreases, the allowable trajectory tracking error increases, and the magnitude of robotic assistance decreases for further encouragement, and vice versa. The variance tendency of K_D is consistent with the questionnaire results. Furthermore, no matter how the tracking error varies, the user-selected maximum allowable trajectory tracking error e* is always larger than or equal to the error in each task. Therefore, it can be concluded that the feedback gain modification algorithm can effectively adjust the robotic assistance according to the subjects' changing performance, hopefully encouraging impaired patients' active participation and facilitating rehabilitation efficacy.

FIGURE 7. Experimental results of the feedback gain modification algorithm.

6 Conclusion

In this paper, a control framework is proposed for the simultaneous adaptation of training tasks and robotic assistance for robot-assisted rehabilitation. Specifically, a trajectory deformation algorithm is developed to enable pHRI to regulate the task difficulty in real time, generating a smooth and compliant desired trajectory. Furthermore, an AAN controller, along with a feedback gain modification algorithm, is designed to motivate patients' active participation, where the robotic assistance is adjusted by evaluating the patients' performance variance and determining the trajectory tracking error bound. Appropriate training difficulty and assistance level are two important issues in robot-assisted rehabilitation. In this study, the appropriate training difficulty is expressed in the form of making a proper trajectory, which is realized with the proposed trajectory deformation algorithm; and the appropriate assistance level is expressed in the form of increasing the user's EMG level, which is realized with the proposed AAN controller with the feedback gain modification algorithm. The balance between these two issues is essential for better rehabilitation efficacy, and the proposed control framework can address this balance well. A lower extremity rehabilitation robot with magnetorheological (MR) actuators is then employed to validate the effectiveness of the proposed control framework. Experimental results demonstrate that the training task difficulty and robotic assistance level can be regulated appropriately according to subjects' changing motor capabilities.

In future work, more novel methods will be explored to estimate human motor capabilities and improve pHRI control strategies. More diverse training tasks will be involved to meet the rehabilitation requirements of different degrees and types of impairments. Machine learning may be adopted to ensure better time efficiency and greater adaptability of robotic assistance modification. Furthermore, more clinical trials will be carried out to expand the proposed control framework into clinical application.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

All subjects were informed of the detailed operation procedures and potential risks and signed consent forms before participation. The experiments were approved by the ethics committee of Hefei Institutes of Physical Science, Chinese Academy of Sciences (approval number: IRB-2019-0018).

Author contributions

JX and KH: Methodology, Investigation, Formal Analysis, Writing-original draft. TZ and KC: Data curation, Validation, Writing-review and editing. AJ, LX, and YL: Supervision, Writing-review and editing. All authors contributed to the article and approved the submitted version.

Funding

This research is supported by the National Natural Science Foundation of China (52205018), Natural Science Foundation of Jiangsu Province (BK20220894), State Key Laboratory of Robotics and Systems (HIT) (SKLRS-2023-KF-25), Fundamental Research Funds for the Central Universities (NS2022048), Nanjing Overseas Scholars Science and Technology Innovation Project (YQR22044), Scientific Research Foundation of Nanjing University of Aeronautics and Astronautics (YAH21004), and Jiangsu Provincial Double Innovation Doctor Program (JSSCBS20220232).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Damsgaard, M., Rasmussen, J., Christensen, S. T., Surma, E., and de Zee, M. (2006). Analysis of musculoskeletal systems in the anybody modeling system. Simul. Model. Pract. Theory 14 (8), 1100–1111. doi:10.1016/j.simpat.2006.09.001


Ferraguti, F., Landi, C. T., Sabattini, L., Bonfè, M., Fantuzzi, C., and Secchi, C. (2019). A variable admittance control strategy for stable physical human-robot interaction. Int. J. Robotics Res. 38 (6), 747–765. doi:10.1177/0278364919840415


Hogan, N., Krebs, H. I., Rohrer, B., Palazzolo, J. J., Dipietro, L., Fasoli, S. E., et al. (2006). Motions or muscles? Some behavioral factors underlying robotic assistance of motor recovery. J. Rehabilitation Res. Dev. 43 (5), 605–618. doi:10.1682/jrrd.2005.06.0103


Hogan, N., and Sternad, D. (2009). Sensitivity of smoothness measures to movement duration, amplitude, and arrests. J. Mot. Behav. 41 (6), 529–534. doi:10.3200/35-09-004-rc


Krebs, H. I., Palazzolo, J. J., Dipietro, L., Ferraro, M., Krol, J., Rannekleiv, K., et al. (2003). Rehabilitation robotics: performance-based progressive robot-assisted therapy. Aut. Robots 15 (1), 7–20. doi:10.1023/a:1024494031121


Lasota, P. A., and Shah, J. A. (2015). Analyzing the effects of human-aware motion planning on close-proximity human-robot collaboration. Hum. Factors 57 (1), 21–33. doi:10.1177/0018720814565188


Lee, K. H., Baek, S. G., Choi, H. R., Moon, H., and Koo, J. C. (2018). Enhanced transparency for physical human-robot interaction using human hand impedance compensation. IEEE/ASME Trans. Mechatronics 23 (6), 2662–2670. doi:10.1109/tmech.2018.2875690


Li, Z., Huang, Z., He, W., and Su, C-Y. (2017). Adaptive impedance control for an upper limb robotic exoskeleton using biological signals. IEEE Trans. Industrial Electron. 64 (2), 1664–1674. doi:10.1109/tie.2016.2538741


Losey, D. P., and O’Malley, M. K. (2018). Trajectory deformations from physical human-robot interaction. IEEE Trans. Robotics 34 (1), 126–138. doi:10.1109/tro.2017.2765335


Luo, R., Sun, S., Zhao, X., Zhang, Y., and Tang, Y. (2018). “Adaptive CPG-based impedance control for assistive lower limb exoskeleton,” in Proceedings of IEEE international conference on robotics and biomimetics, 685–690.


Marchal-Crespo, L., and Reinkensmeyer, D. J. (2009). Review of control strategies for robotic movement training after neurologic injury. J. NeuroEngineering Rehabilitation 6 (1), 20. doi:10.1186/1743-0003-6-20


Pehlivan, A. U., Sergi, F., and O’Malley, M. K. (2015). A subject-adaptive controller for wrist robotic rehabilitation. IEEE/ASME Trans. Mechatronics 20 (3), 1338–1350. doi:10.1109/tmech.2014.2340697


Peña, G. G., Consoni, L. J., dos Santos, W. M., and Siqueira, A. A. G. (2019). Feasibility of an optimal EMG-driven adaptive impedance control applied to an active knee orthosis. Robotics Aut. Syst. 112, 98–108. doi:10.1016/j.robot.2018.11.011


Schaal, S. (2006). “Dynamic movement primitives-A framework for motor control in humans and humanoid robotics,” in Adaptive motion of animals and machines (Tokyo, Japan: Springer), 261–280.


Sharifi, M., Mehr, J. K., Mushahwar, V. K., and Tavakoli, M. (2021). Adaptive CPG-based gait planning with learning-based torque estimation and control for exoskeletons. IEEE Robotics Automation Lett. 6 (4), 8261–8268. doi:10.1109/lra.2021.3105996


Shen, Z., Zhou, J., Gao, J., and Song, R. (2018). “Torque tracking impedance control for a 3DOF lower limb rehabilitation robot,” in Proceedings of IEEE international conference on advanced robotics and mechatronics, 294–299.


Sproewitz, A., Moeckel, R., Maye, J., and Ijspeert, A. J. (2008). Learning to move in modular robots using central pattern generators and online optimization. Int. J. Robotics Res. 27 (3–4), 423–443. doi:10.1177/0278364907088401


Wu, X., Li, Z., Kan, Z., and Gao, H. (2020). Reference trajectory reshaping optimization and control of robotic exoskeletons for human-robot co-manipulation. IEEE Trans. Cybern. 50 (8), 3740–3751. doi:10.1109/tcyb.2019.2933019


Xu, J., Li, Y., Xu, L., Peng, C., Chen, S., Liu, J., et al. (2019). A multi-mode rehabilitation robot with magnetorheological actuators based on human motion intention estimation. IEEE Trans. Neural Syst. Rehabilitation Eng. 27 (10), 2216–2228. doi:10.1109/tnsre.2019.2937000


Xu, J., Xu, L., Cheng, G., Shi, J., Liu, J., Liang, X., et al. (2021). A robotic system with reinforcement learning for lower extremity hemiparesis rehabilitation. Industrial Robot Int. J. robotics Res. Appl. 38 (3), 388–400. doi:10.1108/ir-10-2020-0230


Xu, J., Xu, L., Ji, A., and Cao, K. (2023). Learning robotic motion with mirror therapy framework for hemiparesis rehabilitation. Inf. Process. Manag. 60, 103244. doi:10.1016/j.ipm.2022.103244


Xu, J., Xu, L., Ji, A., Li, Y., and Cao, K. (2020b). A DMP-based motion generation scheme for robotic mirror therapy. IEEE/ASME Trans. Mechatronics, 1–12. doi:10.1109/TMECH.2023.3255218


Xu, J., Xu, L., Li, Y., Cheng, G., Shi, J., Liu, J., et al. (2020a). A multi-channel reinforcement learning framework for robotic mirror therapy. IEEE Robotics Automation Lett. 5 (4), 5385–5392. doi:10.1109/lra.2020.3007408


Yao, S., Zhuang, Y., Li, Z., and Song, R. (2018). Adaptive admittance control for an ankle exoskeleton using an EMG-driven musculoskeletal model. Front. Neurorobotics 12 (16), 16–12. doi:10.3389/fnbot.2018.00016


Yuan, Y., Li, Z., Zhao, T., and Gan, D. (2020). DMP-based motion generation for a walking exoskeleton robot using reinforcement learning. IEEE Trans. Industrial Electron. 67 (5), 3830–3839. doi:10.1109/tie.2019.2916396


Zhou, J., Li, Z., Li, X., Wang, X., and Song, R. (2021). Human-robot cooperation control based on trajectory deformation algorithm for a lower limb rehabilitation robot. IEEE/ASME Trans. Mechatronics 26 (6), 3128–3138. doi:10.1109/tmech.2021.3053562


Zhuang, Y., Yao, S., Ma, C., and Song, R. (2019). Admittance control based on EMG-driven musculoskeletal model improves the human-robot synchronization. IEEE Trans. Industrial Electron. 15 (2), 1211–1218. doi:10.1109/tii.2018.2875729


Appendix

7.1 Determination of Vector Field Function

As presented in (9), the vector field function Φ(t) is applied to shape the deformed trajectory, and its determination should follow the subsequent rules.

Rule 1: Continuity. Once the human torque τ_h is exerted at t_i, the predefined reference trajectory q_d^* starts to be deformed to q_d; after time t_f, the robot again follows its reference trajectory q_d^*. Hence, in order to ensure a continuous transition between the original and deformed trajectories, the vector field function Φ(t) and its time derivative Φ̇(t) are constrained as

$\Phi(t_i) = \Phi(t_f) = 0$
$\dot{\Phi}(t_i) = \dot{\Phi}(t_f) = 0$    (26)

Then, the trajectory configuration at the boundaries is satisfied, i.e., γ_d and γ̃_d, together with their time derivatives, coincide at both the start t_i and the end t_f.

Specifically, the deformed trajectory between t_i and t_f can be evenly divided into an arbitrary number of waypoints. We define the number of waypoints as N and the time interval between consecutive waypoints as δ. As introduced before, the duration of pHRI is p, such that the number of waypoints along γ_d and γ̃_d is N = p/δ + 1.

Consequently, the original and deformed desired trajectories within the time interval [t_i, t_f] can be written as

$\gamma_d = \left[q_d^*(t_i),\, q_d^*(t_i+\delta),\, \ldots,\, q_d^*(t_f-\delta),\, q_d^*(t_f)\right]$
$\tilde{\gamma}_d = \left[\Gamma_d(s, t_i),\, \Gamma_d(s, t_i+\delta),\, \ldots,\, \Gamma_d(s, t_f-\delta),\, \Gamma_d(s, t_f)\right]$

Applying this waypoint parameterization, the continuity statement (26) can be rewritten as

$\Gamma_d(s, t_i) - q_d^*(t_i) = \Phi(t_i) = 0$
$\Gamma_d(s, t_i+\delta) - q_d^*(t_i+\delta) = \Phi(t_i+\delta) = 0$
$\Gamma_d(s, t_f-\delta) - q_d^*(t_f-\delta) = \Phi(t_f-\delta) = 0$
$\Gamma_d(s, t_f) - q_d^*(t_f) = \Phi(t_f) = 0$

The above equation can be further rewritten as

$B(\tilde{\gamma}_d - \gamma_d) = B\Phi = 0$    (27)

where

$B = \begin{bmatrix} 1 & 0 & \cdots & 0 & 0\\ 0 & 1 & \cdots & 0 & 0\\ 0 & 0 & \cdots & 1 & 0\\ 0 & 0 & \cdots & 0 & 1 \end{bmatrix} \in \mathbb{R}^{4\times N}$

Rule 2: Smoothness. Although the reference trajectory q_d^* has been made as smooth as possible through modeling from the healthy-subject gait database, the naturalness of the deformed desired trajectory q_d should also be maintained to guarantee the patient's comfort and security. Numerous observations have demonstrated that healthy human movement complies well with the minimum-jerk model (Li et al., 2017). Accordingly, a minimum-jerk deformed trajectory can be derived by requiring the vector field function to satisfy

$\dddot{\Phi} = \dfrac{1}{\delta^{3}}A\Phi$    (28)

where

$A = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0\\ -3 & 1 & 0 & \cdots & 0\\ 3 & -3 & 1 & \cdots & 0\\ -1 & 3 & -3 & \cdots & 0\\ 0 & -1 & 3 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ 0 & 0 & 0 & \cdots & -3\\ 0 & 0 & 0 & \cdots & 3\\ 0 & 0 & 0 & \cdots & -1 \end{bmatrix} \in \mathbb{R}^{(N+3)\times N}$
is the finite differencing matrix whose columns repeat the third-order stencil [1, −3, 3, −1]^T.

Rule 3: Compliance. It is pHRI that initiates the trajectory deformation and decides the deformed trajectory shape, so the vector field function Φ(t) is highly correlated with the interaction torque τ_h. Analogous to (Losey and O'Malley, 2018), a cost function is formulated to reveal the variation of the trajectory deformation energy as

$J(\tilde{\gamma}_d) = -(\tilde{\gamma}_d - \gamma_d)^{T}\beta\tau_h(t_i) + \dfrac{1}{2\alpha}(\tilde{\gamma}_d - \gamma_d)^{T}A^{T}A(\tilde{\gamma}_d - \gamma_d)$    (29)

where α is a positive constant. It should be noticed that the trajectory deformation occurs at the current time t_i when the human first interacts with the robot, resulting in the interaction torque τ_h(t_i). Since the future interaction torque values are required to compute the energy of the trajectory deformation but remain unknown, an online prediction of the future interaction torque is constructed as βτ_h(t_i), where β ∈ R^N is the prediction vector.

The proposed cost function (29) contains two terms: the first term is the negative of the work done by the trajectory deformation on the human, so minimizing the cost favors deformations aligned with the predicted interaction torque; the second term is the squared norm of the deformation with respect to the finite differencing matrix A^T A (A is introduced in (28)) to ensure the smoothness and naturalness of the deformed trajectory.

In order to ensure the compliance of the deformed trajectory, the cost function (29) should be minimized under the constraint (27) to optimize the value of the vector field function Φ. The optimization problem can be formulated as

minimize $J(\tilde{\gamma}_d)$
subject to $B(\tilde{\gamma}_d - \gamma_d) = 0$    (30)

A Lagrangian function is defined as follows to solve the above optimization problem.

$L(\tilde{\gamma}_d, \lambda) = J(\tilde{\gamma}_d) + \lambda^{T}B(\tilde{\gamma}_d - \gamma_d)$    (31)

where λ ∈ R^4 is a vector of Lagrange multipliers. Setting the partial derivatives of (31) to zero yields

$\nabla_{\tilde{\gamma}_d}L(\tilde{\gamma}_d, \lambda) = -\beta\tau_h(t_i) + \dfrac{1}{\alpha}A^{T}A(\tilde{\gamma}_d - \gamma_d) + B^{T}\lambda = 0$
$\nabla_{\lambda}L(\tilde{\gamma}_d, \lambda) = B(\tilde{\gamma}_d - \gamma_d) = 0$    (32)

Further computation from (32) leads to Equation (33), which reveals the relationship between the vector field function and the interaction torque. Apart from Lagrange multipliers, the reader can refer to (Wu et al., 2020) for another solver, i.e., a linear variational inequality-based primal-dual neural network.

$\Phi = \rho G\beta\tau_h(t_i)$    (33)

where

$G = \left(I - (A^{T}A)^{-1}B^{T}\left(B(A^{T}A)^{-1}B^{T}\right)^{-1}B\right)(A^{T}A)^{-1}$    (34)

where I ∈ R^{N×N} is an identity matrix.

The determination of the parameter ρ in (33) has a significant impact on the shape of Φ. In robot-assisted rehabilitation, the duration of the human-robot interaction is relatively long because the patient mostly tries to participate actively to guide the robot, which differs from the setting in (Losey and O'Malley, 2018). Considering the above factors, the parameter ρ is defined as

$\rho = \dfrac{\mu\delta}{(p+\delta)\lVert G\rVert}$    (35)

where μ denotes the level of assistance, which regulates whether the trajectory deformation arbitrates toward the robot or the human.

Therefore, following the rules of continuity, smoothness, and compliance, the final expression of the vector field function is

$\Phi(t) = \mu\delta H\beta\tau_h(t_i)$    (36)

where

$H = \dfrac{G}{(p+\delta)\lVert G\rVert}$
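
A sketch of how the matrices A, B, G, and H could be assembled for N waypoints is given below; it follows Equations 27, 28, 34, and 36 under the reconstruction above and is an illustration rather than the authors' code.

```python
import numpy as np

def build_matrices(N, delta, p):
    """Assemble B (Eq. 27), A (Eq. 28), G (Eq. 34), and H (Eq. 36) for N waypoints."""
    # B pins the first two and last two waypoints (continuity of position and slope).
    B = np.zeros((4, N))
    B[0, 0] = B[1, 1] = B[2, N - 2] = B[3, N - 1] = 1.0

    # (N+3) x N finite-difference matrix with the third-order stencil [1, -3, 3, -1].
    A = np.zeros((N + 3, N))
    stencil = np.array([1.0, -3.0, 3.0, -1.0])
    for i in range(N + 3):
        for k, c in enumerate(stencil):
            j = i - k
            if 0 <= j < N:
                A[i, j] = c

    R_inv = np.linalg.inv(A.T @ A)
    G = (np.eye(N) - R_inv @ B.T @ np.linalg.inv(B @ R_inv @ B.T) @ B) @ R_inv
    H = G / ((p + delta) * np.linalg.norm(G))
    return A, B, G, H
```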

7.2 Stability Analysis of AAN Controller

Combining the modified robot dynamics (15), the sliding variable (16), and the AAN controller (17), the following error dynamics can be obtained.

$\hat{M}\dot{e} + \hat{C}e + K_{D}e + \tau_{d} = 0$    (37)

For stability analysis, consider a Lyapunov candidate function as

$V = \dfrac{1}{2}e^{T}Me$    (38)

Then, the time-derivative of the Lyapunov function is

$\dot{V} = -e^{T}K_{D}e + e^{T}\left(\tilde{M}\dot{e} + \tilde{C}e - \tau_{d}\right)$    (39)

where M̃ = M − M̂ and C̃ = C − Ĉ denote the model estimation errors.

Let us introduce a constant θ ∈ (0, 1); the time derivative of the Lyapunov function can then be bounded as

$\dot{V} \le -\underline{K}_{D}\lVert e\rVert^{2} + \lVert e\rVert\,\lVert\tilde{M}\dot{e} + \tilde{C}e - \tau_{d}\rVert = -(1-\theta)\underline{K}_{D}\lVert e\rVert^{2} - \theta\underline{K}_{D}\lVert e\rVert^{2} + \lVert e\rVert\,\lVert\tilde{M}\dot{e} + \tilde{C}e - \tau_{d}\rVert$    (40)

where $\underline{K}_{D}$ denotes the minimal eigenvalue of $K_{D}$.

Hence, if the following inequality is satisfied,

$\lVert e\rVert \ge \dfrac{\lVert\tilde{M}\dot{e} + \tilde{C}e - \tau_{d}\rVert}{\theta\,\underline{K}_{D}}$    (41)

The time-derivative of the Lyapunov function satisfies

$\dot{V} \le -(1-\theta)\underline{K}_{D}\lVert e\rVert^{2} < 0$    (42)

Through the stability analysis, the Lyapunov candidate satisfies V ≥ 0, and $\dot{V} < 0$ holds whenever $\lVert e\rVert$ exceeds the bound in (41); it can therefore be concluded that the proposed control system yields a tracking error with uniformly ultimately bounded stability.

Keywords: rehabilitation robotics, human-robot interaction, biological signal, trajectory deformation, assist-as-needed control

Citation: Xu J, Huang K, Zhang T, Cao K, Ji A, Xu L and Li Y (2023) A rehabilitation robot control framework with adaptation of training tasks and robotic assistance. Front. Bioeng. Biotechnol. 11:1244550. doi: 10.3389/fbioe.2023.1244550

Received: 22 June 2023; Accepted: 29 August 2023;
Published: 02 October 2023.

Edited by:

Tianzhe Bao, University of Health and Rehabilitation Sciences, China

Reviewed by:

Guozheng Xu, Nanjing University of Posts and Telecommunications, China
Dong Hyun Kim, Samsung Research, Republic of Korea

Copyright © 2023 Xu, Huang, Zhang, Cao, Ji, Xu and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jiajun Xu, xujiajun@nuaa.edu.cn
