

Front. Appl. Math. Stat., 27 November 2020

On the Relation Between Smooth Variable Structure and Adaptive Kalman Filter

  • Chair of Dynamics and Control, University of Duisburg-Essen, Duisburg, Germany

This article addresses the topic of robust state estimation of uncertain nonlinear systems. In particular, the smooth variable structure filter (SVSF) and its relation to the Kalman filter are studied. An adaptive Kalman filter is obtained from the SVSF approach by replacing the gain of the original filter. Boundedness of the estimation error of the adaptive filter is proven. The SVSF approach and the adaptive Kalman filter achieve improved robustness against model uncertainties if the filter parameters are suitably optimized. Therefore, a parameter optimization process is developed and the estimation performance is studied.

1. Introduction

State estimation plays an important role in the field of control. System states are required for the calculation of state feedback controllers, exact input/output linearizations, equivalent control, backstepping control, etc. Noise reduction of signals is desirable to improve the performance of sliding mode approaches under real conditions. Combined input-state estimation is useful for the estimation and rejection of unknown exogenous inputs. Additionally, robust model-based fault detection and localization approaches can be designed based on filters.

For linear systems, minimum variance unbiased [1, 2] and augmented state filters [3, 4] can be used for combined input-state estimation. In case of known uncertainty bounds, robust Kalman filters [5] can improve the state estimation of uncertain linear systems. In the field of H∞ filtering, the robustness of the state estimation may be improved by minimizing the effect of the worst possible energy-bounded disturbances on the estimation error [6]. Multiple-model approaches [7] are a powerful tool for state estimation of uncertain systems. Combining them with the particle filter allows state estimation of nonlinear systems [8].

The smooth variable structure filter (SVSF) introduced in Ref. 9 is an approach for state estimation of uncertain nonlinear systems. Several applications of the SVSF can be found in the literature. The filter has been applied to estimate the states and parameters of an uncertain linear hydraulic system in Ref. 9. A multiple-model approach has been formulated for fault detection, e.g., leakage of the hydraulic system [10]. The state of charge and state of health of batteries are estimated in Refs. 11 and 12. A multiple-model approach has been applied for target tracking in Ref. 13, and a SVSF-based probabilistic data association (PDA) approach has been proposed for tracking in a cluttered environment [14]. For multiple object tracking a SVSF-based joint-PDA approach has been developed [15]. Online multiple vehicle tracking in real road scenarios has been investigated in Ref. 16. Several SVSF-based simultaneous localization and mapping algorithms have been proposed, e.g., Refs. 17–19. Training of neural networks based on the SVSF and classification of engine faults has been studied in Ref. 20. Dual estimation of states and model parameters has been considered in Ref. 21. The estimation strategy works as follows. The bi-section method, the shooting method, and the SVSF are combined. The bi-section and shooting methods are applied to determine best-fitting model parameter combinations. The obtained model is used by the SVSF to estimate the system states. To apply the bi-section method the measurement signals are divided into segments in which the model parameters remain nearly constant. In comparison to the Kalman filter, the SVSF approach facilitates detection of these segments based on an evaluation of the chattering process.

As described in Ref. 9, the SVSF approach uses a switching gain to drive the estimated state trajectory into a region around the true state trajectory called the existence subspace. Due to measurement noise, chattering occurs in the existence subspace. By introducing a boundary layer, similar to the saturation function approach of sliding mode control, the high frequency switching in the existence subspace can be attenuated. The claimed advantage of the SVSF approach is that, if the boundary layer is not used, the filter is guaranteed to reach the existence subspace even though an imprecise system model may be used. It is shown in Ref. 22 that without the boundary layer the estimates of the filter converge to the measurements, which guarantees the estimation error to be bounded. However, this is a trivial result, because if the estimates equal the measurements and every state is measured, then the filter is not required. If the boundary layer is used, the estimates diverge from the noisy measurements and the estimation performance improves. Unfortunately, it has never been proven that the SVSF with boundary layer has a bounded estimation error in case an imprecise model is used.

As already mentioned, a serious limitation of the SVSF approach is that all system states have to be measured. Additionally, the measurement model is required to be linear. However, related to tracking, a linear measurement model may be achieved by applying a measurement conversion [23], and measurements of the vehicle velocities could also be derived from measured positions.

Another problem of the SVSF results from the dependency of the estimation performance on the width of the introduced smoothing boundary layer. In Ref. 24 an estimation error model for the SVSF is proposed, and in Ref. 25 the estimation error is minimized with respect to the smoothing boundary layer width. A maximum a posteriori estimation of the noise statistics of the error model is discussed in Ref. 26. However, the derived estimation error model and the related approaches require the system to be linear and precisely known, which contradicts the idea of robustness.

In our previous publication [22] a new tuning parameter for the SVSF approach was introduced to achieve online optimization of the estimation performance. In this paper the relation between the SVSF approach and the Kalman filter is studied. An adaptive Kalman filter is obtained from the SVSF approach by replacing the original filter gain. The estimation performance of the SVSF and the adaptive filter variant are compared with one another. To this end, a parameter optimization scheme is proposed. In the simulation results the adaptive Kalman filter shows superior performance compared to the original SVSF approach.

The paper is organized as follows. In Section 2 the preliminaries and the original SVSF approach are discussed. In Section 3 the relation of SVSF and Kalman filter is studied. Parameter optimization of the Kalman filter leading to an adaptive filtering approach is considered in Section 4. The stability of the adaptive filter is studied in Section 5. A performance evaluation of SVSF and adaptive Kalman filter is provided in Section 6.

Notations. An overview of the notations used within the paper is given in Table 1.


TABLE 1. Nomenclature.

2. Problem Formulation and Previous Work

Consider the dynamics of a nonlinear system to be exactly described by the discrete-time model


with states x_k ∈ ℝ^n, inputs u_k ∈ ℝ^m, and outputs y_k ∈ ℝ^n. The process noise r_k ∈ ℝ^n is assumed to be a white noise process with independent samples described by the covariances E(r_k r_k^T) = R ≻ 0 and E(r_i r_j^T) = 0 for i ≠ j, and the mean E(r_k) = 0. The measurement model Eq. 2 can be obtained from any model of the form ỹ_k = H_k x_k + r̃_k, as the considered SVSF approach requires H_k to be invertible [9]. Consider f̆_k to be a nominal description of system (1, 2), which may differ from the true behavior. According to Ref. 9 an estimate x̂ of the system states x can be obtained using the SVSF algorithm


where the operator "∘" denotes the Schur (element-wise) product, Φ and Ψ are diagonal matrices with Φ_ii, Ψ_ii denoting the ith diagonal elements, and the ith element of the vector sat(e_{y,k+1|k}, Ψ) is defined as

sat(a, A)_i = { 1           for a_i/A_ii > 1,
                −1          for a_i/A_ii < −1,
                a_i/A_ii    for −1 ≤ a_i/A_ii ≤ 1,    (8)

where i ∈ {1, 2, …, n}. According to Ref. 22 the SVSF algorithm (Eqs 3–7) reduces to


in case of Φ = 0_{n×n}.
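In code, the element-wise saturation of Eq. 8 reads as the following minimal sketch (numpy; `Psi` is the diagonal boundary-layer matrix, and the function name is illustrative):

```python
import numpy as np

def sat(a, Psi):
    """Element-wise saturation of Eq. 8: each component a_i is scaled by the
    boundary-layer width Psi_ii and the result is clipped to [-1, 1]."""
    ratio = a / np.diag(Psi)          # a_i / Psi_ii
    return np.clip(ratio, -1.0, 1.0)
```

Inside the boundary layer the function acts linearly; outside it switches to ±1, which is what attenuates the chattering of the pure switching gain.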

Theorem 1 of Ref. 9 proves that the output estimation error e_{y,k|k} of algorithm (Eqs 3–7) approaches zero if, instead of the gain M̃_{k+1}, the gain M_{k+1} = (|e_{y,k+1|k}| + Φ|e_{y,k|k}|) ∘ sgn(e_{y,k+1|k}) is used. As a consequence the estimates of the filter equal the measurements, i.e., e_{y,k|k} = y_k − x̂_{k|k} = 0. This is a trivial result, as all states are required to be measured, so one might just use the measurements instead of the filter estimates. The estimates of the filter diverge from the measurements and the estimation performance improves if a boundary layer is introduced and the gain M_{k+1} is replaced by M̃_{k+1}. However, it has neither been proven that algorithm (Eqs 3–7) with gain M̃_{k+1} has a bounded estimation error, nor has it been shown that the introduced boundary layer minimizes the squared estimation error or some other performance criterion.

3. Relation Between Smooth Variable Structure and Kalman Filter

In this section the stochastic gain M̃_{k+1} of the SVSF approach is replaced by a deterministic, yet to be defined gain K_{k+1}. Using the deterministic gain, the estimation error covariance matrix of the filter is determined. By minimizing the mean squared estimation error (MSE), the deterministic gain K_{k+1} becomes the specific optimal one, K_{k+1}^{opt}. Replacing the original gain M̃_{k+1} of the SVSF by the optimal one K_{k+1}^{opt} gives a direct link to the Kalman filter.

From Eqs 1, 2, and 9–12 it follows that the state estimation error and the output estimation error can be determined as




where K_{k+1} is a deterministic, yet to be defined gain. To derive the optimal gain that minimizes the MSE, an expression of the a posteriori error covariance dependent on K_{k+1} is required. First, however, the calculation of the output error covariance is considered. Inserting Eq. 14 into the definition of the output error covariance S_{k+1} = E(e_{y,k+1|k} e_{y,k+1|k}^T) gives


Expanding Eq. 15 and considering the definition of the a priori estimation error covariance P_{k+1|k} = E(e_{x,k+1|k} e_{x,k+1|k}^T) and the stationary measurement noise covariance R = E(r_{k+1} r_{k+1}^T) yields


The value of the remaining expectations in Eq. 16 is studied as follows. First of all, the a priori estimation error e_{x,k+1|k} is known to depend directly on the noise realizations r_j with j ∈ {0, 1, …, k} but not on the realization r_{k+1}. In addition, r_j with j ∈ {0, 1, …, k} and r_{k+1} are independent of each other due to the independent white noise assumption. Consequently, r_{k+1} cannot have any effect on e_{x,k+1|k} and both random variables are stochastically independent. It follows that E(r_{k+1} e_{x,k+1|k}^T) = E(r_{k+1}) E(e_{x,k+1|k}^T) and E(e_{x,k+1|k} r_{k+1}^T) = E(e_{x,k+1|k}) E(r_{k+1}^T). From the zero-mean assumption of the noise, i.e., E(r_k) = 0, it follows


Next the expression for the a posteriori estimation error covariance is considered. Inserting Eq. 13 into the definition of the a posteriori estimation error covariance P_{k+1|k+1} = E(e_{x,k+1|k+1} e_{x,k+1|k+1}^T) leads to


Expanding Eq. 18 and considering the definition of the output error covariance Sk+1 yields


Based on Eq. 14 the two remaining expectations in Eq. 19 can be written as


where E(r_{k+1} e_{x,k+1|k}^T) and E(e_{x,k+1|k} r_{k+1}^T) again vanish due to the stochastic independence of r_{k+1} and e_{x,k+1|k}. Finally, the a posteriori error covariance is obtained as


In the following minimization of MSE is considered. Based on Eq. 21 the error covariance can also be written as


By adding R S_{k+1}^{−1} R − R S_{k+1}^{−1} R, expression Eq. 22 can be rewritten as


The trace of P_{k+1|k+1} gives the MSE of the a posteriori estimation. The last term in Eq. 23 is at least a positive semidefinite matrix with a trace greater than or equal to zero, as S_{k+1} ≻ 0. All other terms do not depend on K_{k+1}. Consequently, the minimum of the MSE is achieved by


as the trace of the last term is zero for K_{k+1}^{opt}. The connection between algorithm (Eqs 3–7) and the Kalman filter is studied as follows.

Theorem 1. The state prediction and correction of algorithm (Eqs 9–12) equal those of the extended Kalman filter if M̃_{k+1} is replaced by K_{k+1}^{opt} as stated in Eq. 24.

Proof.¹ The state prediction (Eq. 9) obviously equals that of the extended Kalman filter. Regarding the correction step (Eq. 11) it follows that


holds true if M̃_{k+1} is replaced by K_{k+1}^{opt} of Eq. 24. As the introduced K_{k+1}^{Kal} of Eq. 26 equals the Kalman filter gain


step (Eq. 26) and thus (Eq. 11) is identical to the correction step of the Kalman filter. ∎
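The optimality of the gain (Eq. 24) can be checked numerically. The following sketch assumes the identity measurement model of Eq. 2, so that S_{k+1} = P_{k+1|k} + R, and uses the Joseph form of the a posteriori covariance (Eq. 21); the matrices are randomly generated placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical a priori error covariance P and noise covariance R (H = I,
# since the SVSF measurement model is the identity after conversion).
A = rng.standard_normal((3, 3)); P = A @ A.T + 0.1 * np.eye(3)
B = rng.standard_normal((3, 3)); R = B @ B.T + 0.1 * np.eye(3)

S = P + R                      # output error covariance, Eq. 17
K_opt = P @ np.linalg.inv(S)   # optimal gain, Eq. 24

def posterior_cov(K):
    """A posteriori covariance for an arbitrary gain K (Joseph form of Eq. 21)."""
    I = np.eye(3)
    return (I - K) @ P @ (I - K).T + K @ R @ K.T

mse_opt = np.trace(posterior_cov(K_opt))
# Any other gain yields a larger (or equal) mean squared error.
for _ in range(100):
    K = rng.standard_normal((3, 3))
    assert np.trace(posterior_cov(K)) >= mse_opt - 1e-9
```

The trace of the posterior covariance is a convex quadratic in K, so the random comparison above never beats K_opt, which mirrors the completion-of-squares argument around Eq. 23.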

4. Parameter Optimization of the Kalman Filter

The robustness of the SVSF against model uncertainties is achieved by tuning the parameters of the smoothing boundary layer width [25]. However, the Kalman filter gain can also be made adaptive to achieve improved robustness [27]. In this section an adaptation law for the unknown a priori estimation error covariance P_{k+1|k} of the optimal gain (Eq. 24) is derived. The error covariance P_{k+1|k} should not be propagated based on the system model (as is usually done in the field of Kalman filtering), as this would lead to prediction errors due to the imprecise model description. Instead, P_{k+1|k} itself is required to be estimated.

As, according to Eq. 17, the state error covariance P_{k+1|k} is related to the output error covariance S_{k+1}, the expression




might be considered to gain information about P_{k+1|k}. Estimating Ŝ_{k+1} based on the innovation process e_{y,k+1|k} is common in the field of adaptive Kalman filtering, e.g., Refs. 27 and 28. From Eq. 31 it can be seen that if e_{x,k+1|k} is constant over N time steps and r_{k+1} is ergodic in the sense of (1/N) Σ_{j=k−N+2}^{k+1} r_j = 0 and (1/N) Σ_{j=k−N+2}^{k+1} r_j r_j^T = R, then Ŝ_{k+1} − R equals the a priori error covariance. Additional information about P_{k+1|k} can be obtained by considering the suboptimal gain K_{k+1}^{sub} = I. Replacing K_{k+1} of Eq. 23 by the suboptimal gain K_{k+1}^{sub} gives the error covariance P_{k+1|k+1}^{sub} = R of the suboptimal filter, whose estimates equal the measurements. As the filter with the optimal gain minimizes the MSE, the upper bound


can be established. If it is assumed that P_{k+1|k+1} ≈ P_{k+1|k}, which is the case for a sufficiently small sampling time, then


might be considered to gain information about P_{k+1|k}. Due to the recursive nature of the filtering approach, the relation of P_{k+1|k} to the previous value P_{k|k−1} might also be considered. For a small sampling time this can be roughly described by


In order to find an estimate P̂_{k+1|k} that best fits the established Eqs 29, 33, and 34, a weighted least squares (WLS) estimation problem is formulated. The importance of the individual equations is expressed by the scalar weights α, β, γ > 0, which will be determined in a parameter optimization process. Using the vector operator "vec" the WLS problem


subject to


is considered. The solution of this WLS problem is


which is a weighted sum of the solutions of the individual equations. The obtained solution (Eq. 36) is not guaranteed to be positive semidefinite. However, as P_{k+1|k} is a covariance matrix, it must be at least positive semidefinite. In practice it will even be positive definite, as otherwise the MSE would be zero. Incorporating the constraint P̂_{k+1|k} ⪰ 0 into the WLS problem (Eq. 35) would lead to a nonlinear optimization problem. In order to keep the problem simple it is suggested to replace P̂_{k+1|k} by some P̂_{k+1|k}^{≥0} if P̂_{k+1|k} has non-positive eigenvalues. The modified error covariance P̂_{k+1|k}^{≥0} is


where L_{k+1} diagonalizes P̂_{k+1|k} as


and D̃_{k+1} is a diagonal matrix with the ith diagonal element (D̃_{k+1})_ii defined by

(D̃_{k+1})_ii = { (D_{k+1})_ii                 if (D_{k+1})_ii > 0,
                 η (L_{k+1}^T R L_{k+1})_ii   if (D_{k+1})_ii ≤ 0,

where i ∈ {1, 2, …, n}. The scaling factor η ∈ (0, 1] will be part of the parameter optimization process. The diagonalization (Eq. 38) can always be achieved, as P̂_{k+1|k} is symmetric if a symmetric initial matrix P̂_0 is chosen. Finally, estimating the optimal Kalman filter gain K_{k+1}^{opt} based on P̂_{k+1|k} leads to the adaptive Kalman filtering (A-KF) approach




where L_{k+1} diagonalizes P̂_{k+1|k} so that the diagonalization D_{k+1} is achieved as


and the diagonal matrix D̃_{k+1} is obtained from

(D̃_{k+1})_ii = { (D_{k+1})_ii                 if (D_{k+1})_ii > 0,
                 η (L_{k+1}^T R L_{k+1})_ii   if (D_{k+1})_ii ≤ 0,    (46)

with i ∈ {1, 2, …, n} and factor η ∈ (0, 1].
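The covariance adaptation of the A-KF can be sketched as follows. All names are illustrative; the blend in `adapt_P` is one plausible reading of the WLS solution (Eq. 36), whose explicit expression is not reproduced above, and `repair_cov` follows the eigenvalue modification of Eqs 44-46:

```python
import numpy as np

def innovation_cov(innovations):
    """Windowed estimate of the output error covariance S (Eqs 29/31):
    the sample average of e e^T over the last N innovations."""
    E = np.asarray(innovations)
    return E.T @ E / len(E)

def repair_cov(P, R, eta):
    """Eigenvalue modification of Eqs 44-46: non-positive eigenvalues of the
    symmetric estimate P are replaced by eta-scaled diagonal entries of
    L^T R L, keeping the result positive definite."""
    D, L = np.linalg.eigh(P)              # P = L diag(D) L^T
    fallback = np.diag(L.T @ R @ L)
    D = np.where(D > 0, D, eta * fallback)
    return L @ np.diag(D) @ L.T

def adapt_P(S_hat, R, P_prev, alpha, beta, gamma, eta):
    """Weighted average of the three candidate values S_hat - R (Eq. 29),
    R (Eq. 33), and the previous estimate (Eq. 34), followed by the
    positivity repair; an assumed reading of Eq. 36, not the exact formula."""
    P = (alpha * (S_hat - R) + beta * R + gamma * P_prev) / (alpha + beta + gamma)
    return repair_cov(0.5 * (P + P.T), R, eta)
```

The estimated gain then follows Eq. 24 with P̂_{k+1|k} in place of the unknown P_{k+1|k}.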

The adaptive filtering approach (Eqs 39–46) depends on the set


of tuning parameters. In order to optimize these parameters a training model is introduced. The filter is applied to the training model instead of the real system. The training model simulates the effects of the model uncertainty arising from the real unknown system by varying the system parameters of the known nominal model f̆. Let N_p be the number of system parameters and let p_i^{(0)} be the nominal value of parameter i. Assume that a priori knowledge about the amount of model uncertainty is available, meaning that parameter p_i^{(0)} is required to be varied by ϱ_i percent to account for the severity of the model uncertainty. It is suggested to obtain the ith parameter of the training model, p_i^{(t)}, by drawing a sample p_i^{(t)} ∼ U((1 − ϱ_i/100) p_i^{(0)}, (1 + ϱ_i/100) p_i^{(0)}) from a uniform distribution. Repeating the procedure for all N_p system parameters forms one set of training parameters, denoted as a training model. The optimization of the filter parameters based on the training model can be achieved as follows. Let x̄_k be the states and ȳ_k the measurements of the training model, and let x̂_{k|k|Ȳ_k} be the estimates of the filter using the nominal model but receiving the measurements Ȳ_k = {ȳ_0, ȳ_1, …, ȳ_k} from the training model. Then the optimized parameters are given as

Θ* = arg min_Θ J(Θ) = arg min_Θ Σ_k ‖x̄_k − x̂_{k|k|Ȳ_k}(Θ)‖²,    (48)

where J denotes the cost function. The optimization is a mixed integer optimization problem which can be solved by, e.g., genetic approaches. In order to make the optimization process more reliable it is recommended to build several training models and optimize the filter parameters for each of them separately. The final optimized filter parameters can be obtained by taking the mean or median of the optimized parameters of the individual training models. The process of filter parameter optimization is illustrated in Figure 1.
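The sampling of one training model can be sketched as follows (the function name and argument layout are hypothetical):

```python
import numpy as np

def draw_training_params(p_nominal, rho_percent, rng):
    """Draw one training model as described in Section 4: each nominal
    parameter p_i is perturbed uniformly within +/- rho_i percent."""
    p0 = np.asarray(p_nominal, dtype=float)
    rho = np.asarray(rho_percent, dtype=float) / 100.0
    return rng.uniform((1 - rho) * p0, (1 + rho) * p0)
```

Repeated draws give independent training models (Training I-III below); for each, a genetic optimizer then minimizes the cost J of Eq. 48 over the filter parameters.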


FIGURE 1. Filter parameter optimization. Training: The training is based on a specific system description called training model which is assumed to be the true unknown system model. The training model equals the known nominal system description but with varied system parameters to account for the model uncertainty. The filter runs with the nominal model and its tuning parameters are optimized by comparing the filter estimates with the true states of the training model. Test: The filter is applied to the real system, it runs with the optimized tuning parameters and uses the nominal system description.

5. Stability Analysis of the Adaptive Kalman Filter

As mentioned previously, boundedness of the estimation error of the SVSF approach (Eqs 3–7) using the smoothing boundary layer has never been proven. However, if the adaptive Kalman filter is used instead (M̃_{k+1} is replaced by K̂_{k+1}^{opt}), the boundedness of the estimation error can be proven as follows.

Theorem 2. The estimation error of the a posteriori estimation


of the adaptive Kalman filtering approach (Eqs 39–46) is bounded by


if the measurement noise is bounded by ‖r_k‖ < a_1 and the following conditions are fulfilled


where ε_1, ε_2 are sufficiently small positive constants and a_1, a_2 are sufficiently large positive constants.

Proof. From Eq. 51 it can be concluded that P̂_{k+1|k} can be written as


with Ω_k ⪰ 0. As known from Eq. 13 the a posteriori estimation error is
with Ωk0. As known from Eq. 13 the a posteriori estimation error is


With the definition R̃_k = R + Ω_k ≻ 0, the estimated optimal filter gain can be written as


Applying the Sherman-Morrison-Woodbury formula (Fact 2.16.3 [29]) in combination with assumption (Eq. 52) gives


Using the submultiplicative Frobenius norm (Proposition 9.3.5 [29]) and Eq. 58, Eq. 57 can be written as


Based on the minimum eigenvalue of R̃_k^{−1} the inequality


can be established (Lemma 8.4.1 [29]). Using Eq. 60 and assumption Eq. 53 an upper bound of Eq. 59 is obtained as


Unfortunately, R̃_k^{−1} implicitly depends on e_{y,k+1|k}, so that assumption Eq. 54 has to be taken into consideration, leading to


Based on the derivative of Eq. 62 with respect to ‖e_{y,k+1|k}‖ it can be shown that the maximum of the upper bound is achieved for ‖e_{y,k+1|k}‖ = 1/(ε_1 ε_2), which leads to


Then Eq. 50 is proven by applying the triangle inequality to Eq. 56. ∎

6. Numerical Example

In this section state estimation and control of a chemical plant is considered in order to evaluate the performance of the original SVSF and the adaptive Kalman filtering variant. According to Refs. 30 and 31, a species A reacts in a continuous stirred tank reactor. The dynamics of the effluent flow concentration C_A = x_1 of species A and the reactor temperature T = x_2 can be described by the continuous-time model


with measurements y, and control variable z defined as


The parameters of the system are shown in Table 2. The input of the system is the change of the coolant stream temperature u = ΔT_c relative to the nominal value T_c^{eq}. An input saturation of |u| ≤ 50 K is considered. The system is simulated based on the Euler method using a sampling time of 0.1 s. The resulting discrete-time measurement model is


where the zero-mean, white noise rk is Gaussian with covariance


For the initial state the values x_1(t_0) = 0.875 mol/l and x_2(t_0) = 325 K are considered.
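As a minimal sketch of the simulation setup, the explicit Euler discretization with the stated 0.1 s sampling time can be written as follows; the drift `f` is a generic stand-in, since the reactor equations themselves are given by Eq. 64 and Table 2:

```python
import numpy as np

def euler_step(f, x, u, dt=0.1):
    """One explicit Euler step x_{k+1} = x_k + dt * f(x_k, u_k), as used to
    discretize the continuous-time plant with dt = 0.1 s."""
    return x + dt * f(x, u)

def simulate(f, x0, u_seq, dt=0.1, rng=None, R=None):
    """Simulate the plant; if R is given, additionally return noisy
    measurements y_k = x_k + r_k with Gaussian r_k (cf. Eq. 66)."""
    xs, x = [np.asarray(x0, dtype=float)], np.asarray(x0, dtype=float)
    for u in u_seq:
        x = euler_step(f, x, u, dt)
        xs.append(x)
    xs = np.array(xs)
    if R is None:
        return xs
    noise = rng.multivariate_normal(np.zeros(xs.shape[1]), R, size=len(xs))
    return xs, xs + noise
```

Running the filters on the noisy output of `simulate` reproduces the estimation setting of the following subsections under these assumptions.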


TABLE 2. System parameter description [30].

6.1. Filter Training Process

In the following the optimization of the tuning parameters of the A-KF and SVSF approach is considered. As explained in Section 4, a variation of the nominal system parameters is required to build the training model. To account for the model uncertainty a variation of 20% of the nominal parameters is considered, which is assumed to be a priori known. Consequently, the ith parameter of the training model, p_i^{(t)}, is obtained from the uniform distribution p_i^{(t)} ∼ U((1 − 0.2) p_i^{(0)}, (1 + 0.2) p_i^{(0)}). Repeating the step for all parameters i ∈ {1, 2, …, 10} forms one training model. The parameter T_c^{eq} is not varied as it is required to be precisely known for control. In Table 3 an overview of the true and nominal system parameters and the parameters used to build the training model is given. In order to account for the different combinations and variations of the system parameters, three training models are built, denoted as Training I, Training II, and Training III.


TABLE 3. System parameters of nominal, real, and training model.

For each set the tuning parameters of the filters are optimized separately. The parameters of the A-KF approach required to be optimized are given by Eq. 47. For the SVSF approach the boundary layer widths Ψ_11, Ψ_22 and convergence rates Φ_11, Φ_22 of the first and second state are optimized. The optimization is achieved using the "genetic algorithm" of MATLAB with default settings. During optimization only one realization of measurement noise is considered so that the cost function does not vary due to the noise.

The results of the optimization are shown in Table 4. The A-KF approach requires at least 60 times more computational time for the optimization than the SVSF. The SVSF algorithm is computationally more efficient and in addition requires fewer parameters to be optimized. However, the achieved minimal value of the cost function is always lower in case of the A-KF.


TABLE 4. Computational effort of training process.

6.2. Open Loop Case

In the open loop case the step functions u = +5 K and u = −5 K are applied to the system and the resulting step responses are considered for state estimation. The system behavior is simulated over a time horizon of 6 min.

The performance values of the A-KF and SVSF approach are shown in Table 5. The filters are applied based on four sets of optimized parameters. Three sets result from Training I, Training II, and Training III. The fourth set, denoted as Training I-III, considers the mean values of the optimized parameters of the other three sets. As all states are measured, the MSE of the measurements is considered as well. In order to account for the effect of different noise realizations, the results are obtained by simulating the step response 100 times and taking the mean value and variance of the squared estimation error. The filters are applied to the same measurements with the same noise realizations. From the results it can be seen that both the A-KF and SVSF estimates are more precise than the measurements. In comparison to the SVSF, the A-KF approach achieves better estimation performance for all considered sets of optimized parameters and both step responses. For one specific run the step responses and the corresponding state estimates are visualized in Figures 2 and 3. The computational time required to generate Table 5 (to simulate both step responses 100 times and apply 8 filters each time) is 18.90 min on a 4×CPU@3.7 GHz machine with 8 GB memory. The computational time required to apply only one filter to one specific step response is 0.11 s in case of the SVSF and 1.49 s in case of the A-KF. Both values are far less than the 6 min of simulated system behavior.


TABLE 5. Estimation performance in open loop case (100 realizations).


FIGURE 2. Open loop response of step input u=+5K (Estimation errors of shown realization: μexAKF=0.193, μexSVSF=0.296, μexMeas.=0.892).


FIGURE 3. Open loop response of step input u = −5 K (Estimation error of shown realization: μexAKF=0.105, μexSVSF=0.267, μexMeas.=0.886).

6.3. Closed Loop Case

In the closed loop case the effluent flow concentration CA of the chemical plant is controlled. The controller relies on the filter estimates. Consequently, the estimation performance as well as the control performance dependent on the filters is studied.

In order to achieve reference tracking of the control variable CA the super-twisting sliding mode approach [32] is applied. By introducing the estimated tracking error


with reference value zr the sliding variable can be calculated as


Then reference tracking can be achieved by applying the super-twisting approach [32]


The controller parameters are chosen as k1=0.001 and k2=25 based on trial and error. For the sliding dynamics λ=0.08 is considered. The simulated time horizon is 20 min with reference values defined by

z_r = { 0.8 mol/l    if t ≤ 6 min,
        0.75 mol/l   if 6 min < t ≤ 10 min,
        0.7 mol/l    if 10 min < t ≤ 14 min,
        0.85 mol/l   if 14 min < t ≤ 20 min,    (70)

where t ∈ [0 min, 20 min]. Related to the control performance the following four cases are studied. The sliding variable (Eq. 68) is calculated using the estimates of the A-KF approach, the estimates of the SVSF approach, the measurements (denoted as "W/O filters"), or the true states (denoted as "Optimum"). For the filters the optimized filter parameters of the different training models are considered. The control of the plant over the time horizon of 20 min is simulated 100 times to account for the noise realizations. The measurements used by the filters have the same noise realizations. The performance values are shown in Table 6. The best control performance can be achieved based on the true states, which are noise-free. Control based on the estimates of the A-KF or SVSF achieves better performance than applying the controller directly to the noisy measurements. In comparison to the SVSF approach, the A-KF method achieves better reference tracking and more precise state estimation for all considered training models. The control variable and the input values of one specific realization are visualized in Figures 4 and 5. The computational time required to generate Table 6 (to simulate 100 runs of 10 closed loop systems in parallel) is 34.54 min on a 4×CPU@3.7 GHz machine with 8 GB memory. The computational time required to simulate the closed loop behavior only once based on one controller is 0.45 s if the controller is fed by the estimates of the SVSF and 5.24 s if the controller is fed by the estimates of the A-KF. This is in both cases far less than the time horizon of 20 min over which the system behavior is simulated.
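The controller can be sketched in the standard discrete-time super-twisting form as follows; since Eq. 69 is not reproduced above, the exact law, the Euler integration of the integral state, and the saturation handling are assumptions:

```python
import numpy as np

class SuperTwisting:
    """Standard super-twisting law (an assumed sketch of Eq. 69):
        u = -k1 * |s|^(1/2) * sign(s) + v,   v' = -k2 * sign(s),
    with the integral state v advanced by Euler steps of length dt and the
    input saturated at u_max (here 50 K, per the stated input saturation)."""
    def __init__(self, k1, k2, dt=0.1, u_max=50.0):
        self.k1, self.k2, self.dt, self.u_max = k1, k2, dt, u_max
        self.v = 0.0
    def step(self, s):
        """Advance the integral state and return the saturated input."""
        self.v += -self.k2 * np.sign(s) * self.dt
        u = -self.k1 * np.sqrt(abs(s)) * np.sign(s) + self.v
        return float(np.clip(u, -self.u_max, self.u_max))
```

In the closed loop studied here, the sliding variable s fed to `step` would be computed from the estimated tracking error of Eqs 67 and 68.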


TABLE 6. Control and estimation performance in closed loop case (100 realizations).


FIGURE 4. Closed loop tracking performance using measured, estimated, or true state values for calculation of the sliding variable (Performance values of shown realization: μerAKF = 5.005e2, μerSVSF = 5.392e2, μerW/O filter = 6.057e2, μerOpt. = 4.711e2).


FIGURE 5. Closed loop system inputs generated by controllers (Input energy of shown realization: μuAKF = 1.526e2, μuSVSF = 1.480e2, μuW/O filter = 1.113e2, μuOpt. = 1.588e2).

7. Conclusion

In this paper the relation of the smooth variable structure filter (SVSF) and the Kalman filter has been studied. An adaptive Kalman filter (A-KF) was derived from the SVSF approach. Sufficient conditions for the boundedness of the estimation error of the A-KF approach have been formulated. The simulation results show that, besides the SVSF approach, an adaptive Kalman filter can also be used to achieve robust estimation performance.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author Contributions

The development and theoretical description of the adaptive Kalman filter approach was undertaken by MS and DS. The numerical simulation was designed and realized by MS. The manuscript was drafted by MS and reviewed by DS.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


Acknowledgments

The authors want to thank the reviewers for their valuable comments and suggestions.


¹The authors thank the anonymous reviewers of the European Control Conference 2020 for the insightful comments and suggestions related to the proof of the theorem.


References

1. Kitanidis, PK. Unbiased minimum-variance linear state estimation. Automatica (1987). 23:775–8. doi:10.1016/0005-1098(87)90037-9.

2. Gillijns, S, and De Moor, B. Unbiased minimum-variance input and state estimation for linear discrete-time systems. Automatica (2007). 43:111–6. doi:10.1016/j.automatica.2006.08.002.

3. Anderson, BD, and Moore, JB. Optimal filtering. Englewood Cliffs, NJ: Prentice-Hall (1979). 256 p.

4. Hmida, FB, Khémiri, K, Ragot, J, and Gossa, M. Three-stage Kalman filter for state and fault estimation of linear stochastic systems with unknown inputs. J Franklin Inst (2012). 349:2369–88. doi:10.1016/j.jfranklin.2012.05.004.

5. Dong, Z, and You, Z. Finite-horizon robust Kalman filtering for uncertain discrete time-varying systems with uncertain-covariance white noises. IEEE Signal Process Lett (2006). 13:493–6. doi:10.1109/LSP.2006.873148.

6. Hassibi, B, Sayed, AH, and Kailath, T. Indefinite quadratic estimation and control: a unified approach to H2 and H∞ theories. Philadelphia, PA: Society for Industrial and Applied Mathematics (1999). 572 p.

7. Blom, HAP, and Bar-Shalom, Y. The interacting multiple model algorithm for systems with Markovian switching coefficients. IEEE Trans Automat Contr. (1988). 33:780–3. doi:10.1109/9.1299.

8. Martino, L, Read, J, Elvira, V, and Louzada, F. Cooperative parallel particle filters for online model selection and applications to urban mobility. Digit Signal Process (2017). 60:172–85. doi:10.1016/j.dsp.2016.09.011.

9. Habibi, S. The smooth variable structure filter. Proc IEEE (2007). 95:1026–59. doi:10.1109/JPROC.2007.893255.

10. Gadsden, SA, Song, Y, and Habibi, SR. Novel model-based estimators for the purposes of fault detection and diagnosis. IEEE ASME Trans Mechatron (2013). 18:1237–49. doi:10.1109/TMECH.2013.2253616.

11. Kim, T, Wang, Y, Fang, H, Sahinoglu, Z, Wada, T, Hara, S, et al. Model-based condition monitoring for lithium-ion batteries. J Power Sources (2015). 295:16–27. doi:10.1016/j.jpowsour.2015.03.184.

12. Qiao, HH, Attari, M, Ahmed, R, Delbari, A, Habibi, S, and Shoa, T. Reliable state of charge and state of health estimation using the smooth variable structure filter. Contr Eng Pract (2018). 77:1–14. doi:10.1016/j.conengprac.2018.04.015.

13. Gadsden, SA, Habibi, SR, and Kirubarajan, T. A novel interacting multiple model method for nonlinear target tracking. In: FUSION 2010: 13th international conference on information fusion; 2010 Jul 26–29; Edinburgh, Scotland. Piscataway, NJ: IEEE (2010).

14. Attari, M, Gadsden, SA, and Habibi, SR. Target tracking formulation of the SVSF as a probabilistic data association algorithm. In: The 2013 American control conference; 2013 Jun 17–19; Washington, DC. Piscataway, NJ: IEEE (2013). p. 6328–32. doi:10.1109/ACC.2013.6580830.

15. Attari, M, Luo, Z, and Habibi, S. An SVSF-based generalized robust strategy for target tracking in clutter. IEEE Trans Intell Transport Syst (2016). 17:1381–92. doi:10.1109/TITS.2015.2504331.

16. Luo, Z, Attari, M, Habibi, S, and Mohrenschildt, MV. Online multiple maneuvering vehicle tracking system based on multi-model smooth variable structure filter. IEEE Trans Intell Transport Syst (2020). 21:603–16. doi:10.1109/TITS.2019.2899051.

17. Demim, F, Nemra, A, and Louadj, K. Robust SVSF-SLAM for unmanned vehicle in unknown environment. IFAC-PapersOnLine (2016). 49:386–94. doi:10.1016/j.ifacol.2016.10.585.

18. Allam, A, Tadjine, M, Nemra, A, and Kobzili, E. Stereo vision as a sensor for SLAM based smooth variable structure filter with an adaptive boundary layer width. In: ICSC 2017: 6th international conference on systems and control; 2017 May 7–9; Batna, Algeria. Piscataway, NJ: IEEE (2017). p. 14–20. doi:10.1109/ICoSC.2017.7958700.

19. Liu, Y, and Wang, C. A FastSLAM based on the smooth variable structure filter for UAVs. In: 15th international conference on ubiquitous robots (UR); 2018 Jun 26–30; Honolulu, HI. Piscataway, NJ: IEEE (2018). p. 591–6. doi:10.1109/URAI.2018.8441876.

20. Ahmed, RM, Sayed, MAE, Gadsden, SA, and Habibi, SR. Fault detection of an engine using a neural network trained by the smooth variable structure filter. In: IEEE international conference on control applications (CCA); 2011 Sep 28–30; Denver, CO. Piscataway, NJ: IEEE (2011). p. 1190–6. doi:10.1109/CCA.2011.6044515.

21. Al-Shabi, M, and Habibi, S. Iterative smooth variable structure filter for parameter estimation. ISRN Signal Processing (2011). 2011:1. doi:10.5402/2011/725108.

22. Spiller, M, Bakhshande, F, and Söffker, D. The uncertainty learning filter: a revised smooth variable structure filter. Signal Process (2018). 152:217–26. doi:10.1016/j.sigpro.2018.05.025.

23. Longbin, M, Xiaoquan, S, Yiyu, Z, Kang, SZ, and Bar-Shalom, Y. Unbiased converted measurements for tracking. IEEE Trans Aero Electron Syst (1998). 34:1023–7. doi:10.1109/7.705921.

24. Gadsden, SA, and Habibi, SR. A new form of the smooth variable structure filter with a covariance derivation. In: CDC 2010: 49th IEEE conference on decision and control; 2010 Dec 15–17; Atlanta, GA. Piscataway, NJ: IEEE (2010). p. 7389–94. doi:10.1109/CDC.2010.5717397.

25. Gadsden, SA, El Sayed, M, and Habibi, SR. Derivation of an optimal boundary layer width for the smooth variable structure filter. In: 2011 American control conference (ACC); 2011 Jun 29–Jul 1; San Francisco, CA. Piscataway, NJ: IEEE (2011). p. 4922–7. doi:10.1109/ACC.2011.5990970.

26. Tian, Y, Suwoyo, H, Wang, W, and Li, L. An ASVSF-SLAM algorithm with time-varying noise statistics based on MAP creation and weighted exponent. Math Probl Eng. (2019). 2019:1. doi:10.1155/2019/2765731.

27. Hide, C, Moore, T, and Smith, M. Adaptive Kalman filtering for low-cost INS/GPS. J Navig. (2003). 56:143–52. doi:10.1017/S0373463302002151.

28. Yang, Y, and Gao, W. An optimal adaptive Kalman filter. J Geodes. (2006). 80:177–83. doi:10.1007/s00190-006-0041-0.

29. Bernstein, DS. Matrix mathematics: theory, facts, and formulas. Princeton, NJ: Princeton University Press (2009). 1184 p.

30. Magni, L, Nicolao, GD, Magnani, L, and Scattolini, R. A stabilizing model-based predictive control algorithm for nonlinear systems. Automatica (2001). 37:1351–62. doi:10.1016/S0005-1098(01)00083-8.

31. Seborg, DE, Mellichamp, DA, Edgar, TF, and Doyle, FJ. Process dynamics and control. New York, NY: John Wiley & Sons (2010). p. 1085–6.

32. Shtessel, Y, Edwards, C, Fridman, L, and Levant, A. Sliding mode control and observation. New York, NY: Springer (2014). 356 p.

Keywords: state estimation, nonlinear systems, stochastic systems, Kalman filtering, control

Citation: Spiller M and Söffker D (2020) On the Relation Between Smooth Variable Structure and Adaptive Kalman Filter. Front. Appl. Math. Stat. 6:585439. doi: 10.3389/fams.2020.585439

Received: 20 July 2020; Accepted: 16 October 2020;
Published: 27 November 2020.

Edited by:

Lili Lei, Nanjing University, China

Reviewed by:

Xanthi Pedeli, Athens University of Economics and Business, Greece
Yong Xu, Northwestern Polytechnical University, China

Copyright © 2020 Spiller and Söffker. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Mark Spiller,