

Front. Control Eng., 09 February 2022
Sec. Adaptive, Robust and Fault Tolerant Control
This article is part of the Research Topic Reliable Modeling, Simulation, Identification, Control and State Estimation for Dynamic Systems with Uncertainty

Robust Feedback Control for Discrete-Time Systems Based on Iterative LMIs with Polytopic Uncertainty Representations Subject to Stochastic Noise

  • 1Institute of Automatic Control, School of Electrical, Information and Media Engineering, University of Wuppertal, Wuppertal, Germany
  • 2Lab-STICC, ENSTA Bretagne, Brest, France

This paper deals with the design of linear observer-based state feedback controllers with constant gains for a class of nonlinear discrete-time systems in the form of a quasi-linear representation in the presence of stochastic noise. To take nonlinearities into account in the design of linear observer-based state feedback controllers, a polytopic modeling approach is investigated. An optimization problem is formulated to reduce the sensitivity of the controlled system to stochastic input, state, and output noise with a predefined covariance. Due to the nonlinearities, the separation principle does not hold; thus, the controller and the observer have to be designed simultaneously. For this purpose, a Lyapunov-based method is used, which provides, in addition to the controller and observer gains, a stability proof for the nonlinear closed loop in a predefined polytopic domain. In general, this leads to nonlinear matrix inequalities. To solve these nonlinear matrix inequalities efficiently, we propose an approach based on linear matrix inequalities (LMIs) with a superposed iteration rule. When using this iterative LMI approach, a minimization task can be solved additionally, which desensitizes the closed loop to stochastic noise. The proposed method additionally enables the consideration of different linear closed-loop structures within a unified Lyapunov-based framework. The efficiency of the proposed approach is demonstrated and compared with a classical LQG approach for a nonlinear overhead traveling crane.

1 Introduction

The research field of linear control theory is well investigated and facilitates generalized and efficient methods to design linear controllers for linear systems. For example, LMI methods have been established for the robust controller design and used to prove asymptotic stability of the closed loop simultaneously.

However, as almost all real systems are nonlinear, these methods do not ensure stability for real-world applications. To make use of the efficiency of linear methods nonetheless, these nonlinearities, if bounded, can be expressed as polytopic domains just like (time-varying) uncertainties. For this purpose, the nonlinear model can be transformed into a quasi-linear form, whereby the bounded nonlinearities can be evaluated using interval arithmetic (Rauh and Romig, 2021; Rauh et al., 2021). For these systems, a convex LMI approach is just as applicable as for pure parameter uncertainty. In such cases, a polytopic representation of the uncertainties is required, which is bounded by the element-wise defined realizations. Hence, by taking the uncertainties into account in the controller design, stability of the nonlinear closed loop can be guaranteed.

Additionally, process and measurement noise appear in all real systems. These noise processes can lead to oscillations in the control loop or to excessively large control amplitudes and should therefore be suppressed. This can be achieved, for example, by an observer-based state feedback structure. However, with the plant being nonlinear, the separation principle is not valid. Therefore, the controller and the observer must be designed simultaneously to ensure stability (Ibrir, 2008; Rauh et al., 2021). The problem of simultaneously designing discrete-time observers and controllers for uncertain systems is also considered, for example, in Peaucelle and Ebihara (2014); Zemouche et al. (2016). These works also make use of a Lyapunov-based LMI approach; however, noise reduction is not discussed. In Ibrir (2008) as well as in Kheloufi et al. (2014), observer-based controllers are designed for Lipschitz nonlinear systems under H∞ conditions. Another control design method for nonlinear systems with noise and disturbance compensation based on LMI techniques is shown in Furtat (2018), and Yucelen et al. (2010) present a robust output feedback control on the basis of active noise control. In De Oliveira et al. (2002) and Sadabadi and Karimi (2013), LMI methods are used to design dynamic and static output feedback controllers for discrete-time systems subject to H∞ and H2 conditions. A further Lyapunov-based LMI approach with polytopic domains is shown for switched linear discrete-time systems in Phat and Ratchagit (2011); Ratchagit and Phat (2011) and Yotha and Mukdasai (2013), where delays are represented by intervals.

In the continuous-time case, a desensitization to stochastic noise has already been investigated, for example, in Rauh et al. (2014); Rauh et al. (2018); Rauh and Romig (2021); Rauh et al. (2021). It is especially pointed out how standard low-pass filtering can lead to oscillations for stochastic systems with an observer-based or a filter-based controller design. To counter such oscillations, an optimization task is presented, which minimizes the area for which stability cannot be proven. This optimization task is solved by a numerical LMI-based method, which can be applied for a filter or observer design with a previously designed controller. In Rauh and Romig (2021), this work is extended by considering bounded parameter uncertainty and a simultaneous controller and linear filter design. Thereby, the entire closed loop can be made as insensitive as possible to parameter uncertainty and stochastic noise. The same optimization task is used in Rauh et al. (2021) to design an observer-based output feedback or an observer-based state feedback controller. The reason for the separation of those articles is the fact that the design criteria are not fully identical for the requirements of different control structures.

However, a discrete-time controller is required for the implementation on a microcomputer. Therefore, the continuous-time system is sampled, which leads to a discrete-time representation of the plant. To deal with these systems, methods are required that consider the discretization in the design.

This paper deals with a discrete-time LMI approach while simultaneously optimizing the observer and controller gains. Additionally, the article resolves the issue that different closed loops lead to different design criteria: a joint optimization algorithm is developed, which can be applied to a wide range of closed-loop structures. For that purpose, we address the structured linear control (SLC) problem in which the dynamic controller is described as a structured state feedback approach in an augmented system model. In this paper, the procedure is shown explicitly for the design of an observer-based state feedback control. Through a superposed iteration rule, the LMI design parameters are the same as the parameters to be implemented. The direct discrete-time design allows the offline-computed control parameters to be transferred directly to the microcomputer. Due to the consideration of noisy output equations, leading to a direct disturbance feedthrough term, a classical H2 optimization cannot be applied. Moreover, the presence of polytopic domains invalidates the parameterization of classical LQG approaches. To overcome both limitations and to obtain a discrete-time noise-insensitive controller, we forecast the influence of noise on the closed-loop behavior by discrete-time increments of the Lyapunov function candidate. This leads to the discrete-time counterpart of the Itô differential operator from Rauh and Romig (2021); Rauh et al. (2021). Thereby, the uncertainty domains are quantified and subsequently minimized.

By solving the proposed optimization task, a noise-insensitive controller is obtained. With its help, the actual state trajectories converge as closely as possible to the desired operating point. In addition, this framework has the capability to robustly prescribe the desired system dynamics in terms of a discrete-time DR-region concept, see Dehnert (2020). This approach allows for specifying admissible eigenvalue domains of the closed loop to adjust the control performance. In the proposed approach, the simultaneous optimization of the observer dynamics and of the controller gains results in bilinear matrix inequalities (BMIs). To find a solution of these BMIs, an iterative algorithm is designed, in which LMIs are solved in each iteration stage. Such iterative LMI algorithms for discrete-time systems have already been published, for example, in Dehnert et al. (2015); Grunert et al. (2019); Dehnert (2020); Dehnert et al. (2020); Lerch et al. (2021b); Lerch et al. (2021a) in various other contexts. The use of LMIs provides an efficient and numerically stable design method and allows bounded parameter uncertainty or nonlinearities to be accounted for by means of a convex combination of extremal system realizations. In this approach, closed-loop stability is ensured by using a quadratic Lyapunov function candidate, and the objective is to design linear controllers and observers with constant gains in order to reduce the online implementation effort.

This paper is structured as follows. Within a problem description in Section 2, the observer-based output feedback controller is introduced. This is followed in Section 3 by the necessary fundamentals, which consist of the description of polytopic domains, robust Lyapunov stability and robust DR regions. Subsequently, the main result is presented in Section 4. Here, the convergence of the superimposed iteration rule is studied, and the discrete-time version of the Itô differential operator used for the reduction of the influence of noise is derived. The efficiency of the proposed approach is demonstrated in Section 5 for a point-to-point control of an overhead traveling crane with bounded nonlinearities. Stochastic noise accounts for non-modeled external disturbances in the state equations and for perturbations in the available measurements. Finally, conclusions and an outlook on future work are provided in Section 6.

2 Problem Statement

2.1 Desensitization of the Closed Loop to Stochastic Noise in the Continuous-Time Case

Consider the continuous-time linear noisy system


with the state vector x ∈ ℝⁿ; the input vector u ∈ ℝᵐ; the output vector y ∈ ℝᵖ; the system matrix Ac; the input matrix Bc and the output matrix C; the stochastically independent, standard normally distributed Brownian motions of actuator noise, process noise and sensor noise wu ∈ ℝ^(mu), wp ∈ ℝ^(mp) and wy ∈ ℝ^(my), respectively; the disturbance input matrices Gcu, Gp and Gy contain the corresponding standard deviations.

To prove the stability of system (Eq. 1), the time derivative L(V) of the Lyapunov function candidate V(x) = (1/2)·xᵀPx, in the form of its stochastic interpretation, has to be negative definite. Due to the stochastic noise, this leads to the Itô differential operator (Kushner, 1967), which can be expressed by


with the augmented system matrices A and G of the considered closed loop (Rauh et al., 2014; Rauh et al., 2018). However, due to the noise, L(V) may become positive in a neighborhood of x = 0. This non-provable stability region is the interior of an ellipsoid whose boundary is specified by L(V) = 0. The Itô differential operator (Eq. 2) has been used in various contexts, also for uncertain systems in Rauh and Romig (2021) and Rauh et al. (2021). A derivation can be found in Senkel et al. (2016). The objective of the proposed study is to employ this operator analogously for discrete-time control systems to design observer-based state feedback controllers.

2.2 Desensitization of the Closed Loop to Stochastic Noise in the Discrete-Time Case

In the following, we consider multivariable nonlinear discrete-time systems subject to noise represented by the quasi-linear form


with the same nomenclature as in Eq. 1. The nonlinearity of the system is assumed only in the state equation, i.e., the matrices A(x_k) and B(x_k) depend on the states x_k, whereas the measurement equation is linear. For more detailed information on quasi-linear systems, see Coetsee (1994). In general, the influence of noise is reduced by a suitable filter. Since an observer also has a filtering characteristic, the aim of this paper is to present a method to design a linear observer-based state feedback controller for nonlinear systems of the form (Eq. 3). Due to the nonlinearities, the separation theorem is no longer valid, which means that the controller and the observer can influence each other's stability and must be designed simultaneously (Rauh et al., 2021). Therefore, the closed-loop system of the observer-based state feedback is considered in the next step. The discrete-time state-space representation of a linear time-invariant full state observer is given by


with the observer gain H ∈ ℝ^(n×p) and the nominal matrices Anom and Bnom, which are determined by a suitable linearization or by the evaluation of the quasi-linear system matrix for a representative operating point. With that, all estimated states x̂ can be fed back by the control law


with the controller gain K ∈ ℝ^(m×n), the reference variable r_k ∈ ℝᵖ and the feedforward gain S ∈ ℝ^(m×p). Without loss of generality, we assume that r_k = 0. By formulating (Eq. 4) as an error model with


the augmented closed loop system


can be derived with


The closed loop structure is visualized in Figure 1.


FIGURE 1. Block diagram of the discrete-time observer-based state feedback control system.

Note that, due to the improved implementability for real-world applications, a linear control concept is shown in this paper. In Section 5.2, we elaborate a comparison with a standard LQG approach with an identical control structure as shown in Figure 1. Both controllers (LMI-based optimization and classical LQG) have the same control structure so that the implementation effort is identical, which enhances comparability. The most important point here is that real-time implementability is provided even if the computing capacity is limited. Since only matrix-vector products have to be computed in the subsequent operation of the observer-based controls, the implementation effort is minimal in both cases.

The aim of this paper is to determine the constant gain matrices K and H (which will be decision variables in the LMI approach) in such a way that noise is suppressed effectively. Thereby, the influence of noise on the controlled system decreases and the controller is able to follow the reference variables as accurately as possible with reduced oscillations. Furthermore, a stability proof for the closed-loop system will be given, such that the controller and the observer stabilize the nonlinear system in a predefined operating range.
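The online part of the observer-based state feedback indeed reduces to a few matrix-vector products per sampling period. A minimal Python sketch of one sampling step follows; all numeric values (gains K, H and the nominal matrices) are hypothetical placeholders, since the actual values come from the iterative LMI design described below.

```python
import numpy as np

# Hypothetical gains and nominal model; in the paper these come from the
# iterative LMI design (K, H) and a linearization (Anom, Bnom, C).
n, m, p = 2, 1, 1
Anom = np.array([[1.0, 0.1], [0.0, 0.9]])
Bnom = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
K = np.array([[1.2, 0.8]])          # state feedback gain
H = np.array([[0.3], [0.2]])        # observer gain

x_hat = np.zeros((n, 1))            # observer state estimate

def control_step(y):
    """One sampling period: only matrix-vector products are needed online."""
    global x_hat
    u = -K @ x_hat                                          # control law (r[k] = 0)
    x_hat = Anom @ x_hat + Bnom @ u + H @ (y - C @ x_hat)   # observer update
    return u
```

In deployment, `control_step` would be called once per sample with the current measurement vector y, which is why the implementation effort on a microcomputer stays minimal.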

3 Fundamentals for the Controller Design

3.1 Modeling with Polytopic Domains

Consider the nonlinear discrete-time system (Eq. 3). If the states x_k = [x_1[k], …, x_n[k]]ᵀ of the system (Eq. 3) are constrained by known limits, such that x_i ∈ [x̲_i, x̄_i], i = 1, …, n, it is possible to interpret all matrix entries as bounded by a polytopic domain. For this purpose, new independent parameters can be introduced for all matrix entries in which dissimilar nonlinearities appear. The procedure is shown in detail in Section 5.1. The matrices A(x_k) and B(x_k) belong to the convex combination


of the extremal vertex matrices Av and Bv, where nv denotes the number of independent extremal realizations for the union of all matrices included in Eq. 6. With this representation, it is possible to consider constant or time-varying uncertain parameters with known bounds (Scherer and Weiland, 1994; Boyd et al., 1997). Measurement tolerances for specific system parameters, fabrication tolerances or system nonlinearities can be accounted for by this uncertainty model. Furthermore, these uncertain parameters can be used to describe unidentifiable parts of the system dynamics or to cover different or even faulty system variants. In this work, the parameters ζv are functions of the states x_k, thus they are time-varying. The closed loop (Eq. 5) with the matrices A(x_k) and G(x_k) can then be expressed by


with the nv extremal vertex matrices Av and Gv. The parameters ζv depend nonlinearly upon the state variables. Therefore, the expression (Eq. 7) is restrictive and some of the vertex matrices have unphysical properties. This overapproximation, caused by the convex enclosure of the nonlinearities, is typically not unique and can vary depending on the chosen vertex matrices.
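Numerically, membership in the polytope means that any state-dependent system matrix can be written as a weighted sum of the vertex matrices with nonnegative weights summing to one. A short sketch with two hypothetical vertex matrices (nv = 2):

```python
import numpy as np

# Two hypothetical vertex matrices of a polytope (n_v = 2), standing in for
# the extremal realizations A_v of the paper.
A_vertices = [np.array([[0.5, 1.0], [0.0, 0.8]]),
              np.array([[0.9, 1.0], [0.0, 0.6]])]

def convex_combination(vertices, zeta):
    """Evaluate sum_v zeta_v * A_v; zeta must be nonnegative and sum to one."""
    zeta = np.asarray(zeta, dtype=float)
    assert np.all(zeta >= 0) and abs(zeta.sum() - 1.0) < 1e-12
    return sum(z * V for z, V in zip(zeta, vertices))

# Any state-dependent A(x[k]) inside the polytope is recovered by some zeta:
A_mid = convex_combination(A_vertices, [0.25, 0.75])
```

The robust design then only has to certify the finitely many vertices, and convexity extends the certificate to every interior realization.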

3.2 Quadratic Lyapunov Stability

To guarantee robust stability of the autonomous closed loop (Eq. 5) with wk=0 for the bounded set (Eq. 7), a quadratic Lyapunov function candidate is given by


with P = PT ≻ 0 as a free decision variable. The uncertain closed loop (Eq. 5) with the polytopic domain (Eq. 7) is robustly (quadratically) stable if the Lyapunov conditions


are fulfilled (Scherer and Weiland, 1994). Therefore, it is sufficient that the increment ΔV = V(z[k + 1]) − V(z[k]) is negative definite. Recall that Av depends on the controller and observer parameters, which are also decision variables. Therefore, (Eq. 8) are nonlinear matrix inequalities; hence, solving for P and Av simultaneously with an LMI solver is not directly possible. For this reason, an overlaid iterative LMI method is presented in Section 4 that ensures stability of the closed loop.

3.3 Robust DR Regions

Due to the robust design procedure, a direct pole placement is not possible. This issue is resolved by robust DR regions, which are subsets of the unit circle. By a pole region placement for all extreme matrices of the convex hull (Eq. 7) within the domain DR, stability is proven and certain closed loop characteristics can be realized. For example, a numerically efficient region for a high damping characteristic is sketched in Figure 2. This corresponds to the matrix inequalities


where |α + r|⩽1 with 0 < r < 1 and −1 < α < 1. If (Eq. 9) are satisfied, the system (Eq. 5) is robustly stable and all eigenvalues of the extremal realizations (Eq. 7) are located within the circular sub-region of the unit circle (Dehnert, 2020; Rauh, 2017; Wahab, 1994).


FIGURE 2. Robust circular DR region.

This transformation maps the unit circle with radius r = 1 and center at the origin α = 0 onto a circle of radius r < 1 and center at α on the real axis in the complex plane. The radius r is also called exponential decay rate and α is an eigenvalue shift operator. By minimizing this DR region and moving towards the origin, the convergence rate of the closed loop can be maximized. Further, with solely positive eigenvalues with a small imaginary part, oscillations are minimized, which corresponds to a larger damping.
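The eigenvalue condition behind this circular DR region can be checked directly: every eigenvalue of each extremal closed-loop matrix must lie in the circle with center α on the real axis and radius r. A small sketch (the vertex matrix is a hypothetical example):

```python
import numpy as np

def in_circular_DR_region(A, alpha, r):
    """True if all eigenvalues of A lie in the circle with center alpha (real
    axis) and radius r; with |alpha + r| <= 1 this is a subset of the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(A) - alpha) < r))

# Hypothetical vertex matrix with eigenvalues 0.5 and 0.7:
A1 = np.diag([0.5, 0.7])
```

In the robust setting, this check would be repeated for all nv vertex matrices of (Eq. 7); the LMI conditions (Eq. 9) certify exactly this property without computing eigenvalues explicitly.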

4 Main Results

4.1 Iterative Offline Observer Based Controller Design with Robust DR Region Criteria

First, we assume that w[k] = 0. Then the stability of the deterministic, noise-free part of the augmented system (Eq. 5) can be proven by the conditions (Eq. 9), which are equivalent to


after applying the Schur complement. To linearize the inverse of the matrix P, we consider the constant matrix P̂ = P̂ᵀ ≻ 0 as an approximation of P. Then, P⁻¹ can be linearized by


(Dehnert, 2020). Due to P⁻¹ ⪰ L, the nonlinear matrix inequalities (Eq. 10) are always satisfied if the LMIs


are fulfilled. The proof of inequality (Eq. 11) can be found in Dehnert et al. (2015) or Dehnert (2020). Note that the inequality (Eq. 11) becomes an equality if P=P̂ holds. In the developed algorithm, the matrix P̂ is inherited from the last iteration by the update rule


where l is the current iteration. This leads to a successively closer approximation of P. The initialization of the update rule (Eq. 13) can be done with P̂0=I, see Section 4.2. By this update rule, the non-convex problem caused by the matrix inequalities (Eq. 10) can be simplified into a sequence of convex subproblems.
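The key property of this linearization, P⁻¹ ⪰ L = 2P̂⁻¹ − P̂⁻¹PP̂⁻¹ with equality for P = P̂, can be checked numerically for arbitrary symmetric positive definite P and P̂; the random matrices below are a small illustration, not the actual LMI decision variables:

```python
import numpy as np

rng = np.random.default_rng(0)

def linearization(P, P_hat):
    """L = 2*inv(P_hat) - inv(P_hat) @ P @ inv(P_hat); inv(P) - L is PSD."""
    Pi = np.linalg.inv(P_hat)
    return 2 * Pi - Pi @ P @ Pi

# Random symmetric positive definite matrices P and P_hat:
M = rng.standard_normal((3, 3)); P = M @ M.T + 3 * np.eye(3)
N = rng.standard_normal((3, 3)); P_hat = N @ N.T + 3 * np.eye(3)

# inv(P) - L = (I - inv(P_hat) P) inv(P) (I - P inv(P_hat)) is PSD by construction:
gap = np.linalg.inv(P) - linearization(P, P_hat)
min_eig = np.linalg.eigvalsh(gap).min()
```

The factorization in the comment shows why the bound holds for every P̂ ≻ 0, which is what makes each iteration of the algorithm a valid convex relaxation.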

In a first step, the feasibility problem is formulated in Algorithm 1. This procedure ensures that a valid solution is determined, however, yet without any optimization. This solution then represents the starting point for further optimizations. By using Algorithm 1, the nonlinear problem is reduced to an iterative LMI algorithm. Algorithm 1 is essential because the initialization P̂₀⁻¹ = I may be far from the actual inverse of P, which can lead to infeasibility. To address this problem, r > 1 is required in the first iteration. Subsequently, r is decreased successively to obtain a stable solution. In each iteration step l, the constant matrix P̂ is only updated if the LMI problem is feasible. Otherwise, the latest feasible solution is restored and the step size Δr is halved to obtain a feasible solution in the next iteration step l. Only if r < 1 is stability of the closed loop guaranteed. As soon as r < r_end holds for a predefined α, whereby |α + r_end| < 1, it is guaranteed that the closed loop is robustly stable and all eigenvalues of the extremal realizations (Eq. 7) are located within the circular sub-region of the unit circle.

Algorithm 1 Feasibility and guarantee of desired DR regions for the controller and observer gains

4.2 Convergence Study of the Iterative LMI Solution

As the convergence of the linearization P⁻¹ ≈ L in the iterative method is an important property of the solution procedure, which is not an issue in non-iterative standard methods, a convergence study is presented in the following, based on Dehnert (2020).

Let P_co be a known and constant matrix. To numerically compute the matrix inverse P_co⁻¹, the linearization L can be used in combination with the update process (Eq. 13). With an initialization P̂₀, the next value of the numerical approximation of P_co⁻¹ is given by


with l = 0, 1, 2, … , whereby the convergence condition can be formulated as


This method is known as the Newton-Schulz iteration; it was first published in Schulz (1933) and is also called the Hotelling-Bodewig algorithm or hyper-power iterative method (Soleymani, 2013). The Newton-Schulz iteration exhibits quadratic convergence if the initial value P̂₀⁻¹ is sufficiently close to P_co⁻¹. By multiplying Eq. 14 from the left by P_co and subtracting the result from the unit matrix I, the proof of quadratic convergence follows from


This reformulation simultaneously gives information about the radius of convergence, such that quadratic convergence holds if P̂₀⁻¹ satisfies the inequality


with the spectral radius ρ.

However, in this article, P_co = P is an unknown decision variable of the LMI problem and thus not constant. Since P changes in each iteration, quadratic convergence can be proven for each iteration by the relation (Eq. 15) for the presented method. Thus, by using the iteration rule in combination with LMIs, a method similar to the Newton-Schulz iteration is obtained. This is used to drive the matrix difference towards P⁻¹ − L ≈ 0. To determine possible initial values, the condition (Eq. 16) cannot be used directly because P is unknown. However, due to the structure of the term I − P·P̂₀⁻¹, it can be derived that diagonally dominant initial values P̂₀ cause smaller spectral radii ρ. For this purpose, the identity matrix is used as the initial value, such that P̂₀ = I.
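The classical Newton-Schulz iteration referenced above can be sketched and its convergence verified directly; the matrix P and the scaled-identity initial value below are hypothetical examples chosen so that ρ(I − P X₀) < 1:

```python
import numpy as np

def newton_schulz(P, X0, steps=8):
    """Newton-Schulz iteration X_{l+1} = X_l (2I - P X_l) -> inv(P).
    Converges quadratically if the spectral radius rho(I - P X0) < 1."""
    X = X0.copy()
    I = np.eye(P.shape[0])
    for _ in range(steps):
        X = X @ (2 * I - P @ X)
    return X

P = np.array([[2.0, 0.3], [0.3, 1.5]])
X0 = np.eye(2) / np.trace(P)   # simple scaling that keeps rho(I - P X0) < 1
X = newton_schulz(P, X0)
```

The squared-error recursion I − P X_{l+1} = (I − P X_l)² explains why only a handful of iterations suffice once the initial residual is contractive, mirroring the role of the update rule (Eq. 13) in the LMI algorithm.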

4.3 Desensitization of Observer and Controller Gains via Iterative LMI Solutions

In Section 4.1, it was assumed that there is an absence of noise, i.e., w_k = 0. Therefore, the stability conditions (Eq. 8) are valid. However, if noise affects the system (w_k ≠ 0), it is necessary to expand this condition. This yields the discrete-time version


of the Itô differential operator (Eq. 2). This operator is the generalization of the increment ΔV = V(z[k + 1]) − V(z[k]) in the stochastic case.

Proof of the discretized Itô differential operator. To prove (Eq. 17), consider the expectation value


of ΔV. Due to causality, we can assume that w[k] and z[k] are stochastically independent and that the noise is a zero-mean process, i.e., E{w[k]} = 0. It also follows that E{z[k]·wᵀ[k]} = E{z[k]}·E{wᵀ[k]} = 0 and we obtain


By using the trace of matrices for the final scalar summand in Eq. 18, the reformulation trace{ABC} = trace{CAB} is valid. Therefore, it follows


Furthermore, we assume that the variance of each noise process w_i[k] equals 1, which leads to var{w[k]} = E{(w[k] − E{w[k]})·(w[k] − E{w[k]})ᵀ} = E{w[k]·wᵀ[k]} = I. With E{ΔV} now consisting only of deterministic values (constant sampling time), i.e., L_D(V) = E{ΔV}, one obtains from Eq. 19 the discrete-time equivalent (Eq. 17) of the Itô differential operator.
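The derivation can be cross-checked numerically. For a linear closed loop z[k+1] = A z[k] + G w[k] and a quadratic candidate V(z) = zᵀPz (the factor 1/2 is dropped here for brevity; it would simply scale both sides), the operator evaluates in closed form to L_D(V) = zᵀ(AᵀPA − P)z + trace{GᵀPG}, which a Monte-Carlo estimate of E{ΔV} should reproduce. All matrices below are hypothetical examples:

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[0.8, 0.1], [0.0, 0.7]])   # hypothetical closed-loop matrix
G = np.array([[0.05], [0.02]])           # hypothetical noise input matrix
P = np.array([[2.0, 0.2], [0.2, 1.0]])   # hypothetical Lyapunov matrix
z = np.array([0.3, -0.1])                # current augmented state

V = lambda z: z @ P @ z                  # quadratic candidate (factor 1/2 omitted)

# Closed form of the discrete-time operator:
LD = z @ (A.T @ P @ A - P) @ z + np.trace(G.T @ P @ G)

# Monte-Carlo estimate of E{Delta V} over zero-mean, unit-variance noise:
samples = 200_000
w = rng.standard_normal((samples, 1))
z_next = (A @ z)[None, :] + w @ G.T                      # shape (samples, 2)
dV = np.einsum('ij,jk,ik->i', z_next, P, z_next) - V(z)  # V(z[k+1]) - V(z[k])
mc = dV.mean()
```

The trace term is exactly the noise-induced offset that can make L_D(V) nonnegative near z = 0, which motivates the ellipsoid minimization below.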

Note that a derivation of the Itô differential operator for the continuous-time case can be found in Senkel et al. (2016). The following relations can already be found in Rauh et al. (2021) for continuous-time systems and are reformulated here for the discrete-time case.

Due to the stochastic noise with non-zero matrices in Gv, LD(V) may become non-negative in a region around z = 0. However, LD(V) < 0 is necessary in order to verify stability. Thus, stability cannot be proven in a domain near the origin. The bound is represented by LD(V) = 0 and can be described by the ellipsoids


for each vertex realization with


To increase the region in which stability can be proven, one option is to minimize the interior of the ellipsoids (Eq. 20). The volume of the ellipsoids is proportional to


Therefore, the cost function


shall be minimized. To be able to use LMI conditions, a reshaping of trace{GᵥᵀPGᵥ} is necessary. Therefore, we introduce the additional decision variable N = Nᵀ ≻ 0, such that


which can be reformulated with the Schur complement to


As presented in Section 4.1, the matrix inverse can be approximated by the linearization P⁻¹ ⪰ L = 2P̂⁻¹ − P̂⁻¹PP̂⁻¹ such that


holds. By these reformulations and the results of Section 4.1, the cost function (Eq. 21) can be rewritten, thus the minimization task


subject to




can be formulated. All variables denoted by (ˆ) are constant values obtained from the last iteration. The iteration rule for the optimization task (Eq. 22) is given in Algorithm 2. This second stage is initialized with the solution of the first stage (Algorithm 1). Therefore, all eigenvalues of the matrices Av are already placed in the DR region according to Section 3.3. With Algorithm 2, the sensitivity towards noise is reduced while the eigenvalues remain in the DR region. The controller and observer are optimized simultaneously. Thereby, a low-gain controller and observer are designed.
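The reshaping via the slack variable N can be illustrated numerically: trace{GᵀPG} ≤ trace{N} is guaranteed whenever N ⪰ GᵀPG, which by the Schur complement is equivalent to positive semidefiniteness of a block matrix in N, G and P⁻¹ (the paper then replaces P⁻¹ by the linearization L). A small sketch with hypothetical matrices:

```python
import numpy as np

def schur_block(N, G, P_inv):
    """Block matrix [[N, G^T], [G, inv(P)]]; PSD iff N - G^T P G is PSD (P > 0)."""
    top = np.hstack([N, G.T])
    bot = np.hstack([G, P_inv])
    return np.vstack([top, bot])

P = np.array([[2.0, 0.0], [0.0, 1.0]])   # hypothetical Lyapunov matrix
G = np.array([[0.3, 0.1], [0.0, 0.2]])   # hypothetical disturbance input matrix

N_ok = G.T @ P @ G + 0.01 * np.eye(2)    # N strictly larger than G^T P G
M = schur_block(N_ok, G, np.linalg.inv(P))
min_eig = np.linalg.eigvalsh(M).min()
```

Minimizing trace{N} subject to this block LMI therefore pushes down the noise-induced term trace{GᵀPG} in the cost function (Eq. 21).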

4.3.1 Computational Complexity

The shown algorithms are implemented in Matlab using Mosek (MOSEK ApS, 2019) and the interface Yalmip (Löfberg, 2004). The controller design is performed offline using the algorithms. Therefore, computational time is not a limiting factor and is not shown in detail. The number of iterations and the computation time depend on the order of the closed loop, the size and number of the decision variables and the number of vertex matrices. However, each calculation of the control parameters is done within a few seconds to minutes on a standard PC.

4.4 Improvements and Delimitation in Comparison to Preliminary Works and Existing Results

4.4.1 Parameter-Dependent Decision Variables

Due to the quadratic Lyapunov function, which is used for all nv extremal realizations Av, conservative solutions can arise. In order to avoid conservative solutions, the use of parameter-dependent Lyapunov functions has been established in the literature (Boyd et al., 1997), (Daafouz and Bernussou, 2001). However, the shown iteration rule in combination with a quadratic Lyapunov function also leads to less conservative solutions. This was demonstrated in Dehnert (2020) and Dehnert et al. (2021). For this reason, and due to the lower numerical effort, we use parameter-independent quadratic Lyapunov functions in this approach. Note that the matrix N can also be chosen to be parameter-dependent; thus, in general, less conservative solutions can be achieved. This procedure was used, for example, in Rauh et al. (2021). However, in investigations of the method presented here, a parameter-dependent matrix N does not yield any positive effect. In combination with a parameter-dependent Lyapunov matrix, a further improvement of the cost function could be observed in first experiments. These correlations should be investigated in future research.

4.4.2 LMI Methods

The main disadvantage of established LMI design methods for discrete-time systems from the literature (see, for example, de Oliveira et al. (1999); De Oliveira et al. (2002)) is the need for a change of variables to convert the nonlinear matrix inequality into LMIs. By the iterative procedure used in this paper, the closed-loop system matrix remains in its original form in the design procedure. This makes it possible to apply the method to a myriad of different types of closed loops, such as PID structures or observer-based feedback controllers and their combinations, within a uniform approach, whereas other methods are solely applicable to one particular controller type. The numerical effort of the iteration rule is acceptable because the method yields less conservative solutions compared to existing standard methods. This is especially investigated in Dehnert et al. (2021) for saturated discrete-time linear systems. Furthermore, the observer and controller matrices can be structured independently of each other and independently of the Lyapunov matrix P without modifications of the LMI conditions. As a result, the applicability of LMI methods for real applications is increased, since a change of LMI conditions is avoided. This simplifies the design of different control structures for various real-world technical systems significantly.

The independence of the method from the actual controller type results from the formulation of the iteration rule. This makes it possible to avoid substituting the control variables by additional LMI variables, such that Av can be used in its original form. These are major advantages of the method compared to the preliminary works of Rauh and Romig (2021) and Rauh et al. (2021) for the continuous-time case.

Algorithm 2 Offline Minimization of Sensitivity towards Noise (Stage 2)

5 Example: Overhead Traveling Crane

5.1 Modeling the Overhead Traveling Crane

There exist several approaches to the modeling of overhead traveling cranes, which differ in complexity due to the number of inputs. As an example, Ackermann (2002) uses a simplified model, in which the rope length change cannot be manipulated. In Park et al. (2007), the modeling of a system with variable rope length is shown. In the following model, the rope length depends on a winch whose radius has to be taken into account, leading to an extension of the model from Park et al. (2007). The crane system is shown in Figure 3 and consists of a cart that can be moved along a rail with the help of a synchronous motor. The winch drive is mounted on the cart to move the weight suspended on a rope. It is assumed that the rope has vanishing elasticity. Incremental encoders are used to measure the position of the carriage, the pendulum movement and the rope length. These are the measurable outputs of the system and can be summarized in the vector



FIGURE 3. Setup of the overhead traveling crane.

The mathematical model of the overhead traveling crane is derived from the equations of motion using Lagrange's equations of the second kind. The parameters of the model are the mass m1 = 5.5 kg of the cart, the payload with the mass m2 = 0.5 kg, and the rope winch with the radius RT = 0.03 m and the mass moment of inertia θ = 0.000225 kg m². Then, the Lagrange function is defined as the difference of kinetic and potential energy according to


with the kinetic energy


and the potential energy


where the gravitational acceleration is g = 9.81 m/s². This leads to the second-order nonlinear state equations


where Qj represents all external and non-conservative forces, which can be summarized in the vector


with the actuation forces Fx and Fl generated by the engine drum and the rope winch, respectively, as well as the friction constants Ffc = 13.5 kg/s of the cart, Ffr = 3 kg/s of the rope winch, and Ffp = 0.0025 kg/s of the pendulum bearing. The external forces Fx and Fl are the system inputs u1 and u2. The resulting multivariable nonlinear equations of motion are given in Appendix A. This nonlinear model can be transformed into the quasi-linear representation
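For orientation, the derivation above follows the standard Lagrange formalism; in the usual notation, with the generalized coordinates qj and the generalized forces Qj collected in the vector above, the equations of motion read

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot{q}_j}\right)
  - \frac{\partial L}{\partial q_j} = Q_j\,, \qquad L = T - V\,,
```

for each of the three generalized coordinates (cart position, pendulum angle, and rope length).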


with the state vector


composed of the generalized variables (Eq. 27) and the corresponding generalized velocities q̇. For this quasi-linear model, the simplifications


due to small pendulum angles were made. In addition, the second input u2 for the rope length has to be extended according to u2 → u2 + u2,0 with the gravity compensation term u2,0 = −m2 g. Neglecting this term would lead to a continuously increasing rope length in the nonlinear case due to gravity.

A possible realization of the quasi-linear representation (Eq. 28) can be given by the matrices


where the constants pi with i = 1, 2, …, 13 are composed of the system parameters. These constants and their values are given in Appendix A. It is assumed that the states x and the inputs u are constrained by


In general, this allows evaluating the respective matrix entries of Eq. 30 using interval arithmetic. For the control design, a polytopic representation in the form of Eq. 9 can be defined. For that purpose, the occurring states and nonlinearities are taken into account by introducing nδ = 5 independent parameters of the interval vector δ = [δ1, …, δnδ], with the components δi = [δ̲i, δ̄i], such that


which leads to the transformed matrices


If all vertices of the nδ independent parameters of Eq. 31 are considered, the polytopic representation


is obtained with nv = 2^nδ = 32 vertices. This allows us to take into account the nonlinearities of the states in the controller design. The described procedure is based on Rauh et al. (2017).
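The vertex enumeration can be sketched in a few lines of Python; note that the interval bounds below are hypothetical placeholders, not the crane's actual bounds, which follow from the state and input constraints above.

```python
from itertools import product

# Hypothetical bounds for the n_delta = 5 interval parameters; the true
# bounds follow from the state and input constraints of the crane model.
delta_bounds = [(-0.4, 0.4), (0.2, 0.4), (-1.0, 1.0), (-0.5, 0.5), (-0.3, 0.3)]

# Enumerate all n_v = 2^5 = 32 vertices of the parameter box; every
# parameter value inside the box is a convex combination of these vertices.
vertices = list(product(*delta_bounds))

print(len(vertices))  # 32
```

Each vertex tuple then parameterizes one extremal system matrix of the polytopic representation.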

Due to the assumed independence of the parameters δi, the system dynamics are embedded conservatively by this approach. Reducing this pessimism should be the subject of further research. For example, smaller interval boxes connected in series could be used to reduce the over-approximation of Eq. 31 (Azuma et al., 1997). If all parameters δi are monotonically decreasing or increasing functions of x, it is also possible to reduce the number of vertices (Azuma et al., 2000).

In the following, the first-order explicit Euler approximation


with Ts = 0.015 s is used to discretize (31). This avoids the appearance of the matrix exponential function in the discretization, such that the convexity condition of Eq. 32 remains valid.
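A minimal sketch of this discretization step, i.e., A_d = I + Ts·A and B_d = Ts·B (the 2×2 matrices are illustrative only, not the crane model):

```python
Ts = 0.015  # sampling time in seconds

def euler_discretize(A, B, Ts):
    """First-order explicit Euler: A_d = I + Ts*A, B_d = Ts*B.

    Since no matrix exponential appears, convex combinations of the
    continuous-time vertex matrices carry over to the discrete-time
    vertices, so the convexity condition remains valid.
    """
    n = len(A)
    Ad = [[(1.0 if i == j else 0.0) + Ts * A[i][j] for j in range(n)]
          for i in range(n)]
    Bd = [[Ts * b for b in row] for row in B]
    return Ad, Bd

# Illustrative 2x2 continuous-time pair (placeholder values).
A = [[0.0, 1.0], [-2.0, -0.5]]
B = [[0.0], [1.0]]
Ad, Bd = euler_discretize(A, B, Ts)
```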

Figure 4 shows the advantage of the quasi-linear model in comparison to a linear model. For this comparison, the linear model was linearized at the operating point



FIGURE 4. Simulation results for the nonlinear, the quasi-linear and the linear model with different rope length. Case (A) l = 0.3 m, Case (B) l = 0.2 m and Case (C) l = 0.4 m.

The nonlinear model from Appendix A is shown in red, the quasi-linear model in blue, and the linear model in green. In Case (Figure 4A), all models have a constant rope length of l = 0.3 m; in Case (Figure 4B), l = 0.2 m; and in Case (Figure 4C), l = 0.4 m. In the latter two cases, large deviations occur due to the linearization and the departure from the operating point. Despite the small-angle simplifications, the quasi-linear model behaves like the nonlinear one even away from the operating point, while the linear model deviates noticeably. Since the quasi-linear model is used, the presented approach is independent of the operating point. Only the upper and lower bounds of the states must be known, which is not a disadvantage, since these usually result from the system itself.

5.2 Control of the Overhead Traveling Crane

The presented method is now compared with a standard approach. For this purpose, an LQG controller has been implemented additionally, and the results of the LMI controller are compared with those of the standard LQG design procedure. The identical control structure of both controllers ensures the comparability of the methods. Due to the limited computing capacity of the implementation, both controllers are linear with constant gains.

In the following, it is first described how the setting parameters of the LMI controller and the LQG controller can be selected systematically. Both LQG and LMI approaches are parameterized with the disturbance input matrices


as introduced in Section 2, where the linearized model with Anom and Bnom is used in the observer (cf. Figure 1). For that purpose, and to design the LQG controller, the nonlinear model from Appendix A has been linearized at the operating point (Eq. 34) and discretized with Eq. 33. Subsequently, the LQG observer and controller were designed separately from each other. In the simulation shown later, which corresponds to Figure 1 and uses the nonlinear model from Appendix A to represent the plant, the LQG controller exhibits stable behavior. However, due to the nonlinearities, no guaranteed stability statement can be made (the separation theorem does not hold), so there is no proof of stability for the LQG control in the nonlinear case. Furthermore, an optimal design of the LQG controller is not possible, and the parameters have to be set individually and semi-empirically. To determine the parameters as systematically as possible, the covariance matrices for the design of the observer are given by


Furthermore, the controller parameterization in the LQG case was performed with the diagonal matrices


with μx,i = 1 and μu,i = 0.5.

Thus, the diagonal elements of the weighting matrices are normalized by the maximum value of the respective state or input.
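This normalization scheme can be sketched as follows; the bounds x_max and u_max are hypothetical placeholders for the actual state and input constraints of the crane.

```python
# Hypothetical state and input bounds (the true bounds are fixed by the
# constraints of the crane model).
x_max = [0.5, 1.0, 0.3, 1.0, 0.4, 0.5]   # bounds on |x_i|
u_max = [20.0, 10.0]                      # bounds on |u_i|
mu_x, mu_u = 1.0, 0.5                     # scalar weights from the text

# Diagonal entries normalized by the squared maxima, so that each state
# and input contributes comparably to the quadratic cost at its limit.
Q_diag = [mu_x / xm**2 for xm in x_max]
R_diag = [mu_u / um**2 for um in u_max]
```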

For the LMI controller, there are two tuning parameters r and α, defined in Section 3.3. With these parameters, it is possible to manipulate the eigenvalue locations of the extremal matrices (Eq. 31) and thus affect the dynamic behavior of the system. First, a suitable radius r is determined with which a sufficient control gain K is obtained without requiring high observer gains H. Therefore, α = 0 is set and Algorithm 2 is applied for various values of r. The resulting evaluation is shown in Figure 5. The parameter r represents an upper bound on the spectral radius of all extremal matrices Ai; as r decreases, the closed-loop decay rate increases. Therefore, the r-dependency of the maximum time constant Tmax(Ai) of all extremal matrices Ai is shown. Furthermore, Figure 5 shows the Frobenius norms ‖H‖F and ‖K‖F of the observer and control gains, respectively. Since the observer gain also increases with decreasing r, a compromise between control gain and noise reduction has to be established. This compromise is achieved, for example, with r = 0.993. In the following, the parameter α is investigated. Therefore, a constant distance to the stability bound is maintained, such that α + r = 0.993 holds. The following four cases are subject to discussion:



FIGURE 5. r-dependency of the maximum time constant Tmax(Ai) of all closed loop system matrices Ai and of the controller and observer gains, represented by ‖H‖F and ‖K‖F (α = 0).
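The quantities plotted in Figure 5 can be reproduced in principle as follows; a minimal sketch, assuming the usual relation T = −Ts/ln ρ between the spectral radius ρ of a stable discrete-time system matrix and its slowest time constant (the 2×2 vertex matrices below are illustrative only, not the crane's 32 vertices):

```python
import math

Ts = 0.015  # sampling time in seconds

def spectral_radius_2x2(A):
    """Spectral radius of a 2x2 matrix via its characteristic polynomial."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        root = math.sqrt(disc)
        return max(abs((tr + root) / 2.0), abs((tr - root) / 2.0))
    # complex conjugate pair: |lambda| = sqrt(det)
    return math.sqrt(det)

def max_time_constant(rho, Ts):
    """Slowest time constant of a stable discrete-time mode, T = -Ts/ln(rho)."""
    return -Ts / math.log(rho)

# Illustrative closed-loop vertex matrices (placeholder values).
A_vertices = [[[0.9, 0.1], [0.0, 0.8]],
              [[0.85, 0.2], [-0.1, 0.9]]]
rho = max(spectral_radius_2x2(A) for A in A_vertices)
Tmax = max_time_constant(rho, Ts)
```

Sweeping r and recording Tmax together with ‖H‖F and ‖K‖F then yields curves of the kind shown in Figure 5.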

Exemplarily, a simulation result for Case 2 is shown in Figures 6, 7 in comparison to the LQG controller. During the simulation, a predefined, piecewise constant tracking profile r was applied to the overhead traveling crane system. Figure 6 shows the tracking behavior and Figure 7 the observer errors. In the following, the root mean square error (RMSE) values


with N = 30 s/Ts = 2000 of the simulation shown in Figures 6, 7 are taken into account. Furthermore, the maximum control variables are evaluated. For this purpose, the simulation was carried out for all four cases. The results of this analysis, in comparison to the LQG controller, are shown in Table 1.
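A minimal sketch of the RMSE evaluation, with synthetic data in place of the recorded simulation signals:

```python
import math

Ts = 0.015
N = int(30.0 / Ts)   # 30 s simulation horizon -> N = 2000 samples

def rmse(signal, reference):
    """Root mean square error between a sampled signal and its reference."""
    assert len(signal) == len(reference)
    return math.sqrt(sum((s - r) ** 2 for s, r in zip(signal, reference))
                     / len(signal))

# Illustrative data: a constant reference and an exponentially decaying
# tracking error (placeholder for the recorded trajectories).
ref = [1.0] * N
sig = [1.0 - math.exp(-k * Ts) for k in range(N)]
e = rmse(sig, ref)
```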


FIGURE 6. Tracking performance of the measurable states x1, x3 and x5 and control inputs u1 and u2 for the LMI and LQG controller.


FIGURE 7. Observer errors of unmeasurable states x2, x4 and x6 for the LMI and LQG controller.


TABLE 1. Comparison of the RMSE values (Eqs 35, 36) and the maximum control variables max |u| for α + r = 0.993.

The control performance of all four cases is similar to that of the LQG controller. However, the observer errors of the LMI controllers (Case 1–Case 4) are strictly smaller than the LQG observer errors (see also Figure 7). Furthermore, the reduction of the radius r leads to significantly smaller control variables without negatively affecting the control and observer behavior. Next, it is shown how the optimization (Eq. 22) affects the eigenvalues of the extremal matrices. For this purpose, Figure 8 shows the respective eigenvalue locations (Case 1–Case 4) before and after the optimization. The optimization effectively suppresses system noise; the proposed algorithm achieves this by reducing the gains, thus placing the eigenvalues further to the right of the r boundary. In a final summary, a further comparison of all LMI controllers (Case 1–Case 4) to the LQG approach is shown in Table 2. For this comparison, all controllers are rated in terms of the RMSE values



FIGURE 8. DR regions before and after optimization for Case 1—Case 4.


TABLE 2. Comparison for all controllers in terms of the RMSE values (Eq. 37) from all observer states x̂i to the ideal noise-free trajectories x̂i,f; the improvement is quantified by a comparison of each Case with the LQG control.

These values quantify all observer states x̂i with respect to the ideal noise-free trajectories x̂i,f. This comparison shows the impact of the minimization task (Eq. 22). It can be observed that the result of the minimization is degraded by progressively decreasing the radii of the DR regions. This also shows that a compromise between decreasing the DR region for the controller tuning and noise reduction has to be found. Moreover, there are significant improvements in noise reduction of up to 64.9% compared to the LQG controller.

6 Conclusions and Outlook

In this article, a design method for linear observer-based state feedback controllers based on an iterative LMI approach was developed for discrete-time systems in the presence of stochastic noise. Nonlinearities were taken into account by forming a polytopic quasi-linear representation. Closed-loop stability under disturbances was verified by a discretized version of the Itô differential operator, whereby the noise is already taken into account in the control design. In addition, a proof of convergence for the method was provided. The proposed method can also be applied to controllers with different structures without modification. The example of the overhead traveling crane demonstrated the advantages of the new method in comparison with a standard LQG controller: only a few tuning parameters are required, which can be used to systematically adjust the controller and observer gains. Furthermore, the impact of noise was significantly reduced.

Further work will deal with an optimization of the control parameters to achieve enhanced damping properties and smaller tracking errors. This could include filter-based PID controllers and parameter-dependent Lyapunov functions. Additionally, the observer matrices Anom and Bnom can be optimized if they are chosen as free decision variables. On the one hand, this implies a less conservative model in which no or fewer unphysical vertices exist. On the other hand, the optimized observer matrices may cause unphysical behavior in control operation. This effect can be reduced, for example, by improving the convex enclosure of the quasi-linear model, similar to the interval multisection applied in Rauh et al. (2017) for the implementation of a gain-scheduling controller, such that the over-approximation is reduced. In addition, it will be investigated how actuator saturations can be included in the optimization and how they affect it.

Data Availability Statement

Data are contained within the article.

Author Contributions

Conceptualization, RD and AR; Investigation, RD, MD, SL, and AR; Software, RD, MD, SL, and AR; Validation, RD, MD, SL, AR, and BT; Writing-original draft, RD, MD, SL, AR, and BT; Writing-review and editing, RD, MD, SL, AR, and BT. All authors have read and agreed to the published version of the manuscript.


Funding

This article is funded by the Publication Fund for Open Access Publications of the University of Wuppertal.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.


Acknowledgments

We acknowledge support from the Open Access Publication Fund of the University of Wuppertal.


Appendix A

The nonlinear equations of the overhead traveling crane system are given by


They can be simplified with (29) to the quasi-linear model (31) with the constant terms



References

Ackermann, J. (2002). Robust Control. London: Springer.

Azuma, T., Watanabe, R., and Uchida, K. (1997). “An Approach to Solving Parameter-dependent LMI Conditions Based on Finite Number of LMI Conditions,” in Proceedings of the American Control Conference, Albuquerque, NM, June 6, 1997. doi:10.1109/acc.1997.611851

Azuma, T., Watanabe, R., Uchida, K., and Fujita, M. (2000). A New LMI Approach to Analysis of Linear Systems Depending on Scheduling Parameters in Polynomial Forms. at-Automatisierungstechnik 48 (4), 199. doi:10.1524/auto.2000.48.4.199

Boyd, S., El-Ghaoui, L., Feron, E., Balakrishnan, V., and Yaz, E. (1997). Linear Matrix Inequalities in System and Control Theory. Philadelphia: SIAM: studies in applied mathematics.

Coetsee, J. (1994). Control of Nonlinear Systems Represented in Quasilinear Form. Massachusetts: Massachusetts Institute of Technology.

Daafouz, J., and Bernussou, J. (2001). Parameter Dependent Lyapunov Functions for Discrete Time Systems with Time Varying Parametric Uncertainties. Syst. Control. Lett. 43, 355–359. doi:10.1016/s0167-6911(01)00118-9

de Oliveira, M. C., Bernussou, J., and Geromel, J. C. (1999). A New Discrete-Time Robust Stability Condition. Syst. Control. Lett. 37, 261–265. doi:10.1016/s0167-6911(99)00035-3

De Oliveira, M. C., Geromel, J. C., and Bernussou, J. (2002). Extended H2 and H∞ Norm Characterizations and Controller Parametrizations for Discrete-Time Systems. Int. J. Control. 75, 666–679. doi:10.1080/00207170210140212

Dehnert, R. (2020). Entwurf robuster Regler mit Ausgangsrückführung für zeitdiskrete Mehrgrößensysteme. Wiesbaden: Springer Vieweg.

Dehnert, R., Lerch, S., Grunert, T., Damaszek, M., and Tibken, B. (2021). “A Less Conservative Iterative LMI Approach for Output Feedback Controller Synthesis for Saturated Discrete-Time Linear Systems,” in 25th International Conference on System Theory, Control and Computing (ICSTCC), Iasi, Romania, October 20–23, 2021. doi:10.1109/icstcc52150.2021.9607288

Dehnert, R., Lerch, S., and Tibken, B. (2020). “Robust Anti Windup Controller Synthesis of Multivariable Discrete Systems with Actuator Saturation,” in 2020 IEEE Conference on Control Technology and Applications (CCTA), Montreal: QC, Canada, August 24–26, 2020, 581–587. doi:10.1109/ccta41146.2020.9206346

Dehnert, R., Tibken, B., Paradowski, T., and Swiatlak, R. (2015). “Multivariable PID Controller Synthesis of Discrete Linear Systems Based on LMIs,” in 2015 IEEE Conference on Control Applications (CCA), Sydney, NSW, Australia, September 21–23, 2015, 1236–1241. doi:10.1109/cca.2015.7320781

Furtat, I. (2018). Control of Nonlinear Systems with Compensation of Disturbances under Measurement Noises. Int. J. Control. 93, 1–23. doi:10.1080/00207179.2018.1503723

Grunert, T., Dehnert, R., Kummert, A., Tibken, B., and Fielsch, S. (2019). “Gain Scheduled Control of Bounded Multilinear Discrete Time Systems with Uncertanties: An Iterative LMI Approach,” in 2019 IEEE 58th Conference on Decision and Control (CDC), Nice, France, December 11–13, 2019, 5199–5205. doi:10.1109/cdc40024.2019.9029623

Ibrir, S. (2008). Static Output Feedback and Guaranteed Cost Control of a Class of Discrete-Time Nonlinear Systems with Partial State Measurements. Nonlinear Anal. Theor. Methods Appl. 68 (7), 1784–1792. doi:10.1016/

Kheloufi, H., Zemouche, A., Bedouhene, F., and Souley-Ali, H. (2014). “Robust H∞ Observer-Based Controller for Lipschitz Nonlinear Discrete-Time Systems with Parameter Uncertainties,” in 53rd IEEE Conference on Decision and Control, Los Angeles, CA, December 15–17, 2014, 4336–4341.

Kushner, H. J. (1967). Stochastic Stability and Control. New York: Academic Press.

Lerch, S., Dehnert, R., Damaszek, M., and Tibken, B. (2021a). “Anti Windup PID Control of Discrete Systems Subject to Actuator Magnitude and Rate Saturation: An Iterative LMI Approach,” in 25th International Conference on System Theory, Control and Computing (ICSTCC), Iasi, Romania, October 20–23, 2021. doi:10.1109/icstcc52150.2021.9607157

Lerch, S., Dehnert, R., Damaszek, M., and Tibken, B. (2021b). “Static Output Feedback Controller Design of Discrete Systems Subject to Actuator Magnitude and Rate Saturation,” in 25th International Conference on System Theory, Control and Computing (ICSTCC), Iasi, Romania, October 20–23, 2021. doi:10.1109/icstcc52150.2021.9607245

Löfberg, J. (2004). “Yalmip: A Toolbox for Modeling and Optimization in MATLAB,” in 2004 IEEE International Conference on Robotics and Automation (IEEE Cat. No.04CH37508), New Orleans, LA, April 26–May 1, 2004 (New Orleans, LA: IEEE), 284–289.

MOSEK ApS (2019). The MOSEK Optimization Toolbox for MATLAB Manual. Version 9.0. Available at:

Park, H., Chwa, D., and Hong, K.-S. (2007). A Feedback Linearization Control of Container Cranes: Varying Rope Length. Int. J. Control Automation, Syst. 5, 379–387.

Peaucelle, D., and Ebihara, Y. (2014). LMI Results for Robust Control Design of Observer-Based Controllers, the Discrete-Time Case with Polytopic Uncertainties. IFAC Proc. Volumes 47, 6527–6532. doi:10.3182/20140824-6-za-1003.00218

Phat, V. N., and Ratchagit, K. (2011). Stability and Stabilization of Switched Linear Discrete-Time Systems with Interval Time-Varying Delay. Nonlinear Anal. Hybrid Syst. 5, 605–612. doi:10.1016/j.nahs.2011.05.006

Ratchagit, K., and Phat, V. N. (2011). Robust Stability and Stabilization of Linear Polytopic Delay-Difference Equations with Interval Time-Varying Delays. Neural, Parallel and Scientific Computations 19, 361–372.

Rauh, A., Dehnert, R., Romig, S., Lerch, S., and Tibken, B. (2021). Iterative Solution of Linear Matrix Inequalities for the Combined Control and Observer Design of Systems with Polytopic Parameter Uncertainty and Stochastic Noise. Algorithms 14, 205. doi:10.3390/a14070205

Rauh, A., Prabel, R., and Aschemann, H. (2017). “Oscillation Attenuation for Crane Payloads by Controlling the Rope Length Using Extended Linearization Techniques,” in 2017 22nd International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, August 28–31, 2017, 307–312. doi:10.1109/mmar.2017.8046844

Rauh, A., Romig, S., and Aschemann, H. (2018). “When Is Naive Low-Pass Filtering of Noisy Measurements Counter-productive for the Dynamics of Controlled Systems,” in 2018 23rd International Conference on Methods Models in Automation Robotics (MMAR), Miedzyzdroje, Poland, August 27–30, 2018, 809–814.

Rauh, A., and Romig, S. (2021). Linear Matrix Inequalities for an Iterative Solution of Robust Output Feedback Control of Systems with Bounded and Stochastic Uncertainty. Sensors 21, 3285. doi:10.3390/s21093285

Rauh, A., Senkel, L., Gebhardt, J., and Aschemann, H. (2014). “Stochastic Methods for the Control of Crane Systems in Marine Applications,” in 2014 European Control Conference (ECC), Strasbourg, France, June 24–27, 2014, 2998–3003. doi:10.1109/ecc.2014.6862370

Rauh, A. (2017). Sensitivity Methods for Analysis and Design of Dynamic Systems with Applications in Control Engineering: Feedforward Control – Feedback Control – Robust Control – State Estimation. Aachen: Shaker.

Sadabadi, M., and Karimi, A. (2013). “An LMI Formulation of Fixed-Order H∞ and H2 Controller Design for Discrete-Time Systems with Polytopic Uncertainties,” in 52nd IEEE Conference on Decision and Control, 2453–2458. doi:10.1109/CDC.2013.6760248

Scherer, C., and Weiland, S. (1994). Linear Matrix Inequalities in Control. Lecture notes, University of Stuttgart, Germany/Eindhoven University of Technology, The Netherlands.

Schulz, G. (1933). Iterative Berechung der reziproken Matrix. Z. Angew. Math. Mech. 13, 57–59. doi:10.1002/zamm.19330130111

Senkel, L., Rauh, A., and Aschemann, H. (2016). Experimental and Numerical Validation of a Reliable Sliding Mode Control Strategy Considering Uncertainty with Interval Arithmetic. Editors A. Rauh, and L. Senkel (Cham, Switzerland: Springer International Publishing), 87–122. chap. 4 part I. doi:10.1007/978-3-319-31539-3_4

Soleymani, F. (2013). On a Fast Iterative Method for Approximate Inverse of Matrices. Commun. Korean Math. Soc. 28, 407–418. doi:10.4134/ckms.2013.28.2.407

Wahab, A. (1994). Pole Assignment in a Specified Circular Region Using a Bilinear Transformation onto the Unit Circle. Int. J. Syst. Sci. 25 (7), 1113–1125. doi:10.1080/00207729408949265

Yotha, N., and Mukdasai, K. (2013). New Delay-dependent Robust Stability Criterion for LPD Discrete-Time Systems with Interval Time-Varying Delays. Discrete Dyn. Nat. Soc. 2013, 929725. doi:10.1155/2013/929725

Yucelen, T., Sadahalli, A. S., and Pourboghrat, F. (2010). “Active Noise Control in a Duct Using Output Feedback Robust Control Techniques,” in Proceedings of the 2010 American Control Conference, Baltimore, MD, June 30–July 2, 2010, 3506–3511. doi:10.1109/acc.2010.5530942

Zemouche, A., Zerrougui, M., Boulkroune, B., Rajamani, R., and Zasadzinski, M. (2016). “A New LMI Observer-Based Controller Design Method for Discrete-Time LPV Systems with Uncertain Parameters,” in 2016 American Control Conference (ACC), Boston, MA, July 6–8, 2016, 2802–2807. doi:10.1109/acc.2016.7525343

Keywords: discrete-time systems, stochastic disturbance, robust control, linear matrix inequalities (LMI), optimization, polytopic modeling

Citation: Dehnert R, Damaszek M, Lerch S, Rauh A and Tibken B (2022) Robust Feedback Control for Discrete-Time Systems Based on Iterative LMIs with Polytopic Uncertainty Representations Subject to Stochastic Noise. Front. Control. Eng. 2:786152. doi: 10.3389/fcteg.2021.786152

Received: 29 September 2021; Accepted: 29 November 2021;
Published: 09 February 2022.

Edited by:

Mudassir Rashid, Illinois Institute of Technology, United States

Reviewed by:

Fotis Nicholas Koumboulis, National and Kapodistrian University of Athens, Greece
Grienggrai Rajchakit, Maejo University, Thailand

Copyright © 2022 Dehnert, Damaszek, Lerch, Rauh and Tibken. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Robert Dehnert,

Present addresses: Andreas Rauh, Department of Computing Science, Carl von Ossietzky Universität Oldenburg, Group: Distributed Control in Interconnected Systems, Oldenburg, Germany
