Safe Model-Based Reinforcement Learning for Systems With Parametric Uncertainties

Reinforcement learning has been established over the past decade as an effective tool to find optimal control policies for dynamical systems, with recent focus on approaches that guarantee safety during the learning and/or execution phases. In general, safety guarantees are critical in reinforcement learning when the system is safety-critical and/or task restarts are not practically feasible. In optimal control theory, safety requirements are often expressed in terms of state and/or control constraints. In recent years, reinforcement learning approaches that rely on persistent excitation have been combined with a barrier transformation to learn the optimal control policies under state constraints. To soften the excitation requirements, model-based reinforcement learning methods that rely on exact model knowledge have also been integrated with the barrier transformation framework. The objective of this paper is to develop safe reinforcement learning method for deterministic nonlinear systems, with parametric uncertainties in the model, to learn approximate constrained optimal policies without relying on stringent excitation conditions. To that end, a model-based reinforcement learning technique that utilizes a novel filtered concurrent learning method, along with a barrier transformation, is developed in this paper to realize simultaneous learning of unknown model parameters and approximate optimal state-constrained control policies for safety-critical systems.

I. INTRODUCTION

The deployment of autonomous systems in safety-critical applications hinges on the ability to synthesize safe controllers. To improve robustness to parametric uncertainties and changing objectives and models, autonomous systems also need the ability to simultaneously synthesize and execute control policies online and in real time. This paper concerns reinforcement learning (RL), which has been established as an effective tool for safe policy synthesis for both known and uncertain dynamical systems with finite state and action spaces (see, e.g., [1], [2]).
RL typically requires a large number of iterations due to sample inefficiency (see, e.g., [1]).
Online model-based RL (MBRL) methods that handle modeling uncertainties are motivated by complex tasks that require systems to operate in dynamic environments with changing objectives and system models, where accurate models of the system and environment are generally not available due to sparsity of data. In the past, MBRL techniques under the umbrella of approximate dynamic programming (ADP) have been successfully utilized to solve reinforcement learning problems online under model uncertainty (see, e.g., [6]-[8]). ADP utilizes parametric methods such as neural networks (NNs) to approximate the value function and the system model online. By obtaining an approximation of both the value function and the system model, a stable closed-loop adaptive control policy can be developed (see, e.g., [9]-[13]).
Real-world optimal control applications typically include constraints on states and/or inputs that are critical for safety (see, e.g., [14]). ADP has been successfully extended to address input-constrained control problems in [6] and [15]. The state-constrained ADP problem was studied in the context of obstacle avoidance in [16] and [17], where an additional term that penalizes proximity to obstacles was added to the cost function. Since the added proximity penalty in [16] was finite, the ADP feedback could not guarantee obstacle avoidance, and an auxiliary controller was needed. In [17], a barrier-like function was used to ensure unbounded growth of the proximity penalty near the obstacle boundary. While this approach results in avoidance guarantees, it relies on the relatively strong assumption that the value function is continuously differentiable over a compact set that contains the obstacles, in spite of penalty-induced discontinuities in the cost function.
Control barrier functions (CBFs) are another approach to guarantee safety in safety-critical systems (see, e.g., [18]), with recent applications to safe reinforcement learning problems (see, e.g., [19]-[21]). The authors of [19] addressed the issue of model uncertainty in safety-critical control with an RL-based data-driven approach. A drawback of this approach is that it requires a nominal controller that keeps the system stable during the learning phase, which may not always be possible to design. In [21], the authors propose a safe off-policy RL scheme that trades off between safety and performance. In [20], the authors propose a safe RL scheme in which the proximity penalty approach from [17] is cast into the framework of CBFs. While the control barrier function results in safety guarantees, the existence of a smooth value function, in spite of a nonsmooth cost function, needs to be assumed. Furthermore, to facilitate parametric approximation of the value function, the existence of a forward invariant compact set in the interior of the safe set needs to be established. Since the invariant set needs to be in the interior of the safe set, the penalty becomes superfluous, and safety can be achieved through conventional Lyapunov methods.
This paper is inspired by a safe reinforcement learning technique, recently developed in [22], based on the idea of transforming a state- and input-constrained nonlinear optimal control problem into an unconstrained one using a type of saturation function introduced in [23], [24]. In [22], the state-constrained optimal control problem is transformed, using a barrier transformation (BT), into an equivalent, unconstrained optimal control problem. A learning technique is then used to synthesize the feedback control policy for this unconstrained optimal control problem. The controller for the original system is then derived from the unconstrained approximate optimal policy by inverting the barrier transformation. In [25], the restrictive persistence of excitation requirement in [22] is softened using MBRL, where exact knowledge of the system dynamics is utilized in the barrier transformation.
One of the primary contributions of this paper is a detailed analysis of the connection between the transformed dynamics and the original dynamics, which is missing from results such as [22], [25], and [26]. While the stability of the transformed dynamics under the designed controllers is established in results such as [22], [25], and [26], the implications of the behavior of the transformed system for the original system are not examined. In this paper, it is shown that the trajectories of the original system are related to the trajectories of the transformed system via the barrier transformation as long as the trajectories of the transformed system remain bounded.
While the transformation in [22] and [25] results in verifiably safe controllers, it requires exact knowledge of the system model, which is often difficult to obtain. Another primary contribution of this paper is the development of a novel filtered concurrent learning technique for online model learning and its integration with the barrier transformation method to yield a novel MBRL solution to the online state-constrained optimal control problem under parametric uncertainty.
The developed MBRL method learns an approximate optimal control policy in the presence of parametric uncertainties for safety-critical systems while maintaining stability and safety during the learning phase. The inclusion of filtered concurrent learning makes the controller robust to modeling errors and guarantees local stability under a finite (as opposed to persistent) excitation condition.
In the following, the problem is formulated in Section II, and the BT is described and analyzed in Section III. A novel parameter estimation technique is detailed in Section IV, and a model-based reinforcement learning technique for synthesizing the feedback control policy in the transformed coordinates is developed in Section V. In Section VI, a Lyapunov-based analysis is utilized to establish practical stability of the closed-loop system resulting from the developed MBRL technique in the transformed coordinates, which guarantees that the safety requirements are satisfied in the original coordinates. Simulation results in Section VII demonstrate the performance of the developed method and analyze its sensitivity to various design parameters, followed by a comparison of the performance of the developed MBRL approach to an offline pseudospectral optimal control method. Strengths and limitations of the developed method are discussed in Section VIII, along with possible extensions.

II. PROBLEM FORMULATION

A. Control objective
Consider a continuous-time affine nonlinear dynamical system

ẋ = f(x)θ + g(x)u, (1)

where x = [x_1; …; x_n] ∈ ℝⁿ is the system state, θ ∈ ℝᵖ is the vector of unknown parameters, u ∈ ℝ^q is the control input, and the functions f : ℝⁿ → ℝⁿˣᵖ and g : ℝⁿ → ℝⁿˣ^q are known, locally Lipschitz functions. In the following, [a; b] denotes the vector [a b]ᵀ and (v)_i denotes the ith component of the vector v.
The objective is to design a controller u for the system in (1) such that, starting from a given feasible initial condition x_0, the trajectories x(·) decay to the origin and satisfy x_i(t) ∈ (a_i, A_i) for all t ≥ 0, where i = 1, 2, …, n and a_i < 0 < A_i. While MBRL methods such as those detailed in [5] guarantee stability of the closed loop, state constraints are typically difficult to enforce without extensive trial and error. In the following, a BT is used to guarantee satisfaction of the state constraints.

III. BARRIER TRANSFORMATION

A. Design
Let the function b_(a_i,A_i) : (a_i, A_i) → ℝ, referred to as the barrier function (BF), be defined as

b_(a_i,A_i)(x) := log( A_i(a_i − x) / (a_i(A_i − x)) ). (2)

Define b_(a,A) : ℝⁿ → ℝⁿ as b_(a,A)(x) := [b_(a_1,A_1)((x)_1); …; b_(a_n,A_n)((x)_n)] with a = [a_1; …; a_n] and A = [A_1; …; A_n]. Moreover, the inverse of (2) on the interval (a_i, A_i) is given by

b⁻¹_(a_i,A_i)(y) = a_iA_i(e^y − 1) / (a_ie^y − A_i). (3)

Taking the derivative of (3) with respect to y yields

d b⁻¹(y)/dy = (a_i²A_i − a_iA_i²)e^y / (a_ie^y − A_i)². (4)

Consider the BF-based state transformation

s := b_(a,A)(x), (5)

where s := [s_1; …; s_n] denotes the transformed state. In the following derivation, whenever clear from the context, the subscripts a_i and A_i of the BF and its inverse are suppressed for brevity. The time derivative of the transformed state can be computed using the chain rule as ṡ_i = B_i(s_i)ẋ_i, where B_i(s_i) := ( d b⁻¹(y)/dy |_{y=s_i} )⁻¹, which yields the transformed dynamics

ṡ_i = B_i(s_i)( f_i(b⁻¹_(a,A)(s))θ + g_i(b⁻¹_(a,A)(s))u ). (6)

The dynamics of the transformed state can then be expressed as

ṡ = F(s) + G(s)u, (7)

where F(s) := y(s)θ, the rows of y(s) and G(s) are given by (y(s))_i := B_i(s_i)f_i(b⁻¹_(a,A)(s)) and (G(s))_i := B_i(s_i)g_i(b⁻¹_(a,A)(s)), and f_i and g_i denote the ith rows of f and g, respectively. Continuous differentiability of b⁻¹ implies that F and G are locally Lipschitz continuous.
Furthermore, f(0) = 0, along with the fact that b⁻¹(0) = 0, implies that F(0) = 0. As a result, for all compact sets Ω ⊂ ℝⁿ containing the origin, G is bounded on Ω and there exists a positive constant L_y such that ‖y(s)‖ ≤ L_y‖s‖ for all s ∈ Ω. The following section relates the solutions of the original system to the solutions of the transformed system.
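To make the transformation concrete, the following sketch implements the BF, its inverse, and the transformed dynamics. It is a minimal illustration under the logarithmic BF form in (2)-(4); the callables f and g are user-supplied implementations of the model in (1), and a, A are the constraint bound vectors.

```python
import numpy as np

def b(x, a, A):
    # Barrier function (2): maps the interval (a, A) onto R, with b(0) = 0.
    return np.log((A * (a - x)) / (a * (A - x)))

def b_inv(s, a, A):
    # Inverse barrier function (3): maps R back onto (a, A).
    return a * A * (np.exp(s) - 1.0) / (a * np.exp(s) - A)

def B(s, a, A):
    # B_i(s_i): reciprocal of the derivative (4), used in the chain rule.
    return (a * np.exp(s) - A) ** 2 / ((a**2 * A - a * A**2) * np.exp(s))

def transformed_dynamics(s, u, theta, f, g, a, A):
    # Transformed dynamics (7): sdot = y(s) theta + G(s) u, where
    # y(s) = diag(B_i) f(b_inv(s)) and G(s) = diag(B_i) g(b_inv(s)).
    x = b_inv(s, a, A)
    Bs = B(s, a, A)
    return Bs * (f(x) @ theta) + (Bs[:, None] * g(x)) @ u
```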

B. Analysis
In the following lemma, the trajectories of the original system and the transformed system are shown to be related by the barrier transformation, provided the trajectories of the transformed system are complete (see, e.g., page 33 of [27]). The completeness condition is not vacuous: it is not difficult to construct a system where the transformed trajectories escape to infinity in finite time while the original trajectories are complete. For example, for the system ẋ = x + x²u with x ∈ ℝ and u ∈ ℝ, under the feedback u ≡ 0, all nonzero solutions of the corresponding transformed system escape in finite time, while all nonzero solutions of the original system are complete.

Lemma 1. If t ↦ λ(t, b(x_0), ζ) is a complete Carathéodory solution to (7), starting from the initial condition b(x_0), under the feedback policy (s, t) ↦ ζ(s, t), then t ↦ Λ(t, x_0, ξ) := b⁻¹(λ(t, b(x_0), ζ)) is a Carathéodory solution to (1), starting from the initial condition x_0, under the feedback policy (x, t) ↦ ξ(x, t) := ζ(b(x), t).

Proof. See Lemma 1 in the Appendix.
Note that the feedback ξ is well-defined at x only if b(x) is well-defined, which is the case whenever x is inside the barrier. As such, the main conclusion of the lemma also implies that Λ(·, x_0, ξ) remains inside the barrier. It is thus inferred from Lemma 1 that if the trajectories of (7) are bounded and decay to a neighborhood of the origin under a feedback policy (s, t) ↦ ζ(s, t), then the feedback policy (x, t) ↦ ζ(b(x), t), when applied to the original system in (1), achieves the control objective stated in Section II-A.
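In implementation terms, Lemma 1 says the controller for the original system is obtained simply by composing a transformed-coordinate policy with the barrier map; a minimal sketch (reusing b from the sketch above, with zeta a user-supplied policy):

```python
def xi(x, t, zeta, a, A):
    # Feedback for the original system induced by a transformed-coordinate
    # policy zeta via Lemma 1: xi(x, t) = zeta(b(x), t). Well-defined only
    # while x remains strictly inside the barrier (a, A).
    return zeta(b(x, a, A), t)
```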
To achieve BT MBRL in the presence of parametric uncertainties, the following section develops a novel parameter estimator.

IV. PARAMETER ESTIMATION
The following parameter estimator design is motivated by the subsequent Lyapunov analysis, and is inspired by the finite-time estimator in [28] and the filtered concurrent learning (FCL) method in [29]. Estimates θ̂ ∈ ℝᵖ of the unknown parameters are generated using a filter and an update law, (8)-(12), where β_1 is a symmetric positive definite gain matrix and Ȳ_f is a tunable upper bound on the norm of the filtered regressor Y_f. Equations (7)-(12) constitute a nonsmooth system of differential equations, collectively denoted by (13), and it can be shown that (13) admits Carathéodory solutions.
Lemma 2. If Ȳ_f is non-decreasing in time, then (13) admits Carathéodory solutions.
Proof. See Lemma 2 in the Appendix.
Note that (9) and (11), expressed in integral form, along with the fact that s(τ) is available for measurement for all τ ∈ [0, t], imply that the filtered signals are computable from measurable quantities for all t ≥ 0. As a result, a measure of the parameter estimation error θ̃ := θ − θ̂ can be obtained using known signals, and the dynamics of the parameter estimation error can be expressed in terms of the filtered regressor. The filter design is thus motivated by the fact that if the matrix Y_fᵀY_f is positive definite, it can be used to establish convergence of the parameter estimation error to the origin. Initially, Y_fᵀY_f is a matrix of zeros. To ensure that there exists some finite time T such that Y_fᵀ(t)Y_f(t) is positive definite, uniformly in t, for all t ≥ T, the following finite excitation condition is imposed.
Assumption 1. There exists a time T > 0 such that the matrix Y_fᵀ(T)Y_f(T) is positive definite.

Since Y_fᵀ(t)Y_f(t) is positive semidefinite and non-decreasing in t, it is also full rank for all t ≥ T. Similar to other MBRL methods that rely on system identification (see, e.g., Chapter 4 of [5]), the following assumption is needed to ensure boundedness of the state trajectories over the interval [0, T].

Assumption 2. A fallback controller ψ : ℝⁿ × ℝ≥0 → ℝ^q that keeps the trajectories of (7) inside a known bounded set over the interval [0, T), without requiring knowledge of θ, is available.
If a fallback controller that satisfies Assumption 2 is not available, then, under the additional assumption that the trajectories of (7) are exciting over the interval [0, T), such a controller can be learned online, while maintaining system stability, using model-free reinforcement learning techniques such as those in [30], [31], and [32].
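For intuition, the following sketch shows one common filtered construction consistent with the description above; it is an illustration under stated assumptions, not the exact filter in (8)-(12), and all names and signatures are hypothetical. First-order low-pass filters with gain k are applied to s, the regressor y(s), and G(s)u, so that k(s − s_f) is a filtered version of ṡ = y(s)θ + G(s)u that is computable without state derivatives.

```python
import numpy as np

def fcl_step(filters, s, Y, Gu, theta_hat, dt, k, beta1, Yf_bar):
    # One Euler step of a filtered parameter estimator (a sketch of one
    # common construction; the filter in (8)-(12) may differ in detail).
    s_f, Y_f, G_f = filters
    s_f = s_f + dt * k * (s - s_f)      # filtered state
    Y_f = Y_f + dt * k * (Y - Y_f)      # filtered regressor
    G_f = G_f + dt * k * (Gu - G_f)     # filtered control effect
    # Prediction error: k*(s - s_f) approximates a filtered sdot, so
    # e -> 0 when theta_hat = theta (after filter transients decay).
    e = k * (s - s_f) - G_f - Y_f @ theta_hat
    # Normalized gradient update; Yf_bar is a tunable upper bound on the
    # filtered regressor norm, used here for normalization.
    theta_hat = theta_hat + dt * (beta1 @ Y_f.T @ e) / Yf_bar**2
    return (s_f, Y_f, G_f), theta_hat
```

A concurrent-learning variant would additionally store past pairs (Y_f, e) and replay them in the update, which is what allows the finite (rather than persistent) excitation condition in Assumption 1 to suffice.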

V. MODEL-BASED REINFORCEMENT LEARNING
Lemma 1 implies that if a feedback controller that practically stabilizes the transformed system in (7) is designed, then the same feedback controller, applied to the original system by inverting the BT, also achieves the control objective stated in Section II-A. In the following, a controller that practically stabilizes (7) is designed as an estimate of the controller that minimizes the infinite horizon cost
J(s_0, u(·)) := ∫_0^∞ r(φ(τ, s_0, u(·)), u(τ)) dτ (17)

over the set U of piecewise continuous functions t ↦ u(t), subject to (7), where φ(τ, s_0, u(·)) denotes the trajectory of (7), evaluated at time τ, starting from the state s_0, and under the controller u(·), r(s, u) := sᵀQs + uᵀRu, and Q ∈ ℝⁿˣⁿ and R ∈ ℝ^{q×q} are symmetric positive definite (PD) matrices¹. Assuming that an optimal controller exists, let the optimal value function, denoted by V* : ℝⁿ → ℝ, be defined as

V*(s) := inf_{u_{[t,∞)} ∈ U_{[t,∞)}} ∫_t^∞ r(φ(τ, s, u_{[t,∞)}), u(τ)) dτ, (18)

where u_I and U_I are obtained by restricting the domains of u and of the functions in U to the interval I ⊆ ℝ, respectively. Assuming that the optimal value function is continuously differentiable, it can be shown to be the unique positive definite solution of the Hamilton-Jacobi-Bellman (HJB) equation (see, e.g., [33])

∇_sV*(s)( F(s) + G(s)u*(s) ) + r(s, u*(s)) = 0, V*(0) = 0,

where ∇_s(·) := ∂(·)/∂s. Furthermore, the optimal controller is given by the feedback policy u(t) = u*(φ(t, s, u_{[0,t)})), where u* : ℝⁿ → ℝ^q is defined as

u*(s) := −(1/2)R⁻¹G(s)ᵀ(∇_sV*(s))ᵀ.

Remark 2. In the developed method, the cost function is selected to be quadratic in the transformed coordinates. However, a physically meaningful cost function is more likely to be available in the original coordinates. If such a cost function is available, it can be transformed from the original coordinates to the barrier coordinates using the inverse barrier function, to yield a cost function that is not quadratic in the state. While the analysis in this paper addresses the quadratic case, it can be extended to address the non-quadratic case with minimal modifications as long as s ↦ r(s, u) is positive definite for all u ∈ ℝ^q.
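The feedback form of u* can be recovered by pointwise minimization of the Hamiltonian; the following short derivation, using only the definitions above, fills in that step.

```latex
% Hamiltonian for fixed s, minimized pointwise over u:
H(s,u) = \nabla_s V^*(s)\bigl(F(s) + G(s)u\bigr) + s^\top Q s + u^\top R u
% Stationarity in u:
\frac{\partial H}{\partial u}(s,u)
  = G(s)^\top \bigl(\nabla_s V^*(s)\bigr)^\top + 2Ru = 0
\implies
u^*(s) = -\tfrac{1}{2}R^{-1}G(s)^\top\bigl(\nabla_s V^*(s)\bigr)^\top
% Since R is positive definite, H(s, .) is strictly convex in u, so the
% stationary point is the unique global minimizer.
```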

A. Value function approximation
Since computation of analytical solutions of the HJB equation is generally infeasible, especially for systems with uncertainty, parametric approximation methods are used to approximate the value function V* and the optimal policy u*. The optimal value function is expressed as

V*(s) = Wᵀσ(s) + ε(s), (19)

¹For ease of exposition, a state penalty of the form sᵀQs has been considered in this paper. However, the analysis extends in a straightforward manner to general positive definite state penalty functions s ↦ Q(s). As such, a state penalty function x ↦ P(x), given in the original coordinates, can easily be transformed into an equivalent state penalty Q(s) = P(b⁻¹(s)). Since the barrier function is monotonic and b(0) = 0, if P is positive definite, then so is Q. Furthermore, for applications with bounded control inputs, a non-quadratic penalty function similar to Eq. 17 of [26] can be incorporated in (17).
where W ∈ ℝ^L is an unknown vector of bounded weights, σ : ℝⁿ → ℝ^L is a vector of continuously differentiable nonlinear activation functions such that σ(0) = 0 and ∇_sσ(0) = 0, L ∈ ℕ is the number of basis functions, and ε : ℝⁿ → ℝ is the reconstruction error.
The basis functions are selected such that the approximation of the functions and their derivatives is uniform over the compact set χ ⊂ ℝⁿ, so that given a positive constant ε̄ ∈ ℝ, there exist L ∈ ℕ and known positive constants W̄ and σ̄ such that ‖W‖ ≤ W̄, sup_{s∈χ} |ε(s)| ≤ ε̄, sup_{s∈χ} ‖∇_sε(s)‖ ≤ ε̄, sup_{s∈χ} ‖σ(s)‖ ≤ σ̄, and sup_{s∈χ} ‖∇_sσ(s)‖ ≤ σ̄ (see, e.g., [34]). Using (19), a representation of the optimal controller using the same basis as the optimal value function is derived as

u*(s) = −(1/2)R⁻¹G(s)ᵀ( ∇_sσ(s)ᵀW + ∇_sε(s)ᵀ ). (20)

Since the ideal weights, W, are unknown, an actor-critic approach is used in the following to estimate W. To that end, let the NN estimates V̂ : ℝⁿ × ℝ^L → ℝ and û : ℝⁿ × ℝ^L → ℝ^q be defined as

V̂(s, Ŵc) := Ŵcᵀσ(s), (21)
û(s, Ŵa) := −(1/2)R⁻¹G(s)ᵀ∇_sσ(s)ᵀŴa, (22)

where the critic weights, Ŵc ∈ ℝ^L, and the actor weights, Ŵa ∈ ℝ^L, are estimates of the ideal weights, W.
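As a concrete illustration, the estimates in (21) and (22) amount to the following computations (a minimal sketch; sigma, grad_sigma, and G are assumed to be user-supplied callables returning the basis vector, its L×n Jacobian, and the control effectiveness matrix, respectively):

```python
import numpy as np

def V_hat(s, W_c, sigma):
    # Critic estimate (21): V(s, W_c) = W_c^T sigma(s).
    return W_c @ sigma(s)

def u_hat(s, W_a, grad_sigma, G, R_inv):
    # Actor estimate (22): u(s, W_a) = -1/2 R^{-1} G(s)^T (grad sigma(s))^T W_a.
    return -0.5 * R_inv @ G(s).T @ grad_sigma(s).T @ W_a
```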

B. Bellman Error
Substituting (21) and (22) into the HJB equation results in a residual term, δ : ℝⁿ × ℝ^L × ℝ^L × ℝᵖ → ℝ, referred to as the Bellman error (BE), defined as

δ(s, Ŵc, Ŵa, θ̂) := ∇_sV̂(s, Ŵc)( y(s)θ̂ + G(s)û(s, Ŵa) ) + r(s, û(s, Ŵa)). (25)

Traditionally, online RL methods require a persistence of excitation (PE) condition to be able to learn the approximate control policy (see, e.g., [3], [6], [7]). Guaranteeing PE a priori and verifying PE online are both typically impossible. However, using virtual excitation, facilitated by model-based BE extrapolation, stability and convergence of online RL can be established under a PE-like condition that, while impossible to guarantee a priori, can be verified online by monitoring the minimum eigenvalue of a matrix in the subsequent Assumption 3 (see, e.g., [4]).
Using the system model, the BE can be evaluated at any arbitrary point in the state space.
Virtual excitation can then be implemented by selecting a set of states {s_k}_{k=1}^N and evaluating the BE at this set of states to yield

δ_k := δ(s_k, Ŵc, Ŵa, θ̂), k = 1, …, N, (26)

where ∇_{s_k} := ∂/∂s_k, y_k := y(s_k), and G_k := G(s_k). Defining the actor and critic weight estimation errors as W̃c := W − Ŵc and W̃a := W − Ŵa, and using (19)-(22) in (25), the analytical BE can be expressed in terms of the weight estimation errors as

δ = −ωᵀW̃c + (1/4)W̃aᵀG_σW̃a + Δ, (27)

where G_σ := ∇_sσGR⁻¹Gᵀ∇_sσᵀ, Δ is a residual that depends on the reconstruction error and the parameter estimation error, and ω := ∇_sσ( y(s)θ̂ + G(s)û(s, Ŵa) ) ∈ ℝ^L. In (27) and in the rest of the manuscript, the dependence of various functions on the state, s, is omitted for brevity whenever it is clear from the context. Similarly, (26) implies that each extrapolated BE δ_k admits an analogous expression in terms of the weight estimation errors, with a residual Δ_k satisfying ‖Δ_k‖ ≤ d for some constant d > 0. While the extrapolation states s_k are assumed to be constant in this analysis for ease of exposition, the analysis extends in a straightforward manner to time-varying extrapolation states that are confined to a compact neighborhood of the origin.
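Since the model estimate (y, G, θ̂) is available, evaluating the BE on a grid of extrapolation states reduces to a few matrix products. A minimal sketch (assuming y, G, and sigma_grad are user-supplied callables and Q, R are the cost matrices from (17)):

```python
import numpy as np

def bellman_error(s, W_c, W_a, theta_hat, sigma_grad, y, G, Q, R, R_inv):
    # Model-based BE (25): substitute the critic/actor estimates and the
    # estimated drift y(s) theta_hat into the HJB residual.
    u = -0.5 * R_inv @ G(s).T @ sigma_grad(s).T @ W_a   # actor policy (22)
    s_dot_hat = y(s) @ theta_hat + G(s) @ u             # estimated dynamics
    return W_c @ sigma_grad(s) @ s_dot_hat + s @ Q @ s + u @ R @ u

# Virtual excitation (26): evaluate the BE on a user-selected set {s_k}.
# deltas = [bellman_error(s_k, W_c, W_a, theta_hat, sigma_grad, y, G, Q, R,
#                         R_inv) for s_k in extrapolation_points]
```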

C. Update laws for Actor and Critic weights
The actor and the critic weights are held at their initial values over the interval [0, T) and, starting at t = T, using the instantaneous BE δ from (25) and the extrapolated BEs δ_k from (26), the weights are updated using normalized least-squares update laws driven by δ and δ_k, with Γ(t_0) = Γ_0, where Γ : ℝ≥t_0 → ℝ^{L×L} is a time-varying least-squares gain matrix, ρ(t) := 1 + νω(t)ᵀΓ(t)ω(t) is a normalization term, and kc1, kc2, ka1, ka2, β, and ν are constant adaptation gains. The control commands sent to the system are then computed using the actor weights as

u(t) = ψ(s(t), t) for t ∈ [0, T), and u(t) = û(s(t), Ŵa(t)) for t ≥ T,

where the controller ψ was introduced in Assumption 2. The following verifiable PE-like rank condition is then utilized in the stability analysis.
Assumption 3. There exists a constant c_3 > 0 such that the set of points {s_k}_{k=1}^N satisfies

c_3 ≤ inf_{t ≥ T} λ_min( (1/N) Σ_{k=1}^N ω_k(t)ω_k(t)ᵀ / ρ_k²(t) ),

where ρ_k := 1 + νω_kᵀΓω_k. Since ω_k is a function of the weight estimates θ̂ and Ŵa, Assumption 3 cannot be guaranteed a priori. However, unlike the PE condition, Assumption 3 does not impose excitation requirements on the system trajectory; the excitation requirements are imposed on a user-selected set of points in the state space. Furthermore, Assumption 3 can be verified online. Since the minimum eigenvalue in Assumption 3 is non-decreasing in the number of samples, N, Assumption 3 can be met, heuristically, by increasing the number of samples.
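Because the matrix in Assumption 3 depends only on stored quantities, the condition can be checked online at negligible cost. A minimal sketch (assuming omegas and rhos collect the current ω_k vectors and ρ_k scalars):

```python
import numpy as np

def excitation_level(omegas, rhos):
    # Minimum eigenvalue of (1/N) sum_k omega_k omega_k^T / rho_k^2,
    # the quantity bounded below by c_3 in Assumption 3.
    N = len(omegas)
    M = sum(np.outer(w, w) / r**2 for w, r in zip(omegas, rhos)) / N
    return np.linalg.eigvalsh(M).min()

# Assumption 3 holds at the current time if excitation_level(...) >= c_3;
# if the returned value is too small, add extrapolation points.
```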

VI. STABILITY ANALYSIS
In the following theorem, boundedness of the trajectories of the transformed system and of the estimation errors W̃c, W̃a, and θ̃ is established.

Theorem 1. Provided Assumptions 1-3 hold and the learning gains and basis functions satisfy the sufficient conditions detailed in the Appendix, the trajectories of the transformed closed-loop system and the estimation errors W̃c, W̃a, and θ̃ are locally uniformly ultimately bounded. Furthermore, the concatenated state trajectories are bounded such that Z(t) ∈ B_r for all t ∈ ℝ≥T. Since the estimates Ŵa approximate the ideal weights W, the policy û approximates the optimal policy u*.
Proof. See Theorem 1 in the Appendix.
Using Lemma 1, it can then be concluded that the feedback control law (x, t) ↦ û(b(x), Ŵa(t)), applied to the original system in (1), achieves the control objective stated in Section II-A.

VII. SIMULATION
To demonstrate the performance of the developed method for nonlinear systems with unknown value functions, two simulation results are provided: one for a two-state dynamical system (35), and one for a four-state dynamical system (37) corresponding to a two-link planar robot manipulator.

A. Two state dynamical system
The dynamical system is given by (35), with four unknown parameters θ = [θ_1; θ_2; θ_3; θ_4], and the BT version of the system can be expressed in the form (7). The state x = [x_1 x_2]ᵀ needs to satisfy the constraints x_1 ∈ (−7, 5) and x_2 ∈ (−5, 7). The objective for the controller is to minimize the infinite horizon cost function in (17), with Q = diag(10, 10) and R = 0.1. The basis functions for value function approximation are selected as σ(s) = [s_1²; s_1s_2; s_2²]. The initial conditions for the system and the initial guesses for the weights and parameters are selected as x(0) = [−6.5; 6.5], θ̂(0) = [0; 0; 0; 0], Γ(0) = diag(1, 1, 1), and Ŵa(0) = Ŵc(0) = [1/2; 1/2; 1/2]. The ideal values of the unknown parameters in the system model are θ_1 = 1, θ_2 = −1, θ_3 = −0.5, and θ_4 = 0.5, and the ideal values of the actor and the critic weights are unknown. The simulation uses 100 fixed Bellman error extrapolation points in a 4×4 square around the origin of the s-coordinate system.

1) Results for the two-state system: As seen from Fig. 2, the system state x stays within the user-specified safe set while converging to the origin. The results in Fig. 3 indicate that the unknown weights for both the actor and critic NNs converge to similar values. As demonstrated in Fig. 4, the parameter estimation errors also converge to zero.
Since the ideal actor and critic weights are unknown, the estimates cannot be directly compared against the ideal weights. To gauge the quality of the estimates, the trajectory generated by the controller u(t) = û(s(t), Ŵc*), where Ŵc* is the final value of the critic weights obtained in Fig. 3, starting from a specific initial condition, is compared against the trajectory obtained using an offline numerical solution computed using the GPOPS II optimization software (see, e.g., [35]). The total cost, generated by numerically integrating (17), is used as the metric for comparison. The costs are computed over a finite horizon, selected to be roughly 5 times the time constant of the optimal trajectories. The results in Table I indicate that while the two solution techniques generate slightly different trajectories in the phase space (see Fig. 5), the total cost of the trajectories is similar.
2) Sensitivity Analysis for the two state system: To study the sensitivity of the developed technique to changes in various tuning parameters, a one-at-a-time sensitivity analysis is performed.
The parameters kc1, kc2, ka1, ka2, β, and ν are selected for the sensitivity analysis. The parameters are varied in a neighborhood of the nominal values (selected through trial and error) kc1 = 0.3, kc2 = 5, ka1 = 180, ka2 = 0.0001, β = 0.03, and ν = 0.5, with β_1 = diag(50, 50, 50, 50). The costs of the trajectories, under the optimal feedback controller obtained using the developed method, are presented in Table II for five different values of each parameter. The results in Table II indicate that the developed method is robust to small changes in the learning gains.

Fig. 5. Comparison of the optimal trajectories obtained using GPOPS II and using BT MBRL with FCL and fixed optimal weights for the two-state dynamical system.

B. Four state dynamical system

The dynamical system, corresponding to a two-link planar robot manipulator, is given by (37), where s_2 = sin(x_2), c_2 = cos(x_2), and the positive constants are p_1 = 3.473, p_2 = 0.196, and p_3 = 0.242. The basis functions for value function approximation are selected as σ(s) = [s_1s_3; s_2s_4; s_3s_2; s_4s_1; s_1s_2; s_4s_3; s_1²; s_2²; s_3²; s_4²]. The initial conditions for the system and the initial guesses for the weights and parameters are selected as x(0) = [−5; −5; 5; 5], θ̂(0) = [5; 5; 5; 5], Γ(0) = diag(10, 10, 10, 10, 10, 10, 10, 10, 10, 10), and Ŵa(0) = Ŵc(0) = [60; 2; 2; 2; 2; 2; 40; 2; 2; 2]. The ideal values of the actor and the critic weights are unknown. The simulation uses 100 fixed Bellman error extrapolation points in a 4×4 square around the origin of the s-coordinate system.

1) Results for the four-state system: As seen from Fig. 6, the system state x stays within the user-specified safe set while converging to the origin. As demonstrated in Fig. 8, the parameter estimates converge to the true values. A comparison with an offline numerical optimal control solution, similar to the procedure used for the two-state system, yields the results in Table III, which indicate that the two solution techniques generate slightly different trajectories in the state space (see Fig. 9) and that the total costs of the trajectories are different. We hypothesize that the difference in costs is due to the basis for value function approximation being unknown. In summary, the newly developed method can achieve online optimal control through a BT MBRL approach while estimating the values of the unknown parameters in the system dynamics and ensuring safety guarantees in the original coordinates during the learning phase. The following section details a one-at-a-time sensitivity analysis to study the sensitivity of the developed technique to changes in various tuning parameters.
2) Sensitivity Analysis for the four state system: The parameters kc1, kc2, ka1, ka2, β, and ν are selected for the sensitivity analysis. The costs of the trajectories, under the optimal feedback controller obtained using the developed method, are presented in Table IV for five different values of each parameter. The value of β_1 is set to diag(100, 100, 100, 100). The results in Table IV indicate that the developed method is not sensitive to small changes in the learning gains.
The results in Tables II and IV indicate that the developed method is not sensitive to small changes in the learning gains. While reduced sensitivity to gains simplifies gain selection, as indicated by the local stability result, the developed method is sensitive to the selection of basis functions and initial guesses of the unknown weights. Due to the high dimensionality of the vector of unknown weights, a complete characterization of the region of attraction is computationally difficult. As such, the basis functions and the initial guesses were selected via trial and error.

VIII. CONCLUSION
This paper develops a novel online safe control synthesis technique that relies on a nonlinear coordinate transformation to convert a constrained optimal control problem into an unconstrained optimal control problem. A model of the system in the transformed coordinates is simultaneously learned and utilized to simulate experience. Simulated experience is used to realize convergent RL under relaxed excitation requirements. Safety of the closed-loop system, expressed in terms of box constraints, regulation of the system states to a neighborhood of the origin, and convergence of the estimated policy to a neighborhood of the optimal policy in the transformed coordinates are established using a Lyapunov-based stability analysis.
While the main result of the paper states that the state is uniformly ultimately bounded, the simulation results hint towards asymptotic convergence of the part of the state that corresponds to the system trajectories, x(·). Proving such a result is a part of future research.
Limitations and possible extensions of the ideas presented in this paper revolve around two key issues: (a) safety, and (b) online learning and optimization. The barrier function used in the BT to address safety can only ensure a fixed box constraint. A more generic and adaptive barrier function, constructed, perhaps, using sensor data, is a subject for future research.
For optimal learning, parametric approximation techniques are used in this paper to approximate the value function. Parametric approximation of the value function requires the selection of appropriate basis functions, which may be hard to find for the barrier-transformed dynamics.
Developing techniques to systematically determine a set of basis functions for real-world systems is a subject for research.
The barrier transformation method used to ensure safety relies on knowledge of the dynamics of the system. While this paper addresses parametric uncertainties, the BT method could potentially result in a safety violation due to unmodeled dynamics. In particular, the safety guarantees developed in this paper rely on the relationship (Lemma 1) between the trajectories of the original dynamics and the transformed system, which holds in the presence of parametric uncertainty, but fails if a part of the dynamics is not included in the original model. Further research is needed to establish safety guarantees that are robust to unmodeled dynamics (for a differential games approach to robust safety, see [26]).

Fig. 1. The developed BT MBRL framework. The control system consists of a model-based barrier-actor-critic-estimator architecture. In addition to the transformed state-action measurements, the critic also utilizes states, actions, and the corresponding state derivatives, evaluated at arbitrarily selected points in the state space, to learn the value function. In the figure, BT: Barrier Transformation; TS: Transformed State; BE: Bellman Error.


Fig. 2. Phase portrait for the two-state dynamical system using MBRL with FCL in the original coordinates. The boxed area represents the user-selected safe set.

Fig. 3. Estimates of the actor and critic weights for the two-state dynamical system under the nominal gains.

Fig. 4. Estimates of the unknown parameters in the system under the nominal gains for the two-state dynamical system. The dashed lines in the figure indicate the ideal values of the parameters.

Fig. 6. State trajectories for the four-state dynamical system using MBRL with FCL in the original coordinates. The dashed lines represent the user-selected safe set.

Fig. 8. Estimates of the unknown parameters in the system under the nominal gains for the four-state dynamical system. The dashed lines in the figure indicate the ideal values of the parameters.

Fig. 9. Comparison of the optimal angular position (top) and angular velocity (bottom) trajectories obtained using GPOPS II and BT MBRL with fixed optimal weights for the four-state dynamical system.

TABLE I. COMPARISON OF COSTS FOR A SINGLE BARRIER TRANSFORMED TRAJECTORY OF (35), OBTAINED USING THE OPTIMAL FEEDBACK CONTROLLER GENERATED VIA THE DEVELOPED METHOD, AND OBTAINED USING A PSEUDOSPECTRAL METHOD.