State-Dependent Riccati Equation

The 19 joints of the hand are driven by 38 tendons with a nonlinear stiffness characteristic: the motor displacement required to adjust a tendon force depends on the current force. As demonstrated previously, the gain scheduling method is effective for assigning the poles of the pointwise linear system, but it does not account for the magnitude of the input command. The optimal control method, which leads to the ARE in the case of a linear system, accounts for both the cost of the state error and the input amplitude. However, its genuine form is limited to linear problems: the optimization problem, which is solved relatively easily for a linear system, is no longer trivial to solve in the presence of nonlinearities. The exact solution of an optimal control problem is obtained by solving the Hamilton-Jacobi-Bellman (HJB) equation associated with the cost functional

V(u) = \int_0^T C(x(t), u(t)) \, dt + D(x(T)) ,   (14.1)

where x \in \mathbb{R}^n, n \in \mathbb{N}, is the state vector, C \in \mathbb{R} is the running state cost and D \in \mathbb{R} is the terminal state cost. V(u) \in \mathbb{R} is the functional to be minimized by the choice of the input function u(t) \in \mathbb{R}, t \in [0, T], T > 0. Direct methods for solving optimal control problems were reported as early as 1959 [87]. They have been applied to offline optimization problems such as space shuttle trajectories, ship maneuvers or, more recently, throwing problems [88]. A result of optimal control due to Pontryagin [86] is that, in many cases, the solution is a bang-bang control (saturated maximum/minimum control input). However, only a limited number of forms can be solved analytically; for the other cases one must resort to numerical methods, although the analytical forms can give further insight into the most efficient numerical techniques to employ. Unfortunately, these methods require forward and backward integrations and are, in general, extremely expensive to compute. In particular, they are generally unsuitable for real-time or online application.
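The cost functional (14.1) can be evaluated numerically for any candidate input function. The following is a minimal sketch for an assumed scalar plant \dot{x} = -x + u with quadratic running cost C = x^2 + u^2 and terminal cost D = x(T)^2; the plant and costs are illustrative choices, not taken from the text:

```python
# Numerical evaluation of V(u) = \int_0^T C(x, u) dt + D(x(T)) for an
# assumed scalar plant x_dot = -x + u (illustrative example only).
import numpy as np

def cost(u_of_t, x0=1.0, T=5.0, dt=1e-3):
    """Explicit-Euler integration of the running cost C = x^2 + u^2,
    followed by the terminal cost D = x(T)^2."""
    x, V = x0, 0.0
    for k in range(int(T / dt)):
        u = u_of_t(k * dt)
        V += (x ** 2 + u ** 2) * dt   # accumulate running cost C(x, u)
        x += dt * (-x + u)            # propagate the plant dynamics
    return V + x ** 2                 # add terminal cost D(x(T))

# Compare the zero input against a decaying braking input: any control
# effort spent must buy a faster state decay to pay off in V.
print(cost(lambda t: 0.0))
print(cost(lambda t: -0.5 * np.exp(-2 * t)))
```

Minimizing V over all admissible functions u(t), rather than evaluating it for a few candidates, is precisely the problem that the HJB equation characterizes.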
An intermediate way between linear optimal control, with the ARE, and optimal nonlinear control, with the HJB equation, was proposed around 1962 by Pearson under the name of State-Dependent Riccati Equation (SDRE) [89]. It was extended by Wernli [90] and popularized by Cloutier [91–95]. The method is an intuitive extension of the ARE, applied to a pointwise linearized system. The existence of an SDRE stabilizing feedback is discussed in [96]. The method offers only limited theoretical results for global stability but has proved effective in practice. More details can be found in the extensive survey [97]. In the first section, the method is presented with a generic example based on [91]. The second section applies the method to two problems: the control of the tendon force, similar to the gain scheduling example, and the control of a single joint with one motor and a nonlinear spring. The third section evaluates the controller with the help of simulations. Finally, section four discusses the results.

State-Dependent Riccati Equation

Consider a nonlinear multi-variable system

\dot{x} = f(x) + u ,   (14.2)

where n \in \mathbb{N} is the state dimension and x \in \mathbb{R}^n is the state vector. The nonlinear function of the state variables, assumed to be sufficiently smooth, is denoted f(x) \in \mathbb{R}^n, and u is the control input. It is possible to write (14.2) in a pseudo-linear form, also referred to as the pointwise linear form,

\dot{x} = A_k x + B_k u ,   (14.3)

where A_k \in \mathbb{R}^{n \times n} and B_k \in \mathbb{R}^{n \times m} denote one pointwise linearized form and the associated input matrix for a given factorization \Xi_k, k \in \mathbb{N}. It should be noted that, except in the case n = 1, there exists an infinite number of factorizations \Xi_k with associated matrices (A_k, B_k). Once a factorization has been selected, the ARE can be used to select the optimal gains. According to Chapter 13, the state feedback gain is selected as

K = R^{-1} B_k^T S ,   (14.4)

where R \in \mathbb{R}^{m \times m} is a positive definite cost matrix for the input, B_k is the input matrix and S is one solution of the Riccati equation defined by

S A_k + A_k^T S - S B_k R^{-1} B_k^T S + Q = 0 ,   (14.5)

where Q \in \mathbb{R}^{n \times n} (resp. R \in \mathbb{R}^{m \times m}) is the state error cost (resp. the control input cost), both positive definite. The closed-loop system is

\dot{x} = (A_k - B_k R^{-1} B_k^T S) x .   (14.6)

Under the assumption that all quantities are continuous and continuously differentiable (C^1), and by construction of S, the closed-loop matrix of (14.6) is Hurwitz, and the system is therefore locally asymptotically stable.
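The pointwise procedure above, factorize f(x) as A(x) x at the current state, solve the ARE, and apply the resulting gain, can be sketched in a few lines. The following is a minimal sketch for an assumed scalar example f(x) = -x^3 with the factorization A(x) = -x^2 and B = 1; the plant, weights and step sizes are illustrative choices, not from the text:

```python
# SDRE feedback sketch: at each state, solve the ARE for the pointwise
# linear pair (A(x), B) and apply the state feedback u = -K(x) x.
# Assumed example: f(x) = -x^3, factored as A(x) = -x^2, with B = 1.
import numpy as np
from scipy.linalg import solve_continuous_are

def sdre_gain(x, Q, R):
    """Recompute the LQR gain for the pointwise linear pair (A(x), B)."""
    A = np.array([[-x[0] ** 2]])          # one factorization: f(x) = A(x) x
    B = np.array([[1.0]])
    # S solves  S A + A^T S - S B R^-1 B^T S + Q = 0  (eq. 14.5)
    S = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ S)    # K = R^-1 B^T S  (eq. 14.4)

# Closed-loop simulation with explicit Euler: x_dot = f(x) + u, u = -K(x) x
Q = np.array([[1.0]])
R = np.array([[0.1]])
x = np.array([2.0])
dt = 1e-3
for _ in range(10000):
    K = sdre_gain(x, Q, R)
    u = -(K @ x)
    x = x + dt * (-x ** 3 + u)
print(x)  # the state is driven toward the origin
```

Because the gain is recomputed from the current state, the feedback automatically stiffens or relaxes as the pointwise dynamics change, which is exactly the behavior the gain scheduling approach had to construct by hand.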

Applications

In this section the state-dependent Riccati equation (SDRE) is derived for two particular cases. First, the regulation of the tendon forces when the joint is considered fixed is studied; this is the problem that motivated the gain scheduling method of Chapter 10. Second, a single nonlinear flexible joint model driven by a single motor is proposed. The second problem is a simplification of the real case that allows one to understand the effect of the control.

[Figure 14.1: Model for the tendon force controller. The link is assumed to be fixed, thus the tendon force only depends on the motor position.]

Tendon force controller

The model comprises a motor, a spring element and a tendon (cf. Fig. 14.1). The tendon is attached to a fixed reference (grounded). The control objective is to regulate the tendon force f_t \in \mathbb{R}, measured by the spring lever, by adjusting the position \theta \in \mathbb{R} of the motor with the torque input u \in \mathbb{R}. The dynamic equation of the system is

B_\theta \ddot{\theta} = -f_t(\theta) + u ,   (14.7)

where B_\theta \in \mathbb{R} is the motor inertia (w.r.t. the motor acceleration), and \theta \in \mathbb{R} and u \in \mathbb{R} are, classically, the motor position and the torque input. The tendon force as a function of the motor position is denoted f_t(\theta) = \varphi(\theta). It is important to note that, for the following analysis, the function \varphi(\theta) is required to be at least C^2 w.r.t. \theta. To apply the SDRE method it is first necessary to establish the pointwise linear form. One possible solution is given by equation (14.8). The linearization w.r.t. \theta is

B_\theta \ddot{\theta} = -f_t(\theta_0) - \left. \frac{\partial f_t(\theta)}{\partial \theta} \right|_{\theta_0} (\theta - \theta_0) + u ,
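For the tendon model (14.7), the pointwise linear form and the resulting SDRE gains can be computed directly. The following is a minimal sketch that assumes a hypothetical stiffening characteristic \varphi(\theta) = k_1 \theta + k_3 \theta^3 (the text does not give the real characteristic in closed form) and assumed values for the inertia and cost weights:

```python
# SDRE gains for the tendon force model  B_theta * theta_dd = -phi(theta) + u,
# with a hypothetical stiffening tendon  phi(theta) = k1*theta + k3*theta^3.
import numpy as np
from scipy.linalg import solve_continuous_are

B_theta = 1e-4        # motor inertia (assumed value)
k1, k3 = 5.0, 400.0   # stiffness coefficients (assumed values)

def pointwise_matrices(theta):
    """One factorization of the dynamics with state (theta, theta_dot):
    phi(theta) = (k1 + k3*theta^2) * theta, so A depends on theta."""
    A = np.array([[0.0, 1.0],
                  [-(k1 + k3 * theta ** 2) / B_theta, 0.0]])
    B = np.array([[0.0], [1.0 / B_theta]])
    return A, B

def sdre_gain(theta, Q, R):
    A, B = pointwise_matrices(theta)
    S = solve_continuous_are(A, B, Q, R)  # S A + A^T S - S B R^-1 B^T S + Q = 0
    return np.linalg.solve(R, B.T @ S)    # K = R^-1 B^T S

Q = np.diag([1e3, 1.0])   # state error weights (assumed)
R = np.array([[1e-3]])    # input weight (assumed)
K_small = sdre_gain(0.01, Q, R)   # near-slack tendon: low stiffness
K_large = sdre_gain(0.5, Q, R)    # taut tendon: high stiffness
print(K_small, K_large)
```

The two operating points yield different gains: the state-dependent stiffness term (k1 + k3 θ²) enters A(θ), so the Riccati solution, and hence the feedback, adapts to the current tendon tension, which is what the fixed-schedule approach of Chapter 10 approximated with a lookup of precomputed gains.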
