Reduction-related functions

Functions

MinimaxAdaptiveControl.bared (Function)
bared(Ahat::AbstractMatrix{T}, Bhat::AbstractMatrix{T}, Ghat::AbstractMatrix{T}, H::AbstractMatrix{T}, model::GenericModel{T}) where T <: Real
bared(Ahat::AbstractMatrix{T}, Bhat::AbstractMatrix{T}, Ghat::AbstractMatrix{T}, H::AbstractMatrix{T}; method::Symbol = :Laub, iters::Int = 1000) where T <: Real

Solves the backward algebraic Riccati equation using the specified method.

Arguments

  • Ahat::AbstractMatrix{T}: State transition matrix.
  • Bhat::AbstractMatrix{T}: Control input matrix.
  • Ghat::AbstractMatrix{T}: Disturbance input matrix.
  • H::AbstractMatrix{T}: Cost matrix, symmetric.
  • model::GenericModel{T}: JuMP optimization model.
  • method::Symbol: Method to be used for solving the Riccati equation (default is :Laub). The available options are :Laub and :Iterate; see Details on the backward algebraic Riccati equation for more information.
  • iters::Int: Number of iterations for the iterative method (default is 1000).

Returns

  • A tuple containing:
    • P::Matrix{T}: Backward Riccati solution matrix, positive definite.
    • K::Matrix{T}: Gain matrix.
    • termination_status::Symbol: Status of the optimization.
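
A minimal call-pattern sketch, assuming Ahat, Bhat, Ghat, and H already exist (for example as outputs of reduceSys or fared); the variables are not constructed here and the chosen iteration count is illustrative:

using MinimaxAdaptiveControl

# Ahat, Bhat, Ghat, H are assumed to come from an earlier reduction step.
P, K, status = bared(Ahat, Bhat, Ghat, H)                                  # default method :Laub
P, K, status = bared(Ahat, Bhat, Ghat, H; method = :Iterate, iters = 2000) # iterative alternative
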
MinimaxAdaptiveControl.fared (Function)
fared(sys::OFLinMod{T}, γ::Real, model::GenericModel{T}; method::Symbol = :LMI2) where T <: Real
fared(sys::OFLinMod{T}, γ::Real; method::Symbol = :Iterate, iters::Int = 1000) where T <: Real
fared(A::AbstractMatrix{T}, B::AbstractMatrix{T}, C::AbstractMatrix{T}, D::AbstractMatrix{T}, G::AbstractMatrix{T}, Q::AbstractMatrix{T}, R::AbstractMatrix{T}, γ::Real, model::GenericModel{T}; method = :LMI2) where T <: Real
fared(A::AbstractMatrix{T}, B::AbstractMatrix{T}, C::AbstractMatrix{T}, D::AbstractMatrix{T}, G::AbstractMatrix{T}, Q::AbstractMatrix{T}, R::AbstractMatrix{T}, γ::Real; method = :Iterate, iters = 1000) where T <: Real

Solves the forward algebraic Riccati equation for an output-feedback linear model (or the given system matrices) using the specified method.

Arguments

  • sys::OFLinMod{T}: Output-feedback linear model.
  • A::AbstractMatrix{T}: State transition matrix.
  • B::AbstractMatrix{T}: Control input matrix.
  • C::AbstractMatrix{T}: Output matrix.
  • D::AbstractMatrix{T}: Measurement noise matrix.
  • G::AbstractMatrix{T}: Disturbance input matrix.
  • Q::AbstractMatrix{T}: State cost matrix.
  • R::AbstractMatrix{T}: Control input cost matrix.
  • γ::Real: Induced \ell_2-gain.
  • model::GenericModel{T}: Generic optimization model.
  • method::Symbol: Method to be used for solving the Riccati equation (default is :LMI2 or :Iterate, depending on the function signature). The options are :LMI1, :LMI2, :Iterate, and :Laub. See Details on the forward algebraic Riccati equation for more information.
  • iters::Int: Number of iterations for the iterative method (default is 1000).

Returns

  • A tuple containing:
    • S::Matrix{T}: Forward Riccati solution matrix, positive definite.
    • Ahat::Matrix{T}: Observer state transition matrix.
    • Ghat::Matrix{T}: Observer output injection matrix.
    • H::Matrix{T}: Cost matrix.
    • termination_status::Symbol: Status of the optimization.
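
A minimal call-pattern sketch. The matrices A, B, C, D, G, Q, R and the gain γ are assumed to be defined already, and Hypatia is only one possible choice of JuMP-compatible SDP solver (an assumption, not a package requirement):

using MinimaxAdaptiveControl
using JuMP
import Hypatia   # assumed SDP solver choice

# LMI-based solution (requires a JuMP model backed by an SDP solver).
model = Model(Hypatia.Optimizer)
S, Ahat, Ghat, H, status = fared(A, B, C, D, G, Q, R, γ, model; method = :LMI2)

# Solver-free alternative based on the fixed-point iteration described below.
S, Ahat, Ghat, H, status = fared(A, B, C, D, G, Q, R, γ; method = :Iterate, iters = 2000)
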
MinimaxAdaptiveControl.reduceSys (Function)
reduceSys(sys::Vector{SSLinMod{T}}, γ::T, models::AbstractVector{GenericModel{T}}) where T<: Real

Reduces a set of state-space linear models with uncertain (A, B) into a single aggregated model. The function computes the aggregated model matrices (Ahat, Bhat, Ghat) and the gain (Ks) and auxiliary (Hs) matrices for each of the input models.

Takes the uncertain state-feedback system

\begin{aligned} x_{t+1} & = Ax_t + Bu_t + w_t \\ (A, B) & \in \{(A_1, B_1), \ldots, (A_M, B_M)\} \end{aligned}

with the soft-constrained objective function

\min_u\max_{w, N, A, B}\sum_{t=0}^N\left( |x_t|^2_Q + |u_t|^2_R - \gamma^2 |w_t|^2 \right),

to a certain system

\begin{aligned} z_{t + 1} & = \hat Az_t + \hat Bu_t + \hat G d_t \end{aligned}

with uncertain objective

\min_u \max_{H, d, N} \sum_{t = 0}^N \sigma_H(z_t, u_t, d_t)

Arguments

  • sys::Vector{SSLinMod{T}}: Vector of state-space linear models to be reduced.
  • γ::T: Induced \ell_2-gain.
  • models::AbstractVector{GenericModel{T}}: Vector of JuMP models, one per state-space model.

Returns

  • Ahat::Matrix{T}: Aggregated state transition matrix.
  • Bhat::Matrix{T}: Aggregated control input matrix.
  • Ghat::Matrix{T}: Aggregated disturbance input matrix.
  • Ks::Vector{Matrix{T}}: Vector of \gamma-suboptimal \mathcal H_\infty gain matrices.
  • Hs::Vector{Matrix{T}}: Vector of cost weights.
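
A call-pattern sketch for this method. Here sys is assumed to be a prebuilt Vector{SSLinMod{Float64}} and γ a chosen gain; Hypatia is an assumed SDP solver choice:

using MinimaxAdaptiveControl
using JuMP
import Hypatia   # assumed SDP solver choice

# One JuMP model is needed per candidate (A_i, B_i) pair.
models = [Model(Hypatia.Optimizer) for _ in sys]
Ahat, Bhat, Ghat, Ks, Hs = reduceSys(sys, γ, models)
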
reduceSys(sys::Vector{OFLinMod{T}}, γ::T, models::AbstractVector{GenericModel{T}}, method::Symbol = :LMI2) where T <: Real

Reduces a set of output-feedback linear models into a single aggregated model using a specified method. The function computes the aggregated model matrices (Ahat, Bhat, Ghat) and the gain (Ks) and auxiliary (Hs) matrices for each of the input models.

Takes uncertain systems of the form

\begin{aligned} x_{t + 1} & = A x_t + B u_t + Gw_t,\quad t \geq 0 \\ y_t & = C x_t + D v_t \\ (A, B, C, D, G) & \in \mathcal M \end{aligned}

with soft-constrained objective function

\min_u\max_{w, v, x_0, N, M \in \mathcal M}\sum_{t=0}^N\left( |x_t|^2_Q + |u_t|^2_R - \gamma^2 |(w_t, v_t)|^2 \right) - |x_0 - \hat x_0|^2_{S_{M, 0}},

to a certain system

\begin{aligned} z_{t + 1} & = \hat Az_t + \hat Bu_t + \hat G d_t \end{aligned}

with uncertain objective

\min_u \max_{H, d, N} \sum_{t = 0}^N \sigma_H(z_t, u_t, d_t)

Arguments

  • sys::Vector{OFLinMod{T}}: Vector of output-feedback linear models to be reduced.
  • γ::T: Induced \ell_2-gain.
  • models::AbstractVector{GenericModel{T}}: Vector of JuMP models, one for each output-feedback model.
  • method::Symbol: Method to be used for the reduction (default is :LMI2). See fared.

Returns

  • A::Matrix{T}: Aggregated observer state transition matrix.
  • B::Matrix{T}: Aggregated observer control input matrix.
  • G::Matrix{T}: Aggregated observer measurement input matrix.
  • Ks::Vector{Matrix{T}}: Vector of \gamma-suboptimal \mathcal H_\infty observer gain matrices.
  • Hs::Vector{Matrix{T}}: Vector of cost weights.
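
The output-feedback variant follows the same pattern; sys is assumed here to be a prebuilt Vector{OFLinMod{Float64}}, and Hypatia is again an assumed solver choice:

using MinimaxAdaptiveControl
using JuMP
import Hypatia   # assumed SDP solver choice

models = [Model(Hypatia.Optimizer) for _ in sys]
Ahat, Bhat, Ghat, Ks, Hs = reduceSys(sys, γ, models, :LMI2)   # :LMI2 is the default method
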

Details on the forward algebraic Riccati equation

Given matrices A \in \mathbb{R}^{n_x \times n_x}, C \in \mathbb{R}^{n_y \times n_x}, D \in \mathbb{R}^{n_y \times n_v}, G \in \mathbb{R}^{n_x \times n_w}, Q \in \mathbb{R}^{n_x \times n_x}, R \in \mathbb{R}^{n_u \times n_u}, and \gamma > 0, the discrete-time forward algebraic Riccati equation (FARED) is defined as

\begin{aligned} S & = (AX^{-1}A^\top + \gamma^{-2}GG^\top)^{-1} \\ X & = S + \gamma^2 C^\top (DD^\top)^{-1}C - Q \\ L & = \gamma^2 AX^{-1}C^\top(DD^\top)^{-1}. \end{aligned}

The FARED may have multiple solutions. The function fared computes the solution associated with the maximal S (over the positive semidefinite cone). The function also returns a matrix H, which is given by

\begin{bmatrix} SX^{-1}S - S & 0 & \gamma^2 SX^{-1}C^\top(DD^\top)^{-1} \\ 0 & R & 0 \\ \gamma^2 (DD^\top)^{-1}CX^{-1}S & 0 & -\left((DD^\top)^{-1} + C(S - Q)^{-1}C^\top\right)^{-1} \end{bmatrix}.

The observer matrices are constructed as \hat A = A X^{-1} S and \hat G = \gamma^2 A X^{-1} C^\top (DD^\top)^{-1}.

fared :LMI1

Solves (if possible) the SDP

\begin{aligned} \text{maximize} & \quad \text{trace}(S) \\ \text{subject to} & \quad \begin{bmatrix} S & SA & SG \\ A^\top S & X & 0_{n_x \times n_w} \\ G^\top S & 0_{n_w \times n_x} & \gamma^2 I_{n_w} \end{bmatrix} \succeq 0 \\ & \quad X = S + \gamma^2 C^\top (DD^\top)^{-1}C - Q, \end{aligned}

where S is the decision variable. The function returns the optimal S and the corresponding X, L, and H.
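
For illustration, a hedged JuMP sketch of this program (not the package's internal implementation; the helper name fared_lmi1_sketch and the Hypatia solver are assumptions, and the equality for X is substituted directly into the LMI):

using JuMP, LinearAlgebra
import Hypatia   # assumed SDP solver choice

# Maximize trace(S) subject to the block LMI above, returning only S for brevity.
function fared_lmi1_sketch(A, C, D, G, Q, γ)
    nx, nw = size(A, 1), size(G, 2)
    model = Model(Hypatia.Optimizer)
    @variable(model, S[1:nx, 1:nx], PSD)
    X = S + γ^2 * C' * inv(D * D') * C - Q          # affine in S
    M = [S            S * A          S * G
         A' * S       X              zeros(nx, nw)
         G' * S       zeros(nw, nx)  γ^2 * Matrix(I, nw, nw)]
    @constraint(model, Symmetric(M) in PSDCone())
    @objective(model, Max, tr(S))
    optimize!(model)
    return value.(S)
end
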

fared :LMI2

Directly constructs an SDP whose solution gives the optimal (S, \hat A, \hat G, H). Let

Y = \begin{bmatrix} I_{n_x} & 0 \\ 0 & C^\top \\ 0 & 0 \\ 0 & D^\top \end{bmatrix}

The SDP is given by

\begin{aligned} \underset{S, \hat A, \hat G, H}{\text{minimize}} & \quad \text{trace}(H) \\ \text{subject to} & \quad \begin{bmatrix} S & -S & 0 & 0 & -\hat A^\top \\ -S & S - Q & 0 & 0 & A^\top S - C^\top \hat G \\ 0 & 0 & \gamma^2 I_{n_w} & 0 & G^\top S \\ 0 & 0 & 0 & \gamma^2 I_{n_v} & -\hat G^\top \\ -\hat A & A^\top S & G & -G & -\hat G D S \end{bmatrix} - \begin{bmatrix} Y H Y^\top & 0 \\ 0 & 0 \end{bmatrix} \succeq 0 \\ & \quad S \succeq 0, \\ & \quad H - H^\top = 0. \end{aligned}

fared :Iterate

This method solves the FARED by fixed-point iteration, starting from S_0 = \gamma^2 I_{n_x}. The iteration is given by

\begin{aligned} S_{t + 1} & = (AX_t^{-1}A^\top + \gamma^{-2}GG^\top)^{-1} \\ X_t & = S_t + \gamma^2 C^\top (DD^\top)^{-1}C - Q \end{aligned}
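
A standalone sketch of this fixed-point iteration, including the observer matrices from the section above (the helper name is hypothetical, convergence is not checked, and this is not the package's internal code):

using LinearAlgebra

# Iterates S_{t+1} = (A X_t^{-1} A' + γ^{-2} G G')^{-1},
# with X_t = S_t + γ^2 C' (D D')^{-1} C - Q, starting from S_0 = γ^2 I.
function fared_iterate_sketch(A, C, D, G, Q, γ; iters = 1000)
    nx = size(A, 1)
    S = γ^2 * Matrix{Float64}(I, nx, nx)
    X = similar(S)
    for _ in 1:iters
        X = S + γ^2 * C' * ((D * D') \ C) - Q
        S = inv(A * (X \ A') + G * G' / γ^2)
    end
    Ahat = A * (X \ S)                           # Â = A X⁻¹ S
    Ghat = γ^2 * A * (X \ (C' * inv(D * D')))    # Ĝ = γ² A X⁻¹ Cᵀ (D Dᵀ)⁻¹
    return S, Ahat, Ghat
end
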

fared :Laub

Uses the MatrixEquations.jl function ared to solve the FARED. ared implements W. F. Arnold and A. J. Laub, "Generalized eigenproblem algorithms and software for algebraic Riccati equations," in Proceedings of the IEEE, vol. 72, no. 12, pp. 1746-1754, Dec. 1984, doi: 10.1109/PROC.1984.13083.

Details on the backward algebraic Riccati equation

Fix \gamma > 0 and consider the transformation

\begin{aligned} G(\gamma) & = \gamma\hat G\sqrt{-H^{-dd}}, \qquad R = H^{uu} - H^{ud}H^{-dd}H^{du}, \\ Q & = H^{zz} - H^{zd}H^{-dd}H^{dz} \\ & \qquad - (H^{zu} - H^{zd}H^{-dd} H^{du})R^{-1}(*), \\ A & = \hat A - \hat BR^{-1}(H^{uz} - H^{ud}H^{-dd}H^{dz}) \\ & \quad - \hat GH^{-dd}\left(H^{dz} - H^{du}R^{-1}\left(H^{uz} - H^{ud}H^{-dd}H^{dz}\right)\right), \\ B & = \hat B - \hat G H^{-dd}H^{du}. \end{aligned}

As we are interested in the upper value, d_t is allowed to depend causally on z and u, but u is required to depend strictly causally on d. The following parameterization of d_t and u_t respects the causality structure of the problem,

\begin{aligned} d_t & = -H^{-dd}(H^{dz}z_t + H^{du}u_t) + \gamma\sqrt{-H^{-dd}}\delta_t, \quad t \geq 0, \\ u_t & = -R^{-1}(H^{uz} - H^{ud}H^{-dd}H^{dz})z_t + v_t, \quad t \geq 0, \end{aligned}

and leads to the following equivalent dynamic game. Compute

\inf_{\nu}\sup_{\delta, N} \sum_{t=0}^{N-1} \left( |z_t|^2_{Q} + |v_t|^2_{R} - \gamma^2|\delta_t|^2\right),

subject to the dynamics

\begin{aligned} z_{t + 1} & = Az_t + Bv_t + G(\gamma)\delta_t, \quad t \geq 0, \\ v_t & = \nu_t(z_0, \ldots, z_t), \quad t \geq 0. \end{aligned}

This reformulation puts the problem in standard \gamma-suboptimal \mathcal H_\infty form. If the value is bounded, the value function is well known to be a quadratic function of the initial state, and the optimal controller is of the form v_t = -K_\dagger z_t.

The value function V_\star(z_0) has the form V_\star(z_0) = |z_0|^2_{P_\star}, where P_\star is the minimal fixed point of the generalized algebraic Riccati equation. P_\star and K_\dagger can be computed, for example, through value iteration, Arnold and Laub's Schur method, or convex optimization.
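
As an illustration of the value-iteration route, here is a generic sketch for the game above, written in terms of the transformed matrices A, B, G(\gamma), Q, R. It assumes the standard \gamma-suboptimality (saddle-point) conditions hold at every step; the helper name is hypothetical and this is not the package's bared implementation:

using LinearAlgebra

# Value iteration for the soft-constrained game:
# V_{k+1}(z) = min_v max_δ ( |z|²_Q + |v|²_R - γ²|δ|² + V_k(A z + B v + Gγ δ) ).
function game_value_iteration(A, B, Gγ, Q, R, γ; iters = 1000)
    nx, nu, nw = size(A, 1), size(B, 2), size(Gγ, 2)
    P = zeros(nx, nx)
    K = zeros(nu, nx)
    for _ in 1:iters
        Λ = [R + B' * P * B     B' * P * Gγ
             Gγ' * P * B        Gγ' * P * Gγ - γ^2 * Matrix(I, nw, nw)]
        Θ = [B' * P * A; Gγ' * P * A]
        F = Λ \ Θ                          # stacked feedback: [v; δ] = -F z
        P = Symmetric(Q + A' * P * A - Θ' * F)
        K = F[1:nu, :]                     # v_t = -K z_t
    end
    return P, K
end
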