

Part II Subsystem decoupling

3.3 LTI subsystem decoupling algorithm

3.3.1 Input blend calculation

The aim of this subsection is to find an input blend vector k_u which maximizes the excitation of the selected mode while minimizing the impact on the one(s) to be decoupled. In this step only the state dynamics are considered; the measurement equations are removed from the model equations.

The concept is shown in Figure 3.2. Here ū is the scalar input from the SISO controller λ_c(s) (see Figure 4.1), and k_u is an n_u-dimensional column vector distributing the blended input to the real input channels. Using the introduced terminology, the decoupling is formulated as follows: the sensitivity (H− index) from ū to the performance output y_c is to be maximized, while the worst-case gain (H∞ norm) from ū to y_d is minimized.

Before going into the details, it is emphasized that the input blend calculation uses the dual representation (see Section 2.1). This is a necessary step to keep the optimization problem linear in the variables, as explained later. At the same time, note that according to Lemma 6 the H− index can only be calculated for tall or square systems. Therefore, if the inputs are blended into a scalar signal ū, the dual representation would be a wide system. The problem is then converted to a square one by defining the performance output as the sum of the states, as shown in Figure 3.2. Accordingly, the resulting new C_c and C_d matrices are denoted by I_c ∈ R^{1×n_{x_c}} and I_d ∈ R^{1×n_{x_d}}, respectively, describing row vectors with compatible dimensions and all elements equal to 1.
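The blending and the squaring trick described above can be sketched numerically; all dimensions and numerical values below are made up for illustration:

```python
import numpy as np

# Hypothetical dimensions (not taken from the thesis).
nu, nxc = 3, 4

# Input blend: a unit-norm column vector distributing the scalar
# blended input u_bar to the nu physical input channels.
ku = np.array([0.8, 0.6, 0.0])
ku = ku / np.linalg.norm(ku)

u_bar = 2.0            # scalar output of the SISO controller
u = ku * u_bar         # blended physical input vector, one value per channel

# Squaring the dual problem: the performance output is the sum of the
# states, i.e. Cc is replaced by the all-ones row vector Ic in R^{1 x nxc}.
Ic = np.ones((1, nxc))
x = np.array([1.0, -2.0, 0.5, 0.5])   # example state vector
yc = Ic @ x                            # scalar performance output
```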

Furthermore, if one writes the LMIs (2.18) and (2.27) for the dual system, then expresses the formulas in terms of the original representation, and substitutes I_c and I_d for C_c and C_d, respectively, one gets the LMI conditions

\[
\begin{bmatrix} A_c^T & I_c^T \\ I & 0 \end{bmatrix}^T
\Xi
\begin{bmatrix} A_c^T & I_c^T \\ I & 0 \end{bmatrix}
+
\begin{bmatrix} B_c^T & 0 \\ 0 & I \end{bmatrix}^T
\Pi
\begin{bmatrix} B_c^T & 0 \\ 0 & I \end{bmatrix}
\prec 0, \tag{3.5}
\]

and

\[
\begin{bmatrix} X_d A_d^T + A_d X_d + B_d K_u B_d^T & X_d I_d^T \\ I_d X_d & -\gamma^2 I \end{bmatrix} \preceq 0, \tag{3.6}
\]

where
\[
\Pi = \begin{bmatrix} -K_u & 0 \\ 0 & \beta^2 I \end{bmatrix}.
\]
Here the introduced new matrix variable K_u = k_u k_u^T ∈ S^{n_u} is the dyadic product of the input blend vector with itself.
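The structure of the new variable K_u can be checked directly; the example blend vector and the use of numpy are for illustration only:

```python
import numpy as np

# The blend matrix Ku = ku ku^T is the dyadic (outer) product of the
# blend vector with itself: symmetric, positive semidefinite, rank one.
ku = np.array([0.6, 0.8, 0.0])          # example unit-norm blend vector
Ku = np.outer(ku, ku)

assert np.allclose(Ku, Ku.T)             # symmetric
assert np.linalg.matrix_rank(Ku) == 1    # exactly rank one
assert np.linalg.eigvalsh(Ku).min() >= -1e-12   # positive semidefinite
# For a unit-norm ku, 0 <= Ku <= I holds and trace(Ku) = ||ku||^2 = 1.
assert np.isclose(np.trace(Ku), 1.0)
```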

It should be clear that the terms involving K_u appear in the LMIs only because of the dual representation; otherwise we would be facing a bilinear (and quadratic) matrix problem, i.e., the dual form ensures linearity. Nevertheless, the newly introduced variable K_u is a rank-one matrix, which has to be taken into account in the solution. The input blend calculation is summarized in Proposition 1.

Proposition 1 (The input blend design). The optimal input blend k_u for the system given in the form of (2.1) with state-space matrices (3.2) can be calculated as the left singular vector corresponding to the largest singular value of the blend matrix K_u, where K_u satisfies the following optimization problem:

\[
\begin{aligned}
\underset{X_d,\, K_u,\, X_c,\, Y,\, \beta^2,\, \gamma^2}{\text{minimize}}\quad & \gamma^2 - \beta^2\\
\text{subject to}\quad & \text{(3.5), (3.6)},\quad X_d \in \mathbb{S}^{n_{x_d}},\ X_d \succeq 0,\quad X_c \in \mathbb{S}^{n_{x_c}},\quad Y \in \mathbb{H}^{n_{x_c}},\ Y \succ 0,\\
& K_u \in \mathbb{S}^{n_u},\quad 0 \preceq K_u \preceq I,\quad \operatorname{rank}(K_u) = 1,
\end{aligned} \tag{3.7}
\]

with I being the identity matrix of appropriate dimensions.

Proposition 1 is a multi-objective optimization problem, which is frequent in mixed H−/H∞ fault detection observer design (see e.g., [132]). More precisely, the two competing objectives (i.e., the maximization of β² and the minimization of γ²) are merged into a single value by using scalarization. The proposed objective function can be considered a special case of weighted scalarization, with weights equal to one. This expresses that no a priori knowledge is available before the optimization; however, it can be changed at a later stage of the decoupling design. Furthermore, the proposed multi-objective optimization has a simple but illustrative game-theoretic interpretation, where the two players wish to reach their individual goals of maximizing β or minimizing γ together [84]. Since the two optimizations are connected through the shared variable K_u, the game is cooperative. It is known that the solution is then Pareto-optimal, i.e., any decrease in one objective simultaneously increases the other one. In order to investigate this trade-off more systematically, the ϵ-constrained reformulation of (3.7) is invoked as follows:

\[
\begin{aligned}
\underset{X_d,\, K_u,\, X_c,\, Y,\, \beta^2,\, \gamma^2}{\text{minimize}}\quad & -\beta^2\\
\text{subject to}\quad & \text{(3.5), (3.6)},\quad \gamma^2 < \epsilon,\\
& X_d \in \mathbb{S}^{n_{x_d}},\ X_d \succeq 0,\\
& X_c \in \mathbb{S}^{n_{x_c}},\quad Y \in \mathbb{H}^{n_{x_c}},\ Y \succ 0,\\
& K_u \in \mathbb{S}^{n_u},\quad 0 \preceq K_u \preceq I,\quad \operatorname{rank}(K_u) = 1.
\end{aligned} \tag{3.8}
\]

In (3.8) the objective function is selected as one of the competing goals¹, while the other objective is constrained by a suitably chosen constant ϵ. By systematically varying ϵ, the entire set of Pareto-optimal solutions can be generated [104]. This is illustrated in Figure 3.3, where the two objectives to be minimized are given on the x and y axes of the plot. The green star denotes the utopia point, where both objectives are minimal, but which cannot be reached due to the trade-off between these goals. The Pareto-optimal solutions form the so-called Pareto front, expressing that any decrease in one objective increases the other one. These are also called Pareto-efficient solutions, to distinguish them from other feasible points, at which both objectives could be further decreased. The set of infeasible points cannot be reached due to the constraints.

Finally, note that the ϵ-constrained formulation of the decoupling can also be used for generating problem-specific solutions with a prescribed level of suppression (or excitation) by setting the ϵ value accordingly. Such solutions can also be achieved by changing the weights of the objectives in (3.7). However, the user should keep in mind the trade-off represented by the Pareto front: minimizing the maximum sensitivity of the subsystem to be decoupled will also decrease the sensitivity of the targeted subsystem, and vice versa.

¹Alternatively, the value of β can also be constrained while minimizing only γ.

[Figure 3.3: Pareto optimality. The axes show the two objectives −β² and γ²; the Pareto front separates the feasible and infeasible regions, and the utopia point marks the unattainable minimum of both objectives.]
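The ϵ-sweep behind the Pareto front can be illustrated on a deliberately simplified static stand-in for (3.8): the LMIs are replaced by two made-up input gain directions g_c and g_d, and the unit-norm blend is searched by brute force over its angle:

```python
import numpy as np

# Toy version of the epsilon-constrained problem: maximize the gain
# beta towards the controlled direction gc while the gain gamma towards
# the direction gd to be decoupled stays below eps. Both gain vectors
# are made up for illustration.
gc = np.array([1.0, 0.5])   # hypothetical "controlled" input direction
gd = np.array([0.3, 1.0])   # hypothetical "decoupled" input direction

thetas = np.linspace(0.0, np.pi, 2001)
front = []
for eps in np.linspace(0.05, 1.0, 20):
    best_beta = -np.inf
    for th in thetas:                       # unit-norm candidate blends
        ku = np.array([np.cos(th), np.sin(th)])
        beta, gamma = abs(gc @ ku), abs(gd @ ku)
        if gamma <= eps and beta > best_beta:
            best_beta = beta
    front.append((eps, best_beta))

# The achievable beta is non-decreasing in eps: relaxing the suppression
# requirement can only improve the achievable excitation.
betas = [b for _, b in front]
assert all(b2 >= b1 - 1e-9 for b1, b2 in zip(betas, betas[1:]))
```

Sweeping ϵ traces out exactly the kind of trade-off curve sketched in Figure 3.3.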

A key point of the optimization problem given in Proposition 1 is that the matrix variable K_u has to be a rank-one solution. This leads to a non-convex rank constraint, and multiple approaches are at hand to satisfy it. An effective heuristic was proposed by [40]: minimizing the rank of a symmetric positive semidefinite matrix is relaxed to minimizing its nuclear norm, which for such matrices equals the trace. In [TB6] this heuristic has been applied by introducing an additional trace(K_u) term in the objective function of (3.7).
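The fact behind the heuristic is easy to verify numerically: for a positive semidefinite matrix the nuclear norm coincides with the trace (numpy sketch with a random example matrix):

```python
import numpy as np

# For a symmetric positive semidefinite matrix the singular values equal
# the eigenvalues, so the nuclear norm (sum of singular values) is the
# trace. Penalizing trace(Ku) therefore pushes Ku towards low rank.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
Ku = M @ M.T                               # random PSD matrix
nuclear = np.linalg.svd(Ku, compute_uv=False).sum()
assert np.isclose(np.trace(Ku), nuclear)
```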

However, it is advised to use a numerically less sensitive method to find a rank-one solution for K_u. This is achievable with an alternating projection scheme, taken and tailored from [47], where the authors used an alternating projection technique to satisfy a coupling rank constraint in a fixed-order H∞ control design problem. This has been used in [TB14] and [TB8] to find optimal blend vectors. The method alternates between two solution sets. One is the solution set of (3.7) without the rank constraint, while the other is the set of K_u matrices of one rank lower. The approach uses a sequence of orthogonal projections to find a K_u solution in the intersection of the two sets, if it exists. In the next iteration step (projection sequence), starting from this K_u the rank is further reduced by one, and this is repeated until the desired rank-one solution is achieved. For more details see Appendix D.
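A minimal sketch of the alternating projection idea, under the simplifying assumption that the LMI feasible set is replaced by the box constraint 0 ⪯ K_u ⪯ I alone (Algorithm 1 instead re-solves the LMIs in every projection sweep):

```python
import numpy as np

def proj_rank(K, r):
    """Project a symmetric matrix onto the rank-<=r matrices
    (Eckart-Young: keep the r largest-magnitude eigenpairs)."""
    w, V = np.linalg.eigh(K)
    idx = np.argsort(np.abs(w))[::-1][:r]
    return (V[:, idx] * w[idx]) @ V[:, idx].T

def proj_box(K):
    """Project onto {K symmetric : 0 <= K <= I} by clipping eigenvalues."""
    w, V = np.linalg.eigh(K)
    return (V * np.clip(w, 0.0, 1.0)) @ V.T

# Toy stand-in for the constraint set of (3.7): only the box constraint
# is enforced here, whereas the real algorithm projects back onto the
# LMI-feasible set via an SDP in each step.
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
Ku = proj_box((M + M.T) / 2)
for _ in range(50):                  # alternate between the two sets
    Ku = proj_box(proj_rank(Ku, 1))

assert np.linalg.matrix_rank(Ku, tol=1e-8) <= 1
```

The same two projection operators (rank truncation via the eigendecomposition, and projection back onto the feasible set) are the building blocks of Algorithm 1 below.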

Now we are in the position to present the proposed solution to Proposition 1. The process is summarized in Algorithm 1. Each of its steps is discussed next.

1 The solution process starts with defining the subsystems to be controlled and decoupled. The systems are transformed to the form shown in Figure 3.2. This is the starting point of Algorithm 1, in line 1.

2 Next, the optimization problem in Proposition 1 is solved without the rank constraint. The blend matrix is constrained to be symmetric and 0 ⪯ K_u ⪯ I. This step provides K_u^0, which is the initial value for the following alternating projection sequences. The corresponding step is given in line 2 of Algorithm 1 and provides the achievable values of β and γ.

3 The alternating projection scheme starts at line 4, where the computed β and γ are kept constant. In each outer loop the dimension of the rank constraint set is reduced by one. The inner loop contains the alternating projection to obtain the corresponding reduced-rank solution. Once the solution is obtained by fulfilling the stopping criterion (line 10), the outer loop reduces the rank further, until rank one is achieved.

4 The blend vector k_u can be found from the singular value decomposition of the blend matrix K_u upon convergence: it is the left singular vector corresponding to the largest singular value of K_u. Once k_u is found, it is applied to the subsystems to give

\[
\begin{aligned}
\dot{x}_{\{c,d\}}(t) &= A_{\{c,d\}}\, x_{\{c,d\}}(t) + B_{\{c,d\}}\, k_u\, \bar{u}(t),\\
y_{\{c,d\}}(t) &= C_{\{c,d\}}\, x_{\{c,d\}}(t).
\end{aligned} \tag{3.9}
\]
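The SVD-based recovery of k_u in step 4 and the blended input matrix of (3.9) can be sketched as follows; the blend vector and the A, B matrices are made up for illustration:

```python
import numpy as np

# Recovering the blend vector from the (ideally rank-one) blend matrix:
# ku is the left singular vector of the largest singular value of Ku.
ku_true = np.array([0.6, 0.8])
Ku = np.outer(ku_true, ku_true)

U, s, Vt = np.linalg.svd(Ku)
ku = U[:, 0]

# For an exactly rank-one Ku the recovery is exact up to sign.
assert np.isclose(s[0], 1.0) and np.isclose(s[1:].sum(), 0.0)
assert np.allclose(np.abs(ku), np.abs(ku_true))

# Applying the blend as in (3.9): the blended input matrix is B @ ku,
# mapping the scalar u_bar to the subsystem states.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # hypothetical subsystem matrices
B = np.array([[0.0, 0.1], [1.0, 0.3]])
b_blend = B @ ku                            # single blended input channel
```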

Algorithm 1 Input blend calculation with alternating projection

1: Given: the subsystems G_c and G_d in the form shown in Figure 3.2.
2: Initialization: solve the following optimization problem for β², γ², X_d, X_c, Y, K_u:
\[
\begin{aligned}
\underset{X_d,\, K_u,\, X_c,\, Y,\, \beta^2,\, \gamma^2}{\text{minimize}}\quad & \gamma^2 - \beta^2\\
\text{s.t.:}\quad
& \begin{bmatrix} X_d A_d^T + A_d X_d + B_d K_u B_d^T & X_d I_d^T \\ I_d X_d & -\gamma^2 I \end{bmatrix} \preceq 0,\\
& \begin{bmatrix} A_c^T & I_c^T \\ I & 0 \end{bmatrix}^T \Xi \begin{bmatrix} A_c^T & I_c^T \\ I & 0 \end{bmatrix}
+ \begin{bmatrix} B_c^T & 0 \\ 0 & I \end{bmatrix}^T \Pi \begin{bmatrix} B_c^T & 0 \\ 0 & I \end{bmatrix} \prec 0,\\
& K_u \in \mathbb{S}^{n_u},\quad 0 \preceq K_u \preceq I,\quad Y \succ 0.
\end{aligned}
\]
3: Set i = 0.
4: for k = 1 to n_u − 1 do
5:   K_u^i = P_{Γ_{n−k}}(K_u), the projection of K_u onto the set Γ_{n−k} of symmetric matrices of rank at most n − k.
6:   for j = 1 to maximum number of iterations do
7:     Solve the following optimization problem for X_c, Y, X_d, S, K_u:
\[
\begin{aligned}
\underset{X_d,\, K_u,\, X_c,\, Y,\, S}{\text{minimize}}\quad & \operatorname{trace}(S)\\
\text{s.t.:}\quad
& \begin{bmatrix} X_d A_d^T + A_d X_d + B_d K_u B_d^T & X_d I_d^T \\ I_d X_d & -\gamma^2 I \end{bmatrix} \preceq 0,\\
& \begin{bmatrix} A_c^T & I_c^T \\ I & 0 \end{bmatrix}^T \Xi \begin{bmatrix} A_c^T & I_c^T \\ I & 0 \end{bmatrix}
+ \begin{bmatrix} B_c^T & 0 \\ 0 & I \end{bmatrix}^T \Pi \begin{bmatrix} B_c^T & 0 \\ 0 & I \end{bmatrix} \prec 0,\\
& K_u \in \mathbb{S}^{n_u},\quad 0 \preceq K_u \preceq I,\quad Y \succ 0,\\
& \begin{bmatrix} S & K_u - K_u^i \\ K_u - K_u^i & I \end{bmatrix} \succeq 0.
\end{aligned}
\]
8:     K_u^{i+1} = P_{Γ_{n−k}}(K_u)
9:     if j > 1 then
10:      if ‖K_u^{i+1} − K_u^i‖ / ‖K_u^{i+1}‖ < threshold then
11:        i = i + 1
12:        break for loop
13:      end if
14:    end if
15:    i = i + 1
16:  end for
17: end for
18: From the singular value decomposition K_u^i = U S V^T, find k_u as the left singular vector corresponding to the largest singular value.