
LIDS-P-2024

Combined H∞/LQG Control via the Optimal Projection Equations: On Minimizing the LQG Cost Bound

Denis Mustafa*

Laboratory for Information and Decision Systems

Massachusetts Institute of Technology

Cambridge MA 02139, USA

+1 (617) 253-2156

September 5, 1990

Revised January 18, 1991

To appear in Int J Robust & Nonlinear Control

Keywords: H∞ control; LQG control; Lagrange multiplier; Riccati equation

Abstract

The Optimal Projection equations for combined H∞/LQG control are considered. Positive semidefiniteness of the associated Lagrange multiplier is shown to be necessary for the LQG cost bound to be minimal. It follows that all four Optimal Projection equations have a role to play, even in the full-order case.

1 Introduction

The design of feedback controllers which satisfy both H∞ and LQG criteria has been the focus of much attention recently (see for example [1, 3, 10, 12]). Such controllers are interesting because they offer both robust stability (via an H∞-norm bound) and nominal performance (via an LQG cost bound).

Our concern in this paper is with the 'equalized H∞/LQG weights' case of the H∞/LQG control problem considered in [1]. Thus a full or reduced-order stabilizing controller is sought to minimize the auxiliary cost (i.e., a certain LQG cost bound) of a specified closed-loop system subject to an H∞-norm bound on the same closed loop.

In [1], a Lagrange multiplier approach is used to tackle the combined H∞/LQG problem. From the stationarity conditions a set of four coupled modified algebraic Riccati equations (the Optimal Projection equations) is derived which characterizes the controller gains. However, it is not apparent if the auxiliary cost has actually been minimized, since the constraint equation used to define the auxiliary cost has a nonunique solution. In this paper we consider this issue. Non-negativity of the Lagrange multiplier turns out to play a key role, and all four Optimal Projection equations are needed to ensure this.

The present paper is an expanded version of [7], where a special case was treated. In Section 2 a concise description of the combined H∞/LQG problem of interest to us is given, mostly standard material for those familiar with [1, 4]. Section 3 contains the main results. A key lemma is introduced which relates non-negativity of the Lagrange multiplier to minimality of the auxiliary cost. The lemma is then applied in the reduced-order and then the full-order cases, to bring out the role of the four Optimal Projection equations with regard to minimality of the auxiliary cost, strictness of the H∞-norm bound and minimality of the controller realization. Some proofs are relegated to the Appendix for clarity.

2 Combined H∞/LQG Control

Consider an n-state plant P with state-space realization

ẋ = Ax + B1w + B2u

z = C1x + D12u

y = C2x + D21w

mapping disturbance signals w ∈ ℝ^{m1} and control signals u ∈ ℝ^{m2} to regulated signals z ∈ ℝ^{p1} and measured signals y ∈ ℝ^{p2}. The n_c-state feedback controller K with state-space realization

ẋ_c = A_c x_c + B_c y

u = C_c x_c

is to be designed.

The closed-loop transfer function H = C̃(sI - Ã)^{-1}B̃ from w to z is given by

Ã = [ A  B2C_c ; B_cC2  A_c ],   B̃ = [ B1 ; B_cD21 ],   C̃ = [ C1  D12C_c ].

Internal stability of the closed loop H corresponds to asymptotic stability of Ã. It is standard that H is a linear fractional map F(P, K) of P (appropriately partitioned) and K:

H = F(P, K) := P11 + P12K(I - P22K)^{-1}P21.
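In state-space terms the linear fractional map amounts to the block construction of Ã, B̃ and C̃ above; a minimal Python sketch of this assembly (the name closed_loop is chosen here purely for illustration) is:

```python
import numpy as np

def closed_loop(A, B1, B2, C1, C2, D12, D21, Ac, Bc, Cc):
    """Assemble the closed-loop realization (A~, B~, C~) of H = F(P, K)
    from plant and controller state-space data.  Illustrative sketch."""
    At = np.block([[A,       B2 @ Cc],
                   [Bc @ C2, Ac     ]])
    Bt = np.vstack([B1, Bc @ D21])
    Ct = np.hstack([C1, D12 @ Cc])
    return At, Bt, Ct
```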

2.1 Assumptions

• (A, B2) is stabilizable and (C2, A) is detectable,

which is a necessary and sufficient condition for the existence of stabilizing controllers. In common with [1], it is also assumed that

• D12^T [ C1  D12 ] = [ 0  R2 ]  where R2 > 0;

• [ B1 ; D21 ] D21^T = [ 0 ; V2 ]  where V2 > 0,

which are the standard assumptions of a non-singular control problem without cross-weightings between the state and the control input, or between the plant disturbance and the sensor noise. As explained in Section III-A of [4], suitable scaling of u and w can then be applied without loss of generality to make R2 = I and V2 = I, and we assume that this has been done. Note that, as pointed out on p. 2482 of [13], no stabilizability/detectability assumptions are needed on (A, B1, C1).

2.2 H∞-norm and LQG cost

The concepts of H∞-norm and LQG cost are well known. We will therefore omit detailed discussion and content ourselves with stating the following definitions for reference. Given a system H = C̃(sI - Ã)^{-1}B̃ where Ã is asymptotically stable, the H∞-norm of H is

||H||_∞ := sup_ω λ_max^{1/2}{H*(jω)H(jω)},

where H*(s) := H^T(-s), and the LQG cost of H is

C(H) := (1/2π) ∫_{-∞}^{∞} trace[H*(jω)H(jω)] dω.

It is a well-known fact that

C(H) = trace[Q C̃^T C̃]    (1)

where Q ≥ 0 satisfies

0 = ÃQ + QÃ^T + B̃B̃^T.    (2)
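For illustration, the state-space evaluation (1)-(2) of the LQG cost can be carried out in a few lines; the sketch below also approximates the frequency-domain definition on a finite grid as a cross-check. The data are arbitrary, and A_t, B_t, C_t stand in for Ã, B̃, C̃.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Arbitrary asymptotically stable data; A_t, B_t, C_t stand for A~, B~, C~.
A_t = np.array([[-1.0, 2.0],
                [ 0.0, -3.0]])
B_t = np.array([[1.0],
                [1.0]])
C_t = np.array([[1.0, 0.0]])

# State-space evaluation (1)-(2): solve 0 = A~ Q + Q A~^T + B~ B~^T for Q >= 0.
Q = solve_continuous_lyapunov(A_t, -B_t @ B_t.T)
lqg_cost = np.trace(Q @ C_t.T @ C_t)

# Frequency-domain definition of C(H), approximated on a finite grid.
w = np.linspace(-300.0, 300.0, 200001)
integrand = np.empty_like(w)
for k, wk in enumerate(w):
    Hk = C_t @ np.linalg.solve(1j * wk * np.eye(2) - A_t, B_t)
    integrand[k] = np.real(np.trace(Hk.conj().T @ Hk))
lqg_cost_freq = integrand.sum() * (w[1] - w[0]) / (2.0 * np.pi)

print(lqg_cost, lqg_cost_freq)   # agree up to grid/truncation error
```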

An 'exact' H∞/LQG control problem would be to minimize the LQG cost C(H) over all stabilized closed loops H satisfying ||H||_∞ ≤ γ. As far as this author is aware, this remains an interesting unsolved problem. Instead, the combined H∞/LQG problem considered in [1] and in the present paper minimizes an LQG cost overbound subject to an H∞-norm bound. We turn now to the definition of this LQG cost bound.

2.3 Auxiliary Cost

The following lemma, basically Lemma 2.1 of [1], is the key to defining the LQG cost bound used in the combined H∞/LQG problem.

Lemma 2.1  Suppose Ã is asymptotically stable and γ ∈ ℝ_+. If there exists 𝒬 ≥ 0 satisfying

0 = Ã𝒬 + 𝒬Ã^T + γ^{-2} 𝒬 C̃^T C̃ 𝒬 + B̃B̃^T    (3)

then the following hold true:

(i) ||C̃(sI - Ã)^{-1}B̃||_∞ ≤ γ;

(ii) 𝒬 ≥ Q where Q ≥ 0 satisfies (2).

Given a system H = C̃(sI - Ã)^{-1}B̃ where Ã is asymptotically stable, and γ ∈ ℝ_+ such that there exists 𝒬 ≥ 0 satisfying (3), the auxiliary cost [1] is then defined by

J(H, γ) := trace[𝒬 C̃^T C̃].    (4)

By Lemma 2.1, and comparing (4) with (1), it is immediate that

• ||H||_∞ ≤ γ;

• C(H) ≤ J(H, γ).

Thus the single equation (3) leads to an H∞-norm bound γ and an LQG cost overbound J(H, γ). The combined H∞/LQG control problem considered in [1] and in the present paper is to minimize J(H, γ) over all stabilized closed loops H satisfying ||H||_∞ ≤ γ. The solution to (3) is, however, nonunique. Out of the set of 𝒬 ≥ 0 satisfying (3) there exists a minimal solution 𝒬̲ ≥ 0, which satisfies 𝒬̲ ≤ 𝒬 where 𝒬 is any solution 𝒬 ≥ 0 of (3). Clearly, to obtain the lowest auxiliary cost J̲(H, γ) we need to use 𝒬̲:

J̲(H, γ) := trace[𝒬̲ C̃^T C̃] ≤ J(H, γ).    (5)

This was pointed out in [8]. Our main results regard the solution of the combined H∞/LQG problem as given in [1], with particular concern for ensuring that the minimal solution to (3) (hence the smallest auxiliary cost) is indeed taken.
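In practice the stabilizing solution of (3), which Section 3 will show coincides with the minimal solution 𝒬̲, can be computed from the stable invariant subspace of the Hamiltonian matrix associated with (3). The Python sketch below does this for arbitrary illustrative data (stabilizing_solution_eq3 is a name chosen here; a production code would use a dedicated Riccati solver) and checks the two claims of Lemma 2.1.

```python
import numpy as np
from scipy.linalg import schur, solve_continuous_lyapunov

def stabilizing_solution_eq3(A_t, B_t, C_t, gamma):
    """Stabilizing solution of 0 = A~ Q + Q A~^T + g^-2 Q C~^T C~ Q + B~ B~^T,
    from the stable invariant subspace of the associated Hamiltonian matrix.
    Illustrative sketch; assumes a solution exists (i.e. ||H||_inf < gamma)."""
    n = A_t.shape[0]
    Ham = np.block([[A_t.T,        gamma**-2 * C_t.T @ C_t],
                    [-B_t @ B_t.T, -A_t                   ]])
    T, Z, sdim = schur(Ham, output='real', sort='lhp')
    assert sdim == n, "no stabilizing solution: the H-infinity bound fails"
    U1, U2 = Z[:n, :n], Z[n:, :n]
    Qs = U2 @ np.linalg.inv(U1)
    return (Qs + Qs.T) / 2                      # symmetrize against rounding

A_t = np.array([[-1.0, 2.0], [0.0, -3.0]])
B_t = np.array([[1.0], [1.0]])
C_t = np.array([[1.0, 0.0]])
gamma = 2.0

Qs = stabilizing_solution_eq3(A_t, B_t, C_t, gamma)   # minimal solution of (3)
Q = solve_continuous_lyapunov(A_t, -B_t @ B_t.T)      # solution of (2)
aux_cost = np.trace(Qs @ C_t.T @ C_t)                 # auxiliary cost (4)
lqg_cost = np.trace(Q @ C_t.T @ C_t)                  # LQG cost (1)

# Lemma 2.1: Qs >= Q, and the H-infinity norm (coarse grid check) is <= gamma.
assert np.min(np.linalg.eigvalsh(Qs - Q)) >= -1e-9
hinf = max(np.linalg.norm(C_t @ np.linalg.solve(1j * wk * np.eye(2) - A_t, B_t), 2)
           for wk in np.linspace(0.0, 100.0, 2001))
assert hinf <= gamma
print(lqg_cost, aux_cost)                             # C(H) <= J(H, gamma)
```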

2.4 Entropy

The auxiliary cost of a system satisfying a strict H∞-norm bound was shown in [8] to equal the entropy of the system. (See [10] for a comprehensive treatment of entropy in the context of H∞ control.) Given a system H = C̃(sI - Ã)^{-1}B̃ where Ã is asymptotically stable, and γ ∈ ℝ_+ such that ||H||_∞ < γ, the entropy is defined by

I(H, γ) := -(γ^2/2π) ∫_{-∞}^{∞} ln det(I - γ^{-2} H*(jω)H(jω)) dω.

The entropy is an upper bound on the LQG cost [9, 10], so

• ||H||_∞ < γ;

• C(H) ≤ I(H, γ).

The Strict Bounded Real Lemma (see e.g. [11] for a proof), which we now state, gives a useful characterization of stable, strictly proper systems satisfying a strict H∞-norm bound. The entropy is well defined and finite for such systems.

Lemma 2.2  The following are equivalent:

(i) Ã is asymptotically stable and ||C̃(sI - Ã)^{-1}B̃||_∞ < γ.

(ii) There exists a stabilizing solution 𝒬 ≥ 0 to (3).

By the stabilizing solution 𝒬 ≥ 0 to (3) we mean the unique 𝒬 ≥ 0 satisfying (3) such that Ã + γ^{-2}𝒬C̃^T C̃ is asymptotically stable. The above lemma allows a state-space evaluation of the entropy (Lemma 5.3.2 of [10]):

I(H, γ) = trace[𝒬_s C̃^T C̃]    (6)

where 𝒬_s ≥ 0 is the stabilizing solution to (3).

Now compare the definitions of the entropy I(H, γ) in (6) and of the auxiliary cost J̲(H, γ) in (5). The entropy is trace[𝒬_s C̃^T C̃] for a system satisfying ||H||_∞ < γ; the auxiliary cost is trace[𝒬̲ C̃^T C̃] for a system satisfying ||H||_∞ ≤ γ. We will see these differences disappear when the solution to the combined H∞/LQG problem is analysed in the next section.
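As a simple illustration, take H(s) = 1/(s + 1), i.e. Ã = -1, B̃ = C̃ = 1, so that ||H||_∞ = 1 and, from (1)-(2), C(H) = 1/2. For γ > 1, equation (3) reads 0 = -2𝒬 + γ^{-2}𝒬^2 + 1 and has the two solutions 𝒬 = γ^2(1 ± √(1 - γ^{-2})). Since Ã + γ^{-2}𝒬C̃^T C̃ equals -√(1 - γ^{-2}) at the smaller root and +√(1 - γ^{-2}) at the larger one, the smaller root is the minimal solution and is also the stabilizing one. Hence J̲(H, γ) = I(H, γ) = γ^2(1 - √(1 - γ^{-2})), which may also be verified directly from the entropy integral, whereas the larger root of (3) gives the strictly poorer auxiliary cost γ^2(1 + √(1 - γ^{-2})). As γ → ∞ the bound J̲(H, γ) decreases to C(H) = 1/2.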

3 On Minimizing the LQG Cost Bound

3.1 Stationarity Conditions

The combined H∞/LQG problem reduces to finding controller matrices A_c, B_c, C_c which make Ã asymptotically stable and minimize J(H, γ) = trace[𝒬C̃^T C̃] subject to 𝒬 ≥ 0 satisfying (3). Define the Lagrangian

ℒ(A_c, B_c, C_c, 𝒫, 𝒬) := trace[𝒬C̃^T C̃ + {Ã𝒬 + 𝒬Ã^T + γ^{-2}𝒬C̃^T C̃𝒬 + B̃B̃^T}𝒫]

where 𝒫 ∈ ℝ^{(n+n_c)×(n+n_c)} is a Lagrange multiplier. The stationary points are then characterized by

∂ℒ/∂A_c = 0,   ∂ℒ/∂B_c = 0,   ∂ℒ/∂C_c = 0,   and   ∂ℒ/∂𝒬 = 0.    (7)

Expansion of these, followed by some algebraic manipulations, is carried out in [1] to obtain four coupled algebraic Riccati equations. The controller gains are defined in terms of the solution to these 'Optimal Projection equations.' Before stating them we will find it beneficial to do some further analysis of the stationarity conditions. The point is that we seek the minimum of J̲(H, γ), not just of J(H, γ).

3.2 The Lagrange Multiplier

Consider the last of the stationarity conditions in (7). One obtains the following Lyapunov equation for the Lagrange multiplier 𝒫:

0 = 𝒫(Ã + γ^{-2}𝒬C̃^T C̃) + (Ã + γ^{-2}𝒬C̃^T C̃)^T 𝒫 + C̃^T C̃.    (8)

As the next lemma shows, the existence of a positive semidefinite solution for the Lagrange multiplier is intimately related to minimality of 𝒬 and strictness of the H∞-norm bound.

Lemma 3.1  Suppose Ã is asymptotically stable. Then ||C̃(sI - Ã)^{-1}B̃||_∞ < γ if and only if there exists 𝒬 ≥ 0 satisfying (3) such that there exists 𝒫 ≥ 0 satisfying (8). If so, then the following statements are equivalent:

(i) 𝒬 ≥ 0 is the minimal solution to (3).

(ii) 𝒬 ≥ 0 is the stabilizing solution to (3).

(iii) 𝒬 ≥ 0 satisfies (3) such that there exists 𝒫 ≥ 0 satisfying (8).

Proof  Appendix A. □

The equivalence of (i) and (ii) has also been obtained (independently) in Theorems 2.2 and 2.3 of [15]. For our purposes, however, the equivalence of (i) and (ii) with item (iii) is the most important, the key point being that 𝒬 is minimal if and only if the Lagrange multiplier 𝒫 associated with 𝒬 is positive semidefinite. Then we know that

• J(H, γ) = J̲(H, γ), rather than just J(H, γ) ≥ J̲(H, γ);

• ||H||_∞ < γ, rather than just ||H||_∞ ≤ γ;

• I(H, γ) = J(H, γ) = trace[𝒬C̃^T C̃] = trace[𝒬_s C̃^T C̃].
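These implications are easy to test numerically. For the scalar example of Section 2 with γ = 2, equation (3) has the two roots used below; solving the Lyapunov equation (8) for each candidate shows that only the minimal (equivalently stabilizing) root admits 𝒫 ≥ 0, exactly as Lemma 3.1 asserts. The sketch is illustrative only.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Scalar system A~ = -1, B~ = C~ = 1 (so ||H||_inf = 1), with gamma = 2.
A_t = np.array([[-1.0]])
C_t = np.array([[1.0]])
gamma = 2.0

# The two solutions of (3): Q = gamma^2 (1 -/+ sqrt(1 - gamma^-2)).
roots = [gamma**2 * (1.0 - np.sqrt(1.0 - gamma**-2)),   # minimal / stabilizing
         gamma**2 * (1.0 + np.sqrt(1.0 - gamma**-2))]   # the other solution

for Q3 in roots:
    F = A_t + gamma**-2 * Q3 * C_t.T @ C_t                # A~ + g^-2 Q C~^T C~
    # Lyapunov equation (8):  0 = P F + F^T P + C~^T C~.
    P = solve_continuous_lyapunov(F.T, -C_t.T @ C_t)
    minimal = np.min(np.linalg.eigvalsh(P)) >= 0.0        # Lemma 3.1 (iii)
    stabilizing = np.all(np.linalg.eigvals(F).real < 0.0) # Lemma 3.1 (ii)
    print(Q3, minimal, stabilizing)                       # the flags agree
```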

Now we can apply the above lemma to the solution of the combined H∞/LQG control problem.

3.3 Reduced-Order Case

The next theorem, a quotation of Theorem 6.1 and equations (A.21)-(A.23) of [1], characterizes the combined H∞/LQG controller gains in the reduced-order case (n_c ≤ n). We will then apply Lemma 3.1.

Theorem 3.2 ([1])  Suppose there exist Q ≥ 0, P ≥ 0, Q̂ ≥ 0 and P̂ ≥ 0 satisfying

0 = AQ + QA^T + B1B1^T + Q(γ^{-2}C1^T C1 - C2^T C2)Q + τ⊥QC2^T C2Qτ⊥^T    (9)

0 = (A + γ^{-2}[Q + Q̂]C1^T C1)^T P + P(A + γ^{-2}[Q + Q̂]C1^T C1) + C1^T C1 - S^T PB2B2^T PS + τ⊥^T S^T PB2B2^T PSτ⊥    (10)

0 = (A - B2B2^T PS + γ^{-2}QC1^T C1)Q̂ + Q̂(A - B2B2^T PS + γ^{-2}QC1^T C1)^T + γ^{-2}Q̂(C1^T C1 + S^T PB2B2^T PS)Q̂ + QC2^T C2Q - τ⊥QC2^T C2Qτ⊥^T    (11)

0 = (A + Q(γ^{-2}C1^T C1 - C2^T C2))^T P̂ + P̂(A + Q(γ^{-2}C1^T C1 - C2^T C2)) + S^T PB2B2^T PS - τ⊥^T S^T PB2B2^T PSτ⊥    (12)

where S := (I + γ^{-2}Q̂P)^{-1} and

rank(Q̂) = rank(P̂) = rank(Q̂P̂) = n_c.

The projection τ⊥ is defined via the factorization Q̂P̂ = G^T MΓ by τ⊥ := I - G^T Γ, where G and Γ are n_c × n and satisfy ΓG^T = I, and M is n_c × n_c and invertible. Let

A_c = Γ(A + Q(γ^{-2}C1^T C1 - C2^T C2) - B2B2^T PS)G^T

B_c = ΓQC2^T

C_c = -B2^T PSG^T.

Then (Ã, B̃) is stabilizable if and only if Ã is asymptotically stable. In this case

(i) K = (A_c, B_c, C_c) stabilizes P;

(ii) ||F(P, K)||_∞ ≤ γ;

(iii) J(F(P, K), γ) = trace[(Q + Q̂)C1^T C1 + Q̂S^T PB2B2^T PS];

(iv) 𝒬 = [ Q + Q̂  Q̂Γ^T ; ΓQ̂  ΓQ̂Γ^T ]  and  𝒫 = [ P + P̂  -P̂G^T ; -GP̂  GP̂G^T ].

On applying Lemma 3.1, the following corollary is obtained.

Corollary 3.3  Suppose all the conditions of Theorem 3.2 are satisfied. Then the following hold true:

(i) ||F(P, K)||_∞ < γ;

(ii) J(F(P, K), γ) = J̲(F(P, K), γ);

(iii) J(F(P, K), γ) = I(F(P, K), γ).

Proof  From Theorem 3.2(iv),

𝒬 = [ I ; Γ ] Q̂ [ I  Γ^T ] + [ Q  0 ; 0  0 ]  and  𝒫 = [ I ; -G ] P̂ [ I  -G^T ] + [ P  0 ; 0  0 ].

Hence 𝒬 ≥ 0 and 𝒫 ≥ 0 since Q ≥ 0, P ≥ 0, Q̂ ≥ 0 and P̂ ≥ 0. Lemma 3.1 therefore applies (which gives (i) immediately), implying that 𝒬 is minimal (which gives (ii)) and stabilizing (which gives (iii) by (5) and (6)). □

3.4 Full-Order Case

To specialize Theorem 3.2 to the full-order case (n_c = n), simply set G = Γ = I (so that τ⊥ = 0).

Theorem 3.4  Suppose there exist Q ≥ 0, P ≥ 0, Q̂ ≥ 0 and P̂ ≥ 0 satisfying

0 = AQ + QA^T + B1B1^T + Q(γ^{-2}C1^T C1 - C2^T C2)Q    (13)

0 = (A + γ^{-2}[Q + Q̂]C1^T C1)^T P + P(A + γ^{-2}[Q + Q̂]C1^T C1) + C1^T C1 - S^T PB2B2^T PS    (14)

0 = (A - B2B2^T PS + γ^{-2}QC1^T C1)Q̂ + Q̂(A - B2B2^T PS + γ^{-2}QC1^T C1)^T + γ^{-2}Q̂(C1^T C1 + S^T PB2B2^T PS)Q̂ + QC2^T C2Q    (15)

0 = (A + Q(γ^{-2}C1^T C1 - C2^T C2))^T P̂ + P̂(A + Q(γ^{-2}C1^T C1 - C2^T C2)) + S^T PB2B2^T PS    (16)

where S := (I + γ^{-2}Q̂P)^{-1}. Let

A_c = A + Q(γ^{-2}C1^T C1 - C2^T C2) - B2B2^T PS    (17)

B_c = QC2^T    (18)

C_c = -B2^T PS.    (19)

Then (Ã, B̃) is stabilizable if and only if Ã is asymptotically stable. In this case

(i) K = (A_c, B_c, C_c) stabilizes P;

(ii) ||F(P, K)||_∞ ≤ γ;

(iii) J(F(P, K), γ) = trace[(Q + Q̂)C1^T C1 + Q̂S^T PB2B2^T PS];

(iv) 𝒬 = [ Q + Q̂  Q̂ ; Q̂  Q̂ ]  and  𝒫 = [ P + P̂  -P̂ ; -P̂  P̂ ].
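Given matrices Q, P and Q̂ satisfying (13)-(15), obtained by whatever numerical method one prefers, the controller (17)-(19) and the cost bound of item (iii) follow by direct substitution. A brief Python sketch of this final step (the function name full_order_gains is chosen here for illustration):

```python
import numpy as np

def full_order_gains(A, B2, C1, C2, Q, P, Qhat, gamma):
    """Form the full-order combined H-infinity/LQG controller (17)-(19) and
    the cost bound of Theorem 3.4(iii), given Q, P, Qhat assumed to satisfy
    (13)-(15).  Illustrative sketch only."""
    n = A.shape[0]
    S = np.linalg.inv(np.eye(n) + gamma**-2 * Qhat @ P)    # S := (I + g^-2 Qhat P)^-1
    PS = P @ S
    Ac = A + Q @ (gamma**-2 * C1.T @ C1 - C2.T @ C2) - B2 @ B2.T @ PS   # (17)
    Bc = Q @ C2.T                                          # (18)
    Cc = -B2.T @ PS                                        # (19)
    cost = np.trace((Q + Qhat) @ C1.T @ C1
                    + Qhat @ PS.T @ B2 @ B2.T @ PS)        # Theorem 3.4(iii)
    return Ac, Bc, Cc, cost
```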

Note that the equations for Q, P and Q̂ do not depend on P̂, and neither does the controller nor the cost. There is thus a great temptation to discard the P̂ equation as superfluous, as is done in Remark 6.1 and Theorem 4.1 of [1]. Our analysis will show that P̂ really does have a role to play. To see this role emerge, we need only apply Lemma 3.1 to Theorem 3.4 to obtain the following corollary.

Corollary 3.5  Suppose all the conditions of Theorem 3.4 are satisfied. Then the following hold true:

(i) ||F(P, K)||_∞ < γ;

(ii) J(F(P, K), γ) = J̲(F(P, K), γ);

(iii) J(F(P, K), γ) = I(F(P, K), γ).

Proof  From Theorem 3.4(iv),

𝒬 = [ I ; I ] Q̂ [ I  I ] + [ Q  0 ; 0  0 ]  and  𝒫 = [ I ; -I ] P̂ [ I  -I ] + [ P  0 ; 0  0 ].

Hence 𝒬 ≥ 0 and 𝒫 ≥ 0 since Q ≥ 0, P ≥ 0, Q̂ ≥ 0 and P̂ ≥ 0. Lemma 3.1 therefore applies (which gives (i) immediately), implying that 𝒬 is minimal (which gives (ii)) and stabilizing (which gives (iii) by (5) and (6)). □

It is now clear that the existence of P̂ ≥ 0, as well as the existence of Q ≥ 0, P ≥ 0 and Q̂ ≥ 0, is needed to ensure positive semidefiniteness of the Lagrange multiplier, which is necessary for minimality of the auxiliary cost. All four Optimal Projection equations have a role to play. But whilst we do need to calculate Q, P and Q̂ to write down the controller and its cost, knowing that P̂ ≥ 0 exists is enough. The following lemma gives a convenient way of ensuring this.

Lemma 3.6  P̂ ≥ 0 exists satisfying (16) if Q ≥ 0 is the stabilizing solution of (13).

Proof  If Q ≥ 0 is the stabilizing solution to (13) then A + Q(γ^{-2}C1^T C1 - C2^T C2) is asymptotically stable. Then by Lemma 12.1 of [14] there exists a unique P̂ satisfying (16), and P̂ ≥ 0. □

So either we solve for Q ≥ 0, P ≥ 0, Q̂ ≥ 0 and P̂ ≥ 0, or we solve for the stabilizing solution Q ≥ 0 of (13) (knowing that the existence of P̂ ≥ 0 is then ensured), and for P ≥ 0 and Q̂ ≥ 0.
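Lemma 3.6 also suggests a simple numerical check: confirm that A + Q(γ^{-2}C1^T C1 - C2^T C2) is asymptotically stable, then obtain P̂ from the Lyapunov equation (16). A hedged Python sketch (P and Q̂ are assumed to have been obtained from (14)-(15) elsewhere; p_hat_from_lemma_3_6 is an illustrative name):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def p_hat_from_lemma_3_6(A, B2, C1, C2, Q, P, Qhat, gamma):
    """Given Q (assumed to be the stabilizing solution of (13)) and P, Qhat
    from (14)-(15), solve the Lyapunov equation (16) for P_hat and confirm
    that P_hat >= 0, as guaranteed by Lemma 3.6.  Illustrative sketch."""
    n = A.shape[0]
    F = A + Q @ (gamma**-2 * C1.T @ C1 - C2.T @ C2)   # must be asymptotically stable
    assert np.all(np.linalg.eigvals(F).real < 0), "Q is not the stabilizing solution of (13)"
    S = np.linalg.inv(np.eye(n) + gamma**-2 * Qhat @ P)
    PS = P @ S
    W = PS.T @ B2 @ B2.T @ PS                          # S^T P B2 B2^T P S
    # Equation (16):  0 = F^T P_hat + P_hat F + W.
    P_hat = solve_continuous_lyapunov(F.T, -W)
    assert np.min(np.linalg.eigvalsh((P_hat + P_hat.T) / 2)) > -1e-9
    return P_hat
```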

3.5 Controller Minimality

One of the themes of this paper is that even in the full-order case, the fourth Optimal Projection equation is needed. In this section we will give a further use for this equation, with regard to minimality of the full-order controller realization (17)-(19).

Firstly consider the pure LQG problem, i.e., γ → ∞, so that [9, 1] I(H, ∞) = J(H, ∞) = C(H) and the LQG cost is minimized over all stabilized closed loops. It is well known that the LQG controller may or may not be stable, and may or may not be minimal. To remove non-minimal states regardless of the stability or otherwise of the LQG controller, [16] proposed a balanced reduction method, called the BCRAM (Balanced Controller Reduction Algorithm, Modified). In fact the same idea can be applied to the H∞ case, when we will refer to the method as H∞-BCRAM. It enables non-minimal states to be removed from the full-order combined H∞/LQG controller given in (17)-(19), even if the controller is unstable. The following lemma underlies the method.

Lemma 3.7  Definitions as in Theorem 3.4. Then the following hold:

(i) Suppose A_c + B_cC2 is asymptotically stable. Then (A_c, B_c) is controllable if and only if there exists M > 0 satisfying

0 = (A_c + B_cC2)M + M(A_c + B_cC2)^T + B_cB_c^T.    (20)

(ii) Suppose A_c - B2C_c is asymptotically stable. Then (C_c, A_c) is observable if and only if there exists N > 0 satisfying

0 = (A_c - B2C_c)^T N + N(A_c - B2C_c) + C_c^T C_c.    (21)

Proof  Appendix B. □

With reference to part (ii), note that since A_c - B2C_c = A + Q(γ^{-2}C1^T C1 - C2^T C2), A_c - B2C_c is asymptotically stable if and only if Q is the stabilizing solution to (13). This is the solution that Lemma 3.6 needs.

H∞-BCRAM simply involves applying a similarity transformation to the plant P to make M and N equal and diagonal. Non-minimal states in the controller then show up as zeros on the leading diagonal of M and N; such states may be removed from the controller realization to leave a minimal realization. It is easy to see that equation (16) is identical to equation (21), so P̂ = N, and we see P̂ playing a role again. In [2], a similar observation has been made on the role of P̂ in the LQG case of BCRAM.

Non-minimal states in the full-order combined H∞/LQG controller can therefore be removed using H∞-BCRAM by balancing M and P̂, and deleting balanced controller states corresponding to the zero eigenvalues of MP̂. Carrying this further, reduced-order controllers could be obtained by removing (observable and controllable) balanced controller states corresponding to the smallest non-zero eigenvalues of MP̂. However, no a priori guarantees are known, so such a controller might not even be stabilizing. A closed-loop balance-and-truncate method such as H∞-balanced truncation [10], for which a priori guarantees are available, is a more attractive method in this respect. Alternatively, if the extra computational complexity is not an issue, the reduced-order Optimal Projection equations of Section 3.3 could be solved.
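The balancing step itself is routine. The sketch below conveys the idea behind H∞-BCRAM (it is not a verbatim transcription of the algorithm in [16]): compute M and N from (20)-(21), balance them with a contragredient transformation, and discard balanced states whose common singular values are negligible. The name hinf_bcram and the tolerance-based truncation are choices made here for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def hinf_bcram(Ac, Bc, Cc, B2, C2, tol=1e-8):
    """Remove (near-)non-minimal states from the full-order controller
    (Ac, Bc, Cc) by balancing the Gramians M and N of Lemma 3.7 and deleting
    directions whose balanced singular values are ~0.  Illustrative sketch;
    both Ac + Bc C2 and Ac - B2 Cc are assumed asymptotically stable."""
    F = Ac + Bc @ C2
    G = Ac - B2 @ Cc
    M = solve_continuous_lyapunov(F, -Bc @ Bc.T)        # (20)
    N = solve_continuous_lyapunov(G.T, -Cc.T @ Cc)      # (21); equals P_hat of (16)
    # Contragredient transformation T: T^-1 M T^-T = T^T N T = diag(sigma).
    L = cholesky(M + tol * np.eye(M.shape[0]), lower=True)
    U, s, _ = svd(L.T @ N @ L)
    sigma = np.sqrt(s)
    T = (L @ U) / np.sqrt(np.maximum(sigma, tol))       # scale columns by sigma^-1/2
    Tinv = np.linalg.inv(T)
    Ab, Bb, Cb = Tinv @ Ac @ T, Tinv @ Bc, Cc @ T
    keep = sigma > tol * max(float(sigma.max()), 1.0)   # drop ~zero balanced states
    return Ab[np.ix_(keep, keep)], Bb[keep, :], Cb[:, keep]
```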

4 Conclusions

By analysing the stationarity conditions associated with the combined H∞/LQG control problem, we have obtained some insights into the Optimal Projection equations which characterize combined H∞/LQG controllers. Existence of a positive semidefinite Lagrange multiplier was shown to be necessary for a minimized auxiliary cost, and implies a strict H∞-norm bound. Fortunately, existence of positive semidefinite solutions to the four Optimal Projection equations was shown to imply existence of a positive semidefinite Lagrange multiplier. All four Optimal Projection equations have a role to play here, even in the full-order case, when one equation otherwise appears to be redundant. That equation was shown to be closely involved in the issue of controller minimality too.

The results of this paper are concerned with the 'equalized' H∞/LQG weights case of combined H∞/LQG control. That is, when the H∞-norm bound and the LQG cost bound apply to the same closed-loop transfer function. Extension of the results of this paper to the non-equalized case, where the H∞-norm bound is applied to a different closed-loop transfer function to that for the LQG cost bound, is an interesting open problem.

5 Acknowledgements

The author wishes to thank: Dr. M. Mercadal for performing some preliminary numerical computations on the solution of the H∞ version of the Optimal Projection equations; Dr. D. S. Bernstein for some stimulating discussions; Dr. D. Obradovic for useful comments on an earlier version of this paper; and D. G. MacMartin for advice regarding the numerical solution of the LQG version of the Optimal Projection equations.

Appendix

A Proof of Lemma 3.1

This proof, taken from [7], is included for completeness. To prove the first part of Lemma 3.1, suppose Ã is asymptotically stable. If ||C̃(sI - Ã)^{-1}B̃||_∞ < γ then by Lemma 2.2 there exists a stabilizing solution 𝒬 ≥ 0 to (3). That is, Ã + γ^{-2}𝒬C̃^T C̃ is asymptotically stable. Then by Lemma 12.1 of [14] there exists a unique symmetric solution 𝒫 to (8), and this solution is positive semidefinite. Conversely, let 𝒬 ≥ 0 satisfy (3) such that there exists 𝒫 ≥ 0 satisfying (8). Since Ã is asymptotically stable, (C̃, Ã) is certainly detectable, and hence so is (C̃, Ã + γ^{-2}𝒬C̃^T C̃). Then Lemma 12.2 of [14] gives that Ã + γ^{-2}𝒬C̃^T C̃ is asymptotically stable. That is, such a 𝒬 is stabilizing, hence by Lemma 2.2, ||C̃(sI - Ã)^{-1}B̃||_∞ < γ.

We may now proceed with the proof of the equivalence of parts (i), (ii) and (iii) of the lemma.

(iii)⇒(ii)  Since Ã is asymptotically stable, (C̃, Ã) is certainly detectable, and hence so is (C̃, Ã + γ^{-2}𝒬C̃^T C̃), where 𝒬 ≥ 0 satisfies (3). Then Lemma 12.2 of [14] states that existence of 𝒫 ≥ 0 satisfying (8) implies that Ã + γ^{-2}𝒬C̃^T C̃ is asymptotically stable.

(ii)⇒(iii)  Since 𝒬 is stabilizing, Ã + γ^{-2}𝒬C̃^T C̃ is asymptotically stable. By Lemma 12.1 of [14] there exists a unique symmetric solution 𝒫 to (8), and this solution is positive semidefinite.

(ii)⇒(i)  Let 𝒬_s be the stabilizing solution and let 𝒬 be any other solution to (3). Subtracting their respective equations and rearranging, we obtain

0 = (𝒬 - 𝒬_s)(Ã + γ^{-2}𝒬_s C̃^T C̃)^T + (Ã + γ^{-2}𝒬_s C̃^T C̃)(𝒬 - 𝒬_s) + γ^{-2}(𝒬 - 𝒬_s)C̃^T C̃(𝒬 - 𝒬_s).    (22)

Since 𝒬_s is stabilizing, Ã + γ^{-2}𝒬_s C̃^T C̃ is asymptotically stable, hence Lemma 12.1 of [14] gives that 𝒬 - 𝒬_s ≥ 0. That is, 𝒬_s is minimal.

(i)⇒(ii)  Let 𝒬 be the minimal solution to (3) and let 𝒬_s be the stabilizing solution. Then by definition 𝒬_s - 𝒬 ≥ 0. We have already proved that (ii)⇒(i), so 𝒬_s is minimal, hence also 𝒬 - 𝒬_s ≥ 0. Therefore 𝒬 = 𝒬_s; that is, the minimal solution is the stabilizing solution. □

B Proof of Lemma 3.7

We only need to prove part (i); part (ii) follows by duality. Firstly note the well-known fact that (A_c, B_c) is controllable if and only if (A_c + B_cC2, B_c) is controllable.

'If'  Let M > 0 satisfy (20). Suppose that (A_c + B_cC2, B_c) is not completely controllable. Then by the PBH test (see Theorem 2.4-8 of [6]), there exist λ and a non-zero vector y such that

y*(A_c + B_cC2) = λy*  and  y*B_c = 0.

But then y*(20)y implies (λ + λ̄)y*My = 0, which implies Re(λ) = 0 since M > 0. But this contradicts the asymptotic stability of A_c + B_cC2. Hence (A_c + B_cC2, B_c) is controllable.

'Only if'  Theorem 3.3(4) of [5] tells us that controllability of (A_c + B_cC2, B_c) implies that there exists M > 0 satisfying (20) if and only if A_c + B_cC2 is asymptotically stable, which it is by assumption. □

References

[1] D. S. Bernstein and W. M. Haddad. LQG control with an H∞ performance bound: a Riccati equation approach. IEEE Transactions on Automatic Control, 34(3):293-305, 1989.

[2] D. S. Bernstein and D. C. Hyland. The optimal projection approach to robust fixed-structure control design. Mechanics and Control of Space Structures, AIAA Progress in Aeronautics and Astronautics Series, to appear, 1990.

[3] J. Doyle, K. Zhou, and B. Bodenheimer. Optimal control with mixed H2 and H∞ performance objectives. In Proceedings of the American Control Conference, Pittsburgh, PA, 1989.

[4] J. C. Doyle, K. Glover, P. P. Khargonekar, and B. A. Francis. State-space solutions to standard H2 and H∞ control problems. IEEE Transactions on Automatic Control, 34(8):831-847, 1989.

[5] K. Glover. All optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds. International Journal of Control, 39(6):1115-1193, 1984.

[6] T. Kailath. Linear Systems. Prentice-Hall, 1980.

[7] D. Mustafa. Reduced-order robust controllers: H∞-balanced truncation and optimal projection. In Proceedings of the 29th Conference on Decision and Control, Honolulu, Hawaii, 1990.

[8] D. Mustafa. Relations between maximum entropy/H∞ control and combined H∞/LQG control. Systems and Control Letters, 12:193-203, 1989.

[9] D. Mustafa and K. Glover. Controllers which satisfy a closed-loop H∞-norm bound and maximize an entropy integral. In Proceedings of the Conference on Decision and Control, Austin, TX, 1988.

[10] D. Mustafa and K. Glover. Minimum Entropy H∞ Control. Volume 146 of Lecture Notes in Control and Information Sciences, Springer Verlag, 1990.

[11] I. R. Petersen, B. D. O. Anderson, and E. A. Jonckheere. A first principles solution to the non-singular H∞ control problem. Preprint, 1990.

[12] M. A. Rotea and P. P. Khargonekar. Simultaneous H2/H∞ control with state feedback. In Proceedings of the American Control Conference, San Diego, CA, 1990.

[13] M. G. Safonov, D. J. N. Limebeer, and R. Y. Chiang. Simplifying the H∞ theory via loop-shifting, matrix pencil and descriptor concepts. International Journal of Control, 50(6):2467-2488, 1989.

[14] W. M. Wonham. Linear Multivariable Control: A Geometric Approach. Springer Verlag, 1979.

[15] J.-H. Xu, R. E. Skelton, and G. Zhu. Upper and lower covariance bounds for perturbed linear systems. IEEE Transactions on Automatic Control, 35(8):944-949, 1990.

[16] A. Yousuff and R. E. Skelton. A note on balanced controller reduction. IEEE Transactions on Automatic Control.
