
7.5 Fast Sampling

Let us now turn to the special case of sampled data systems. In this case, according to (2.12) the discrete time solution xu(n, x0) represents the continuous time solution ϕ(t, 0, x0, v) at the sampling times t = nT. In this setting, it is natural to define the optimization horizon not in terms of the discrete time variable n but in terms of the continuous time t. Fixing an optimization horizon Topt > 0 and picking a sampling period T > 0, where we assume for simplicity of exposition that Topt is an integer multiple of T, the discrete time optimization horizon becomes N = Topt/T, cf. also Sect. 3.5.
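The following minimal sketch (in Python, with assumed sampling periods; Topt = 5 is the value used in the numerical illustrations later in this section) simply spells out this relation: the faster we sample, the longer the discrete time horizon becomes.

```python
# For fixed Topt, the discrete time optimization horizon is N = Topt / T,
# so halving the sampling period doubles N.
Topt = 5.0                        # continuous time optimization horizon
for T in (1.0, 0.5, 0.1, 0.05):   # assumed sampling periods dividing Topt
    N = round(Topt / T)           # discrete time optimization horizon
    print(f"T = {T:4.2f}  ->  N = {N:3d}")
```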

Having introduced this notation, an interesting question is what happens to stability and performance of the NMPC closed loop if we keep Topt fixed but vary the sampling period T. In particular, it is interesting to see what happens if we sample faster and faster, i.e., if we let T → 0. Clearly, in a practical NMPC implementation we cannot reduce T arbitrarily because we need some time for solving the optimal control problem (OCPN) or its variants online. Still, in particular in the case of zero order hold it is often desirable to sample as fast as possible in order to approximate the ideal continuous time control signal as well as possible, cf., e.g., the paper of Nešić and Teel [26], and thus one would like to make sure that this does not have negative effects on the stability and performance of the closed loop.

In the case of the equilibrium endpoint constraint from Sect. 5.2 it is immediately clear that the stability result itself does not depend on T; however, the feasible set XN may change with T. In the case of zero order hold, i.e., when the continuous time control function v is constant on each sampling interval [nT, (n+1)T), cf. the discussion after Theorem 2.7, it is easily seen that each trajectory for sampling period T is also a trajectory for sampling period T/k for each k ∈ N. Hence, the feasible set XkN for sampling period T/k always contains the feasible set XN for sampling period T, i.e., the feasible set cannot shrink for k → ∞, and hence for sampling period T/k we obtain at least the same stability properties as for sampling period T.
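This refinement argument is easy to check numerically. The sketch below (an assumed scalar linear example, not from the text) builds a zero order hold control sequence for sampling period T, reuses the same values for period T/k, and confirms that the two sampled-data trajectories coincide at the original sampling instants.

```python
# Assumed example: scalar linear system dx/dt = a*x + b*u under zero order hold.
import numpy as np

a, b = -0.5, 1.0

def zoh_step(x, u, h):
    """Exact solution of dx/dt = a*x + b*u over an interval of length h
    with u held constant (zero order hold), for a != 0."""
    return np.exp(a * h) * x + (np.exp(a * h) - 1.0) / a * b * u

def simulate(x0, u_seq, h):
    x, traj = x0, [x0]
    for u in u_seq:
        x = zoh_step(x, u, h)
        traj.append(x)
    return np.array(traj)

T, k = 0.5, 4                              # original and refined sampling periods
u_T = [1.0, -0.3, 0.7, 0.0]                # admissible ZOH sequence for period T
u_Tk = [u for u in u_T for _ in range(k)]  # the same signal, sampled with T/k

x_T = simulate(1.0, u_T, T)
x_Tk = simulate(1.0, u_Tk, T / k)
print(np.max(np.abs(x_T - x_Tk[::k])))     # ~1e-16: identical at the times nT
```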

In the case of Lyapunov function terminal costs F as discussed in Sect. 5.3, either the terminal costs or the running costs need to be adjusted to the sampling period T in order to ensure that Assumption 5.9 remains valid. One way to achieve this is to choose a running cost ℓ in integral form (3.4) and the terminal cost F such that the following condition holds: for each x ∈ X0 and some T0 > 0 there exists a continuous time control v satisfying ϕ(t, 0, x, v) ∈ X0 and

F(ϕ(t, 0, x, v)) − F(x) ≤ −∫_0^t L(ϕ(τ, 0, x, v), v(τ)) dτ    (7.8)

for all t ∈ [0, T0], cf. also Findeisen [9, Sect. 4.4.2]. Under this condition one easily checks that Assumption 5.9 holds for ℓ from (3.4) and all T ≤ T0, provided the control function v in (7.8) is of the form v|[nT, (n+1)T)(t) = u(n)(t) for an admissible discrete time control sequence u(·) with u(n) ∈ U. If U = L∞([0, T], Rm), then this last condition is not a restriction, but if we use some smaller space for U (as in the case of zero order hold, cf. the discussion after Theorem 2.7), then this may be more difficult to achieve; see also [9, Remark 4.7].
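For a concrete feeling for condition (7.8), the following sketch checks it numerically on an assumed toy example (system, feedback, and costs are illustrations, not from the text): for dx/dt = v with feedback v(t) = −x(t) we have ϕ(t, 0, x, v) = x e^{−t}; with continuous time running cost L(x, v) = x² + v² and terminal cost F(x) = c x², a short computation shows that (7.8) holds exactly for c ≥ 1 (with equality for c = 1), so we use c = 2 to leave a numerical margin.

```python
import numpy as np

def check_78(x, t, c=2.0, n=20_000):
    """Numerically verify F(phi(t,0,x,v)) - F(x) <= -integral_0^t L(phi, v) dtau
    for the assumed example dx/dt = v, v = -x, L(x,v) = x^2 + v^2, F(x) = c*x^2."""
    tau = np.linspace(0.0, t, n)
    phi = x * np.exp(-tau)                      # closed-loop trajectory
    v = -phi                                    # control along the trajectory
    lhs = c * phi[-1] ** 2 - c * x ** 2         # F(phi(t,0,x,v)) - F(x)
    running = phi ** 2 + v ** 2                 # L(phi(tau), v(tau))
    rhs = -np.sum(0.5 * (running[:-1] + running[1:]) * np.diff(tau))  # -int_0^t L
    return lhs <= rhs

print(all(check_78(x, t) for x in (0.5, 2.0, -3.0) for t in (0.1, 1.0, 5.0)))  # True
```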

Since the schemes from Chap. 6 do not use stabilizing terminal constraints X0 and terminal costs F, the difficulties just discussed vanish. However, the price to pay for this simplification is that the analysis of the effect of small sampling periods, which we present in the remainder of this section, is somewhat more complicated.

Fixing Topt and letting T → 0 we obtain N = Topt/T → ∞. Looking at Theorem 6.21, this is obviously a good feature, because this theorem states that the larger N becomes, the better the performance will be. However, we cannot directly apply this theorem because we have to take into account that β in the Controllability Assumption 6.4 will also depend on T.

In order to facilitate the analysis, let us assume that in our discrete time NMPC formulation we use a running cost ℓ that only takes the states ϕ(nT, 0, x0, v) at the sampling instants and the respective control values into account.¹ For the continuous time system, the controllability assumption can then be formulated in analogy to the discrete time Assumption 6.4. We denote the set of admissible continuous time control functions (in analogy to the discrete time notation) by Vτ(x). More precisely, for the admissible discrete time control values U(x) ⊆ U ⊆ L∞([0, T], Rm) (recall that these “values” are actually functions on [0, T], cf. the discussion after Theorem 2.7) and any τ > 0 we define

Vτ(x) := { v ∈ L∞([0, τ], Rm) | there exists u ∈ UN(x) with N ≥ τ/T + 1 such that u(n) = v|[nT, (n+1)T)(· + nT) holds for all n ∈ N0 with nT < τ }.

Then, the respective assumption reads as follows.

Assumption 7.3 We assume that the continuous time system is asymptotically controllable with respect to ℓ with rate β ∈ KL0, i.e., for each x ∈ X and each τ > 0 there exists an admissible control function vx ∈ Vτ(x) satisfying

ℓ(ϕ(t, 0, x, vx), vx(t)) ≤ β(ℓ*(x), t)   for all t ∈ [0, τ].

For the discrete time system (2.8) satisfying (2.12) the Controllability Assumption 7.3 translates to the discrete time Assumption 6.4 as

ℓ(xux(n, x), ux(n)) ≤ β(ℓ*(x), nT).
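As an aside on the notation used above, the next small sketch (with an assumed control function and assumed numbers) makes the chopping convention behind Vτ(x) concrete: a continuous time control v on [0, τ] is cut into discrete time “control values” u(n), each a function on [0, T], via u(n) = v|[nT, (n+1)T)(· + nT), and evaluating a continuous time bound at t = nT is exactly what yields the discrete time bound above.

```python
# Assumed example of the chopping convention u(n)(s) = v(s + n*T).
T, tau = 0.5, 2.0
v = lambda t: (1.0 - t) ** 2      # some continuous time control on [0, tau]

def u(n):
    """Discrete time control value u(n): a function on [0, T]."""
    return lambda s: v(s + n * T)

n, s = 2, 0.3
print(u(n)(s), v(n * T + s))      # identical values on the n-th sampling interval
```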

¹ Integral costs (3.4) can be treated, too, but this is somewhat more technical, cf. Grüne, von Lossow, Pannek and Worthmann [21, Sect. 4.2].


Fig. 7.5 Suboptimality index α from (6.19) for fixed Topt and varying sampling period T

In the special case of exponential controllability, β in Assumption 7.3 is of the form

β(r, t) = C e^{−λt} r    (7.9)

for C ≥ 1 and λ > 0. Thus, for the discrete time system, the Controllability Assumption 6.4 becomes

ℓ(xux(n, x), ux(n)) ≤ C e^{−λnT} ℓ*(x) = C (e^{−λT})^n ℓ*(x)

and we obtain a KL0-function of type (6.3) with C from (7.9) and σ = e^{−λT}.
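The following short sketch (the sampling periods are assumed example values; C = 2 and λ = 1 are the values used in the illustration below) simply evaluates this translation and shows that σ approaches 1 as the sampling period shrinks.

```python
# Sampling beta(r, t) = C*exp(-lam*t)*r at t = n*T gives the discrete bound
# C*sigma**n*r with sigma = exp(-lam*T); sigma tends to 1 as T -> 0.
import math

C, lam = 2.0, 1.0
for T in (1.0, 0.1, 0.01, 0.001):   # assumed sampling periods
    sigma = math.exp(-lam * T)
    print(f"T = {T:6.3f}  sigma = {sigma:.6f}")
```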

Summarizing, if we change the sampling period T, then not only the discrete time optimization horizon N but also the decay rate σ in the exponential controllability property will change; more precisely, we have σ → 1 as T → 0. When evaluating (6.19) with the resulting values

γ_k = ∑_{j=0}^{k−1} C e^{−λjT},

it turns out that the convergence σ → 1 counteracts the positive effect of the growing optimization horizons N → ∞. In fact, the negative effect of σ → 1 is so strong that α diverges to −∞ as T → 0. Figure 7.5 illustrates this fact (which can also be proven rigorously, cf. [21]) for C = 2, λ = 1 and Topt = 5.
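The sketch below reproduces this computation qualitatively. Two caveats: the closed-form expression for α is written down from our reading of Theorem 6.21 and should be checked against (6.19) in the book, and for much smaller sampling periods than those listed the products overflow double precision, so one would work with logarithms instead. The parameters C = 2, λ = 1 and Topt = 5 are those quoted in the text.

```python
import math

def alpha_from_gammas(gammas):
    """Suboptimality index; assumed closed form of (6.19)/Theorem 6.21:
    alpha_N = 1 - (gamma_N - 1) * prod_{i=2..N}(gamma_i - 1)
                  / (prod_{i=2..N} gamma_i - prod_{i=2..N}(gamma_i - 1)).
    gammas[k-1] holds gamma_k for k = 1, ..., N."""
    prod_g = math.prod(gammas[1:])
    prod_gm1 = math.prod(g - 1.0 for g in gammas[1:])
    return 1.0 - (gammas[-1] - 1.0) * prod_gm1 / (prod_g - prod_gm1)

def gammas_exponential(C, lam, T, N):
    """gamma_k = sum_{j=0}^{k-1} C * exp(-lam*j*T) for k = 1, ..., N."""
    terms = [C * math.exp(-lam * j * T) for j in range(N)]
    return [sum(terms[:k]) for k in range(1, N + 1)]

C, lam, Topt = 2.0, 1.0, 5.0
for T in (1.0, 0.5, 0.25, 0.1, 0.05):
    N = round(Topt / T)
    alpha = alpha_from_gammas(gammas_exponential(C, lam, T, N))
    print(f"T = {T:4.2f}  N = {N:3d}  alpha = {alpha:7.3f}")  # decreases as T -> 0
```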

This means that whenever we choose the sampling period T > 0 too small, then performance may deteriorate and eventually instability may occur. This predicted behavior is not consistent with observations in numerical examples. How can this be explained?

The answer lies in the fact that our stability and performance estimate is only tight for one particular system in the class of systems satisfying Assumption 6.4, cf. Theorem 6.23 and the discussion preceding this theorem, and not for the whole class.

In particular, the subclass of sampled data systems satisfying Assumption 6.4 may well behave better than general systems. Thus, we may try to identify the decisive property which makes sampled data systems behave better and try to incorporate this property into our computation of α.

Fig. 7.6 α for fixed Topt and varying sampling period T without Assumption 7.4 (lower graphs) and with Assumption 7.4 (upper graphs), with L = 2 (left) and L = 10 (right)

To this end, note that so far we have not imposed any continuity properties of f in (2.1). Sampled data systems, however, are governed by differential equations (2.6) for which we have assumed Lipschitz continuity in Assumption 2.4. Let us assume for simplicity of exposition that the Lipschitz constant in this assumption is independent of r. Then, for a large class of running costs ℓ the following property for the continuous time system can be concluded from Gronwall's Lemma; see [21] for details.

Assumption 7.4 There exists a constant L > 0 such that for each x ∈ X and each τ > 0 there exists an admissible control function vx ∈ Vτ(x) satisfying

ℓ(ϕ(t, 0, x, vx), vx(t)) ≤ e^{Lt} ℓ*(x)   for all t ∈ [0, τ].
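A toy numerical check of this Gronwall-type bound (system, running cost and constants are assumptions made purely for illustration): for dx/dt = sin(x) + v with v ≡ 0, the right-hand side is Lipschitz in x with constant L = 1 and vanishes at x = 0, so Gronwall's Lemma gives |ϕ(t, 0, x, 0)| ≤ e^{Lt}|x|; with ℓ(x, v) = |x| + |v|, hence ℓ*(x) = |x|, this is exactly the bound of Assumption 7.4.

```python
import math

def phi(x0, t, steps=20_000):
    """Explicit Euler approximation of dx/dt = sin(x), x(0) = x0 (v = 0)."""
    x, h = x0, t / steps
    for _ in range(steps):
        x += h * math.sin(x)
    return x

L = 1.0  # Lipschitz constant of sin
for x0 in (0.5, 2.0, -1.5):
    for t in (0.5, 1.0, 2.0):
        assert abs(phi(x0, t)) <= math.exp(L * t) * abs(x0) + 1e-6
print("Assumption 7.4 bound holds on the toy example")
```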

The estimates on ℓ induced by this assumption can now be incorporated into the analysis in Chap. 6. As a result, the values γ_k in Formula (6.19) change to

γ_k = min{ ∑_{j=0}^{k−1} C e^{−λjT}, ∑_{j=0}^{k−1} e^{LjT} }.

The effect of this change is clearly visible in Fig. 7.6. The α-values from (6.19) no longer diverge to −∞ but rather converge to a finite (and, for the chosen parameters, also positive) value as T → 0. Again, this convergence behavior can be rigorously proved; for details we refer to [21].
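A companion sketch to Fig. 7.6, under the same caveat as before that the closed form of (6.19) is written from our reading of Theorem 6.21: with the modified γ_k the computed α stays bounded as T shrinks instead of diverging. C = 2, λ = 1 and Topt = 5 are from the text; the Lipschitz constant Lip = 2 is meant to correspond to the case L = 2 of the left plot.

```python
import math

def alpha_from_gammas(gammas):
    """Assumed closed form of (6.19)/Theorem 6.21, cf. the earlier sketch."""
    prod_g = math.prod(gammas[1:])
    prod_gm1 = math.prod(g - 1.0 for g in gammas[1:])
    return 1.0 - (gammas[-1] - 1.0) * prod_gm1 / (prod_g - prod_gm1)

def gammas_with_lipschitz(C, lam, Lip, T, N):
    """gamma_k = min( sum_j C*exp(-lam*j*T), sum_j exp(Lip*j*T) ), j = 0..k-1."""
    g, s_exp, s_lip = [], 0.0, 0.0
    for j in range(N):
        s_exp += C * math.exp(-lam * j * T)   # controllability-based bound
        s_lip += math.exp(Lip * j * T)        # Gronwall-based bound (Assumption 7.4)
        g.append(min(s_exp, s_lip))
    return g

C, lam, Lip, Topt = 2.0, 1.0, 2.0, 5.0
for T in (1.0, 0.5, 0.25, 0.1, 0.05):
    N = round(Topt / T)
    alpha = alpha_from_gammas(gammas_with_lipschitz(C, lam, Lip, T, N))
    print(f"T = {T:4.2f}  N = {N:3d}  alpha = {alpha:7.3f}")  # stays bounded
```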


