
1.4 What Is Not Covered in This Book?

The area of NMPC has grown so rapidly over the last two decades that it is virtually impossible to cover all developments in detail. In order not to overload this book, we have decided to omit several topics, despite the fact that they are certainly important and useful in a variety of applications. We end this introduction by giving a brief overview of some of these topics.

For this book, we decided to concentrate on NMPC schemes with online optimization only, thus leaving out all approaches in which part of the optimization is carried out offline. Some of these methods, which can be based on both infinite horizon and finite horizon optimal control and are often termed explicit MPC, are briefly discussed in Sects. 3.5 and 4.4. Furthermore, we will not discuss special classes of nonlinear systems like, e.g., piecewise linear systems often considered in the explicit MPC literature.

Regarding robustness of NMPC controllers under perturbations, we have restricted our attention to schemes in which the optimization is carried out for a nominal model, i.e., in which the perturbation is not explicitly taken into account in the optimization objective, cf. Sects. 8.5–8.9. Some variants of model predictive control in which the perturbation is explicitly taken into account, like min–max MPC schemes building on game theoretic ideas or tube based MPC schemes relying on set oriented methods, are briefly discussed in Sect. 8.10.

An emerging and currently strongly growing field is that of distributed NMPC schemes, in which the optimization in each NMPC step is carried out locally in a number of subsystems instead of using a centralized optimization. Again, this is a topic which is not covered in this book, and we refer to, e.g., Rawlings and Mayne [23, Chap. 6] and the references therein for more information.

At the very heart of each NMPC algorithm is a mathematical model of the system's dynamics, which leads to the discrete time dynamics f in (1.1). While we will explain in detail in Sect. 2.2 and Chap. 9 how to obtain such a discrete time model from a differential equation, we will not address the question of how to obtain a suitable differential equation or how to identify the parameters in this model. Both modeling and parameter identification are serious problems in their own right which cannot be covered in this book. It should, however, be noted that optimization methods similar to those used in NMPC can also be used for parameter identification; see, e.g., Schittkowski [26].
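To illustrate this connection, the following is a minimal sketch, not taken from the book or from [26], of parameter identification by least-squares fitting of a differential equation model to measured data; the scalar model, the synthetic data and all names used here are illustrative assumptions.

```python
# Illustrative sketch: fit the parameters p of dx/dt = -p[0]*x + p[1]
# to measured data by nonlinear least squares (assumed example model).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model(t, x, p):
    # simple scalar example dynamics
    return -p[0] * x + p[1]

def residuals(p, t_meas, y_meas, x0):
    # simulate the model with the current parameter guess ...
    sol = solve_ivp(model, (t_meas[0], t_meas[-1]), [x0],
                    t_eval=t_meas, args=(p,))
    # ... and return the deviation from the measurements
    return sol.y[0] - y_meas

# synthetic measurement data (in practice: recorded process data);
# the underlying "true" parameters here are p = [0.8, 0.4]
t_meas = np.linspace(0.0, 5.0, 50)
y_meas = 2.0 * np.exp(-0.8 * t_meas) + 0.5 * (1.0 - np.exp(-0.8 * t_meas))

p_hat = least_squares(residuals, x0=[1.0, 1.0],
                      args=(t_meas, y_meas, y_meas[0])).x
print("estimated parameters:", p_hat)
```

The structure mirrors an NMPC step: a model is simulated and a deviation measure is minimized, only here over model parameters rather than over future control inputs.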

A somewhat related problem stems from the fact that NMPC inevitably leads to a feedback law in which the full state x(n) needs to be measured in order to evaluate the feedback law, i.e., a state feedback law. In most applications, this information is not available; instead, only output information y(n) = h(x(n)) for some output map h is at hand. This implies that the state x(n) must be reconstructed from the output y(n) by means of a suitable observer. While there is a variety of different techniques for this purpose, it is interesting to note that an idea very similar to NMPC can be used here: in the so-called moving horizon state estimation approach the state is estimated by iteratively solving optimization problems over a moving time horizon, analogous to the repeated minimization of J(x(n), u(·)) described above. However, instead of minimizing the future deviations of the predictions from the reference value, here the past deviations of the trajectory from the measured output values are minimized. More information on this topic can be found, e.g., in Rawlings and Mayne [23, Chap. 4] and the references therein.
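As a rough illustration of this idea, the following minimal sketch, not from the book, estimates the current state of a simple discrete time system x(n+1) = f(x(n), u(n)) with output y(n) = h(x(n)) by minimizing the past output mismatch over a moving window. The example dynamics, the window length and all function names are illustrative assumptions, and refinements such as an arrival cost or constraints are omitted.

```python
# Illustrative moving horizon state estimation sketch for an assumed
# two-dimensional example system with a scalar measured output.
import numpy as np
from scipy.optimize import minimize

def f(x, u):
    # example dynamics: lightly damped oscillator with scalar input
    A = np.array([[0.99, 0.1], [-0.1, 0.99]])
    B = np.array([0.0, 0.1])
    return A @ x + B * u

def h(x):
    # only the first state component is measured
    return x[0]

def mhe_cost(x0_guess, u_past, y_past):
    # simulate the model forward from the candidate initial state of the
    # window and accumulate the squared output mismatch over the past horizon
    x = np.array(x0_guess)
    cost = (h(x) - y_past[0]) ** 2
    for k in range(len(u_past)):
        x = f(x, u_past[k])
        cost += (h(x) - y_past[k + 1]) ** 2
    return cost

def estimate_state(u_past, y_past, x0_init):
    # minimize the past output deviations over the initial state of the window,
    # then roll the optimal trajectory forward to the current time
    res = minimize(mhe_cost, x0_init, args=(u_past, y_past))
    x = res.x
    for u in u_past:
        x = f(x, u)
    return x

# usage: reconstruct the state from a short window of noisy measurements
rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.0])
u_past, y_past = [], [h(x_true)]
for n in range(10):
    u = 0.1
    x_true = f(x_true, u)
    u_past.append(u)
    y_past.append(h(x_true) + 0.01 * rng.standard_normal())
print("estimated current state:", estimate_state(u_past, y_past, np.zeros(2)))
print("true current state:     ", x_true)
```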

References

1. Alamir, M., Bornard, G.: Stability of a truncated infinite constrained receding horizon scheme: the general discrete nonlinear case. Automatica 31(9), 1353–1356 (1995)
2. Bellman, R.: Dynamic Programming. Princeton University Press, Princeton (1957). Reprinted in 2010
3. Bitmead, R.R., Gevers, M., Wertz, V.: Adaptive Optimal Control. The Thinking Man's GPC. International Series in Systems and Control Engineering. Prentice Hall, New York (1990)
4. Chen, C.C., Shaw, L.: On receding horizon feedback control. Automatica 18(3), 349–352 (1982)
5. Chen, H., Allgöwer, F.: Nonlinear model predictive control schemes with guaranteed stability. In: Berber, R., Kravaris, C. (eds.) Nonlinear Model Based Process Control, pp. 465–494. Kluwer Academic, Dordrecht (1999)
6. Cutler, C.R., Ramaker, B.L.: Dynamic matrix control—a computer control algorithm. In: Proceedings of the Joint Automatic Control Conference, pp. 13–15 (1980)
7. De Nicolao, G., Magni, L., Scattolini, R.: Stabilizing nonlinear receding horizon control via a nonquadratic terminal state penalty. In: CESA'96 IMACS Multiconference: Computational Engineering in Systems Applications, Lille, France, pp. 185–187 (1996)
8. De Nicolao, G., Magni, L., Scattolini, R.: Stabilizing receding-horizon control of nonlinear time-varying systems. IEEE Trans. Automat. Control 43(7), 1030–1036 (1998)
9. Fontes, F.A.C.C.: A general framework to design stabilizing nonlinear model predictive controllers. Systems Control Lett. 42(2), 127–143 (2001)
10. García, C.E., Prett, D.M., Morari, M.: Model predictive control: Theory and practice—a survey. Automatica 25(3), 335–348 (1989)
11. Grimm, G., Messina, M.J., Tuna, S.E., Teel, A.R.: Model predictive control: for want of a local control Lyapunov function, all is not lost. IEEE Trans. Automat. Control 50(5), 546–558 (2005)
12. Jadbabaie, A., Hauser, J.: On the stability of receding horizon control with a general terminal cost. IEEE Trans. Automat. Control 50(5), 674–678 (2005)
13. Keerthi, S.S., Gilbert, E.G.: Optimal infinite-horizon feedback laws for a general class of constrained discrete-time systems: stability and moving-horizon approximations. J. Optim. Theory Appl. 57(2), 265–293 (1988)
14. Lee, E.B., Markus, L.: Foundations of Optimal Control Theory. Wiley, New York (1967)
15. Maciejowski, J.M.: Predictive Control with Constraints. Prentice Hall, New York (2002)
16. Magni, L., Sepulchre, R.: Stability margins of nonlinear receding-horizon control via inverse optimality. Systems Control Lett. 32(4), 241–245 (1997)
17. Mayne, D.Q., Michalska, H.: Receding horizon control of nonlinear systems. IEEE Trans. Automat. Control 35(7), 814–824 (1990)
18. Mayne, D.Q., Rawlings, J.B., Rao, C.V., Scokaert, P.O.M.: Constrained model predictive control: Stability and optimality. Automatica 36(6), 789–814 (2000)
19. Parisini, T., Zoppoli, R.: A receding-horizon regulator for nonlinear systems and a neural approximation. Automatica 31(10), 1443–1451 (1995)
20. Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Translated by D.E. Brown. Pergamon/Macmillan Co., New York (1964)
21. Propoĭ, A.I.: Application of linear programming methods for the synthesis of automatic sampled-data systems. Avtom. Telemeh. 24, 912–920 (1963)
22. Qin, S.J., Badgwell, T.A.: A survey of industrial model predictive control technology. Control Eng. Pract. 11, 733–764 (2003)
23. Rawlings, J.B., Mayne, D.Q.: Model Predictive Control: Theory and Design. Nob Hill Publishing, Madison (2009)
24. del Re, L., Allgöwer, F., Glielmo, L., Guardiola, C., Kolmanovsky, I. (eds.): Automotive Model Predictive Control—Models, Methods and Applications. Lecture Notes in Control and Information Sciences, vol. 402. Springer, Berlin (2010)
25. Richalet, J., Rault, A., Testud, J.L., Papon, J.: Model predictive heuristic control: Applications to industrial processes. Automatica 14, 413–428 (1978)
26. Schittkowski, K.: Numerical Data Fitting in Dynamical Systems. Applied Optimization, vol. 77. Kluwer Academic, Dordrecht (2002)
