
In the document R.L. Graham, La Jolla · B. Korte, Bonn (pages 72–77)


2 The property (c) of optimum solutions is often called complementary slackness.

3.4 Convex Hulls and Polytopes

In this section we collect some more facts on polytopes. In particular, we show that polytopes are precisely those sets that are the convex hull of a finite number of points. We start by recalling some basic definitions:

Definition 3.25. Given vectors x_1, …, x_k ∈ R^n and λ_1, …, λ_k ≥ 0 with ∑_{i=1}^k λ_i = 1, we call x = ∑_{i=1}^k λ_i x_i a convex combination of x_1, …, x_k. A set X ⊆ R^n is convex if λx + (1−λ)y ∈ X for all x, y ∈ X and λ ∈ [0, 1]. The convex hull conv(X) of a set X is defined as the set of all convex combinations of points in X. An extreme point of a set X is an element x ∈ X with x ∉ conv(X \ {x}).

So a set X is convex if and only if all convex combinations of points in X are again in X. The convex hull of a set X is the smallest convex set containing X. Moreover, the intersection of convex sets is convex. Hence polyhedra are convex.
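The definitions above are easy to experiment with numerically. The following sketch is our own illustration, not from the text (all function names are hypothetical): it forms convex combinations in R^2 and tests membership in the convex hull of three points by solving for barycentric coordinates.

```python
# Illustrative sketch (not from the text): convex combinations in R^2 and
# membership in conv({a, b, c}) via barycentric coordinates.

def convex_combination(points, lambdas):
    """Return sum_i lambda_i * x_i; lambdas must be >= 0 and sum to 1."""
    assert all(l >= 0 for l in lambdas) and abs(sum(lambdas) - 1.0) < 1e-9
    dim = len(points[0])
    return tuple(sum(l * p[j] for l, p in zip(lambdas, points)) for j in range(dim))

def in_triangle(p, a, b, c, eps=1e-9):
    """Is p in conv({a, b, c})?  Solve p = la*a + lb*b + lc*c, la+lb+lc = 1,
    by Cramer's rule on the resulting 2x2 system."""
    det = (b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])
    lb = ((p[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(p[1]-a[1])) / det
    lc = ((b[0]-a[0])*(p[1]-a[1]) - (p[0]-a[0])*(b[1]-a[1])) / det
    la = 1.0 - lb - lc
    return all(l >= -eps for l in (la, lb, lc))

a, b, c = (0.0, 0.0), (2.0, 0.0), (0.0, 2.0)
x = convex_combination([a, b, c], [0.2, 0.3, 0.5])   # a point of the hull
print(x, in_triangle(x, a, b, c), in_triangle((2.0, 2.0), a, b, c))
```

By Definition 3.25, every point produced by `convex_combination` must lie in the hull, while (2, 2) lies outside this triangle.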

Now we prove the “finite basis theorem for polytopes”, a fundamental result which seems to be obvious but is not trivial to prove directly:

Theorem 3.26. (Minkowski [1896], Steinitz [1916], Weyl [1935]) A set P is a polytope if and only if it is the convex hull of a finite set of points.

Proof: (Schrijver [1986]) Let P = {x ∈ R^n : Ax ≤ b} be a nonempty polytope. Obviously P = {x : (x, 1) ∈ C}, where C := {(x, λ) ∈ R^{n+1} : λ ≥ 0, Ax − λb ≤ 0}. C is a polyhedral cone, so by Theorem 3.24 it is generated by finitely many nonzero vectors, say by (x_1, λ_1), …, (x_k, λ_k). Since P is bounded, all λ_i are nonzero; after scaling we may assume λ_i = 1 for all i. So x ∈ P if and only if (x, 1) = ∑_{i=1}^k μ_i (x_i, 1) for some μ_1, …, μ_k ≥ 0; in particular ∑_{i=1}^k μ_i = 1. In other words, P is the convex hull of x_1, …, x_k.

Conversely, let P be the convex hull of finitely many points x_1, …, x_k ∈ R^n. Then x ∈ P if and only if (x, 1) ∈ C, where C is the cone generated by (x_1, 1), …, (x_k, 1). By Theorem 3.24, C is polyhedral, so x ∈ P is described by finitely many linear inequalities; hence P is a polyhedron. Since P is bounded, it is a polytope. □

Corollary 3.27. A polytope is the convex hull of its vertices.

Proof: Let P be a polytope. By Theorem 3.26, the convex hull of its vertices is a polytope Q. Obviously Q ⊆ P. Suppose there is a point z ∈ P \ Q. Then, by Theorem 3.23, there is a vector c with cz > max{cx : x ∈ Q}. The supporting hyperplane {x : cx = max{cy : y ∈ P}} of P defines a face of P containing no vertex. This is impossible by Corollary 3.9. □

The previous two and the following result are the starting point of polyhedral combinatorics; they will be used very often in this book. For a given ground set E and a subset X ⊆ E, the incidence vector of X (with respect to E) is defined as the vector x ∈ {0, 1}^E with x_e = 1 for e ∈ X and x_e = 0 for e ∈ E \ X.

62 3. Linear Programming

Corollary 3.28. Let (E, F) be a set system, P the convex hull of the incidence vectors of the elements of F, and c : E → R. Then max{cx : x ∈ P} = max{c(X) : X ∈ F}.

Proof: Since max{cx : x ∈ P} ≥ max{c(X) : X ∈ F} is trivial, let x be an optimum solution of max{cx : x ∈ P} (note that P is a polytope by Theorem 3.26).

By the definition of P, x is a convex combination of incidence vectors y_1, …, y_k of elements of F: x = ∑_{i=1}^k λ_i y_i for some λ_1, …, λ_k ≥ 0. Since cx = ∑_{i=1}^k λ_i cy_i, we have cy_i ≥ cx for at least one i ∈ {1, …, k}. This y_i is the incidence vector of a set Y ∈ F with c(Y) = cy_i ≥ cx. □
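Corollary 3.28 can be checked on a small instance: since a linear function attains its maximum over conv(...) at one of the generating points, maximizing cx over P reduces to scanning the incidence vectors themselves. The following sketch uses a ground set, set system, and weights of our own choosing.

```python
# Hypothetical example for Corollary 3.28; E, F, and c are our own choices.
E = [0, 1, 2, 3]
F = [{0, 1}, {1, 2, 3}, {0, 3}, set()]       # a small set system
c = {0: 5, 1: -2, 2: 4, 3: 1}                # weights c : E -> R

def incidence_vector(X):
    """The vector in {0,1}^E with a 1 in position e iff e is in X."""
    return tuple(1 if e in X else 0 for e in E)

# max{cx : x in P}: the maximum of a linear function over conv(...) is
# attained at a generating point, so scanning the incidence vectors suffices.
lhs = max(sum(c[e] * v for e, v in zip(E, incidence_vector(X))) for X in F)
rhs = max(sum(c[e] for e in X) for X in F)   # max{c(X) : X in F}
print(lhs, rhs)                               # the two maxima coincide
```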

Exercises

1. A set of vectors x_1, …, x_k is called affinely independent if there is no λ ∈ R^k \ {0} with ∑_{i=1}^k λ_i = 0 and ∑_{i=1}^k λ_i x_i = 0. Let ∅ ≠ X ⊆ R^n. Show that the maximum cardinality of an affinely independent set of elements of X equals dim X + 1.
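For Exercise 1, a convenient computational reformulation (our own sketch; the helper names are hypothetical) is that x_1, …, x_k are affinely independent if and only if the lifted vectors (x_i, 1) ∈ R^{n+1} are linearly independent, i.e. the k × (n+1) matrix of lifted rows has rank k.

```python
# Sketch: affine independence via the rank of the lifted vectors (x_i, 1).

def rank(rows, eps=1e-9):
    """Rank via Gaussian elimination with partial pivoting."""
    m = [list(r) for r in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = max(range(r, len(m)), key=lambda i: abs(m[i][col]), default=None)
        if piv is None or abs(m[piv][col]) < eps:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def affinely_independent(points):
    return rank([list(p) + [1.0] for p in points]) == len(points)

# In R^2 at most dim + 1 = 3 points can be affinely independent:
print(affinely_independent([(0, 0), (1, 0), (0, 1)]))          # a triangle
print(affinely_independent([(0, 0), (1, 0), (2, 0)]))          # collinear
print(affinely_independent([(0, 0), (1, 0), (0, 1), (1, 1)]))  # 4 points in R^2
```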

2. Let P be a polyhedron. Prove that the dimension of any facet of P is one less than the dimension of P.

3. Formulate the dual of the LP formulation (1.1) of the Job Assignment Problem. Show how to solve the primal and the dual LP in the case when there are only two jobs (by a simple algorithm).

4. Let G be a digraph, c : E(G) → R_+, E_1, E_2 ⊆ E(G), and s, t ∈ V(G). Consider the following linear program:

    min  ∑_{e ∈ E(G)} c(e) y_e
    s.t. y_e ≥ z_w − z_v    (e = (v, w) ∈ E(G))
         z_t − z_s = 1
         y_e ≥ 0            (e ∈ E_1)
         y_e ≤ 0            (e ∈ E_2).

Prove that there is an optimum solution (y, z) and s ∈ X ⊆ V(G) \ {t} with y_e = 1 for e ∈ δ⁺(X), y_e = −1 for e ∈ δ⁻(X) \ E_1, and y_e = 0 for all other edges e.

Hint: Consider the complementary slackness conditions for the edges entering or leaving {v ∈ V(G) : z_v ≤ z_s}.

5. Let Ax ≤ b be a linear inequality system in n variables. By multiplying each row by a positive constant we may assume that the first column of A is a vector with entries 0, −1 and 1 only. So we can write Ax ≤ b equivalently as

    a_i x̄ ≤ b_i              (i = 1, …, m_1),
    −x_1 + a_j x̄ ≤ b_j       (j = m_1 + 1, …, m_2),
    x_1 + a_k x̄ ≤ b_k        (k = m_2 + 1, …, m),

where x̄ = (x_2, …, x_n) and a_1, …, a_m are the rows of A without the first entry. Then one can eliminate x_1: Prove that Ax ≤ b has a solution if and only if the system

    a_i x̄ ≤ b_i                      (i = 1, …, m_1),
    a_j x̄ − b_j ≤ b_k − a_k x̄       (j = m_1 + 1, …, m_2, k = m_2 + 1, …, m)

has a solution. Show that this technique, when iterated, leads to an algorithm for solving a linear inequality system Ax ≤ b (or proving infeasibility).

Note: This method is known as Fourier-Motzkin elimination because it was proposed by Fourier and studied by Motzkin [1936]. One can prove that it is not a polynomial-time algorithm.
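A minimal sketch of the elimination step of Exercise 5 (our own code; the representation and function names are hypothetical): each constraint is stored as a pair (a, b) meaning a·x ≤ b, and eliminating the first variable combines every lower bound on x_1 with every upper bound, exactly as in the system above.

```python
# Sketch of Fourier-Motzkin elimination: eliminate the first variable,
# then iterate until no variables remain and only rows "0 <= b" are left.

def eliminate_first_var(constraints):
    """Eliminate x_1 from rows (a, b) meaning a . x <= b."""
    zero, lower, upper = [], [], []
    for a, b in constraints:
        if a[0] == 0:
            zero.append((a[1:], b))                      # x_1 does not occur
        elif a[0] < 0:
            s = -a[0]                                    # normalize: -x_1 + a'x <= b'
            lower.append(([v / s for v in a[1:]], b / s))
        else:
            s = a[0]                                     # normalize:  x_1 + a'x <= b'
            upper.append(([v / s for v in a[1:]], b / s))
    # combine a_j x - b_j <= x_1 (lower) with x_1 <= b_k - a_k x (upper):
    combined = [([lj + uk for lj, uk in zip(alo, aup)], blo + bup)
                for alo, blo in lower for aup, bup in upper]
    return zero + combined

def feasible(constraints, nvars):
    for _ in range(nvars):
        constraints = eliminate_first_var(constraints)
    return all(b >= 0 for _, b in constraints)           # only rows 0 <= b remain

# x >= 1, x <= 3, x + y <= 2, y >= 0  -> feasible (e.g. x = 1, y = 0)
sys1 = [([-1, 0], -1.0), ([1, 0], 3.0), ([1, 1], 2.0), ([0, -1], 0.0)]
# adding y >= 2 forces x <= 0, contradicting x >= 1 -> infeasible
sys2 = sys1 + [([0, -1], -2.0)]
print(feasible(sys1, 2), feasible(sys2, 2))
```

As the Note remarks, the number of constraints can grow quadratically per step, which is why iterated elimination is not a polynomial-time algorithm.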

6. Use Fourier-Motzkin elimination (Exercise 5) to prove Theorem 3.19 directly.

(Kuhn [1956])

7. Show that Theorem 3.19 implies the Duality Theorem 3.16.

8. Prove the decomposition theorem for polyhedra: Any polyhedron P can be written as P = {x + c : x ∈ X, c ∈ C}, where X is a polytope and C is a polyhedral cone.

(Motzkin [1936])

∗ 9. Let P be a rational polyhedron and F a face of P. Show that

    {c : cz = max{cx : x ∈ P} for all z ∈ F}

is a rational polyhedral cone.

10. Prove Carathéodory's theorem:

If X ⊆ R^n and y ∈ conv(X), then there are x_1, …, x_{n+1} ∈ X such that y ∈ conv({x_1, …, x_{n+1}}).

(Carathéodory [1911])
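Carathéodory's theorem can be illustrated numerically in R^2 (n = 2): a point in the convex hull of four points already lies in the hull of some three of them. The brute-force sketch below is our own illustration (not a proof, and the names are hypothetical).

```python
# Numeric illustration of Caratheodory's theorem in R^2: search for a
# triple of points whose convex hull contains p.
from itertools import combinations

def barycentric(p, a, b, c):
    """Coefficients (la, lb, lc) with p = la*a + lb*b + lc*c, la+lb+lc = 1."""
    det = (b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])
    if abs(det) < 1e-12:
        return None                      # degenerate (collinear) triple
    lb = ((p[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(p[1]-a[1])) / det
    lc = ((b[0]-a[0])*(p[1]-a[1]) - (p[0]-a[0])*(b[1]-a[1])) / det
    return (1.0 - lb - lc, lb, lc)

def caratheodory_triple(p, points, eps=1e-9):
    """Return (triple, coefficients) with p in conv(triple), if one exists."""
    for triple in combinations(points, 3):
        lam = barycentric(p, *triple)
        if lam is not None and all(l >= -eps for l in lam):
            return triple, lam
    return None

X = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]   # corners of a square
p = (1.0, 1.0)                                          # an interior point
triple, lam = caratheodory_triple(p, X)
print(triple, lam)
```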

11. Prove the following extension of Carathéodory's theorem (Exercise 10):

If X ⊆ R^n and y, z ∈ conv(X), then there are x_1, …, x_n ∈ X such that y ∈ conv({z, x_1, …, x_n}).

12. Prove that the extreme points of a polyhedron are precisely its vertices.

13. Let P be a nonempty polytope. Consider the graph G(P) whose vertices are the vertices of P and whose edges correspond to the 1-dimensional faces of P. Let x be any vertex of P, and c a vector with cx < max{cz : z ∈ P}. Prove that there is then a neighbour y of x in G(P) with cx < cy.

∗ 14. Use Exercise 13 to prove that G(P) is n-connected for any n-dimensional polytope P (n ≥ 1).

References

General Literature:

Chvátal, V. [1983]: Linear Programming. Freeman, New York 1983

Padberg, M. [1995]: Linear Optimization and Extensions. Springer, Berlin 1995

Schrijver, A. [1986]: Theory of Linear and Integer Programming. Wiley, Chichester 1986

Cited References:

Avis, D., and Chvátal, V. [1978]: Notes on Bland's pivoting rule. Mathematical Programming Study 8 (1978), 24–34

Bland, R.G. [1977]: New finite pivoting rules for the simplex method. Mathematics of Operations Research 2 (1977), 103–107

Borgwardt, K.-H. [1982]: The average number of pivot steps required by the simplex method is polynomial. Zeitschrift für Operations Research 26 (1982), 157–177

Carathéodory, C. [1911]: Über den Variabilitätsbereich der Fourierschen Konstanten von positiven harmonischen Funktionen. Rendiconto del Circolo Matematico di Palermo 32 (1911), 193–217

Dantzig, G.B. [1951]: Maximization of a linear function of variables subject to linear inequalities. In: Activity Analysis of Production and Allocation (T.C. Koopmans, ed.), Wiley, New York 1951, pp. 359–373

Dantzig, G.B., Orden, A., and Wolfe, P. [1955]: The generalized simplex method for minimizing a linear form under linear inequality restraints. Pacific Journal of Mathematics 5 (1955), 183–195

Farkas, G. [1894]: A Fourier-féle mechanikai elv alkalmazásai. Mathematikai és Természettudományi Értesítő 12 (1894), 457–472

Gale, D., Kuhn, H.W., and Tucker, A.W. [1951]: Linear programming and the theory of games. In: Activity Analysis of Production and Allocation (T.C. Koopmans, ed.), Wiley, New York 1951, pp. 317–329

Hoffman, A.J., and Kruskal, J.B. [1956]: Integral boundary points of convex polyhedra. In: Linear Inequalities and Related Systems; Annals of Mathematical Study 38 (H.W. Kuhn, A.W. Tucker, eds.), Princeton University Press, Princeton 1956, pp. 223–246

Klee, V., and Minty, G.J. [1972]: How good is the simplex algorithm? In: Inequalities III (O. Shisha, ed.), Academic Press, New York 1972, pp. 159–175

Kuhn, H.W. [1956]: Solvability and consistency for linear equations and inequalities. The American Mathematical Monthly 63 (1956), 217–232

Minkowski, H. [1896]: Geometrie der Zahlen. Teubner, Leipzig 1896

Motzkin, T.S. [1936]: Beiträge zur Theorie der linearen Ungleichungen (Dissertation). Azriel, Jerusalem 1936

von Neumann, J. [1947]: Discussion of a maximum problem. Working paper. Published in: John von Neumann, Collected Works; Vol. VI (A.H. Taub, ed.), Pergamon Press, Oxford 1963, pp. 27–28

Steinitz, E. [1916]: Bedingt konvergente Reihen und konvexe Systeme. Journal für die reine und angewandte Mathematik 146 (1916), 1–52

Weyl, H. [1935]: Elementare Theorie der konvexen Polyeder. Commentarii Mathematici Helvetici 7 (1935), 290–306

There are basically three types of algorithms for Linear Programming: the Simplex Algorithm (see Section 3.2), interior point algorithms, and the Ellipsoid Method.

Each of these has a disadvantage: in contrast to the other two, no variant of the Simplex Algorithm has so far been shown to have a polynomial running time.

In Sections 4.4 and 4.5 we present the Ellipsoid Method and prove that it leads to a polynomial-time algorithm for Linear Programming. However, the Ellipsoid Method is too inefficient to be used in practice. Interior point algorithms and, despite its exponential worst-case running time, the Simplex Algorithm are far more efficient, and both are used in practice to solve LPs. In fact, both the Ellipsoid Method and interior point algorithms can be used for more general convex optimization problems, e.g. for so-called semidefinite programming problems. We shall not go into details here.

An advantage of the Simplex Algorithm and the Ellipsoid Method is that they do not require the LP to be given explicitly. It suffices to have an oracle (a subroutine) which decides whether a given vector is feasible and, if not, returns a violated constraint. We shall discuss this in detail with respect to the Ellipsoid Method in Section 4.6, because it implies that many combinatorial optimization problems can be solved in polynomial time; for some problems this is in fact the only known way to show polynomial solvability. This is the reason why we discuss the Ellipsoid Method, but not interior point algorithms, in this book.

A prerequisite for polynomial-time algorithms is that there exists an optimum solution that has a binary representation whose length is bounded by a polynomial in the input size. We prove this in Section 4.1. In Sections 4.2 and 4.3 we review some basic algorithms needed later, including the well-known Gaussian elimination method for solving systems of equations.
