Linear Programming

Instance: A matrix A ∈ R^{m×n} and column vectors b ∈ R^m, c ∈ R^n.

Task: Find a column vector x ∈ R^n such that Ax ≤ b and cx is maximum, decide that {x ∈ R^n : Ax ≤ b} is empty, or decide that for all α ∈ R there is an x ∈ R^n with Ax ≤ b and cx > α.

A linear program (LP) is an instance of the above problem. We often write a linear program as max{cx : Ax ≤ b}. A feasible solution of an LP max{cx : Ax ≤ b} is a vector x with Ax ≤ b. A feasible solution attaining the maximum is called an optimum solution.

Here cx denotes the scalar product of the vectors. The notation x ≤ y for vectors x and y (of equal size) means that the inequality holds in each component. If no sizes are specified, the matrices and vectors are always assumed to be compatible in size. We often omit indicating the transposition of column vectors and write e.g. cx for the scalar product.

As the problem formulation indicates, there are two possibilities when an LP has no solution: the problem can be infeasible (i.e. P := {x ∈ R^n : Ax ≤ b} = ∅) or unbounded (i.e. for all α ∈ R there is an x ∈ P with cx > α). If an LP is neither infeasible nor unbounded it has an optimum solution, as we shall prove in Section 3.2. This justifies the notation max{cx : Ax ≤ b} instead of sup{cx : Ax ≤ b}.
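To make the three outcomes concrete, here is a small sketch (ours, not from the text) that feeds an LP of the form max{cx : Ax ≤ b} to SciPy's linprog routine; since linprog minimizes, the objective is passed as −c, and the returned status code distinguishes optimal, infeasible and unbounded instances.

```python
# Illustrative sketch (assumes SciPy is available): max{cx : Ax <= b}.
# linprog minimizes, so we negate c; variables are free (no sign constraints).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])   # x1 + x2 <= 4, -x1 <= 0, -x2 <= 0
b = np.array([4.0, 0.0, 0.0])
c = np.array([1.0, 2.0])

res = linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * len(c), method="highs")
if res.status == 0:
    print("optimum solution x =", res.x, "with cx =", -res.fun)
elif res.status == 2:
    print("infeasible: {x : Ax <= b} is empty")
elif res.status == 3:
    print("unbounded: cx exceeds every alpha over the feasible set")
```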

Many combinatorial optimization problems can be formulated as LPs. To do this, we encode the feasible solutions as vectors in R^n for some n. In Section 3.4 we show that one can optimize a linear objective function over a finite set S of vectors by solving a linear program. Although the feasible set of this LP contains not only the vectors in S but also all their convex combinations, one can show that among the optimum solutions there is always an element of S.

In Section 3.1 we compile some terminology and basic facts about polyhedra, the sets P = {x ∈ R^n : Ax ≤ b} of feasible solutions of LPs. In Section 3.2 we present the Simplex Algorithm, which we also use to derive the Duality Theorem and related results (Section 3.3). LP duality is a most important concept which explicitly or implicitly appears in almost all areas of combinatorial optimization;

we shall often refer to the results in Sections 3.3 and 3.4.

3.1 Polyhedra

Linear Programming deals with maximizing or minimizing a linear objective function of finitely many variables subject to finitely many linear inequalities. So the set of feasible solutions is the intersection of finitely many halfspaces. Such a set is called a polyhedron:

Definition 3.1. A polyhedron in R^n is a set of type P = {x ∈ R^n : Ax ≤ b} for some matrix A ∈ R^{m×n} and some vector b ∈ R^m. If A and b are rational, then P is a rational polyhedron. A bounded polyhedron is also called a polytope.
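For example, the unit square [0,1]^2 ⊆ R^2 is the rational polytope {x ∈ R^2 : −x_1 ≤ 0, −x_2 ≤ 0, x_1 ≤ 1, x_2 ≤ 1}; here A is the 4×2-matrix with rows −e_1, −e_2, e_1, e_2 and b = (0, 0, 1, 1).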

We denote by rank(A) the rank of a matrix A. The dimension dim X of a nonempty set X ⊆ R^n is defined to be

n − max{rank(A) : A is an n×n-matrix with Ax = Ay for all x, y ∈ X}.
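As a numerical illustration of this definition (our sketch, not part of the text): for a finite set X of points, a matrix A satisfies Ax = Ay for all x, y ∈ X exactly when its rows are orthogonal to every difference x − y, so the maximum rank of such an A is n minus the rank of the differences; hence dim X equals the rank of the matrix of differences to a fixed base point.

```python
# Sketch (ours): dimension of a finite point set, following
# dim X = n - max{rank(A) : Ax = Ay for all x, y in X}.
# Rows of such an A are orthogonal to all differences x - y, so
# dim X equals the rank of the matrix of differences.
import numpy as np

def dim_of_point_set(points: np.ndarray) -> int:
    """points: one point per row; returns the dimension of the set."""
    if len(points) < 2:
        return 0
    diffs = points[1:] - points[0]           # differences to a fixed base point
    return int(np.linalg.matrix_rank(diffs))

# (0,0,0), (1,0,0), (0,1,0) in R^3 span a set of dimension 2:
print(dim_of_point_set(np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]])))  # -> 2
```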

A polyhedron P ⊆ R^n is called full-dimensional if dim P = n.

Equivalently, a polyhedron is full-dimensional if and only if there is a point in its interior. For most of this chapter it makes no difference whether we are in the rational or real space. We need the following standard terminology:

Definition 3.2. Let P := {x : Ax ≤ b} be a nonempty polyhedron. If c is a nonzero vector for which δ := max{cx : x ∈ P} is finite, then {x : cx = δ} is called a supporting hyperplane of P. A face of P is P itself or the intersection of P with a supporting hyperplane of P. A point x for which {x} is a face is called a vertex of P, and also a basic solution of the system Ax ≤ b.

Proposition 3.3. Let P = {x : Ax ≤ b} be a polyhedron and F ⊆ P. Then the following statements are equivalent:

(a) F is a face of P.

(b) There exists a vector c such that δ := max{cx : x ∈ P} is finite and F = {x ∈ P : cx = δ}.

(c) F = {x ∈ P : A′x = b′} ≠ ∅ for some subsystem A′x ≤ b′ of Ax ≤ b.

Proof: (a) and (b) are obviously equivalent.

(c) ⇒ (b): If F = {x ∈ P : A′x = b′} is nonempty, let c be the sum of the rows of A′, and let δ be the sum of the components of b′. Then obviously cx ≤ δ for all x ∈ P and F = {x ∈ P : cx = δ}.

(b) ⇒ (c): Assume that c is a vector, δ := max{cx : x ∈ P} is finite and F = {x ∈ P : cx = δ}. Let A′x ≤ b′ be the maximal subsystem of Ax ≤ b such that A′x = b′ for all x ∈ F. Let A″x ≤ b″ be the rest of the system Ax ≤ b.

We first observe that for each inequality a_i x ≤ β_i of A″x ≤ b″ (i = 1, . . . , k) there is a point x_i ∈ F such that a_i x_i < β_i. Let x* := (1/k)(x_1 + · · · + x_k) be the center of gravity of these points (if k = 0, we can choose an arbitrary x* ∈ F); we have x* ∈ F and a_i x* < β_i for all i.

We have to prove that A′y = b′ cannot hold for any y ∈ P \ F. So let y ∈ P \ F. We have cy < δ. Now consider z := x* + ε(x* − y) for some small ε > 0; in particular let ε be smaller than (β_i − a_i x*) / (a_i (x* − y)) for all i ∈ {1, . . . , k} with a_i x* > a_i y.

We have cz > δ and thus z ∉ P. So there is an inequality ax ≤ β of Ax ≤ b such that az > β. Thus ax* > ay. The inequality ax ≤ β cannot belong to A″x ≤ b″, since otherwise we would have az = ax* + εa(x* − y) < ax* + ((β − ax*) / (a(x* − y))) a(x* − y) = β (by the choice of ε). Hence the inequality ax ≤ β belongs to A′x ≤ b′. Since ay = a(x* + (1/ε)(x* − z)) < β, this completes the proof. □

As a trivial but important corollary we remark:

Corollary 3.4. If max{cx : x ∈ P} is bounded for a nonempty polyhedron P and a vector c, then the set of points where the maximum is attained is a face of P. □

The relation “is a face of ” is transitive:

Corollary 3.5. Let P be a polyhedron and F a face of P. Then F is again a polyhedron. Furthermore, a set F′ ⊆ F is a face of P if and only if it is a face of F. □

The maximal faces distinct from P are particularly important:

Definition 3.6. Let P be a polyhedron. A facet of P is a maximal face distinct from P. An inequality cx ≤ δ is facet-defining for P if cx ≤ δ for all x ∈ P and {x ∈ P : cx = δ} is a facet of P.

Proposition 3.7. Let P ⊆ {x ∈ R^n : Ax = b} be a nonempty polyhedron of dimension n − rank(A). Let A′x ≤ b′ be a minimal inequality system such that P = {x : Ax = b, A′x ≤ b′}. Then each inequality of A′x ≤ b′ is facet-defining for P, and each facet of P is defined by an inequality of A′x ≤ b′.

Proof: If P = {x ∈ R^n : Ax = b}, then there are no facets and the statement is trivial. So let A′x ≤ b′ be a minimal inequality system with P = {x : Ax = b, A′x ≤ b′}, let a′x ≤ β′ be one of its inequalities and A″x ≤ b″ be the rest of the system A′x ≤ b′. Let y be a vector with Ay = b, A″y ≤ b″ and a′y > β′ (such a vector y exists as the inequality a′x ≤ β′ is not redundant). Let x ∈ P such that a′x < β′ (such a vector must exist because dim P = n − rank(A)).

Consider z := x + ((β′ − a′x) / (a′y − a′x))(y − x). We have a′z = β′ and, since 0 < (β′ − a′x) / (a′y − a′x) < 1, z ∈ P. Therefore F := {x ∈ P : a′x = β′} ≠ ∅ and F ≠ P (as x ∈ P \ F). Thus F is a facet of P.

By Proposition 3.3 each facet is defined by an inequality of A′x ≤ b′. □

The other important class of faces (besides facets) is the class of minimal faces (i.e. faces not containing any other face). Here we have:

Proposition 3.8. (Hoffman and Kruskal [1956]) Let P = {x : Ax ≤ b} be a polyhedron. A nonempty subset F ⊆ P is a minimal face of P if and only if F = {x : A′x = b′} for some subsystem A′x ≤ b′ of Ax ≤ b.

Proof: If F is a minimal face of P, by Proposition 3.3 there is a subsystem A′x ≤ b′ of Ax ≤ b such that F = {x ∈ P : A′x = b′}. We choose A′x ≤ b′ maximal. Let A″x ≤ b″ be a minimal subsystem of Ax ≤ b such that F = {x : A′x = b′, A″x ≤ b″}. We claim that A″x ≤ b″ does not contain any inequality.

Suppose, on the contrary, that a″x ≤ β″ is an inequality of A″x ≤ b″. Since it is not redundant for the description of F, Proposition 3.7 implies that F′ := {x : A′x = b′, A″x ≤ b″, a″x = β″} is a facet of F. By Corollary 3.5, F′ is also a face of P, contradicting the assumption that F is a minimal face of P.

Now let ∅ ≠ F = {x : A′x = b′} ⊆ P for some subsystem A′x ≤ b′ of Ax ≤ b. Obviously F has no faces except itself. By Proposition 3.3, F is a face of P. It follows by Corollary 3.5 that F is a minimal face of P. □

Corollary 3.4 and Proposition 3.8 imply that Linear Programming can be solved in finite time by solving the linear equation system A′x = b′ for each subsystem A′x ≤ b′ of Ax ≤ b. A more intelligent way is the Simplex Algorithm which is described in the next section.
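The finite procedure just mentioned can be sketched directly; the code below (ours, assuming rank(A) = n so that minimal faces are vertices, and a feasible bounded LP) enumerates all subsystems of n linearly independent rows, solves A′x = b′, and keeps a feasible solution maximizing cx. It is of course far slower than the Simplex Algorithm.

```python
# Brute-force sketch (ours), not the Simplex Algorithm: solve max{cx : Ax <= b}
# by enumerating n-row subsystems A'x <= b', solving A'x = b', and keeping
# the best feasible solution. Assumes rank(A) = n and a feasible, bounded LP.
import itertools
import numpy as np

def brute_force_lp(A: np.ndarray, b: np.ndarray, c: np.ndarray):
    m, n = A.shape
    best_x, best_val = None, -np.inf
    for rows in itertools.combinations(range(m), n):
        A_sub, b_sub = A[list(rows)], b[list(rows)]
        if np.linalg.matrix_rank(A_sub) < n:
            continue                              # rows not linearly independent
        x = np.linalg.solve(A_sub, b_sub)         # unique solution of A'x = b'
        if np.all(A @ x <= b + 1e-9) and c @ x > best_val:
            best_x, best_val = x, float(c @ x)    # feasible vertex with larger cx
    return best_x, best_val

# max{x1 + 2*x2 : x1 + x2 <= 4, -x1 <= 0, -x2 <= 0}  ->  x = (0, 4), value 8
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 0.0, 0.0])
print(brute_force_lp(A, b, np.array([1.0, 2.0])))
```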

Another consequence of Proposition 3.8 is:

Corollary 3.9. Let P = {x ∈ R^n : Ax ≤ b} be a polyhedron. Then all minimal faces of P have dimension n − rank(A). The minimal faces of polytopes are vertices. □

This is why polyhedra {x ∈ R^n : Ax ≤ b} with rank(A) = n are called pointed: their minimal faces are points.

Let us close this section with some remarks on polyhedral cones.

Definition 3.10. A cone is a set C ⊆ R^n for which x, y ∈ C and λ, μ ≥ 0 implies λx + μy ∈ C. A cone C is said to be generated by x_1, . . . , x_k if x_1, . . . , x_k ∈ C and for any x ∈ C there are numbers λ_1, . . . , λ_k ≥ 0 with x = λ_1 x_1 + · · · + λ_k x_k. A cone is called finitely generated if some finite set of vectors generates it. A polyhedral cone is a polyhedron of type {x : Ax ≤ 0}.

It is immediately clear that polyhedral cones are indeed cones. We shall now show that polyhedral cones are finitely generated. I always denotes an identity matrix.

Lemma 3.11. (Minkowski [1896]) Let C = {x ∈ R^n : Ax ≤ 0} be a polyhedral cone. Then C is generated by a subset of the set of solutions to the systems My = b, where M consists of n linearly independent rows taken from A and from the identity matrix I, and b = ±e_j for some unit vector e_j.

Proof: Let A be an m×n-matrix. Consider the systems My = b where M consists of n linearly independent rows taken from A and from I, and b = ±e_j for some unit vector e_j.

… Then C is generated by the solutions of …

For the general case we use induction on the dimension of C. If C is not a linear subspace, choose a row a of A and a submatrix A′ of A such that the rows of …

Thus any polyhedral cone is finitely generated. We shall show the converse at the end of Section 3.3.
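For small instances, the construction of Lemma 3.11 can be carried out literally; the following sketch (ours, purely illustrative and exponential in the input size) enumerates the systems My = ±e_j over all choices of n linearly independent rows from A and I, and keeps the solutions that lie in C.

```python
# Sketch (ours) of the construction in Lemma 3.11: candidate generators of
# C = {x : Ax <= 0} are solutions of My = +-e_j, where M consists of
# n linearly independent rows of A stacked on the identity matrix I.
import itertools
import numpy as np

def cone_generators(A: np.ndarray):
    m, n = A.shape
    stacked = np.vstack([A, np.eye(n)])
    gens = []
    for rows in itertools.combinations(range(m + n), n):
        M = stacked[list(rows)]
        if np.linalg.matrix_rank(M) < n:
            continue                                          # need n independent rows
        for j in range(n):
            for sign in (1.0, -1.0):
                y = np.linalg.solve(M, sign * np.eye(n)[j])   # solve My = +- e_j
                if np.all(A @ y <= 1e-9):
                    gens.append(y)                            # keep y only if it lies in C
    return gens

# C = {x in R^2 : -x1 <= 0, x1 - x2 <= 0} is generated by, e.g., (0,1) and (1,1).
print(cone_generators(np.array([[-1.0, 0.0], [1.0, -1.0]])))
```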

3.2 The Simplex Algorithm

The oldest and best known algorithm for Linear Programming is Dantzig’s [1951] simplex method. We first assume that the polyhedron has a vertex, and that some vertex is given as input. Later we shall show how general LPs can be solved with this method.

For a set J of row indices we write A_J for the submatrix of A consisting of the rows in J only, and b_J for the subvector of b consisting of the components with indices in J. We abbreviate a_i := A_{{i}} and β_i := b_{{i}}.
