
Total Dual Integrality

In the document R.L. Graham, La Jolla; B. Korte, Bonn (pages 109–113)

Integer Programming


    ( B  0 )
    ( C  D )

for some nonsingular square matrix B. (Initially U = I, D = A, and the parts B, C and 0 have no entries.)

Let (δ1, . . . , δk) be the first row of D. Apply unimodular transformations such that all δi are nonnegative and δ1 + · · · + δk is minimum. W.l.o.g. δ1 ≥ δ2 ≥ · · · ≥ δk. Then δ1 > 0, since the rows of A (and hence those of AU) are linearly independent.

If δ2 > 0, then subtracting the second column of D from the first one would decrease δ1 + · · · + δk. So δ2 = δ3 = . . . = δk = 0. We can increase the size of B by one and continue. □

Note that the operations applied in the proof correspond to the Euclidean Algorithm. The matrix B we get is in fact a lower diagonal matrix. With a little more effort one can obtain the so-called Hermite normal form of A. The following lemma gives a criterion for integral solvability of equation systems, similar to Farkas’ Lemma.
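The column reduction used in the proof can be sketched in a few lines of Python. This is an illustrative helper of our own (not from the text), assuming an integer matrix with linearly independent rows:

```python
def column_reduce(A):
    """Bring an integer matrix A with linearly independent rows into the
    form (B 0), B lower triangular, using only unimodular column
    operations: negating a column, swapping two columns, and subtracting
    an integer multiple of one column from another.  This mirrors the
    Euclidean Algorithm applied to each row in turn."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    for r in range(m):
        while True:
            nz = [j for j in range(r, n) if A[r][j] != 0]
            for j in nz:                      # make the working entries nonnegative
                if A[r][j] < 0:
                    for i in range(m):
                        A[i][j] = -A[i][j]
            if len(nz) <= 1:
                break
            nz.sort(key=lambda j: A[r][j])
            small, big = nz[0], nz[-1]
            q = A[r][big] // A[r][small]      # Euclidean step: the row sum decreases
            for i in range(m):
                A[i][big] -= q * A[i][small]
        j = next(j for j in range(r, n) if A[r][j] != 0)
        for i in range(m):                    # move the surviving column to position r
            A[i][r], A[i][j] = A[i][j], A[i][r]
    return A
```

For example, `column_reduce([[6, 10, 15]])` returns `[[1, 0, 0]]`, reflecting gcd(6, 10, 15) = 1.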

Lemma 5.10. Let A be a rational matrix and b a rational column vector. Then Ax = b has an integral solution if and only if yb is an integer for each rational vector y for which yA is integral.

Proof: Necessity is obvious: if x and yA are integral vectors and Ax = b, then yb = yAx is an integer.

To prove sufficiency, suppose yb is an integer whenever yA is integral. We may assume that Ax = b contains no redundant equalities, i.e. yA = 0 implies yb ≠ 0 for all y ≠ 0. Let m be the number of rows of A. If rank(A) < m, then {y : yA = 0} contains a nonzero vector y′, and y″ := y′/(2y′b) satisfies y″A = 0 and y″b = 1/2 ∉ Z, contradicting our assumption. So the rows of A are linearly independent.

By Lemma 5.9 there exists a unimodular matrix U with AU = ( B 0 ), where B is a nonsingular m×m-matrix. Since B⁻¹AU = ( I 0 ) is an integral matrix, we have for each row y of B⁻¹ that yAU is integral, and thus by Proposition 5.8 yA is integral. Hence yb is an integer for each row y of B⁻¹, implying that B⁻¹b is an integral vector. So

    U ( B⁻¹b )
      (  0   )

is an integral solution of Ax = b. □
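Lemma 5.10 and its proof translate directly into a small computational check. The sketch below is our own, for the square nonsingular case (so B = A): it either finds an integral solution or extracts a certificate row y of A⁻¹, exactly as in the sufficiency argument.

```python
from fractions import Fraction

def integral_solvability_certificate(A, b):
    """For a nonsingular square rational matrix A, either return an
    integral solution x of Ax = b, or a rational row vector y with yA
    integral and yb not an integer (the certificate of Lemma 5.10).
    Illustrative sketch: Gauss-Jordan elimination over the rationals."""
    n = len(A)
    # augmented matrix [A | I | b]
    M = [[Fraction(v) for v in A[i]]
         + [Fraction(1 if i == j else 0) for j in range(n)]
         + [Fraction(b[i])]
         for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)   # pivot row
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [v / piv for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [v - f * w for v, w in zip(M[r], M[c])]
    x = [M[i][-1] for i in range(n)]                       # x = A^(-1) b
    if all(v.denominator == 1 for v in x):
        return ("solution", x)
    i = next(i for i in range(n) if x[i].denominator != 1)
    y = M[i][n:2 * n]          # i-th row of A^(-1): yA = e_i, yb = x_i not in Z
    return ("certificate", y)
```

For instance, `integral_solvability_certificate([[2]], [3])` returns the certificate y = 1/2: here yA = 1 is integral while yb = 3/2 is not.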

5.3 Total Dual Integrality

In this and the next section we focus on integral polyhedra:

Definition 5.11. A polyhedron P is integral if P = P_I.


Theorem 5.12. (Hoffman [1974], Edmonds and Giles [1977]) Let P be a rational polyhedron. Then the following statements are equivalent:

(a) P is integral.

(b) Each face of P contains integral vectors.

(c) Each minimal face of P contains integral vectors.

(d) Each supporting hyperplane of P contains integral vectors.

(e) Each rational supporting hyperplane of P contains integral vectors.

(f) max{cx : x ∈ P} is attained by an integral vector for each c for which the maximum is finite.

(g) max{cx : x ∈ P} is an integer for each integral c for which the maximum is finite.

Proof: We first prove (a)⇒(b)⇒(f)⇒(a), then (b)⇒(d)⇒(e)⇒(c)⇒(b), and finally (f)⇒(g)⇒(e).

(a)⇒(b): Let F be a face, say F = P ∩ H, where H is a supporting hyperplane, and let x ∈ F. If P = P_I, then x is a convex combination of integral points in P, and these must belong to H and thus to F.

(b)⇒(f) follows directly from Proposition 3.3, because {y ∈ P : cy = max{cx : x ∈ P}} is a face of P for each c for which the maximum is finite.

(f)⇒(a): Suppose there is a vector y ∈ P \ P_I. Then (since P_I is a polyhedron by Theorem 5.7) there is an inequality ax ≤ β valid for P_I for which ay > β. Then clearly (f) is violated, since max{ax : x ∈ P} (which is finite by Proposition 5.1) is not attained by any integral vector.

(b)⇒(d) is also trivial since the intersection of a supporting hyperplane with P is a face of P. (d)⇒(e) and (c)⇒(b) are trivial.

(e)⇒(c): Let P = {x : Ax ≤ b}. We may assume that A and b are integral.

Let F = {x : A′x = b′} be a minimal face of P, where A′x ≤ b′ is a subsystem of Ax ≤ b (we use Proposition 3.8). If A′x = b′ has no integral solution, then, by Lemma 5.10, there exists a rational vector y such that c := yA′ is integral but δ := yb′ is not an integer. Adding integers to components of y does not destroy this property (A′ and b′ are integral), so we may assume that all components of y are positive. So H := {x : cx = δ} contains no integral vectors. Observe that H is a rational hyperplane.

We finally show that H is a supporting hyperplane, by proving that H ∩ P = F. Since F ⊆ H is trivial, it remains to show that H ∩ P ⊆ F. But for x ∈ H ∩ P we have yA′x = cx = δ = yb′, so y(A′x − b′) = 0. Since y > 0 and A′x ≤ b′, this implies A′x = b′, so x ∈ F.

(f)⇒(g) is trivial, so we finally show (g)⇒(e). Let H = {x : cx = δ} be a rational supporting hyperplane of P, so max{cx : x ∈ P} = δ. Suppose H contains no integral vectors. Then, by Lemma 5.10 (applied to the single equation cx = δ), there exists a rational number γ such that γc is integral but γδ ∉ Z. Then

    max{(|γ|c)x : x ∈ P} = |γ| max{cx : x ∈ P} = |γ|δ ∉ Z,

contradicting our assumption. □
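For small two-dimensional polytopes, criterion (c), that every minimal face (for a polytope: every vertex) contains an integral vector, can be checked directly. A brute-force sketch of our own:

```python
from fractions import Fraction
from itertools import combinations

def vertices(A, b):
    """Vertices of {x in R^2 : Ax <= b}: intersect each pair of constraint
    lines and keep the feasible intersection points (fine for tiny examples)."""
    V = set()
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if det == 0:
            continue                                  # parallel constraint lines
        x = Fraction(b1 * a2[1] - b2 * a1[1], det)    # Cramer's rule
        y = Fraction(a1[0] * b2 - a2[0] * b1, det)
        if all(r[0] * x + r[1] * y <= bb for r, bb in zip(A, b)):
            V.add((x, y))
    return V

def is_integral_polytope(A, b):
    """Theorem 5.12(c) for a polytope: integral iff every vertex is integral."""
    return all(v.denominator == 1 for pt in vertices(A, b) for v in pt)
```

The square {0 ≤ x1, x2 ≤ 2} is integral, while {x ≥ 0 : 2x1 + 2x2 ≤ 1} is not (it has the vertex (1/2, 0)).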

See also Gomory [1963], Fulkerson [1971] and Chvátal [1973] for earlier partial results. By (a)⇔(b) and Corollary 3.5 every face of an integral polyhedron is integral. The equivalence of (f) and (g) of Theorem 5.12 motivated Edmonds and Giles to define TDI-systems:

Definition 5.13. (Edmonds and Giles [1977]) A system Ax ≤ b of linear inequalities is called totally dual integral (TDI) if the minimum in the LP duality equation

    max{cx : Ax ≤ b} = min{yb : yA = c, y ≥ 0}

has an integral optimum solution y for each integral vector c for which the minimum is finite.

With this definition we get an easy corollary of (g)⇒(a) of Theorem 5.12:

Corollary 5.14. Let Ax ≤ b be a TDI-system where A is rational and b is integral. Then the polyhedron {x : Ax ≤ b} is integral. □

But total dual integrality is not a property of polyhedra (cf. Exercise 7). In general, a TDI-system contains more inequalities than necessary for describing the polyhedron. Adding valid inequalities does not destroy total dual integrality:
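That total dual integrality is a property of the system rather than of the polyhedron can be seen on a small example; the data below are ours, in the spirit of Exercise 7. The integral polyhedron P = {x ≥ 0 : x1 + 2x2 ≤ 2} (vertices (0,0), (2,0), (0,1)) is described by a system that is not TDI: for c = (0, 1) the dual optimum forces the value 1/2 on the row (1, 2). Adding the valid inequality x2 ≤ 1 restores an integral dual optimum for this c.

```python
from itertools import product

A = [(-1, 0), (0, -1), (1, 2)]   # -x1 <= 0, -x2 <= 0, x1 + 2*x2 <= 2
b = [0, 0, 2]
c = (0, 1)
opt = 1                          # max{c x : x in P} = 1, attained at (0, 1)

def integral_dual_optima(A, b, c, opt, bound=5):
    """All small nonnegative integral dual vectors y with yA = c and
    yb = opt; by LP duality such a y is an optimum dual solution.
    (Brute force over a finite box; here yb = 2*y3 = 1 is impossible
    for integral y, so no bound would help for the first system.)"""
    return [y for y in product(range(bound + 1), repeat=len(A))
            if all(sum(y[i] * A[i][j] for i in range(len(A))) == c[j]
                   for j in range(len(c)))
            and sum(y[i] * b[i] for i in range(len(A))) == opt]

print(integral_dual_optima(A, b, c, opt))                       # -> []
print(integral_dual_optima(A + [(0, 1)], b + [1], c, opt))      # -> [(0, 0, 0, 1)]
```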

Proposition 5.15. If Ax ≤ b is TDI and ax ≤ β is a valid inequality for {x : Ax ≤ b}, then the system Ax ≤ b, ax ≤ β is also TDI.

Proof: Let c be an integral vector such that min{yb + γβ : yA + γa = c, y ≥ 0, γ ≥ 0} is finite. Since ax ≤ β is valid for {x : Ax ≤ b},

    min{yb : yA = c, y ≥ 0} = max{cx : Ax ≤ b}
                            = max{cx : Ax ≤ b, ax ≤ β}
                            = min{yb + γβ : yA + γa = c, y ≥ 0, γ ≥ 0}.

The first minimum is attained by some integral vector y*, so y = y*, γ = 0 is an integral optimum solution for the second minimum. □

Theorem 5.16. (Giles and Pulleyblank [1979]) For each rational polyhedron P there exists a rational TDI-system Ax ≤ b with A integral and P = {x : Ax ≤ b}. Here b can be chosen to be integral if and only if P is integral.

Proof: Let P = {x : Cx ≤ d} with C and d rational. Let F be a minimal face of P. By Proposition 3.8, F = {x : C′x = d′} for some subsystem C′x ≤ d′ of Cx ≤ d. Let

    K_F := {c : cz = max{cx : x ∈ P} for all z ∈ F}.

Obviously, K_F is a cone. We claim that K_F is the cone generated by the rows of C′.

Obviously, the rows of C′ belong to K_F. On the other hand, for all z ∈ F, c ∈ K_F and all y with C′y ≤ 0 there exists an ε > 0 with z + εy ∈ P. Hence


cy ≤ 0 for all c ∈ K_F and all y with C′y ≤ 0, since c(z + εy) ≤ cz. By Farkas’ Lemma (Corollary 3.21), this implies that there exists an x ≥ 0 with c = xC′.

So K_F is indeed a polyhedral cone (Theorem 3.24). By Lemma 5.4 there exists an integral Hilbert basis a1, . . . , at generating K_F. Let S_F be the system of inequalities

    a1x ≤ max{a1x : x ∈ P},  . . . ,  atx ≤ max{atx : x ∈ P}.
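For a small two-dimensional cone the integral Hilbert basis used here can be found by brute force. A sketch of our own, with illustrative generators (1, 0) and (1, 2):

```python
from fractions import Fraction
from itertools import product

a1, a2 = (1, 0), (1, 2)   # generators of a pointed 2D cone (example values)

def in_parallelepiped(p):
    """Is p = l1*a1 + l2*a2 with 0 <= l1, l2 <= 1?"""
    det = a1[0] * a2[1] - a1[1] * a2[0]
    l1 = Fraction(p[0] * a2[1] - p[1] * a2[0], det)
    l2 = Fraction(a1[0] * p[1] - a1[1] * p[0], det)
    return 0 <= l1 <= 1 and 0 <= l2 <= 1

# The integral points of the fundamental parallelepiped generate all
# integral points of the cone; dropping points that are sums of two
# others leaves the Hilbert basis of this cone.
cands = [p for p in product(range(3), repeat=2)
         if p != (0, 0) and in_parallelepiped(p)]
basis = [p for p in cands
         if not any((q[0] + r[0], q[1] + r[1]) == p
                    for q in cands for r in cands)]
print(basis)   # -> [(1, 0), (1, 1), (1, 2)]
```

Note that the basis contains (1, 1), which is not a nonnegative rational combination witness of the generators alone: integral generation genuinely needs the extra vector.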

Let Ax ≤ b be the collection of all these systems S_F (for all minimal faces F). Note that if P is integral, then b is integral: each maximum is then attained by an integral vector (Theorem 5.12) and each ai is integral. Certainly P = {x : Ax ≤ b}. It remains to show that Ax ≤ b is TDI.

Let c be an integral vector for which

    max{cx : Ax ≤ b} = min{yb : y ≥ 0, yA = c}

is finite. Let F := {z ∈ P : cz = max{cx : x ∈ P}}. F is a face of P, so let F′ ⊆ F be a minimal face of P. Let S_{F′} be the system a1x ≤ β1, . . . , atx ≤ βt. Then c = λ1a1 + · · · + λtat for some nonnegative integers λ1, . . . , λt, since c ∈ K_{F′} and a1, . . . , at is a Hilbert basis generating K_{F′}. We add zero components to λ1, . . . , λt in order to get an integral vector λ̄ ≥ 0 with λ̄A = c and thus λ̄b = λ̄(Ax) = (λ̄A)x = cx for all x ∈ F′. So λ̄ attains the minimum min{yb : y ≥ 0, yA = c}, and Ax ≤ b is TDI.

If P is integral, we have chosen b to be integral. Conversely, if b can be chosen integral, then by Corollary 5.14 P must be integral. □

Indeed, for full-dimensional rational polyhedra there is a unique minimal TDI-system describing them (Schrijver [1981]). For later use, we prove that each “face” of a TDI-system is again TDI:

Theorem 5.17. (Cook [1983]) Let Ax ≤ b, ax ≤ β be a TDI-system, where a is integral. Then the system Ax ≤ b, ax = β is also TDI.

Proof: (Schrijver [1986]) Let c be an integral vector such that

    max{cx : Ax ≤ b, ax = β}
      = min{yb + (λ − μ)β : y, λ, μ ≥ 0, yA + (λ − μ)a = c}      (5.2)

is finite. Let x*, y*, λ*, μ* attain these optima. We set c′ := c + ⌈μ*⌉a and observe that

    max{c′x : Ax ≤ b, ax ≤ β}
      = min{yb + λβ : y, λ ≥ 0, yA + λa = c′}                    (5.3)

is finite, because x := x* is feasible for the maximum and y := y*, λ := λ* + ⌈μ*⌉ − μ* is feasible for the minimum.

Since Ax ≤ b, ax ≤ β is TDI and c′ is integral, the minimum in (5.3) has an integral optimum solution ỹ, λ̃. We finally set y := ỹ, λ := λ̃, μ := ⌈μ*⌉ and claim that (y, λ, μ) is an integral optimum solution for the minimum in (5.2).

Obviously (y, λ, μ) is feasible for the minimum in (5.2). Furthermore,

    yb + (λ − μ)β = ỹb + λ̃β − ⌈μ*⌉β
                  ≤ y*b + (λ* + ⌈μ*⌉ − μ*)β − ⌈μ*⌉β,

since (y*, λ* + ⌈μ*⌉ − μ*) is feasible for the minimum in (5.3) and (ỹ, λ̃) is an optimum solution. We conclude that

    yb + (λ − μ)β ≤ y*b + (λ* − μ*)β,

proving that (y, λ, μ) is an integral optimum solution for the minimum in (5.2). □
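Theorem 5.17 can be checked numerically on a tiny instance; all data below are ours. Take the system x1 + 2x2 ≤ 2, x2 ≤ 1, x ≥ 0 and turn x2 ≤ 1 into the equality x2 = 1. For c = (1, 0) the maximum in (5.2) is 0, attained at (0, 1), and a brute-force search finds an integral dual optimum (y, λ, μ):

```python
from itertools import product

A = [(1, 2), (-1, 0), (0, -1)]   # rows of Ax <= b (without the x2-row)
b = [2, 0, 0]
a, beta = (0, 1), 1              # the inequality ax <= beta turned into ax = beta
c, opt = (1, 0), 0               # max{c x : Ax <= b, ax = beta} = 0 at x = (0, 1)

found = [(y, lam, mu)
         for y in product(range(4), repeat=3)
         for lam, mu in product(range(4), repeat=2)
         # dual feasibility yA + (lam - mu)a = c, and objective value = opt
         if all(sum(y[i] * A[i][j] for i in range(3)) + (lam - mu) * a[j] == c[j]
                for j in range(2))
         and sum(y[i] * b[i] for i in range(3)) + (lam - mu) * beta == opt]
print(found[0])   # -> ((1, 0, 0), 0, 2)
```

Note that μ = 2 > 0 is genuinely needed here: the equality version of the constraint lets the dual variable on ax = β go negative, exactly the role of λ − μ in (5.2).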
