Integer Programming
( B 0 )
( C D )

for some nonsingular square matrix B. (Initially U = I, D = A, and the parts B, C and 0 have no entries.)
Let (δ1, . . . , δk) be the first row of D. Apply unimodular transformations such that all δi are nonnegative and δ1 + · · · + δk is minimum. W.l.o.g. δ1 ≥ δ2 ≥ · · · ≥ δk. Then δ1 > 0, since the rows of A (and hence those of AU) are linearly independent.
If δ2 > 0, then subtracting the second column of D from the first one would decrease δ1 + · · · + δk. So δ2 = δ3 = . . . = δk = 0. We can increase the size of B by one and continue. □
Note that the operations applied in the proof correspond to the Euclidean Algorithm. The matrix B we get is in fact a lower triangular matrix. With a little more effort one can obtain the so-called Hermite normal form of A. The following lemma gives a criterion for the integral solvability of equation systems, similar to Farkas' Lemma.
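The Euclidean reduction in the proof of Lemma 5.9 is easy to carry out explicitly. The following sketch (a hypothetical helper, restricted to integer matrices for simplicity) tracks the unimodular matrix U alongside D = AU, using only the unimodular column operations from the proof:

```python
def unimodular_BO(A):
    """Column-reduce an integer matrix A with linearly independent rows
    to the form (B 0), B lower triangular, using only unimodular column
    operations (negate, swap, subtract an integer multiple of one
    column from another), as in the proof of Lemma 5.9.
    Returns (D, U) with D = A*U and U unimodular."""
    m, n = len(A), len(A[0])
    D = [row[:] for row in A]
    U = [[int(i == j) for j in range(n)] for i in range(n)]

    def negate(j):
        for M in (D, U):
            for row in M:
                row[j] = -row[j]

    def subtract(j, k, q):            # column j -= q * column k
        for M in (D, U):
            for row in M:
                row[j] -= q * row[k]

    def swap(j, k):
        for M in (D, U):
            for row in M:
                row[j], row[k] = row[k], row[j]

    for r in range(m):                # grow the block B one row at a time
        for j in range(r, n):         # make the working row nonnegative
            if D[r][j] < 0:
                negate(j)
        while True:
            nz = sorted((j for j in range(r, n) if D[r][j] > 0),
                        key=lambda j: D[r][j])
            if len(nz) <= 1:
                break
            # Euclid's step: reduce the largest entry modulo the smallest;
            # this strictly decreases the row sum, so the loop terminates
            subtract(nz[-1], nz[0], D[r][nz[-1]] // D[r][nz[0]])
        j = next(j for j in range(r, n) if D[r][j] > 0)   # delta_1 > 0
        if j != r:
            swap(r, j)
    return D, U
```

For example, A = [[2, 6, 4], [1, 2, 3]] is reduced to D = [[2, 0, 0], [1, 1, 0]], i.e. B = [[2, 0], [1, 1]] is lower triangular and nonsingular.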
Lemma 5.10. Let A be a rational matrix and b a rational column vector. Then Ax = b has an integral solution if and only if yb is an integer for each rational vector y for which yA is integral.
Proof: Necessity is obvious: if x and yA are integral vectors and Ax = b, then yb = yAx is an integer.
To prove sufficiency, suppose yb is an integer whenever yA is integral. We may assume that Ax = b contains no redundant equalities, i.e. that yA = 0 and y ≠ 0 imply yb ≠ 0. Let m be the number of rows of A. If rank(A) < m, then {y : yA = 0} contains a nonzero vector y′ (with y′b ≠ 0), and y := (1/(2y′b)) y′ satisfies yA = 0 and yb = 1/2 ∉ Z, a contradiction. So the rows of A are linearly independent.
By Lemma 5.9 there exists a unimodular matrix U with AU = ( B 0 ), where B is a nonsingular m×m-matrix. Since B⁻¹AU = ( I 0 ) is an integral matrix, we have for each row y of B⁻¹ that yAU is integral, and thus by Proposition 5.8 yA is integral. Hence yb is an integer for each row y of B⁻¹, implying that B⁻¹b is an integral vector. So U (B⁻¹b, 0)⊤ is an integral solution of Ax = b. □
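Lemma 5.10 can serve as a certificate of non-solvability. As an illustration (a tiny hand-picked example, not from the text): the system 2x1 + 2x2 = 1 has rational solutions, e.g. x = (1/2, 0), but no integral one, and the single multiplier y = 1/2 witnesses this:

```python
from fractions import Fraction

# A = (2 2), b = (1): the left-hand side of 2*x1 + 2*x2 = 1 is even
# for every integral x, so no integral solution exists.  Lemma 5.10
# certifies this via y = 1/2: yA is integral while yb is not.
A = [[2, 2]]
b = [1]
y = [Fraction(1, 2)]

yA = [sum(y[i] * A[i][j] for i in range(len(A))) for j in range(2)]
yb = sum(y[i] * b[i] for i in range(len(A)))

assert all(v.denominator == 1 for v in yA)   # yA = (1, 1) is integral ...
assert yb.denominator != 1                   # ... but yb = 1/2 is not an integer
```

By the lemma, exhibiting such a y proves that Ax = b has no integral solution.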
5.3 Total Dual Integrality
In this and the next section we focus on integral polyhedra:
Definition 5.11. A polyhedron P is integral if P = P_I.
Theorem 5.12. (Hoffman [1974], Edmonds and Giles [1977]) Let P be a rational polyhedron. Then the following statements are equivalent:
(a) P is integral.
(b) Each face of P contains integral vectors.
(c) Each minimal face of P contains integral vectors.
(d) Each supporting hyperplane of P contains integral vectors.
(e) Each rational supporting hyperplane of P contains integral vectors.
(f) max{cx : x ∈ P} is attained by an integral vector for each c for which the maximum is finite.
(g) max{cx : x ∈ P} is an integer for each integral c for which the maximum is finite.
Proof: We first prove (a)⇒(b)⇒(f)⇒(a), then (b)⇒(d)⇒(e)⇒(c)⇒(b), and finally (f)⇒(g)⇒(e).
(a)⇒(b): Let F be a face, say F = P ∩ H, where H is a supporting hyperplane, and let x ∈ F. If P = P_I, then x is a convex combination of integral points in P, and these must belong to H and thus to F.
(b)⇒(f) follows directly from Proposition 3.3, because {y ∈ P : cy = max{cx : x ∈ P}} is a face of P for each c for which the maximum is finite.
(f)⇒(a): Suppose there is a vector y ∈ P \ P_I. Then (since P_I is a polyhedron by Theorem 5.7) there is an inequality ax ≤ β valid for P_I for which ay > β. Then clearly (f) is violated, since max{ax : x ∈ P} (which is finite by Proposition 5.1) is not attained by any integral vector.
(b)⇒(d) is also trivial, since the intersection of a supporting hyperplane with P is a face of P. (d)⇒(e) and (c)⇒(b) are trivial.
(e)⇒(c): Let P = {x : Ax ≤ b}. We may assume that A and b are integral. Let F = {x : A′x = b′} be a minimal face of P, where A′x ≤ b′ is a subsystem of Ax ≤ b (we use Proposition 3.8). If A′x = b′ has no integral solution, then – by Lemma 5.10 – there exists a rational vector y such that c := yA′ is integral but δ := yb′ is not an integer. Adding integers to components of y does not destroy this property (A′ and b′ are integral), so we may assume that all components of y are positive. So H := {x : cx = δ} contains no integral vectors. Observe that H is a rational hyperplane.
We finally show that H is a supporting hyperplane, by proving that H ∩ P = F. Since F ⊆ H is trivial, it remains to show that H ∩ P ⊆ F. But for x ∈ H ∩ P we have yA′x = cx = δ = yb′, so y(A′x − b′) = 0. Since y > 0 and A′x ≤ b′, this implies A′x = b′, so x ∈ F.
(f)⇒(g) is trivial, so we finally show (g)⇒(e). Let H = {x : cx = δ} be a rational supporting hyperplane of P, so max{cx : x ∈ P} = δ. Suppose H contains no integral vectors. Then – by Lemma 5.10 – there exists a number γ such that γc is integral but γδ ∉ Z. Then

max{(|γ|c)x : x ∈ P} = |γ| max{cx : x ∈ P} = |γ|δ ∉ Z,

contradicting our assumption. □
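Criterion (g) also gives a convenient computational test. The sketch below (a hand-picked example, using exact rational arithmetic and brute-force vertex enumeration, adequate only for small bounded polyhedra in the plane) exhibits a non-integral polyhedron on which an integral objective attains a fractional optimum:

```python
from fractions import Fraction
from itertools import combinations

# P = {x >= 0 : x1 + 2*x2 <= 2, 2*x1 + x2 <= 2}, written as Ax <= b
A = [[1, 2], [2, 1], [-1, 0], [0, -1]]
b = [2, 2, 0, 0]

def vertices2d(A, b):
    """Vertices of {x in R^2 : Ax <= b}: intersect every pair of
    constraint lines (Cramer's rule) and keep the feasible points."""
    V = []
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if det == 0:
            continue
        x = [Fraction(b1 * a2[1] - b2 * a1[1], det),
             Fraction(a1[0] * b2 - a2[0] * b1, det)]
        if x not in V and all(r[0] * x[0] + r[1] * x[1] <= bb
                              for r, bb in zip(A, b)):
            V.append(x)
    return V

V = vertices2d(A, b)
# P is bounded, so max{cx : x in P} is attained at a vertex.
opt = max(x[0] + x[1] for x in V)        # c = (1, 1), an integral vector
assert opt == Fraction(4, 3)             # fractional optimum
```

Since the integral vector c = (1, 1) yields the non-integer optimum 4/3, criterion (g) fails, so P is not integral; indeed the optimum is attained only at the fractional vertex (2/3, 2/3).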
See also Gomory [1963], Fulkerson [1971] and Chvátal [1973] for earlier partial results. By (a)⇔(b) and Corollary 3.5, every face of an integral polyhedron is integral. The equivalence of (f) and (g) of Theorem 5.12 motivated Edmonds and Giles to define TDI-systems:
Definition 5.13. (Edmonds and Giles [1977]) A system Ax ≤ b of linear inequalities is called totally dual integral (TDI) if the minimum in the LP duality equation

max{cx : Ax ≤ b} = min{yb : yA = c, y ≥ 0}

has an integral optimum solution y for each integral vector c for which the minimum is finite.
With this definition we get an easy corollary of (g)⇒(a) of Theorem 5.12:
Corollary 5.14. Let Ax ≤ b be a TDI-system where A is rational and b is integral. Then the polyhedron {x : Ax ≤ b} is integral. □

But total dual integrality is not a property of polyhedra (cf. Exercise 7). In general, a TDI-system contains more inequalities than necessary for describing the polyhedron. Adding valid inequalities does not destroy total dual integrality:
Proposition 5.15. If Ax ≤ b is TDI and ax ≤ β is a valid inequality for {x : Ax ≤ b}, then the system Ax ≤ b, ax ≤ β is also TDI.
Proof: Let c be an integral vector such that min{yb + γβ : yA + γa = c, y ≥ 0, γ ≥ 0} is finite. Since ax ≤ β is valid for {x : Ax ≤ b},

min{yb : yA = c, y ≥ 0} = max{cx : Ax ≤ b}
                        = max{cx : Ax ≤ b, ax ≤ β}
                        = min{yb + γβ : yA + γa = c, y ≥ 0, γ ≥ 0}.

The first minimum is attained by some integral vector y∗, so y = y∗, γ = 0 is an integral optimum solution for the second minimum. □
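To see that total dual integrality really depends on the system and not only on the polyhedron, consider the following small example (not from the text, checked with exact arithmetic): the integral cone {x : x2 ≤ −|x1|}, described by x1 + x2 ≤ 0, −x1 + x2 ≤ 0, does not form a TDI-system, but adding the valid inequality x2 ≤ 0 repairs this for the objective considered:

```python
from fractions import Fraction

# P = {x : x1 + x2 <= 0, -x1 + x2 <= 0} is the integral cone
# {x : x2 <= -|x1|}; its unique minimal face is {0}.
A = [[1, 1], [-1, 1]]
b = [0, 0]
c = [0, 1]              # max{x2 : x in P} = 0, attained at the origin

# The dual constraints yA = c form the square nonsingular system
# y1 - y2 = 0, y1 + y2 = 1, so the dual solution is unique (Cramer):
det = A[0][0] * A[1][1] - A[1][0] * A[0][1]          # = 2
y = [Fraction(c[0] * A[1][1] - c[1] * A[1][0], det),
     Fraction(A[0][0] * c[1] - A[0][1] * c[0], det)]
assert y == [Fraction(1, 2), Fraction(1, 2)]
# The only dual optimum is fractional, so the system is not TDI,
# although the polyhedron it describes is integral.

# x2 <= 0 is valid for P.  In the enlarged system Ax <= b, x2 <= 0 the
# dual min{yb + g*0 : yA + g*(0,1) = c, y >= 0, g >= 0} has the
# integral solution y = (0, 0), g = 1 of value 0 = max{cx : x in P}:
yi, g = [0, 0], 1
assert all(yi[0] * A[0][j] + yi[1] * A[1][j] + g * [0, 1][j] == c[j]
           for j in range(2))
```

The rows (1, 1) and (−1, 1) generate the cone of all objectives maximized over the minimal face {0}, but they are not a Hilbert basis of it; together with (0, 1) they are, which is exactly what the construction of Theorem 5.16 below adds.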
Theorem 5.16. (Giles and Pulleyblank [1979]) For each rational polyhedron P there exists a rational TDI-system Ax ≤ b with A integral and P = {x : Ax ≤ b}. Here b can be chosen to be integral if and only if P is integral.
Proof: Let P = {x : Cx ≤ d} with C and d rational. Let F be a minimal face of P. By Proposition 3.8, F = {x : C′x = d′} for some subsystem C′x ≤ d′ of Cx ≤ d. Let

K_F := {c : cz = max{cx : x ∈ P} for all z ∈ F}.

Obviously, K_F is a cone. We claim that K_F is the cone generated by the rows of C′.
Obviously, the rows of C′ belong to K_F. On the other hand, for all z ∈ F and all y with C′y ≤ 0 there exists an ε > 0 with z + εy ∈ P. Hence
cy ≤ 0 for all c ∈ K_F and all y with C′y ≤ 0. By Farkas' Lemma (Corollary 3.21), this implies that there exists an x ≥ 0 with c = xC′.
So K_F is indeed a polyhedral cone (Theorem 3.24). By Lemma 5.4 there exists an integral Hilbert basis a1, . . . , at generating K_F. Let S_F be the system of inequalities

a1x ≤ max{a1x : x ∈ P},  . . . ,  atx ≤ max{atx : x ∈ P}.
Let Ax ≤ b be the collection of all these systems S_F (for all minimal faces F). Note that if P is integral then b is integral. Certainly P = {x : Ax ≤ b}. It remains to show that Ax ≤ b is TDI.
Let c be an integral vector for which

max{cx : Ax ≤ b} = min{yb : y ≥ 0, yA = c}

is finite. Let F := {z ∈ P : cz = max{cx : x ∈ P}}. F is a face of P, so let F′ ⊆ F be a minimal face of P. Let S_F′ be the system a1x ≤ β1, . . . , atx ≤ βt. Since c ∈ K_F′ and a1, . . . , at is an integral Hilbert basis generating K_F′, we have c = λ1a1 + · · · + λtat for some nonnegative integers λ1, . . . , λt. We add zero components to λ1, . . . , λt in order to get an integral vector λ̄ ≥ 0 with λ̄A = c, and thus (each inequality of S_F′ is satisfied with equality on F′) λ̄b = λ̄(Ax) = (λ̄A)x = cx for all x ∈ F′. So λ̄ attains the minimum min{yb : y ≥ 0, yA = c}, and Ax ≤ b is TDI.
If P is integral, we have chosen b to be integral. Conversely, if b can be chosen integral, then by Corollary 5.14 P must be integral. □

Indeed, for each full-dimensional rational polyhedron there is a unique minimal TDI-system describing it (Schrijver [1981]). For later use, we prove that each "face" of a TDI-system is again TDI:
Theorem 5.17. (Cook [1983]) Let Ax ≤ b, ax ≤ β be a TDI-system, where a is integral. Then the system Ax ≤ b, ax = β is also TDI.
Proof: (Schrijver [1986]) Let c be an integral vector such that

max{cx : Ax ≤ b, ax = β} = min{yb + (λ − µ)β : y, λ, µ ≥ 0, yA + (λ − µ)a = c}   (5.2)

is finite. Let x∗, y∗, λ∗, µ∗ attain these optima. We set c′ := c + ⌈µ∗⌉a and observe that

max{c′x : Ax ≤ b, ax ≤ β} = min{yb + λβ : y, λ ≥ 0, yA + λa = c′}   (5.3)

is finite, because x := x∗ is feasible for the maximum and y := y∗, λ := λ∗ + ⌈µ∗⌉ − µ∗ is feasible for the minimum.
Since Ax ≤ b, ax ≤ β is TDI, the minimum in (5.3) has an integral optimum solution ỹ, λ̃. We finally set y := ỹ, λ := λ̃ and µ := ⌈µ∗⌉ and claim that (y, λ, µ) is an integral optimum solution for the minimum in (5.2).
Obviously (y, λ, µ) is feasible for the minimum in (5.2). Furthermore,

yb + (λ − µ)β = ỹb + λ̃β − ⌈µ∗⌉β
              ≤ y∗b + (λ∗ + ⌈µ∗⌉ − µ∗)β − ⌈µ∗⌉β,

since (y∗, λ∗ + ⌈µ∗⌉ − µ∗) is feasible for the minimum in (5.3) and (ỹ, λ̃) is an optimum solution. We conclude that

yb + (λ − µ)β ≤ y∗b + (λ∗ − µ∗)β,

proving that (y, λ, µ) is an integral optimum solution for the minimum in (5.2). □
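The conclusion of Theorem 5.17 can be checked by brute force on a small instance (a hand-picked example; we take for granted that the system x1 + x2 ≤ 0, −x1 + x2 ≤ 0, x2 ≤ 0 is TDI, which follows from the construction of Theorem 5.16 because its rows form a Hilbert basis of the cone they generate). After turning x2 ≤ 0 into an equality, every integral c in a small box still admits an integral dual optimum:

```python
from itertools import product

# TDI-system for the cone {x : x2 <= -|x1|}, split as Ax <= b plus
# the inequality ax <= beta that we turn into the equality ax = beta.
A = [[1, 1], [-1, 1]]
b = [0, 0]
a, beta = [0, 1], 0

def integral_dual_optimum(c, bound=8):
    """Search integral y, lam, mu >= 0 with yA + (lam - mu)a = c and
    value yb + (lam - mu)*beta = 0, the primal optimum: the face
    {x : Ax <= b, ax = beta} is the single point (0, 0), and since
    b = 0 and beta = 0 every feasible dual solution has value 0."""
    for y1, y2, lam, mu in product(range(bound), repeat=4):
        if (all(y1 * A[0][j] + y2 * A[1][j] + (lam - mu) * a[j] == c[j]
                for j in range(2))
                and y1 * b[0] + y2 * b[1] + (lam - mu) * beta == 0):
            return (y1, y2, lam, mu)
    return None

# Every integral objective in the box [-3, 3]^2 has an integral
# dual optimum, as Theorem 5.17 predicts for the equality system.
for c in product(range(-3, 4), repeat=2):
    assert integral_dual_optimum(c) is not None
```

The small search bound suffices here because y1 − y2 = c1 and y1 + y2 + (λ − µ) = c2 can always be met with entries below 8 for c in the chosen box.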