
RAIRO. Analyse numérique

J. Denel

J. C. Fiorot

P. Huard

The steepest-ascent method for the linear programming problem

RAIRO. Analyse numérique, tome 15, n° 3 (1981), p. 195-200

<http://www.numdam.org/item?id=M2AN_1981__15_3_195_0>

© AFCET, 1981, all rights reserved.

Access to the archives of the journal "RAIRO. Analyse numérique" implies agreement with the general conditions of use (http://www.numdam.org/conditions). Any commercial use or systematic printing constitutes a criminal offence. Any copy or printing of this file must carry this copyright notice.

Article digitized as part of the program "Numérisation de documents anciens mathématiques"

http://www.numdam.org/

RAIRO. Analyse numérique (vol. 15, n° 3, 1981, p. 195 à 200)

THE STEEPEST-ASCENT METHOD
FOR THE LINEAR PROGRAMMING PROBLEM (*)

by J. Denel (¹), J. C. Fiorot (²) and P. Huard (³)

Communicated by P. J. Laurent

Abstract. — This paper deals with a finite projection method (called the steepest-ascent method) proposed in 1974 by one of the authors for maximizing a linear function on a polyhedron. In the particular case of the maximization of a piecewise-linear concave function, the method simply gives a recently published algorithm stated in the framework of nondifferentiable convex optimization.

Résumé. — This paper deals with a projection method (called the steepest-ascent method) proposed in 1974 by one of the authors for maximizing a linear function on a polyhedron. In the particular case of the maximization of a piecewise-linear concave function, the method simply gives a recently published algorithm in the framework of nondifferentiable convex optimization.

1. INTRODUCTION

Solving a linear program is classically done by the simplex method (Ref. 3) or by any nonlinear programming method (for instance, the projected gradient method (Ref. 7) or the reduced gradient method (Refs. 6, 8)). We propose in section 2 a finite method using the steepest-ascent direction given by the projection of the gradient onto the cone of tangents at the current point; this differs from Rosen's method (Ref. 7), where the projection is done onto a linear variety.

This finite method has certainly been well known for a long time, but to our knowledge it apparently never led to a publication.

In section 3, this method is applied in an obvious manner to the maximization of a piecewise-linear concave function by treating the problem in the hypograph

(*) Received 19 February 1980.

(¹) Université de Lille I, UER IEEA, Villeneuve d'Ascq, France.
(²) Université de Lille I, UER IEEA, Villeneuve d'Ascq, France.
(³) Université de Lille I, UER IEEA, Villeneuve d'Ascq, France, and Électricité de France, Clamart, France.

R.A.I.R.O. Analyse numérique/Numerical Analysis, 0399-0516/1981/195/$ 5.00 © Bordas-Dunod


space. It is shown that it is identical with a recently published algorithm (Ref. 2) given in the framework of nondifferentiable convex optimization. This latter paper led us to publish some details about this method.

2. LINEAR OPTIMIZATION BY THE STEEPEST-ASCENT METHOD

Let us consider the following problem:

(P): max f·x
s.t. x ∈ P = { x ∈ ℝⁿ | Ax ≤ a }

where A, an (m, n) matrix whose rows are denoted by A_i (i = 1, 2, ..., m), is such that P ≠ ∅.

Let us denote, for a given point x ∈ P:

— I(x) = { i ∈ { 1, ..., m } | A_i x = a_i },

— T(P, x) the cone of tangents of P at x, i.e.

T(P, x) = { y ∈ ℝⁿ | A_i y ≤ 0, i ∈ I(x) },

— f'(x) the projection of f onto T(P, x) (a sketch of this computation follows these definitions),

— D(x) = { z ∈ ℝⁿ | z = x + λ f'(x), λ ≥ 0 }, which is locally the steepest-ascent half-line in P (at x) for f (see iii) of the following lemma for the justification of this definition),

— [t, u] = { z ∈ ℝⁿ | z = λ t + (1 − λ) u, 0 ≤ λ ≤ 1 } for any t, u belonging to ℝⁿ.
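As a concrete illustration (not from the paper), f'(x) can be obtained by a single non-negative least-squares solve, using Moreau's decomposition: the projection of f onto the polyhedral cone { y | By ≤ 0 } equals f − Bᵀλ*, where λ* ≥ 0 minimizes ‖f − Bᵀλ‖ and B collects the rows of A active at x. The sketch below assumes this setting; the function name and tolerance are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def tangent_cone_projection(f, A, a, x, tol=1e-9):
    """Projection f'(x) of f onto T(P, x) = {y | A_i y <= 0, i in I(x)}.

    Moreau decomposition: proj_K(f) = f - B^T lam*, where B holds the
    rows of A active at x and lam* >= 0 minimizes ||B^T lam - f||.
    """
    active = np.abs(A @ x - a) <= tol       # index set I(x)
    B = A[active]
    if B.size == 0:                         # x interior: the cone is all of R^n
        return f.copy()
    lam, _ = nnls(B.T, f)                   # non-negative least squares
    return f - B.T @ lam
```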

2.1. Description of the method

Starting point: x₀ ∈ P.

Step k: let x_k be given:

— determine f'(x_k); if f'(x_k) = 0: stop,

— if D(x_k) ∩ P = D(x_k): stop; otherwise construct x_{k+1} such that

[x_k, x_{k+1}] = D(x_k) ∩ P.
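The whole method then takes only a few lines. The sketch below (my own illustration, with hypothetical names and a toy polyhedron) projects f onto the cone of tangents at x_k, stops when f'(x_k) = 0 or when D(x_k) stays inside P, and otherwise moves to the endpoint of D(x_k) ∩ P found by a ratio test.

```python
import numpy as np
from scipy.optimize import nnls

def steepest_ascent_lp(A, a, f, x0, tol=1e-9, max_iter=1000):
    """Maximize f.x over P = {x | Ax <= a} by the steepest-ascent method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # f'(x_k): projection of f onto the cone of tangents (Moreau + NNLS).
        B = A[np.abs(A @ x - a) <= tol]
        if B.size:
            lam, _ = nnls(B.T, f)
            d = f - B.T @ lam
        else:
            d = f.copy()
        if np.linalg.norm(d) <= tol:
            return x, "optimal"             # f'(x_k) = 0
        Ad = A @ d
        blocking = Ad > tol
        if not blocking.any():
            return x, "unbounded"           # D(x_k) ∩ P = D(x_k)
        # Ratio test: [x_k, x_{k+1}] = D(x_k) ∩ P.
        step = np.min((a[blocking] - A[blocking] @ x) / Ad[blocking])
        x = x + step * d
    raise RuntimeError("iteration limit reached")

# Toy problem: maximize x1 + 2*x2 over the unit square.
A = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
a = np.array([1., 1., 0., 0.])
print(steepest_ascent_lp(A, a, np.array([1., 2.]), np.zeros(2)))
# -> (array([1., 1.]), 'optimal')
```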

2.2. Finite convergence

The next theorem proves that, while solving (P) by the steepest-ascent method, one constructs a finite sequence { x_k ∈ P | 0 ≤ k ≤ k* } such that either x_{k*} is an optimal solution of (P) or D(x_{k*}) is an objective-increasing half-line.

This result was proposed in 1974 for an exam at the University of Lille (Ref. 1). Its proof uses the following elementary lemma.

LEMMA 2.1: For x ∈ P, the following properties hold:

i) if f'(x) = 0, then x is optimal for (P);

ii) if f'(x) ≠ 0 and D(x) ∩ P ≠ D(x), then there exists x' ∈ P, x' ≠ x, such that [x, x'] = D(x) ∩ P;

iii) if [x, x'] = D(x) ∩ P with x' ≠ x, then

f·f'(x') / ‖f'(x')‖ < f·f'(x) / ‖f'(x)‖.

THEOREM 2.1: The algorithm described in 2.1 converges in a finite number of steps.

Proof: The number of cones of tangents of P is finite; this implies that the number of directions f'(·) that can be used by the algorithm is finite. Thus it suffices to prove that the consecutive directions used have strictly decreasing slopes, which is obvious by lemma 2.1 iii).

3. PARTICULARIZATION: MAXIMIZING A PIECEWISE-LINEAR CONCAVE FUNCTION

3.1. Notations

Let us consider the following piecewise-linear concave function:

θ : ℝⁿ → ℝ, θ(x) = min { p_i·x + α_i | 1 ≤ i ≤ m }.

Let us denote by E(θ) the hypograph of θ, i.e.

E(θ) = { (x, z) ∈ ℝⁿ × ℝ | z ≤ θ(x) }.

E(θ) defines a polyhedron in ℝⁿ⁺¹ whose boundary points (x, θ(x)) belong to one of the following subsets F_J:

F_J = { (x, z) ∈ ℝⁿ × ℝ | p_j·x + α_j = z ∀j ∈ J, p_j·x + α_j > z ∀j ∉ J }

with J an arbitrary subset of { 1, ..., m }.

This partition of the faces of E(θ) into a finite number of subsets F_J ∩ E(θ) leads to a partition of ℝⁿ (by projection onto ℝⁿ) which is identical with the one given in reference 2.


If (P₁) and (P₂) are the following mathematical programs:

(P₁): max { θ(x) | x ∈ ℝⁿ },

(P₂): max { μ | p_i·x + α_i ≥ μ, i = 1, 2, ..., m },

it is known that (P₁) and (P₂) are equivalent in the sense that if (x*, μ*) is optimal for (P₂) then x* is optimal for (P₁) and, conversely, x̄ being a solution of (P₁), then (x̄, θ(x̄)) is optimal for (P₂).
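For illustration (mine, not the paper's), (P₂) can be handed to any LP solver in the variables (x, μ): the constraints p_i·x + α_i ≥ μ become −p_i·x + μ ≤ α_i. A minimal sketch with a hypothetical θ on ℝ²:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical pieces: theta(x) = min_i (p_i . x + alpha_i) = 1 - max(|x1|, |x2|).
P = np.array([[-1., 0.], [1., 0.], [0., -1.], [0., 1.]])
alpha = np.ones(4)
m, n = P.shape

# (P2): max mu  s.t.  -p_i . x + mu <= alpha_i  (linprog minimizes, so use -mu).
c = np.zeros(n + 1); c[-1] = -1.0
A_ub = np.hstack([-P, np.ones((m, 1))])
res = linprog(c, A_ub=A_ub, b_ub=alpha, bounds=[(None, None)] * (n + 1))
print(res.x[:n], res.x[-1])   # x* = (0, 0), mu* = theta(x*) = 1
```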

3.2. Connection between the steepest-ascent direction for θ in ℝⁿ (at x) and the steepest-ascent direction for the objective function f(x, μ) = 0·x + 1·μ on E(θ) at (x, θ(x))

In reference 2, the authors proposed for solving (P₁) an algorithm that uses (at x ∈ ℝⁿ) the steepest-ascent direction for the function θ. This direction is classically defined by the shortest subgradient g(x), given by the projection of the origin onto ∂θ(x). Recall that g(x) is collinear with the unique solution of the problem max { θ'(x; u) | u ∈ ℝⁿ, ‖u‖ = 1 }, where θ'(x; u) is the one-sided directional derivative at x with respect to u.

Figure. — Two-dimensional illustration, with z = (x, θ(x)), z' = (x', θ(x')) ∈ E(θ), x' = x + λu, u ∈ ℝⁿ, ‖u‖ = 1. Its footnote states that θ'(x; u) = lim_{λ→0⁺} (θ(x + λu) − θ(x)) / λ and that, for sufficiently small λ, (θ(x + λu) − θ(x)) / λ = 1 / tan α.

For a given u ∈ ℝⁿ (‖u‖ = 1), we have θ'(x; u) = 1/tan α (see the two-dimensional figure and its footnote). Thus the steepest-ascent direction (for θ in ℝⁿ) corresponds to a minimum of α, which is obtained by the projection of the "vertical axis" of ℝⁿ⁺¹ onto the cone of tangents of E(θ) at (x, θ(x)).

Indeed this projection defines the steepest-ascent direction on E(θ) according to the objective 0·x + 1·μ.

As a consequence, the stepwise walk of the algorithm defined in reference 2 for solving (P₁) is identical with the projection onto ℝⁿ of the piecewise-linear path on E(θ) constructed by the steepest-ascent method for solving the linear program (P₂). The construction of one or the other of these paths (in ℝⁿ or ℝⁿ⁺¹) involves the same computations. Recall that, generally, the steepest-ascent method differs from the simplex method (Ref. 3) and from Rosen's gradient projection method (Ref. 7).
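This identification can be checked numerically. In the sketch below (my own illustration, with hypothetical data), the shortest subgradient g(x), i.e. the minimum-norm point of conv{ p_i | i ∈ I(x) }, is computed by a small QP, and the "vertical axis" e = (0, ..., 0, 1) of ℝⁿ⁺¹ is projected onto the cone of tangents of E(θ) at (x, θ(x)); the x-component of that projection comes out collinear with g(x).

```python
import numpy as np
from scipy.optimize import minimize, nnls

# Hypothetical data: the pieces p_i active at the current point x.
P_act = np.array([[2., 0.], [0., 1.], [1., 1.]])
k, n = P_act.shape

# Shortest subgradient g(x): minimum-norm point of conv{p_i}, a small QP
# over the simplex {w >= 0, sum w = 1}.
res = minimize(lambda w: np.sum((P_act.T @ w) ** 2),
               np.full(k, 1.0 / k), method="SLSQP",
               bounds=[(0, 1)] * k,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
g = P_act.T @ res.x

# Projection of e = (0, ..., 0, 1) onto the cone of tangents of E(theta),
# {(t, tau) | tau - p_i . t <= 0, i in I(x)}, via Moreau decomposition + NNLS.
B = np.hstack([-P_act, np.ones((k, 1))])   # rows (-p_i, 1)
e = np.zeros(n + 1); e[-1] = 1.0
lam, _ = nnls(B.T, e)
t = (e - B.T @ lam)[:n]                    # x-component of the projection

print(g, t, g[0] * t[1] - g[1] * t[0])     # determinant ~ 0: g and t collinear
```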

3.3. Maximizing a piecewise-linear concave function subject to linear constraints (Ref. 5)

The problem is

(P'₁): max { θ(x) | Dx ≥ d }, with θ defined as in 3.1,

where D is a (p, n) matrix and d ∈ ℝᵖ. (P'₁) is equivalent to

(P'₂): max { μ | p_i·x + α_i ≥ μ, i = 1, 2, ..., m; Dx ≥ d }.
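Continuing the earlier sketch (again my own, with the same hypothetical data), (P'₂) is an ordinary LP as well; the rows of Dx ≥ d are simply appended as −Dx ≤ −d:

```python
import numpy as np
from scipy.optimize import linprog

# Same hypothetical theta as before, now restricted to {x | x1 >= 0.5}.
P = np.array([[-1., 0.], [1., 0.], [0., -1.], [0., 1.]])
alpha = np.ones(4)
D = np.array([[1., 0.]]); d = np.array([0.5])
m, n = P.shape

# (P2'): max mu  s.t.  -p_i . x + mu <= alpha_i  and  -D x <= -d.
c = np.zeros(n + 1); c[-1] = -1.0
A_ub = np.vstack([np.hstack([-P, np.ones((m, 1))]),
                  np.hstack([-D, np.zeros((len(D), 1))])])
b_ub = np.concatenate([alpha, -d])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
print(res.x[:n], res.x[-1])   # mu* = 0.5, attained with x1 = 0.5
```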

In this case, if x (resp. (x, θ(x))) is not an optimal solution, the projection onto ℝⁿ of the steepest-ascent direction in the (x, μ) space, as given in section 2, is collinear to the unique solution of the quadratic problem

(q₁): max { min { t·h | h ∈ ∂*(x) } | ‖t‖ ≤ 1 }

where

In order to solve (P'₁) it would be possible to define in ℝⁿ an algorithm following reference 2. The moving direction is then the unique solution of (q₁), which is called the feasible steepest-ascent direction. The step-size criterion needs to take into account the constraints D_i x ≥ d_i.

The similarity between (P'₁) and (P'₂) and the results of section 2 prove the finite convergence of such a method.

Remark 3.1: It is to be noted that the finite convergence is essentially connected with the linear character of the problem. In nonlinear max-min problems, even convergence alone cannot be guaranteed without substantial modifications (Ref. 4).

REFERENCES

1. Annales de l'Université de Lille I, France, Examen A.E.A. de Traitement de l'Information, Épreuve de Programmation Mathématique, 13 février 1974.

2. M. S. BAZARAA, J. J. GOODE and R. L. RARDIN, A Finite Steepest-Ascent Algorithm for Maximizing Piecewise-Linear Concave Functions, Journal of Optimization Theory and Applications, Vol. 25, No. 3, 1978, pp. 437-442.

3. G. B. DANTZIG, Linear Programming and Extensions, Princeton University Press, Princeton, New Jersey, 1963.

4. V. F. DEMYANOV, Algorithms for Some Minimax Problems, Journal of Computer and System Sciences, Vol. 2, 1968, pp. 342-380.

5. J. DENEL, J. C. FIOROT and P. HUARD, Maximisation d'une fonction concave linéaire par morceaux. Cas sans et avec contraintes. Méthode finie de la plus forte pente, Université de Lille I, UER IEEA, France, Publication AN04, 1979.

6. P. FAURE and P. HUARD, Résolution de programmes mathématiques à fonction non linéaire par la méthode du gradient réduit, Revue Française de Recherche Opérationnelle, Vol. 36, 1965, pp. 167-206.

7. J. B. ROSEN, The Gradient Projection Method for Nonlinear Programming, Part I: Linear Constraints, SIAM Journal on Applied Mathematics, Vol. 8, 1960, pp. 181-217.

8. P. WOLFE, Methods of Nonlinear Programming, in: Recent Advances in Mathematical Programming, R. L. Graves and P. Wolfe, eds., McGraw-Hill, New York, 1963, pp. 67-86.

