
4.1 A Discrete-Space First-Order Search Framework

4.1.1 DLM: An Implementation of First-Order Search Framework

The framework in Figure 4.1 consists of two major loops. One loop is the “x loop” that generates new candidate points (or trial points) in the original-variable space and accepts them based on their Lagrangian values. The other loop, the “λ loop,” updates the Lagrange multipliers in order to suppress constraint violations, if they exist.
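To make the interplay between the two loops concrete, the following is a minimal Python sketch of the framework on a toy integer problem. The objective f, the constraint h, the unit-step neighborhood, and the constant RHO are illustrative assumptions, not the formulation of Figure 4.1 itself.

    RHO = 1.0  # step size for multiplier ascent (the constant called rho_1 later)

    def f(x):                           # toy objective
        return (x[0] - 3) ** 2 + (x[1] + 1) ** 2

    def h(x):                           # toy equality constraints; feasible when all zero
        return [x[0] + x[1] - 2]

    def lagrangian(x, lam):
        return f(x) + sum(l * c for l, c in zip(lam, h(x)))

    def neighbors(x):
        # unit moves in each coordinate: a simple discrete neighborhood
        for i in range(len(x)):
            for d in (-1, 1):
                y = list(x)
                y[i] += d
                yield y

    def dlm(x, max_iters=1000):
        lam = [0.0] * len(h(x))
        for _ in range(max_iters):
            # x loop: accept a trial point that lowers the Lagrangian value
            improved = False
            for y in neighbors(x):
                if lagrangian(y, lam) < lagrangian(x, lam):
                    x, improved = y, True
                    break
            # lambda loop: at a local minimum, update multipliers on violations
            if not improved:
                viol = h(x)
                if all(v == 0 for v in viol):
                    return x, lam       # feasible local minimum found
                lam = [l + RHO * v for l, v in zip(lam, viol)]
        return x, lam

    print(dlm([0, 0]))                  # returns the feasible point [3, -1]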

Figure 4.2 shows an implementation of the framework in Figure 4.1. In the following, we describe some of the considerations and trade-offs in implementing the procedure.

a) Initialization (Lines 0-2). We choose either a fixed or a randomly generated starting point using a fixed initial seed. Both allow our results to be reproduced by others. We initialize all Lagrange multipliers to zero. An optimal initial setting of x and λ is difficult to determine because it depends on the amount of constraint violation. Also, if dynamic weight adaptation is to be used, then weight initialization is done here. Variable j, used in dynamic weight adaptation, counts the number of round robins in the search. It is initially set to zero in Line 0 and is increased in Line 10.
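As a small sketch of this step, assuming hypothetical names and value ranges, the initialization might look like:

    import random

    def initialize(n, m, seed=0, fixed_start=None):
        # Lines 0-2: starting point, zero multipliers, initial weight, counter j
        rng = random.Random(seed)        # fixed seed keeps runs reproducible
        if fixed_start is not None:
            x = list(fixed_start)        # fixed starting point
        else:
            x = [rng.randint(-10, 10) for _ in range(n)]   # random but reproducible
        lam = [0.0] * m                  # all Lagrange multipliers start at zero
        w, j = 1.0, 0                    # objective weight and round-robin counter
        return x, lam, w, j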

b) Duration of each run (Line 3). We use the idea of iterative deepening proposed in [204] to decide on the suitable duration of a run. See Section 4.1.6 for details.
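As a hedged illustration, the sketch below restarts the procedure with geometrically growing iteration budgets until a run succeeds; the helper run_dlm, the base budget, and the growth factor are assumptions for this sketch, not parameters prescribed in [204].

    def search_with_deepening(run_dlm, base_budget=100, growth=2, max_rounds=10):
        # iterative deepening over run durations: grow the budget after each failure
        budget = base_budget
        for _ in range(max_rounds):
            result = run_dlm(budget)     # one run capped at `budget` iterations
            if result is not None:       # run found a feasible point within its budget
                return result
            budget *= growth             # deepen: allow the next run more time
        return None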

c) Modification of problem formulations (Line 4). Many heuristics in DLM can be characterized as some kind of modification of problem formulations. Three such modifications are included in DLM and are discussed next.

d) Dynamic weight adaptation (Lines 2 and 5). Dynamic weight adaptation changes the Lagrangian formulation by adding a weight to the objective and then adjusts that weight dynamically in order to improve the convergence of DLM. Section 4.1.3 examines the issues and alternatives in weight adaptation. This approach addresses a similar issue as our previous approach [179], which scales the Lagrange multipliers periodically.
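As one hedged illustration of such a rule, the sketch below assumes a weighted Lagrangian of the form w * f(x) + sum of lam_i * h_i(x) and rescales w from recent progress on constraint violations; the monitoring window and scaling factors are illustrative choices, not the scheme of Section 4.1.3.

    def adapt_weight(w, viol_history, window=10, shrink=0.5, grow=1.5):
        # viol_history records the maximum constraint violation per round robin
        if len(viol_history) < window:
            return w                      # not enough history to judge progress
        recent, earlier = viol_history[-1], viol_history[-window]
        if recent >= earlier:
            return w * shrink             # violations stagnate: de-emphasize f(x)
        return w * grow                   # feasibility improving: re-emphasize f(x)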

e) Global search (Line 6). A search trajectory generated by DLM may be stuck in an infeasible local minimum. For example, when the trajectory is at a local minimum of both the objective and the constraint functions, increasing the Lagrange multipliers at this point will not bring the trajectory out of the local minimum. Global search is performed to enable the search trajectory to traverse wider regions of the search space. In Section 4.1.4, we propose adding a distance-penalty term to the Lagrangian formulation to implement this idea.
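A minimal sketch of this idea follows, assuming a penalty that peaks at a recorded trap point x_star and decays with distance; the functional form and the coefficient mu are illustrative assumptions rather than the formulation of Section 4.1.4.

    def penalized_lagrangian(x, lam, x_star, mu, f, h):
        # Lagrangian plus a penalty for staying near the recorded local minimum
        base = f(x) + sum(l * c for l, c in zip(lam, h(x)))
        dist2 = sum((xi - si) ** 2 for xi, si in zip(x, x_star))
        return base + mu / (1.0 + dist2)  # largest at x_star, fades with distance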

f) Relax-and-tighten (Line 7). In applying DLM to solve constrained problems with many equality constraints, we find it difficult to locate feasible solutions. Section 4.1.5 addresses this issue by presenting relax-and-tighten, a strategy that relaxes the original equality constraints into inequality constraints and then gradually tightens the relaxed constraints.
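Under stated assumptions, the sketch below illustrates the strategy: each equality h_i(x) = 0 is relaxed to |h_i(x)| <= eps, and eps shrinks over stages until the original equalities are restored; the violation measure and the geometric schedule are illustrative, with the actual strategy deferred to Section 4.1.5.

    def relaxed_violations(hs, eps):
        # violation of the relaxed constraints |h_i(x)| <= eps
        return [max(abs(v) - eps, 0.0) for v in hs]

    def tighten_schedule(eps0=1.0, factor=0.5, stages=8):
        # geometrically shrinking tolerances, ending at exact equality
        eps = eps0
        for _ in range(stages):
            yield eps
            eps *= factor
        yield 0.0                         # final stage: back to h_i(x) = 0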

g) x Loop (Line 8). The x loop performs neighborhood search. Here, we evaluate some possible neighboring points of x in order to find improvements in its Lagrangian value. We try x_1, . . . , x_n in a round-robin fashion, one variable at a time, and compare the Lagrangian value of x with that of its neighbor. To save time, we apply a greedy strategy rather than a hill-climbing strategy, switching from one variable to the next as soon as any improvement in the Lagrangian value is found. For solving general nonlinear constrained NLPs, neighborhoods are generated using a random distribution, such as a Gaussian or Cauchy distribution. Details are discussed in Section 4.1.2.
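A sketch of one round robin of this greedy descent is shown below; L stands for the Lagrangian value at a point, and the per-variable candidate steps are an assumption for discrete variables (for continuous NLPs they would instead be sampled from a Gaussian or Cauchy distribution).

    def greedy_round_robin(x, L, steps=(-1, 1)):
        # one pass over x_1..x_n; take the first improving neighbor of each variable
        moved = False
        for i in range(len(x)):
            for d in steps:
                y = list(x)
                y[i] += d
                if L(y) < L(x):
                    x, moved = y, True    # greedy: accept immediately and
                    break                 # switch to the next variable
        return x, moved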

A consequence of applying a greedy instead of a hill-climbing strategy is that the associated search trajectory is not guided by the DMPD of L_d. Hence, when the algorithm stops at a feasible point x, the point may not be a CLM_dn, because not all neighboring points in N_dn(x) have been examined. In general, our proposed search algorithm only finds a feasible solution when it stops; there is no guarantee that it reaches a CLM_dn (or SP_dn).

h) λ Loop (Line 9). The Lagrange multipliers are updated when the search reaches a local minimum in the objective space. We do not update the multipliers more frequently, because doing so destabilizes the trajectory. The amount of update is controlled by an application-dependent constant ϱ_1 (> 0).
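In code, this update could look like the following sketch, which applies the rule λ_i ← λ_i + ϱ_1 h_i(x) used in the proof of Theorem 4.1 below; the function name is illustrative.

    def update_multipliers(lam, hs, rho1):
        # one lambda-loop step: adjust multipliers in proportion to violations;
        # rho1 > 0 is the application-dependent constant from the text
        return [l + rho1 * v for l, v in zip(lam, hs)]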

A search based on DLM-General can be considered a local search in the Lagrangian space because the search stops when all the constraints are satisfied and when there is no improvement in the Lagrangian value over the neighboring points probed. However, the search can be considered a global search in the original-variable space because a trajectory can overcome local basins and minima in the original-variable space by manipulating its Lagrange multipliers. When some of the constraints are not satisfied, Lines 8 and 9 of DLM-General perform, respectively, descents in the original-variable space and ascents in the Lagrange-multiplier space. Obviously, descents in the original-variable space will stop after a certain number of iterations; that is, x_{k+1} = x_k. However, Line 9 of DLM-General will never stop as long as there are violated constraints, and λ will continue to increase to suppress the unsatisfied constraints. Increases in λ allow Line 8 of DLM-General to move on and escape local minima in the original-variable space.

The following theorem ensures that when DLM-General stops, a feasible point will be located.

Theorem 4.1 (Termination condition). A feasible point x is reached when Procedure DLM-General stops.

Proof. When the procedure shown in Figure 4.2 stops at a point x, the Lagrange multipliers have obviously stopped growing. Mathematically, this implies that λ_i = λ_i + ϱ_1 h_i(x). Hence, h_i(x) = 0 for i ∈ {1, 2, . . . , m}, since ϱ_1 ≠ 0. It follows that x is a feasible point.

It is important to note that a global search in the original-variable space does not imply convergence in finite time. Similar to continuous Lagrange-multiplier methods, the time for DLM-General to find a feasible solution may be unbounded, even if feasible solutions exist.

The framework of DLM-General is very general and can be implemented in many ways. For example, in modifying a problem formulation, new objectives, additional constraints, or relaxed constraints may be added; in generating trial points in the x subspace, deterministic or stochastic neighborhoods may be selected; in determining the stopping condition, fixed or adaptive strategies may be employed. Within these various possibilities, we select five of the most important components of DLM-General and explore them carefully in the following five subsections.
