
The widespread availability of parallel computers and their potential for the numerical solution of difficult partial differential equations have led to a large amount of research in domain decomposition methods. Domain decomposition methods are general, flexible methods for the solution of linear or non-linear systems of equations arising from the discretization of partial differential equations (PDEs). For linear problems, domain decomposition methods can often be viewed as preconditioners for Krylov subspace techniques such as the generalised minimum residual method (GMRES). For non-linear problems, they may be viewed as preconditioners for the linear systems arising from the use of Newton's method, or as solvers in their own right. The term domain decomposition has slightly different meanings to specialists within the discipline of PDEs. In parallel computing it means the process of distributing data among the processors of a distributed memory computer. In preconditioning methods, on the other hand, domain decomposition refers to the process of subdividing the solution of a large linear system into smaller problems whose solutions can be used to produce a preconditioner (or solver) for the system of equations that results from discretizing the PDE on the entire domain. In this context, domain decomposition refers only to the solution method for the algebraic system of equations arising from discretization. Finally, in some situations the decomposition is natural from the physics of the problem: different physics in different subdomains, moving domains, or strongly heterogeneous media. These separate regions can be modelled with different equations, with the interfaces between the domains handled by various coupling conditions. Note that all three of these meanings may occur in a single program.
We can conclude that the most important motivations for domain decomposition methods are their ease of parallelization and good parallel performance, as well as the simplification of problems posed on complicated geometries.
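To make the preconditioning viewpoint concrete, here is a minimal one-level additive Schwarz sketch in Python with NumPy/SciPy. The 1-D Poisson model matrix, the two-subdomain index splitting and the overlap width are all hypothetical choices made for illustration, not a prescription from the text.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, splu

# Hypothetical model problem: 1-D Poisson matrix on n unknowns.
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Two overlapping subdomains (indices 0..59 and 50..99, overlap of 10).
subdomains = [np.arange(0, 60), np.arange(50, n)]
local_lu = [splu(A[s, :][:, s].tocsc()) for s in subdomains]

def additive_schwarz(r):
    """Apply M^{-1} r: restrict the residual, solve locally, prolong, sum."""
    z = np.zeros_like(r)
    for s, lu in zip(subdomains, local_lu):
        z[s] += lu.solve(r[s])
    return z

M = LinearOperator((n, n), matvec=additive_schwarz)
x, info = gmres(A, b, M=M)   # Krylov method preconditioned by the DD solve
```

In a production setting each local solve would run on its own processor, and a coarse-space correction would be added to keep the iteration count bounded as the number of subdomains grows.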


Keywords — wave propagation, Helmholtz equation, finite elements, parallel computing
1 Introduction
Solving high-frequency time-harmonic scattering problems using finite element techniques is challenging, as such problems lead to very large, complex and indefinite linear systems. Optimized Schwarz domain decomposition methods (DDMs) are currently a very promising approach, where subproblems of smaller sizes are solved in parallel using direct solvers and are combined in an iterative procedure.

We are interested in domain decomposition (DD) methods since they are naturally well-fitted to modern parallel architectures. For specific systems of partial differential equations with a saddle point formulation, efficient DD methods have been designed; see e.g. [28, 23, 29, 34] and references therein. Also, in [16] a GenEO coarse space is introduced for P.L. Lions' algorithm and its efficiency is mathematically proved for symmetric positive definite problems. In that article, numerical experiments are conducted on three-dimensional elasticity problems for steel-rubber structures discretized by finite elements with continuous pressure. Although it works well in practice, the method lacks theoretical convergence guarantees and also demands the design of specific absorbing conditions as interface conditions. The method we propose in our article has a provable efficiency and bypasses the need for absorbing boundary conditions.

space with the aim of achieving robustness with respect to heterogeneities in any of the coefficients in the PDEs and to the number of subdomains. In the previous chapter we proposed and studied the DtN coarse space for scalar elliptic problems. The proof for DtN relies on uniform (in the coefficients) weighted Poincaré inequalities [89]. While this allows for full robustness in the small overlap case (cf. [21]), in a completely general setting it has two drawbacks: (i) for larger overlap, some assumptions are needed on the coefficient distribution in the overlaps, and (ii) the arguments cannot be generalized easily to the case of systems of PDEs. This second point was the motivation to look for a new coarse space. In this chapter, we propose a coarse space construction based on Generalized Eigenproblems in the Overlap (which we will refer to as the GenEO coarse space). We define the coarse space, prove a convergence result and illustrate it with some numerical results. The coarse space construction applies to systems of PDEs discretized by finite elements with only a few extra assumptions. The implementation only relies on having access to element stiffness matrices and the connectivity graph between elements. The subdomain partition is carried out using Metis. Overlap is added based on the connectivity graph, and the coarse space is constructed automatically by solving a generalized eigenproblem on each subdomain. In our analysis, we identify the fact that the abstract Schwarz framework makes it possible to reduce the proof of convergence to an energy bound in the overlap; for this reason, the second matrix in the pencil of our generalized eigenvalue problem has zero blocks corresponding to the interior of the subdomain.
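A hedged sketch of the per-subdomain step can be written in a few lines of Python/SciPy. The local matrices, the overlap index set and the selection threshold below are invented stand-ins, not the exact GenEO pencil; the sketch only shows the mechanism: solve a generalized eigenproblem whose second matrix vanishes outside the overlap, and keep the selected modes as the local contribution to the coarse space.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Hypothetical local (Neumann-type) stiffness matrix A_j, symmetric positive definite.
n = 30
X = rng.standard_normal((n, n))
A_j = X @ X.T + n * np.eye(n)

# Matrix with zero blocks outside the overlap: here the first 8 degrees of
# freedom are declared to lie in the overlap (an invented index set).
D = np.zeros(n)
D[:8] = 1.0
B_j = np.diag(D) @ A_j @ np.diag(D)        # symmetric positive semi-definite

# Generalized eigenproblem B_j v = lambda A_j v. The definite matrix goes in
# the second slot because scipy.linalg.eigh requires it; the eigenvalues of
# this swapped pencil are the reciprocals of those of A_j v = mu B_j v.
vals, vecs = eigh(B_j, A_j)
tau = 0.1                                  # hypothetical selection threshold
coarse_basis = vecs[:, vals > tau]         # local contribution to the coarse space
```

The retained columns would then be glued together over all subdomains (with partition-of-unity weighting) to form the global coarse space.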


converge for the Helmholtz equation. In fact, in the elliptic case, the boundary value problems (BVPs) associated with (1) enjoy many nice properties, including the H¹-coercivity of the associated bilinear form a(u, v) = ∫ (∇u · ∇v + k² u v), and their solutions are often interpreted as the solutions of convex minimization problems. With this point of view, P.L. Lions gave a general proof of convergence of the Schwarz method by interpreting the error at each step of the algorithm as the result of successive orthogonal projections onto two (with two subdomains) supplementary subspaces of H¹ [17]. These problems also benefit from the maximum principle,

λv(x) + max_{u∈U} { −f(x, u) · Dv(x) − L(x, u) } = 0,   for x ∈ ℝᵈ.   (1.2)
Due to the difficulty of finding an analytical solution of the HJB equation, several approximation schemes have been proposed for this class of equations, based on finite difference [37], semi-Lagrangian [28, 43, 45] and finite volume methods [70]. These algorithms compute the solution by iterating in the value space and looking for a fixed point of the equation. They converge to the value function, but the convergence is slow (see [44] for error estimates on semi-Lagrangian schemes). A possible approach, which has a rather long history, is based instead on iteration in the space of controls (or policies) for the solution of HJB equations. The Policy Iteration (PI) method, known as Howard's algorithm [65], has been investigated by Kalaba [67] and Pollatschek and Avi-Itzhak [105], who proved that it corresponds to the Newton method applied to the functional equation of dynamic programming. Later, Puterman and Brumelle [107] gave sufficient conditions for the rate of convergence to be either superlinear or quadratic. More recent contributions on the policy iteration method can be found in Santos and Rust [111] and Bokanowski et al. [18]. Results on its numerical implementation and diverse hybrid algorithms have been reported in Capuzzo-Dolcetta and Falcone [27], Gonzáles and Sagastizábal [59] and Grüne [56]. We mention also that an acceleration method based on the set of subsolutions has been studied in Falcone [43]. Finally, Alla et al. [5] have presented an accelerated algorithm for the solution of static Hamilton-Jacobi-Bellman equations related to optimal control problems. In particular, they use a classic policy iteration procedure with a smart initial guess given by the solution of the value iteration scheme on a coarse mesh.
More generally, concerning domain decomposition methods for HJB equations, we should also mention approaches based on domain decomposition algorithms, as in Falcone et al. [46] and more recently Cacace et al. [23], and on geometric considerations, as in Botkin et al. [19].
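To make Howard's algorithm concrete, here is a small self-contained Python sketch on a randomly generated discounted Markov decision process, a toy stand-in for the discretized HJB equation; the sizes, dynamics and costs are all hypothetical.

```python
import numpy as np

# Hypothetical discounted MDP: n states, m actions, random dynamics and costs.
rng = np.random.default_rng(1)
n, m, gamma = 20, 4, 0.9
P = rng.random((m, n, n))
P /= P.sum(axis=2, keepdims=True)   # row-stochastic transition kernels P[a]
L = rng.random((m, n))              # running costs L[a, i]

def policy_iteration(P, L, gamma):
    """Howard's algorithm: alternate policy evaluation and greedy improvement."""
    m, n, _ = P.shape
    policy = np.zeros(n, dtype=int)
    while True:
        # Evaluation: solve (I - gamma * P_pi) v = L_pi for the current policy.
        P_pi = P[policy, np.arange(n), :]
        L_pi = L[policy, np.arange(n)]
        v = np.linalg.solve(np.eye(n) - gamma * P_pi, L_pi)
        # Improvement: minimize the Bellman right-hand side pointwise.
        q = L + gamma * np.einsum("aij,j->ai", P, v)
        new_policy = q.argmin(axis=0)
        if np.array_equal(new_policy, policy):
            return v, policy
        policy = new_policy

v, policy = policy_iteration(P, L, gamma)
```

The evaluation step solves a linear system for the current policy, which is where the interpretation of PI as Newton's method on the dynamic programming equation comes from.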

It is well known that the convergence rate of these methods strongly depends on the transmission condition enforced on the interfaces between the subdomains. Local transmission conditions based on high-order absorbing boundary conditions (HABCs) have proved well suited [1, 2]. They represent a good compromise between basic impedance conditions (which lead to suboptimal convergence) and the exact Dirichlet-to-Neumann (DtN) map related to the complementary of the subdomain (which is expensive to compute). However, a direct application of this approach to domain decomposition configurations with cross-points, where more than two subdomains meet, does not provide satisfactory results.

Our first test instances are related to the fixed-charge multicommodity capacitated network design (FMCND) problem with stochastic demands. This problem naturally appears in many practical applications (Klibi et al., 2010) and has been numerically shown to be notoriously hard to solve (Crainic et al., 2011). In addition, it lacks the complete recourse property, which entails that the generation of feasibility cuts is necessary to ensure the convergence of the BD method. We have considered 7 classes of instances (r04 to r10) from the R set, as developed in Crainic et al. (2001). Each class includes 5 instances with varying cost and capacity ratios. Our second test instances are related to the capacitated facility location (CFL) problem, which was introduced by Louveaux (1986) and addressed in Bodur et al. (2017), Fischetti et al. (2016) and Boland et al. (2015), among others. To avoid the generation of feasibility cuts, which do not contribute towards the improvement of the lower bound generated by the BD method, the complete recourse property can be enforced via the inclusion of a constraint in the MP. As for the instances, we use the deterministic CAP instances (101 to 134), which are part of the OR-Library. These instances include 50 customers with 25 to 50 potential facilities. For the stochastic variant, we have used the scenarios generated by Bodur et al. (2017), where each instance includes 250 scenarios. It should be noted that the deterministic instances of this problem are referred to as "CFL" and the stochastic ones as "CFL-S". Finally, our third set of benchmark instances is associated with the stochastic network interdiction (SNI) problem proposed by Pan and Morton (2008). It is important to note that this problem is structurally different from the previous ones, in the sense that there are no fixed costs associated with the master variables in the objective function.
Moreover, due to the presence of a budget constraint, the variable fixing strategy detailed in Section 5.4.2 cannot be applied. Regarding the instances, we have considered those described and used by Pan and Morton (2008), Bodur et al. (2017) and Boland et al. (2015). All instances have 456 scenarios and 320 binary master variables associated with the same network of 783 nodes and 2586 arcs. We specifically consider the instances which are part of the classes referred to as "snipno" 3 and 4 (see Pan and Morton (2008), Tables 3 and 4). Each class includes 5 different instances, and for each instance we have considered varying budget limits (i.e., 30, 40, 50, 60, 70, 80 and 90 units).


typically it could be of the form m^α, for α < 1. Notice that we changed on purpose the notation from |v| to m, since defining such a functional F is not evident in this case and usually requires v to be highly concentrated. The good framework is that of vector measures (and |v| denotes in this case the total variation measure of v). This framework also fits the convex case; in the concave case, the functional F will only be finite on measures concentrated on (possibly infinite) one-dimensional rectifiable graphs, and the quantity m will represent the density of |v| w.r.t. the one-dimensional measure: it stands for the amount of mass (and not the density of mass) passing through a precise point, and it will only be positive on a thin set, standing for the transportation network. Notice finally that the intermediate and concentration-neutral case, where we just minimize the total mass of v, i.e. ||v|| = ∫ |v|, is also interesting. It was introduced by the spatial economist M. Beckmann in the '50s, and it turns out to be equivalent to the Monge-Kantorovich problem for the optimal transport between µ and ν with cost |x − y| (see [22, 20, 29], and [27] for the equivalence). In the same paper [2], Beckmann also proposed to minimize convex costs. This idea also appears in [11] as a variant of the well-known Benamou-Brenier formula in a dynamical setting, and could also be considered now as a particular case of the non-linear mobility of Dolbeault-Nazaret-Savaré [14] (recently, the concave case as well got a sort of Benamou-Brenier formulation, see [10]).

Chapter 2
Architecture of the power grid
An electrical grid, or power grid, is an interconnected network for delivering electricity from producers to consumers. It consists of generating stations that produce electrical power, high-voltage transmission lines that carry power from distant sources to demand centers, and distribution lines that connect individual customers. A scheme of a typical electric power grid is illustrated in Figure 2.1. Extra-high-voltage electricity (380 kV and 220 kV) reaches the transmission grid from power plants as well as from imports from abroad. The voltage must be as high as possible so that as much energy as possible can be transported over great distances with minimal losses. Depending on the target customer (an industrial center or a typical family house), the voltage level is stepped down across multiple stages and different grid levels into medium- and low-voltage distribution grids. Recent developments in modern power grids involve the widespread deployment of intermittent renewable generation, the installation of a wide variety of energy storage devices, and an increasing and widespread usage of electric vehicles. On the other hand, conventional energy sources such as coal or nuclear power are progressively being discontinued (Swiss Federal Office of Energy [2018]). These developments motivate fundamental changes in the methods and tools for the optimal daily operation and planning of modern power grids. Operational decisions taken by power system operators on a daily basis are commonly assisted by repeatedly solving complex optimization problems, aiming to determine optimal operating levels for electric power plants so that the overall electricity generation cost is minimized, while satisfying the load demands imposed throughout the transmission grid and meeting safe operating limits.
However, exploitation of renewable energy sources and their grid integration poses many new challenges for grid operations due to their intermittent nature and high variability. New strategies for the operation and management of the electricity grid have


The numerical methods used for viscoplastic flow simulations over the past three decades can be classified into two groups. The first approach hinges on introducing a small artificial parameter in the constitutive relation, thereby replacing solid zones by flowing zones with a very high viscosity, as in the work by Bercovier & Engelman [14] or Papanastasiou [114]. The advantage is that the regularized equations become differentiable and are suitable for Newtonian fluid solvers. Nonetheless, this benefit comes at the expense of difficulties in accurately capturing the yield surface. The second approach is based on introducing an augmented Lagrangian and using a steepest descent method of Uzawa type to solve the problem. Augmented Lagrangian methods were introduced by Hestenes [89] and by Powell [115] for nonlinear constrained minimization problems, and have been successfully used in the context of Bingham flow models and nonlinear mechanics by Fortin & Glowinski [72] and by Glowinski & Le Tallec [77]. The work by Saramito & Roquet [118, 117] demonstrated the effectiveness of the approach, combined with adaptive finite element techniques, in accurately capturing the yield surface in various settings; see also the works by Wang [128] and more recently Zhang [132]. Despite the need for introducing two additional tensor fields (a proxy for the strain rate tensor and the corresponding tensor-valued Lagrange multiplier), augmented Lagrangian methods have progressively emerged over the last decade as the method of choice to simulate viscoplastic flows. For a recent review, we refer the reader to the paper of Saramito & Wachs [119]. We also mention the recent interior-point methods combined with second-order cone programming considered by Bleyer et al. [16, 17].
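The augmented Lagrangian idea of Hestenes and Powell can be illustrated on a deliberately simple equality-constrained quadratic program. This is a hedged Python sketch: the data, the penalty ρ and the iteration count are arbitrary, and an actual viscoplastic solver would instead update tensor-valued multipliers in an Uzawa-type loop.

```python
import numpy as np

# Toy problem: min 0.5 * ||x - x0||^2  subject to  A x = b  (hypothetical data).
rng = np.random.default_rng(2)
A = rng.standard_normal((2, 5))
b = rng.standard_normal(2)
x0 = rng.standard_normal(5)

rho, lam = 10.0, np.zeros(2)
for _ in range(100):
    # x-update: minimize 0.5||x-x0||^2 + lam.(Ax-b) + 0.5*rho*||Ax-b||^2,
    # whose normal equations are (I + rho A^T A) x = x0 - A^T lam + rho A^T b.
    x = np.linalg.solve(np.eye(5) + rho * A.T @ A,
                        x0 - A.T @ lam + rho * A.T @ b)
    # Multiplier (Uzawa-type) ascent step on the constraint residual.
    lam = lam + rho * (A @ x - b)
```

At convergence the pair (x, lam) satisfies the KKT conditions: the constraint Ax = b holds and the gradient of the Lagrangian, x − x0 + Aᵀλ, vanishes.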


One of the difficulties arising in the resolution of problem (9) is that the variables p and s are related by a maximal monotone graph p̃ that may have vertical or horizontal parts. Similar difficulties occur for instance in the context of reactive flows in porous media, cf. [16]. For the Richards equation, the graph p̃ has vertical parts, as depicted in Figure 1. The solution (s, p) to problem (9) can be deduced from the knowledge of p, but not from s. However, the choice of p as a primary unknown in a naive numerical method can yield severe difficulties. Typically, mass conservation can be lost, and severe troubles can be encountered in the convergence of the iterative procedure used to compute the solution of the nonlinear system arising from the numerical scheme. These difficulties motivated the development of several strategies to optimize the convergence properties of the iterative procedures. A popular approach consists in making use of robust fixed-point procedures with linear convergence speed rather than Newton's method (see for instance [17–22]). There were also important efforts carried out to fix the difficulties of Newton's method [23–25]. Comparisons between the fixed-point and Newton strategies are presented for instance in [26, 27] (see also [28]). In [29], the authors combine a Picard-type fixed-point strategy with Newton's method (i.e., they perform a few fixed-point iterations before running Newton's algorithm). An alternative approach consists in keeping both s and p as unknowns, together with the additional relation p ∈ p̃(s), which is often rephrased as a complementarity constraint, and then solving the problem with a non-smooth Newton method (see for instance [30–32]). Another classical solution consists in partitioning Ω at each time t into a part Ω_s(t) where s is chosen as a
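The trade-off between a robust, linearly convergent fixed point and Newton's method can be seen on a single-cell toy version of such a nonlinear system. This is a hedged Python sketch: the logistic "saturation" law S, the stabilization constant and the time step are hypothetical stand-ins, not taken from the text.

```python
import numpy as np

# Toy stand-in for one cell: find p with S(p) + dt*p = rhs, S smooth, monotone.
def S(p):
    return 1.0 / (1.0 + np.exp(-p))      # hypothetical saturation law, S' <= 1/4

dt, rhs = 0.1, 0.8

def F(p):
    return S(p) + dt * p - rhs

# Picard-type fixed point (L-scheme): S' is replaced by a constant
# Lc >= sup S'; globally contracting but only linearly convergent.
Lc, p = 0.25, 0.0
for _ in range(200):
    p -= F(p) / (Lc + dt)

# Newton's method: fast near the root; warm-started with a few fixed-point
# sweeps, in the spirit of the hybrid strategies cited above.
q = 0.0
for _ in range(5):
    q -= F(q) / (Lc + dt)                     # Picard warm-up
for _ in range(10):
    q -= F(q) / (S(q) * (1.0 - S(q)) + dt)    # exact derivative S' = S(1-S)
```

The fixed point needs many sweeps to reach tight tolerances, while the warm-started Newton iteration reaches machine precision in a handful of steps; near a vertical part of the graph (S' unbounded) the plain Newton iteration would be the fragile one.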

the H-IP method in the energy-norm is linearly δ-dependent if δ ≤ 0 and optimal if δ ≥ 0, which is in accordance with Lemma 4.1 (see Figure 3). A brief analysis of the convergence in the L²-norm indicates that both the H-IIP and H-NIP schemes behave differently from the H-SIP scheme. Nonsymmetric variants are strongly influenced by the polynomial parity of k and by the penalty parameter δ. We observe that the convergence rate increases linearly and optimally if δ ≥ 0 for odd k and δ ≥ 2 for even k. In this last case, let us point out that the optimal convergence is nearly reached once δ ≥ 1. As expected, the symmetric scheme converges optimally when δ ≥ 0. These results agree with the theoretical results established in Theorem 4.2.

1 Introduction
We will consider a network routing problem with multiple pairs of origins and destinations, where we want to minimize the maximal relative congestion on the arcs of the network under the restriction that the number of paths used to carry the traffic is bounded. The problem of minimizing the flow on the most congested link is of considerable interest for the design of data communication networks (see [4]). This basic problem is known to be hard to solve even though it can be written as a linear program. When additional constraints are present in the model, like the path restrictions considered here, the result is a very difficult problem for which exact methods are likely to be useless (see [15] for instance).
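Without the bound on the number of paths, the min-max congestion problem is indeed a linear program. A hedged Python/SciPy sketch on a hypothetical three-node network (a single origin-destination pair with unit capacities, far simpler than the multicommodity setting above) illustrates the formulation with an auxiliary congestion variable t:

```python
import numpy as np
from scipy.optimize import linprog

# Arcs: 0->1, 1->2, 0->2 with unit capacities; route one unit from node 0 to 2.
# Variables: x = (x01, x12, x02) arc flows, plus t = max relative congestion.
cap = np.array([1.0, 1.0, 1.0])

c = [0.0, 0.0, 0.0, 1.0]                 # objective: minimize t
A_eq = [[1, 0, 1, 0],                    # flow out of node 0 equals the demand 1
        [1, -1, 0, 0]]                   # flow conservation at node 1
b_eq = [1.0, 0.0]
A_ub = [[1, 0, 0, -cap[0]],              # congestion bound x_a <= t * cap_a
        [0, 1, 0, -cap[1]],
        [0, 0, 1, -cap[2]]]
b_ub = [0.0, 0.0, 0.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)  # default x >= 0
```

On this toy instance the optimum splits the unit of traffic evenly over the two available paths, giving t = 1/2; the path-number restriction discussed above is exactly what destroys this LP structure.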


CONCLUSION
Lagrangian-based algorithms are among the most effective solution methods for network design problems, in particular the single-layer MCFND. The usual Lagrangian relaxations for the formulation are the so-called shortest path (commodity-based) and knapsack (arc-based) relaxations. For the first one, the resulting Lagrangian subproblem decomposes into a collection of shortest path subproblems, one for each commodity, while the second one allows solving the Lagrangian subproblem as a series of continuous knapsack subproblems, one for each arc. The nodes of a network are the other entities that can be considered as decomposition components. We have proposed three new node-based Lagrangian relaxation-reformulations for the multicommodity capacitated fixed-charge network design problem. A Lagrangian-based matheuristic has also been proposed to find upper bounds. We have conducted significant computational experiments on the benchmark instances. The Lagrangian dual bounds of the new node-based Lagrangian relaxations improve significantly upon the so-called strong LP bound (known to be equal to the Lagrangian dual bounds of the flow and knapsack relaxations). The proposed node-based Lagrangian heuristic based on the Location relaxation outperforms traditional flow- and knapsack-based heuristics. On average, the proposed node-based Lagrangian heuristic algorithm outperforms almost all the previously proposed heuristics in the literature.


(Ω) to ℝ implies weak continuity from L^s(Ω) to ℝ. Thus, it is enough to show the property for s = 2. By Corollary 2, the function J₀ is w.l.s.c.; hence, adapting the argument of Proposition 1 in [2] (which is based on Fatou's lemma), we obtain that u ∈ L²(Ω) ↦ ∫_Ω ℓ(u(x)) dx is convex l.s.c., and hence convex w.l.s.c., which yields the first assertion. The second assertion follows directly by taking a minimizing sequence and using that J_ε is w.l.s.c.

i ≠ j.
In practice, writing the diffusion equation in its mixed form allows one to compute precisely both the solution and its gradient: it avoids the propagation of the numerical error from the solution to its gradient. On the other hand, using a domain decomposition method is interesting for many reasons: it is necessary when one wants to compute the solution on a parallel computer. It is also useful when, for some physical reason, one needs to capture rapidly oscillating phenomena in some, but not all, subregions. This may happen when D has large variations (as is the case in neutronics). In this case, approximations with different scales can be used. Hence the choice of a mixed, multi-domain setting.

Multiphase, compositional porous media flow models, used in reservoir simulations or basin modeling, lead to the solution of complex nonlinear systems of partial differential equations (PDEs). These PDEs are typically discretized using a cell-centered finite volume scheme and a fully implicit Euler integration in time in order to allow for large time steps. After Newton-type linearization, one ends up with the solution of a linear system at each Newton iteration, which represents up to 90 percent of the total simulation elapsed time. The corresponding pressure block matrix is related to the discretization of a Darcy equation with high contrasts and anisotropy in the coefficients. We focus on overlapping Schwarz-type methods on parallel computers and on multiscale methods.
