In a broad sense, there exist many applications where, far from the boundary, the solution depends only weakly on the details of the boundary geometry. In such regions we use a reduced-order model based on proper orthogonal decomposition (POD) [4] to solve the problem. This approach represents the solution by a small number of unknowns, namely the coefficients of an appropriate Galerkin expansion. Therefore, away from a narrow region close to the boundary of interest, the number of unknowns to be solved for is drastically reduced. This idea was previously explored in the context of transonic flows with shocks [3], [2]. Here we extend those works by adapting to that context some classical domain decomposition techniques.
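The POD step described above can be sketched numerically: the snapshots are collected in a matrix, the modes are its leading left singular vectors, and the truncation rank is chosen from the retained energy. This is a generic illustration of the technique, not the paper's implementation; the function name and energy threshold are illustrative.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD modes of a snapshot matrix (n_dof x n_snap).

    The modes are the leading left singular vectors; the rank is the
    smallest one retaining the requested fraction of the snapshot
    energy (sum of squared singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]
```

A Galerkin expansion then seeks the solution as `Phi @ a`, so away from the boundary region only the few coefficients `a` remain as unknowns.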

Fortunately, this is certainly not the end of the story and there is still a lot of room for improvement in this class of methods. For instance, by cleverly combining several integral operators, one could expect the same gain that was achieved with Padé-like operators in passing from (1.16) to (1.17). Another perspective is to combine local operators with non-local ones. Indeed, if one looks at the model problem of Section 4, our modal convergence analysis shows that the use of non-local operators is necessary and efficient to cope with higher modes, while it is well known that local operators are well adapted for dealing with low-order modes. The difficulty is then to fit this into the general theoretical framework of Section 2. Finally, a challenging perspective is the extension of the ideas developed in this paper to the 3D Maxwell equations.

The organization of the present manuscript is as follows.
Chapter 2. We observe in practice that a bad condition number of the operator A(ξ) may yield inefficient model order reduction: the use of a preconditioner P(ξ) ≈ A(ξ)⁻¹ is necessary to obtain accurate approximations. In Chapter 2, we propose a parameter-dependent preconditioner defined as an interpolation of A(ξ)⁻¹ (here we consider that equation (1.1) is an algebraic equation, i.e. A(ξ) is a matrix). This interpolation is defined by a projection method based on the Frobenius norm. Here we use tools from randomized numerical linear algebra (see [79] for an introduction) for handling large matrices. We propose strategies for the selection of interpolation points which are dedicated either to the improvement of Galerkin projections or to the estimation of projection errors. Then we show how such a preconditioner can be used for projection-based MOR, such as the reduced basis method or the proper orthogonal decomposition method.
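The Frobenius-norm projection can be illustrated on a small dense problem: given precomputed inverses A(ξᵢ)⁻¹, the interpolation coefficients minimize ‖P A(ξ) − I‖_F, which becomes an ordinary least-squares problem once the matrices are vectorized. This sketch omits the randomized sketching that the chapter uses for large matrices; all names are illustrative.

```python
import numpy as np

def frobenius_precond(A_xi, inverses):
    """Interpolated preconditioner P = sum_i lam_i * inverses[i],
    with lam chosen to minimize ||P @ A_xi - I||_F.  Vectorizing the
    matrices turns the Frobenius projection into least squares."""
    n = A_xi.shape[0]
    cols = [(Pi @ A_xi).ravel() for Pi in inverses]
    lam, *_ = np.linalg.lstsq(np.stack(cols, axis=1),
                              np.eye(n).ravel(), rcond=None)
    return sum(l * Pi for l, Pi in zip(lam, inverses))
```

When one of the interpolation points coincides with ξ, the exact inverse is in the span and the least-squares problem recovers it.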


Definition of the problem
Porous media flow simulations lead to the solution of complex nonlinear systems of coupled partial differential equations (PDEs) accounting for the mass conservation of each component and the multiphase Darcy law. These PDEs are discretized using a cell-centered finite volume scheme and a fully implicit Euler integration in time in order to allow for large time steps. After Newton-type linearization, one ends up with the solution of a linear system at each Newton iteration, which all together represents up to 90 percent of the total simulation elapsed time. The linear systems couple an elliptic (or parabolic) unknown, the pressure, and hyperbolic (or degenerate parabolic) unknowns, the volume or molar fractions. They are non-symmetric and ill-conditioned, in particular due to the elliptic part of the system and the strong heterogeneities and anisotropy of the media. Their solution by an iterative Krylov method such as GMRES or BiCGStab requires the construction of an efficient preconditioner, which should be scalable with respect to the heterogeneities and anisotropies of the media, the mesh size and the number of processors, and should cope with the coupling of the elliptic and hyperbolic unknowns.
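As an illustration of the preconditioned Krylov solve, the sketch below applies GMRES with an ILU preconditioner to a simple symmetric 1D Laplacian standing in for the elliptic pressure block. This is a SciPy-based toy; the reservoir systems described above are non-symmetric, coupled, and far harder to precondition.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# 1D Laplacian: a stand-in for the elliptic pressure block
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner M ~ A^-1
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), ilu.solve)

x, info = spla.gmres(A, b, M=M, atol=1e-10)  # info == 0 on convergence
```

With a good preconditioner the iteration count stays small; without one, it grows with the condition number, mirroring the scalability concern raised above.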


We discretize the Helmholtz equation, eq. (1), with Lagrange finite elements (FE) on a tetrahedral mesh Γ of the domain Ω. The rationale behind this choice is multiple: compared to finite-difference methods on uniform Cartesian grids (Operto et al., 2014; Gosselin-Cliche and Giroux, 2014), the flexibility offered by unstructured meshes to adapt the size of the elements to the local wavelengths (the so-called h-adaptivity) offers a good trade-off between the precision and the number of degrees of freedom (d.o.f.) in the mesh. This is particularly true for elastic wave simulation, where the shear wavespeeds can reach very low values just below the sea bottom. Also, compared to the hexahedral meshes used with the spectral element method (Li et al., 2020), tetrahedral elements are more versatile for conforming the mesh to complex known boundaries (topography, sea bottom, salt bodies) and for refining the discretization when FWI proceeds over different frequency bands.
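In 1D, the Lagrange-FE discretization of the Helmholtz operator reduces to assembling element stiffness and mass matrices; the sketch below (P1 elements on a uniform mesh, boundary conditions omitted) shows the structure that h-adaptivity would tune per element. It is a generic illustration, not the 3D tetrahedral discretization of the paper.

```python
import numpy as np

def helmholtz_fem_1d(k, n_el, L=1.0):
    """Assemble the P1 Lagrange FE matrix K - k^2 M for -u'' - k^2 u
    on (0, L), uniform mesh (boundary conditions not applied here)."""
    h = L / n_el
    n = n_el + 1
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h    # element stiffness
    me = np.array([[2.0, 1.0], [1.0, 2.0]]) * h / 6  # element mass
    A = np.zeros((n, n))
    for e in range(n_el):
        idx = np.array([e, e + 1])
        A[np.ix_(idx, idx)] += ke - k**2 * me
    return A
```

On an adapted mesh, `h` would vary per element with the local wavelength instead of being uniform.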

images than any other iterates. In Figures 9 and 10, comparing our results with the one-step model (25) proposed by Le et al., we also observe that the proposed iterative method gives much sharper and cleaner images.
In Figures 11-13, we apply the iterative algorithm to image deblurring via (cartoon + texture) decomposition with blurry data (as explained in Table 4.2). In this case, we do not have any stopping criteria, either for the inner iteration to obtain (u^(k+1), v^(k+1)) of g_k(u, v) or for the outer

A reduced number of POD modes (N_r^(s_i)) with high energy content is selected to form the POD subspace. There are two main problems when constructing the ROM for this FSI system: first, the traditional Galerkin projection of the Navier-Stokes equations on the POD time-invariant modes (or space modes) when the computational domain is time dependent, which is discussed in great detail by Tadmor et al. [37]. Second, the validity of a POD-based ROM is generally limited to a small range of the controlling parameter, since the POD reduced basis depends non-linearly on the controlling parameter. Bui-Thanh et al. [38] proposed a gappy POD procedure to construct the off-reference ROM solutions, thereby dealing with the change of controlling parameters. The method uses POD coupled with an interpolation method, which avoids the Galerkin projection of the governing equations. In Lieu et al. [39] a ROM for a complete aircraft is formulated based on a Mach-adaptation strategy, where the angle between POD subspaces is interpolated in order to deal with changes in the controlling parameter. The interpolation of the reduced basis for a change in the controlling parameter is performed in a tangent space to a Grassmann manifold in Amsallem and Farhat [40], and further on matrix manifolds in Amsallem and Farhat [41]. In the present work, the periodic reconstruction of the POD time modes as per Eq. 14 is used to circumvent the Galerkin projection, and the solution, including mesh deformation, is readily built using Eq. 15. A direct linear interpolation of the POD space modes as well as the time modes is performed in order to predict an off-reference solution state by using the pre-simulated reference cases.
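The direct linear interpolation of POD modes between two pre-simulated reference cases can be sketched as follows. The sign alignment and the final re-orthonormalization are practical safeguards added in this sketch, not steps taken from the paper; the function and parameter names are illustrative.

```python
import numpy as np

def interp_pod_modes(Phi1, Phi2, mu1, mu2, mu):
    """Direct linear interpolation of two POD bases computed at
    parameter values mu1 and mu2, evaluated at mu."""
    # Flip any mode of Phi2 that points opposite to its Phi1 counterpart,
    # so the interpolation does not cancel corresponding modes.
    flip = np.where(np.sum(Phi1 * Phi2, axis=0) < 0.0, -1.0, 1.0)
    Phi2 = Phi2 * flip
    w = (mu - mu1) / (mu2 - mu1)
    Phi = (1.0 - w) * Phi1 + w * Phi2
    Q, _ = np.linalg.qr(Phi)    # restore orthonormality of the basis
    return Q
```

More robust alternatives interpolate in the tangent space of the Grassmann manifold, as in Amsallem and Farhat [40], rather than mode by mode.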

On the other hand, non-linear POD methods (those designed for data lying on a manifold) are always difficult to develop; see [5, 17–19] to name a few. Interpolation among reduced models continues to be an issue too [18, 20].
Proper generalized decomposition (PGD) methods, on the contrary, arose recently as a generalization of POD techniques. PGD roots can be traced back to the pioneering work by P. Ladeveze on the LATIN method [21] and, particularly, the so-called radial loading approximation scheme, in which a separated space–time approximation of the displacement in structural mechanics problems was used. Independently, the method was re-invented in the framework of problems defined in high-dimensional state spaces [22, 23]. It was then soon realized that PGD methods can be considered as a generalization of POD in which the basis is computed on the fly, without any previous snapshots. Instead, the essential field is approximated as a finite sum of separable functions, very much similar to the space–time structure of the radial approximation within the LATIN method. For recent surveys on PGD methods, the reader is referred to [24–26]. Some of the nice properties of POD are lost, however. For instance, optimality of the POD basis is no longer guaranteed, although convergence properties have been demonstrated recently [27], and error estimators have also been proposed [28, 29].
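The separated-representation idea behind PGD can be illustrated at the matrix level: a greedy sequence of rank-one terms, each obtained by an alternating fixed point, with no snapshots involved. This is a toy sketch of the principle only, not the PGD solvers of [21–26], and every name in it is illustrative.

```python
import numpy as np

def pgd_separated(F, n_terms=5, n_iter=100, tol=1e-12):
    """Greedy rank-one enrichment F ~ sum_k a_k b_k^T.  Each pair
    (a_k, b_k) is found by an alternating fixed point, mimicking the
    separated (radial-type) approximation used by PGD methods."""
    R = F.astype(float).copy()
    terms = []
    for _ in range(n_terms):
        if np.linalg.norm(R) < tol * np.linalg.norm(F):
            break                      # residual already negligible
        a = np.ones(R.shape[0])
        for _ in range(n_iter):
            b = R.T @ a / (a @ a)      # fix a, solve for b
            a = R @ b / (b @ b)        # fix b, solve for a
        terms.append((a, b))
        R = R - np.outer(a, b)         # deflate the captured mode
    return terms, R
```

Unlike POD, the terms are built on the fly from the operator (here, the residual matrix), which is the distinction emphasized in the text.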

Fig. 3: RDM for the isotropic case, for pure and coupled methods.
III. NUMERICAL RESULTS
In order to validate the results of the forward problem, analytical solutions are computed on a three-layer concentric sphere model [2], for both an isotropic and an anisotropic skull layer. The radii of the spheres and the conductivities of the different layers are respectively {0.87, 0.92, 1.0} and {1.0, 0.0125, 1.0} for the isotropic case. For the anisotropic one, we have set the conductivity in the tangential direction to ten times the normal one. The sBEM is computed on a 642-point mesh per surface, and the iFEM considers a Cartesian grid of 90 points in each direction. The BEM-FEM coupling as well as the BEM-FEM-BEM coupling use the same grid sizes as above. Computations have been done for 5 dipoles oriented in Cartesian coordinates (1, 1, 0), at locations along the Z-axis {0.465, 0.615, 0.765, 0.8075, 0.8415}. The accuracy of the numerical solutions is given by the Relative Difference Measure (RDM) of the potential on the scalp:
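The RDM formula itself is cut off in this excerpt; a commonly used form compares the normalized potential patterns, which makes the measure insensitive to a global amplitude scaling. The sketch below is written under that assumption.

```python
import numpy as np

def rdm(u_num, u_ref):
    """Relative Difference Measure between two potential patterns.
    Both fields are normalized first, so a global amplitude scaling
    of either argument leaves the RDM unchanged (range [0, 2])."""
    u_num = np.asarray(u_num, dtype=float)
    u_ref = np.asarray(u_ref, dtype=float)
    return np.linalg.norm(u_num / np.linalg.norm(u_num)
                          - u_ref / np.linalg.norm(u_ref))
```

An RDM of 0 means identical patterns up to scale; 2 means opposite patterns.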

3 Two DDMs using the 2nd-order ABC
This section is dedicated to the proof that problem (16), where the boundary operator is global, can be solved using two iterative DDM algorithms. The difference between these algorithms is the level of decoupling between the subdomains. Both methods are shown to be convergent using a technique based on a suitable quadratic energy defined on the skeleton of the mesh [9]. Using a comparison argument between the iterative solution of the DDM and the exact solution, it will be shown that the quadratic energy decreases by a factor which controls the solution at the boundary Γ. This stability property is sufficient to show the convergence of the DDM using the propagation techniques explained in [9, 3, 25]. In this work we concentrate on the essential part, which concerns the stability of the quadratic energy.

5. Conclusion
An innovative numerical strategy has been proposed in order to compute adaptively the effective properties of complex micro-structures with uncertain material properties. The method relies on two main ingredients: (i) a high-order fictitious domain method approximation, which avoids the meshing burden associated with such complex geometries, and (ii) a model reduction technique based on the Proper Generalized Decomposition of the spectral stochastic representation of uncertainties (linear elasticity was considered here, with random Young's modulus and Poisson's ratio). The accuracy of the output was ensured even with large uncertainties on the material properties thanks to a feedback method which is used to monitor the convergence of the effective properties. The method has been verified on a simple 2D example, which showed that error estimates based on extensions were not sufficiently reliable for this type of application. However, it was seen that the use of a stagnation criterion allowed very accurate results to be obtained, as both the expectation and standard deviation errors converge exponentially upon p-refinement. The resulting stochastic effective tensor was seen to match Monte-Carlo results at only a fraction of the computational cost. The method was finally applied to a real bone micro-structure, whose stochastic effective properties could be obtained very efficiently. In order to improve the proposed strategy, goal-oriented error estimation could be considered, as in [52, 38], on the deterministic side but also on the stochastic side. Note however that the influence of the stochastic discretization on the computational cost is very low thanks to the model reduction approach. Finally, multi-material and/or nonlinear materials could be considered, as well as the influence of a possible geometrical randomness.

The main target when partitioning the domain is to minimize the interfaces between the sub-domains. This allows for lower communication requirements and a simpler handling of interface variables. In PDE problems, where DDMs have mostly been applied, the decomposition is usually based on the geometrical data and the order of the discretization scheme used [Saa03, TW05]. Conversely, in DAE/ODE problems (such as the one under consideration in this work), no a priori knowledge of the coupling variables is available, since there are no regular data dependencies (such as those defined by geometric structures). In several cases, the so-called dependency matrix (D) can be used. For a system with N equations and N unknown variables, D is an N×N matrix with D(i, j) = 1 if the i-th equation involves the j-th variable, and D(i, j) = 0 otherwise. However, each system model can be composed of several sub-models which are sometimes hidden, too complex, or used as black boxes. Hence, an automatic calculation of D is not trivial to implement [GTD08].
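Building D is straightforward once the per-equation variable sets are known; the difficulty discussed above lies in obtaining those sets automatically from hidden or black-box sub-models. A minimal sketch, with an illustrative toy system:

```python
import numpy as np

def dependency_matrix(equations, n_vars):
    """Build the dependency matrix: D[i, j] = 1 iff equation i
    involves variable j (equations given as iterables of indices)."""
    D = np.zeros((len(equations), n_vars), dtype=int)
    for i, used in enumerate(equations):
        for j in used:
            D[i, j] = 1
    return D

# Toy system: eq0 couples x0,x1; eq1 uses only x1; eq2 couples x0,x2
D = dependency_matrix([{0, 1}, {1}, {0, 2}], 3)
```

Treating D as the adjacency structure of a bipartite equation-variable graph, a graph partitioner can then split the system while minimizing the coupling (interface) entries, exactly the target stated above.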


An extension of the spectral coarse space of Section 2 to a finite volume discretization is thus mandatory for its use in subsurface modeling. As explained in Section 2.2, the rationale behind this coarse space is written in terms of the original model, i.e. in terms of partial differential equations. Thus the basis of the method does not depend on the discretization scheme. Therefore the definition and implementation of the spectral coarse space in a finite volume discretization will demand some work but can definitely be done. It would improve the method introduced in [25] by selecting, in a sure (Theorem 2.2) and optimal (Sect. 2.4.1) manner, more efficient coarse spaces when the channelized character of the permeability distribution makes it necessary.

Figure 10: History-match of a cavern: well-head pressure and temperature estimated by the model (black marks) compared to available data (grey marks) - the flow rate (black line) is imposed in the model.
used a Conjugate Gradient algorithm preconditioned by an Algebraic MultiGrid (AMG) method for the rock linear subproblem, and a Newton solver local to each well cell for the well nonlinear submodel, ordering the well cells in the flow direction. The monolithic approach would converge in a few Newton iterations, making the total number of linear solves quite similar for both methods. On the other hand, the monolithic approach requires the design of an efficient preconditioner for the fully coupled Jacobian system. This is not a trivial task considering the highly contrasted rock and well submodels, both in terms of geometry and physics. A possible solution would be to apply the DDM algorithm to the fully coupled Jacobian system as a preconditioner, combined with an approximate solve of the rock model based on a single AMG V-cycle. In other words, the domain decomposition method can also be exploited to make the monolithic approach more efficient.

in the form of a first-order system of partial differential equations. Our efforts are towards the design of a parallel solution strategy for the resulting large, sparse and complex-coefficients algebraic systems. Indeed, as far as non-trivial propagation problems are considered, the associated matrix operators are in most cases solved with difficulty by classical iterative

Preliminary results are shown in Table IV. It is worth noticing that the high iteration count is explained by the lack of preconditioning of the iterative solver [15]. Without it, the iteration count increases with the number of sub-domains. The Padé-localized transmission condition was used on 24 sub-domains with 10 order-2 mesh elements per wavelength. The number of unknowns and the memory consumption are given as the mean and standard deviation per sub-problem. Fig. 3 depicts the real part of the z-component (i.e., along the rods) of e_i.


Remark 2. By definition, one has that Π∂_n = ∂_nΠ. However, since T is a global operator on the skeleton Σ, Π and T a priori do not commute: ΠT ≠ TΠ. The study of the modified commutation property ΠT = T*Π will be key for this work.
Provided the operator T is defined in a consistent way, most of the iterative DDMs already quoted in the literature [3, 6, 9, 14, 17, 20, 21, 22, 8] can be recast in the form (8). The original DDM for the Helmholtz equation [1, 10, 11] corresponds to the simplest choice T = I_Σ.

ible Navier–Stokes problem. These deterministic problems can be handled by classical deterministic solvers, thus making the proposed algorithm a partially non-intrusive method. The algorithm is applied to a divergence-free formulation of the Navier–Stokes equations, yielding an approximation of the random velocity field on a reduced basis of divergence-free deterministic velocity fields. A methodology is then proposed for the reconstruction of an approximation of the pressure field, the random velocity field being given. This approximation is defined through a minimal residual formulation of the Navier–Stokes equations. Two alternative methods are introduced for the construction of an approximation of the pressure. The first method is a direct application of a PGD algorithm to the minimal residual formulation of the Navier–Stokes equations, thus yielding the construction of a convergent decomposition of the pressure. The second method, which is more computationally efficient, reuses as a reduced basis the deterministic pressure fields associated with the deterministic problems that were solved during the construction of the decomposition of the velocity field (i.e. the Lagrange multipliers associated with the divergence-free constraint).

Abstract
The time-harmonic Maxwell equations describe the propagation of electromagnetic waves and are therefore fundamental for the simulation of many modern devices we have become used to in everyday life. The numerical solution of these equations is hampered by two fundamental problems: first, in the high-frequency regime, very fine meshes need to be used in order to avoid the pollution effect well known for the Helmholtz equation; and second, the large-scale systems obtained from the vector-valued equations in three spatial dimensions need to be solved by iterative methods, since direct factorizations are no longer feasible at that scale. As for the Helmholtz equation, classical iterative methods applied to the discretized Maxwell equations have severe convergence problems.
