The purpose of this paper is to derive a posteriori error estimates for such a model, in order to introduce an auto-adaptive mesh refinement technique.
Since the pioneering work of Babuška and Rheinboldt , much has been written about adaptive methods for finite element approximation, with emphasis on both theoretical and computational aspects of the method. Several a posteriori error estimators for mixed finite element discretizations of elliptic problems have been derived. For residual-based estimators, which is the type of estimator that we use here, we can distinguish two types of estimation: the first was introduced by Braess and Verfürth  and gives bounds on the error in a mesh-dependent norm which is close to the energy norm of the continuous problem in its primal form. Under a saturation assumption (which is not always satisfied) this estimator is reliable and efficient in this norm, but it is not efficient in the natural norm of the mixed formulation. This estimate was improved by Lovadina and Stenberg  and by Larson and Målqvist  by introducing a postprocessing technique.
Several works in the literature have highlighted the great potential of coupling a posteriori error estimators to shape optimization algorithms. In the pioneering work , the authors identify two different sources of numerical error: on the one hand, the error arising from the approximation of the differential problem and, on the other hand, the error due to the approximation of the geometry. Starting from this observation, Banichuk et al. present a first attempt to use the information on the discretization of the differential problem provided by a recovery-based estimator, together with the error arising from the approximation of the geometry, to develop an adaptive shape optimization strategy. This work was later extended by Morin et al. in , where the adaptive discretization of the governing equations by means of the Adaptive Finite Element Method is linked to an adaptive strategy for the approximation of the geometry. The authors derive estimators of the numerical error that are later used to drive an Adaptive Sequential Quadratic Programming algorithm to appropriately refine and coarsen the computational mesh. Several other authors have used adaptive techniques for the approximation of PDEs in order to improve the accuracy of the solution and obtain better final configurations in optimal structural design problems. We refer to [3, 25, 34, 36] for some examples. We remark that in all these works, a posteriori estimators only provide qualitative information about the numerical error due to the discretization of the problems and are essentially used to drive mesh adaptation procedures. To the best of our knowledge, no guaranteed fully computable estimate has been investigated, and the error in the shape gradient itself is not accounted for, thus preventing reliable stopping criteria from being derived.
First, we consider the configuration described in figure 5.2a. The initial guess for the inclusion is represented by the circle of radius ρ_ini = 2. The Certified Descent Algorithm is able to correctly identify the interface along which the conductivity parameter k_Ω is discontinuous (Fig. 5.2a). Moreover, figure 5.2b shows that the objective functional J(Ω) is monotonically decreasing, meaning that a genuine descent direction is computed at each iteration of the algorithm. In tables 5.1a-5.1b we present the specifics of the meshes used to certify the descent direction at several iterations of the CDA. In particular, we observe that coarse meshes are reliable during the initial iterations of the algorithm to identify a genuine descent direction, whereas the number of Degrees of Freedom increases when approaching a minimum of the functional J(Ω). This is also well explained by figure 5.2c, in which the evolution of the number of Degrees of Freedom is depicted. Eventually, we remark that figure 5.2c also highlights the ill-posed nature of the problem, since a huge number of Degrees of Freedom is rapidly required by the CDA to certify the descent direction, testifying to the difficulty gradient-based methods have in handling inverse problems such as Electrical Impedance Tomography. By comparing the approximations arising from the conforming Finite Element and Discontinuous Galerkin formulations, we remark that the latter provides sharper bounds on the error in the shape gradient, thus allowing the algorithm to automatically stop for a given tolerance tol = 10^-6 (cf. table 5.1b). On the contrary, the certification in the case of conforming Finite Elements is still able to identify a genuine descent direction at each iteration but rapidly requires a huge number of mesh elements, making the computational cost explode. The aforementioned issues are confirmed and highlighted by the test case in figure 5.2d. It is straightforward to observe that the Certified Descent Algorithm is able to identify a genuine descent direction at each iteration (Fig. 5.2e) and to reconstruct the interface in the region near the external boundary, whereas the inner part is not correctly recovered.
As previously stated, this phenomenon is due to the well-known ill-posedness of the problem and we cannot expect gradient-based strategies to successfully overcome this issue. These remarks are confirmed again by the rapidly exploding number of Degrees of Freedom required by the algorithm to certify the descent direction (Fig. 5.2f). A possible workaround is represented by the emerging field of hybrid imaging in which classical tomography techniques are coupled with acoustic or elastic waves (cf. ).
Residual-type a posteriori error estimators (Babuška and Rheinboldt, 1978)
Constitutive relation error estimators (Ladevèze and Leguillon, 1983)
Based on the fact that the stress field of the FE solution is not statically admissible
Define local problems to construct a statically admissible solution
with order h^s has been observed in ; this motivates the study of a posteriori error estimators that could efficiently drive an adaptive refinement strategy.
For the system (1.1)–(1.3), a posteriori error estimates for conforming Lagrange finite element methods (FEM) are now very common. The reader is referred to, e.g., [2, 5, 27], in which several types of estimators are detailed. In residual-based estimators, the main terms are inter-element jumps of the normal components of the gradients of the computed solution, weighted by constants whose explicit computation was performed in  and . Efficiencies of the estimators obtained in  vary, according to the problem, between 30 and 70, and between 1.5 and 3.5 if one numerically evaluates the eigenvalues of some vertex-centered local problems, as reported in . References for non-conforming FEM may be found in  and for mixed FEM in . The case of cell-centered FVM has been less studied, on the one hand because of their more recent use for elliptic problems and, on the other hand, because they generally lack a discrete variational formulation. For the basic "four point" scheme on so-called "admissible" triangular meshes (see [14, 16]), Agouzal and Oudin  have used the connection of this scheme with mixed finite elements to derive an a posteriori estimator for the L^2 norm of the error; this estimator is not an upper bound for the error, but is asymptotically exact under mild hypotheses. A second estimator for this scheme has been given by Nicaise in . This estimator is shown to be equivalent to the (broken) energy norm of the difference between the exact solution and an elementwise second-order polynomial (globally discontinuous) reconstructed numerical solution. Then, in , Nicaise extends his ideas to the so-called "diamond-cell" FVM (as described in ) and proposes an a posteriori error estimator which may be used if the cells of the mesh are triangles or rectangles (or tetrahedra in dimension three). This estimator is completely computable (no unknown constant) and its efficiency is around 7 for the tests performed in .
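As an illustration of the structure of such residual-based estimators, a standard local indicator for a Poisson-type problem −Δu = f combines an element residual with the inter-element jumps of the normal flux; the formula below is the generic textbook form, not necessarily the exact weighting used in the works cited above:

```latex
\eta_K^2 = h_K^2\,\|f + \Delta u_h\|_{L^2(K)}^2
  + \frac{1}{2}\sum_{E \subset \partial K \setminus \partial\Omega}
    h_E\,\big\|[\![\nabla u_h \cdot n_E]\!]\big\|_{L^2(E)}^2,
\qquad
\eta = \Big(\sum_{K \in \mathcal{T}_h} \eta_K^2\Big)^{1/2}.
```

The constants making η a computable upper bound for the energy-norm error are precisely those whose explicit computation is discussed above.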
Finally, Nicaise has extended his work to diffusion-convection-reaction equations in . More recently, Vohralík  has also proposed a fully computable a posteriori error estimator for numerical approximations of diffusion-convection-reaction equations by cell-centered FVM on general meshes. The main improvement over [21, 22] is the asymptotic exactness of the error bound which, as in , measures the energy norm of the difference between the exact solution and a reconstructed, globally discontinuous, elementwise second-order polynomial numerical solution. Note that in  the reconstructed numerical solution is globally continuous and may involve higher-order polynomials on each element.
ELEMENT DISCRETIZATIONS OF MAXWELL’S EQUATIONS
T. CHAUMONT-FRELET AND P. VEGA
Abstract. We consider residual-based a posteriori error estimators for Galerkin-type discretizations of time-harmonic Maxwell's equations. We focus on configurations where the frequency is high, or close to a resonance frequency, and derive reliability and efficiency estimates. In contrast to previous related works, our estimates are frequency-explicit. In particular, our key contribution is to show that even if the constants appearing in the reliability and efficiency estimates may blow up on coarse meshes, they become independent of the frequency for sufficiently refined meshes. Such results were previously known for the Helmholtz equation describing scalar wave propagation problems, and we show that they naturally extend, at the price of many technicalities in the proofs, to Maxwell's equations. Our mathematical analysis is performed in the 3D case, and covers conforming Nédélec discretizations of the first and second family, as well as first-order (and hybridizable) discontinuous Galerkin schemes. We also present numerical experiments in the 2D case, where Maxwell's equations are discretized with Nédélec elements of the first family. These illustrating examples perfectly fit our key theoretical findings, and suggest that our estimates are sharp.
a huge number of degrees of freedom, which implies long computational times. Thus, in order to reach a good compromise between precision and computational time, adaptive mesh refinement techniques are employed. There exist different kinds of a posteriori error estimators which indicate the local error, so that they can drive the mesh adaptivity process. For eddy current problems the residual
∇·(S_K ∇p_h)|_K is always equal to zero on all K ∈ T_h, the element residuals (4.5) are relevant even when the original solution is elementwise constant.
Nicaise [36, 37] also proposed a posteriori error estimators for the finite volume method. His basic idea is also to first postprocess the original piecewise constant finite volume approximation. He uses for this purpose Morley-type interpolants. However, only the means of the fluxes of this interpolant through the mesh sides are continuous, so that, in the general case, one has to penalize both the improper mass balance of −S∇p̃_h and the nonconformity of p̃_h. We note, however, that in certain cases the Morley interpolant is conforming (contained in H^1(Ω)), so that the nonconformity penalization disappears. Another remark in this comparison may be that the postprocessed approximation presented in [36, 37] has to be constructed differently depending on whether convection and reaction are present. This, on the one hand, permits proving the local efficiency of the estimates (see the next section for a discussion of the efficiency of our estimates) but, on the other hand, complicates the implementation. Finally, the question of the a priori error estimates (convergence) of the postprocessed approximation is not investigated in [36, 37].
We analyze the Biot system solved with a fixed-stress split, an Enriched Galerkin (EG) discretization for the flow equation, and a Galerkin discretization for the mechanics equation. Residual-based a posteriori error estimates are established with both lower and upper bounds. These theoretical results are confirmed by numerical experiments performed with Mandel's problem. The efficiency of these a posteriori error estimators to guide dynamic mesh refinement is demonstrated with a prototype unconventional reservoir model containing a fracture network. We further propose a novel stopping criterion for the fixed-stress iterations using the error indicators to balance the fixed-stress split error with the discretization errors. The new stopping criterion does not require hyperparameter tuning and demonstrates efficiency and accuracy in numerical experiments.
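A stopping criterion of the kind described, balancing the splitting error against the discretization error, can be sketched as follows. All routines here (the flow and mechanics solves, and the two error indicators) are hypothetical placeholders, not the paper's actual EG/Galerkin solvers or residual estimators:

```python
# Sketch of a fixed-stress iteration with an estimator-based stopping
# criterion: stop splitting once the (placeholder) split-error indicator
# is dominated by the (placeholder) discretization indicator.

def fixed_stress_solve(solve_flow, solve_mechanics, estimate_split, estimate_disc,
                       state, max_iter=50):
    """Iterate the fixed-stress split until the split error is dominated
    by the discretization error; no tolerance hyperparameter is needed."""
    for it in range(1, max_iter + 1):
        state = solve_flow(state)        # flow half-step (EG in the paper)
        state = solve_mechanics(state)   # mechanics half-step (Galerkin)
        eta_split = estimate_split(state)
        eta_disc = estimate_disc(state)
        if eta_split <= eta_disc:        # balance the two error components
            return state, it
    return state, max_iter
```

With a toy problem whose split error halves at each iteration and a fixed discretization indicator of 0.1, the loop stops automatically after four iterations; the point of the design is that the stopping threshold adapts to the mesh instead of being hand-tuned.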
the regular tetrahedron registered in the same circumscribed ball, Fig. 9 ) is smaller than 0.2.
Only for the first step, the initial finite element discretization using tetrahedral elements is provided. Then, for each time step, an ABAQUS 6.13/Explicit FE calculation is performed with a small test displacement, and the resulting simulation is analyzed through a posteriori error estimators and element quality. If the estimated elementary error and/or the number of distorted elements does not exceed a given threshold, the previous FE simulation is continued. Otherwise, the mesh is modified automatically (refined and/or coarsened) with the MeshGems software according to the constantly changing physical fields and geometrical shape, and a new solution for this loading sequence is computed. For each time step, this process is repeated until the mesh no longer changes and/or the error level has been reached. Finally, all field variables are transferred from the old mesh to the new one and the simulation is restarted from the previous time step. It should also be noted that after each time step, the boundary and loading conditions are generated according to the old step modification.
3 EDF R&D, IMSIA, 7 boulevard Gaspard Monge, 91120 Palaiseau, France
February 20, 2017
We derive equilibrated reconstructions of the Darcy velocity and of the total stress tensor for Biot's poro-elasticity problem. Both reconstructions are obtained from mixed finite element solutions of local Neumann problems posed over patches of elements around mesh vertices. The Darcy velocity is reconstructed using Raviart–Thomas finite elements and the stress tensor using Arnold–Winther finite elements, so that the reconstructed stress tensor is symmetric. Both reconstructions have continuous normal components across mesh interfaces. Using these reconstructions, we derive a posteriori error estimators for Biot's poro-elasticity problem, and we devise an adaptive space-time algorithm driven by these estimators. The algorithm is illustrated on test cases with analytical solution, on the quarter five-spot problem, and on an industrial test case simulating the excavation of two galleries.
Figure 9. Flow chart of the FE-simulation for 3D remeshing module
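The remeshing decision loop described above (Fig. 9) can be sketched as follows. Every routine in the sketch (the FE test run, the error estimator, the distortion count, the remesher, and the field transfer) is a hypothetical placeholder standing in for the ABAQUS/MeshGems tool chain:

```python
# Sketch of one loading step of the adaptive remeshing module: run a
# small test-displacement FE calculation, check error and element
# quality, and remesh/transfer fields until the criteria are met or the
# mesh stops changing. All callbacks are illustrative placeholders.

def adaptive_step(run_fe, estimate_error, count_distorted, remesh, transfer_fields,
                  mesh, error_threshold, distortion_threshold, max_remesh=10):
    """Repeat estimate -> remesh -> transfer -> recompute for one step."""
    solution = run_fe(mesh)                        # small test-displacement run
    for _ in range(max_remesh):
        err = estimate_error(solution)
        n_bad = count_distorted(mesh)
        if err <= error_threshold and n_bad <= distortion_threshold:
            break                                  # mesh accepted: continue the step
        new_mesh = remesh(mesh, solution)          # refine and/or coarsen
        if new_mesh == mesh:
            break                                  # mesh no longer changes
        solution = transfer_fields(solution, mesh, new_mesh)  # map old -> new
        mesh = new_mesh
        solution = run_fe(mesh)                    # restart from previous time step
    return mesh, solution
```

In a toy setting where the "mesh" is just a size parameter and remeshing halves it, the loop stops as soon as the estimated error drops below the threshold, mirroring the repeat-until-stable behavior described in the text.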
Among the first works on a posteriori error estimates for finite element discretizations of steady convection–diffusion–reaction problems are those of Angermann [2] and of Eriksson and Johnson [9]. In these works, the overestimation factor depends unfavorably on the ratio between convection and diffusion. Estimates with semi-robust lower bounds in the energy norm, and estimates with robust lower bounds in the energy norm augmented by the dual norm of the material derivative, were then derived by Verfürth in [23] and [26], respectively. The robustness result has been extended to the unsteady case in [25]. Recently, attention has also been paid to vertex-centered finite volume methods. Let us mention, in the steady convection–diffusion–reaction case and energy norm setting, Lazarov and Tomov [17], Carstensen et al. [7], Nicaise [18], [30], and Ju et al. [16]. Fewer results are known in the unsteady case. L^1-norm estimates for nonlinear problems
Keywords. Navier–Stokes equations, mixed boundary conditions, finite element method, a posteriori analysis.
This work starts from the observation that a huge amount of research has been performed on the numerical analysis of the Navier–Stokes equations when provided with no-slip boundary conditions, i.e., Dirichlet boundary conditions on the velocity. Far fewer papers deal with mixed boundary conditions, which may be more realistic in practical situations: for instance, in the general case, all the faces of a tank are not identical and, when one face of the tank is a membrane, mixed boundary conditions of the type considered in this paper are introduced. For these reasons, we intend to work with the time-dependent Navier–Stokes equations where a condition on the normal component of the velocity and the vorticity is enforced on part of the boundary.
The design of suitable a posteriori estimators is of paramount importance both for control of the error between u and u_h and for the efficiency of algorithms adaptively refining h and/or p [24, 56]. Pioneering works on a posteriori error estimation for the Helmholtz equation are reported in [4, 5]. The authors focus on first-order discretizations (p = 1) of one-dimensional problems and prove that in the asymptotic regime, the residual  and the Zienkiewicz–Zhu  estimators are reliable (yielding an error upper bound) and efficient (yielding an error lower bound). Numerous refinements of these results, including goal-oriented estimation and hp adaptivity, can be found in [22, 36, 46, 49, 52, 53]; see also the references therein.
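For reference, reliability and efficiency are commonly formalized as the following two-sided bounds, where ω_K denotes a patch of elements around K and osc_K the data oscillation; this is generic textbook notation, not the specific constants of the works cited above:

```latex
\|u - u_h\| \le C_{\mathrm{rel}}\,\eta
\quad\text{(reliability)},
\qquad
\eta_K \le C_{\mathrm{eff}}\,\|u - u_h\|_{\omega_K} + \mathrm{osc}_K
\quad\text{(efficiency)}.
```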
A posteriori error estimation of hp-dG finite element methods for highly indefinite Helmholtz problems.
Recent works using equilibrated fluxes:
S. Congreve, J. Gedicke, I. Perugia, SIAM J. Sci. Comp., 2019: Robust adaptive hp-discontinuous Galerkin finite element methods for the Helmholtz equation.
T. Chaumont-Frelet, A. Ern, M. Vohralík, submitted, 2019:
The a posteriori error estimators we have seen so far make it possible to control the discretization error in a certain norm. However, another source of error exists: the modeling error. Indeed, it is very common that the model best suited to the situation at hand, and hence the most accurate, cannot be simulated because of the prohibitive computational costs it would incur. Thus, another, simpler and less accurate model is preferred, at least in some parts of the computational domain. However, we do not know a priori which parts of the domain require the accurate model. Recently, Braack and Ern [BE03] proposed a method for joint adaptation of the model and the mesh, based on the solution of a dual problem. The dual problem makes it possible to measure the influence of the model on a functional of the approximate solution.
dered in (1.1) as the estimated pooled covariance matrix of the k populations.
Whatever the established rule is, it is subject to a probability of misclassification. Hence, an actual error rate is associated with any classification rule built on data samples in order to evaluate its efficiency. In practice, it is impossible to determine the actual error rate precisely, because it can only be computed from the actual parameters of the populations, which are usually unknown. To solve this problem, parametric and non-parametric estimators of the actual error rate have been established (McLachlan, 1992). Parametric estimators were established for two normal homoscedastic groups and estimate the actual error rate using parameters related to the considered samples, such as the estimated Mahalanobis distance between the two groups. In contrast, non-parametric error rate estimators do not depend on any distributional hypothesis and are based on resampling methods. For two-group discriminant analysis, many comparative studies of error rate estimators have been carried out in linear discriminant analysis, in order to identify the ones with the lowest errors compared with the theoretical actual error rate. A thorough review of these studies was provided by Schiavo and Hand (2000). However, in real-world problems, more than two groups are often considered in discriminant analysis. This paper evaluates and compares, by simulation, the efficiency of ten non-parametric error rate estimators for 2, 3 and 5 groups submitted to linear discriminant analysis.
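As a concrete illustration of a resampling-based (non-parametric) error rate estimator, the sketch below computes a leave-one-out cross-validation estimate for a simple nearest-class-mean rule. This is a generic stand-in chosen for brevity, not one of the ten estimators compared in the paper, and the classifier is a simplified surrogate for full linear discriminant analysis:

```python
def loo_error_rate(X, y):
    """Leave-one-out estimate of the actual error rate for a
    nearest-class-mean classifier (a simplified stand-in for LDA).
    X is a list of feature vectors, y the list of group labels."""
    n = len(y)
    errors = 0
    for i in range(n):
        # Build class means on the training sample with observation i held out.
        sums, counts = {}, {}
        for j in range(n):
            if j == i:
                continue
            c = y[j]
            counts[c] = counts.get(c, 0) + 1
            sums[c] = [s + v for s, v in zip(sums.get(c, [0.0] * len(X[j])), X[j])]
        means = {c: [s / counts[c] for s in sums[c]] for c in sums}
        # Classify the held-out observation by nearest class mean.
        dist = lambda m: sum((a - b) ** 2 for a, b in zip(X[i], m))
        pred = min(means, key=lambda c: dist(means[c]))
        errors += pred != y[i]
    return errors / n
```

Because each observation is classified by a rule trained without it, the estimate avoids the optimistic bias of simply reclassifying the training sample, which is the motivation behind resampling-based estimators in general.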
The aim of this work is to extend the a posteriori estimates to the more realistic case of mixed boundary conditions. We propose a very standard low-cost discretization relying on Euler's implicit scheme in time combined with finite elements in space, and prove optimal a posteriori error estimates for the discrete problem. To do this, we rather follow the approach of , introduced in , which consists in uncoupling the time and space errors as much as possible, in view of a simple adaptivity strategy.
The discrete problem amounts to a system of nonlinear equations and, in practice, is solved using an iterative method involving some kind of linearization. Given an approximate solution, say u_{L,h}, at a given stage of the iterative process and on a given mesh, there are actually two sources of error, namely linearization and discretization. Balancing these two sources of error can be of paramount importance in practice, since it can avoid performing an excessive number of nonlinear solver iterations if the discretization error dominates. Therefore, the second objective of this work is to design a posteriori error estimates distinguishing linearization and discretization errors in the context of an adaptive procedure. This type of analysis was started by Chaillou and Suri [11, 12] for a certain class of nonlinear problems similar to the present one, and in the context of iterative solution of linear algebraic systems in . Chaillou and Suri only considered a fixed stage of the linearization process, while we take the analysis one step further here, in the context of an iterative loop. Furthermore, they only considered a specific form of linearization, namely of quasi-Newton type, while we allow for a wider choice, including Newton–Raphson methods. We consider an adaptive loop in which, at each step, a fixed mesh is considered and the nonlinear solver is iterated until the linearization error estimate is brought below the discretization error estimate; then the mesh is adaptively refined and the loop is advanced. In this work, we will not tackle the delicate issue of proving the convergence of the above adaptive algorithm. We will also assume that at each iterate of the nonlinear solver a well-posed problem is obtained. This property is by no means granted in general; for the p-Laplacian, it amounts to assuming, as mentioned before in , that the gradient norm of the approximate solution is positive everywhere in the domain.
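The adaptive loop just described can be summarized in a short sketch. The Newton step, the two estimators, and the refinement routine are hypothetical placeholders for the paper's actual ingredients; only the control flow (iterate the nonlinear solver until the linearization estimate drops below the discretization estimate, then refine) reflects the text:

```python
# Sketch of the adaptive loop balancing linearization and discretization
# error estimators. eta_lin and eta_disc are placeholder estimator
# callbacks; newton_step and refine stand in for the nonlinear solver
# iteration and the adaptive mesh refinement.

def adaptive_loop(newton_step, eta_lin, eta_disc, refine, u, mesh,
                  tol=1e-8, max_cycles=20, max_newton=50):
    """Refine until the discretization estimate is below tol; on each
    mesh, stop the nonlinear iterations once eta_lin <= eta_disc."""
    for _ in range(max_cycles):
        for _ in range(max_newton):
            if eta_lin(u, mesh) <= eta_disc(u, mesh):
                break                  # linearization error is dominated
            u = newton_step(u, mesh)   # one more linearization iteration
        if eta_disc(u, mesh) <= tol:
            return u, mesh             # total estimated error small enough
        mesh = refine(mesh)            # adaptively refine and advance
    return u, mesh
```

The design choice highlighted in the text is visible here: on a coarse mesh the inner loop exits early, so no nonlinear iterations are wasted resolving a solution far below the discretization error level.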
We mention that in our numerical experiments, all the discrete problems were indeed found to be well-posed.