A NEW NUMERICAL ALGORITHM FOR TWO-PHASE FLOWS DRIFT-FLUX MODEL WITH STAGGERED GRID IN POROUS MEDIA
Anouar MEKKAS 1, Anne CHARMEAU 2 and Sami KOURAICHI 3

Abstract. FLICA4 is a 3D compressible code dedicated to reactor core analysis. It solves a compressible drift-flux model for two-phase flows in a porous medium. To define convective fluxes, FLICA4 uses a specific finite volume numerical method based on an extension of Roe's approximate Riemann colocated solver. However, analysis of this method shows that at low Mach number, on 2D or 3D Cartesian meshes, modifications must be applied, otherwise the method does not converge to the right solution as the Mach number goes to zero. For this reason, we apply a so-called "pressure correction". Although this correction is necessary to reach the required precision, it may produce checkerboard oscillations in space in the situations we are interested in, especially in the 1D case. Since these checkerboard oscillations are sometimes critical and may lead to unstable solutions, we investigate another numerical algorithm to solve this compressible drift-flux model in the low Mach regime. The aim of this work is to propose a new compressible scheme that is accurate and robust at low Mach number on a staggered grid, since checkerboard oscillations cannot exist on this type of discretisation. The accuracy and robustness of this new scheme are verified in the low Mach regime with test cases describing a simplified nuclear core ("boiling channel"). The behavior of this scheme is also tested in the compressible regime with or without shock waves.
In this paper, an effective numerical algorithm is proposed for the first time to solve the fractional visco-elastic rotating beam in the time domain. On the basis of fractional derivative Kelvin-Voigt and fractional derivative element constitutive models, the two governing equations of fractional visco-elastic rotating beams are established. According to the approximation technique of shifted Chebyshev polynomials, the integer and fractional differential operator matrices of the polynomials are derived. By means of the collocation method and matrix technique, the operator matrices of the governing equations can be transformed into algebraic equations. In addition, a convergence analysis is performed. In particular, unlike existing results, we can obtain the displacement and stress numerical solutions of the governing equation directly in the time domain. Finally, the sensitivity of the algorithm is verified by numerical examples.
and we also denote it more classically as Res_z(P, Q). Finally, V(f_1, . . . , f_n) denotes the solutions
of the system f_1 = · · · = f_n = 0.
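For concreteness, Res_z(P, Q) can be computed as the determinant of the Sylvester matrix of P and Q. The sketch below is our own illustration (all function names are ours, not from the text): it builds the matrix over exact rationals in pure Python, and the resultant vanishes exactly when P and Q share a root.

```python
from fractions import Fraction

def sylvester(p, q):
    """Sylvester matrix of two polynomials given as coefficient lists,
    highest degree first: p of degree m, q of degree n."""
    m, n = len(p) - 1, len(q) - 1
    rows = [[0] * i + list(p) + [0] * (n - 1 - i) for i in range(n)]
    rows += [[0] * i + list(q) + [0] * (m - 1 - i) for i in range(m)]
    return rows

def det(mat):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in mat]
    n, d = len(m), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def resultant(p, q):
    return det(sylvester(p, q))

# Res_z(z^2 + 1, z - 1) = P(1) = 2; Res_z(z^2 - 1, z - 1) = 0 (shared root z = 1)
print(resultant([1, 0, 1], [1, -1]))
print(resultant([1, 0, -1], [1, -1]))
```

Exact rational arithmetic is used deliberately: floating-point elimination on a Sylvester matrix can misclassify a near-zero resultant.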
Previous and related work. There are many works addressing the topology computation via symbolic methods; see for instance the book chapter by Mourrain et al. (2006) and references therein. Most of them use subresultant theory, but there are also some alternatives using only resultants (e.g. Seidel and Wolpert (2005); Emeliyanenko and Sagraloff (2012)) or Gröbner bases and rational univariate representations (Cheng et al. (2010)). An alternative by Akoglu et al. (2014) even computes a rational univariate representation numerically if all approximate solutions are known. For the restricted case of computing the topology of non-singular curves, certified numerical methods are usually faster and can in addition reduce the computation to a user-defined bounding box. One can mention interval analysis methods (Martin et al. (2013)) or more generally certified homotopy methods (Beltrán and Leykin (2013); Van Der Hoeven
INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France E-mail: firstname.lastname@example.org, email@example.com
The inverse scattering problem for coupled wave equations has various applications such as waveguide filter design and electric transmission line fault diagnosis. In this paper, an efficient numerical algorithm is presented for solving the inverse scattering problem related to generalized Zakharov-Shabat equations with two potential functions. Inspired by the work of Xiao and Yashiro on Zakharov-Shabat equations with a single potential function, this new algorithm considerably improves the numerical efficiency of an algorithm proposed by Frangos and Jaggard, by transforming the original iterative algorithm into a one-shot algorithm.
3.2 Comparison based on numerical properties
In computations of practical interest, only modest resolutions are used because a large portion of the computational resources must be dedicated to other physical processes, e.g. chemical reactions or radiation schemes. As a result, an efficient numerical algorithm, or "dynamical core" in atmospheric models, is designed to produce the best flow representation for a given resolution, or equivalently, to result in the fastest convergence as the resolution is increased. We present a direct, visual comparison of the convergence rate of each method in figure 10, showing PV contours in a sixteenth of the domain for increasing resolution, i.e. N = 32, 64, 128, 256 and 512 from top to bottom. The corresponding simulations (rows 5 to 24 of Table 2) start from an initial random PV field structured on larger scales, i.e. smaller k_0, so that the flow can be properly represented even for the smallest resolution used, N = 32. The fields are plotted at t = 5, when all simulations are still comparable as discussed in the previous section. While essential features of the flow are already captured by HCASL and CASL at N = 64, convergence seems to be reached only at N = 512 for the PS and VIC methods. The ripples observed in the upper part of the PV field in the N = 512 PS simulation are a sign that convergence is not totally achieved. The key difference when comparing the four images across a row is the steepness of the PV gradients. The PV fields obtained with HCASL and CASL at resolution N = 64 show PV gradients as steep as those produced by the PS and VIC methods at resolution N = 512. From these figures it is obvious that both algorithms based on contour advection converge much faster than the PS and VIC methods.
Editors: Marius Lindauer, Jan N. van Rijn and Lars Kotthoff
The SUNNY algorithm is a portfolio technique originally tailored for Constraint Satisfaction Problems (CSPs). SUNNY selects a set of solvers to be run on a given CSP, and was proven to be effective in the MiniZinc Challenge, the yearly international competition for CP solvers. In 2015, SUNNY was compared with other solver selectors in the first ICON Challenge on algorithm selection, with less satisfactory performance. In this paper we briefly describe the new version of the SUNNY approach for algorithm selection that was submitted to the first Open Algorithm Selection Challenge.
of a polynomial that is as canonical as possible. We use the LLL algorithm to find a basis of short vectors for the lattice of R^n that is
the image of the integers of K under the canonical embedding.
Abstract. — The algorithm described in this paper is a practical approach
3. Instability of Euclidean-like algorithms
In this section, we provide strong evidence explaining the average loss of precision observed while executing Algorithm 2.1. Concretely, in §3.1 we establish a lower bound on the losses of precision which depends on extra parameters, namely the valuations of the principal subresultants. The next subsections (§§3.2 and 3.3) aim at studying the behaviour of these valuations on random inputs; they thus have a strong probabilistic flavour. Remark 3.1. The locution Euclidean-like algorithms (which appears in the title of the section) refers to the family of algorithms computing gcds or subresultants by means of successive Euclidean divisions. We believe that the stability of all algorithms in this family is comparable, since precision is lost precisely while performing Euclidean divisions. Among all algorithms in this family, we chose to focus on Algorithm 2.1 because it is simpler, due to the fact that it only manipulates polynomials with coefficients in W. Nevertheless, our method extends to many other Euclidean-like algorithms, including Algorithm 1.1; this extension is left as an exercise to the reader.
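As a concrete instance of this family of algorithms, the following sketch (our own illustration, not the text's Algorithm 2.1) computes a monic polynomial gcd by successive Euclidean divisions. The quotient computed at each division step is exactly where precision leaks when coefficients are only known approximately; here we sidestep this by working over exact rationals.

```python
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of the Euclidean division of a by b
    (coefficient lists, highest degree first)."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        if a[0] == 0:
            a.pop(0)
            continue
        f = a[0] / b[0]           # the quotient coefficient: this is where
        for i in range(len(b)):   # precision is lost in approximate arithmetic
            a[i] -= f * b[i]
        a.pop(0)
    while a and a[0] == 0:        # strip leading zeros
        a.pop(0)
    return a

def euclid_gcd(a, b):
    """Monic gcd via the classical Euclidean remainder sequence."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while b:
        a, b = b, poly_rem(a, b)
    return [c / a[0] for c in a]

# gcd(x^3 - 1, x^2 - 1) = x - 1
print(euclid_gcd([1, 0, 0, -1], [1, 0, -1]))
```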
We present in section 2 the new adaptive point of view. This requires formulating an idealized version of the parareal algorithm in an infinite-dimensional function space, where the fine propagations are replaced by the exact ones (section 2.2). Since this scheme is obviously not implementable in practice, we formulate a feasible "perturbed" version that involves approximations of the exact propagations at increasing accuracy across the iterations (section 2.3). The accuracies are tightened in such a way that the feasible adaptive algorithm converges at the same rate as the ideal one and with a near-minimal numerical cost. The identified tolerances involve quantities that are difficult to estimate in practice. In addition, they may not be optimal because they are derived from a theoretical convergence analysis based on abstract conditions on the coarse and fine solvers. We bridge this gap between theory and actual implementation by proposing practical guidelines to set these tolerances. We next explain in section 2.4 how the new formulation invites the use of adaptive schemes not only in the time variable, but also in other variables that may be involved in the dynamics. The performance of the algorithm could also be enhanced by re-using information from previous iterations in order to limit the cost of internal solvers. The techniques for this will strongly depend on the nature of the specific problem. We discuss common situations in Appendix A. We close with section 2.5 by listing the main advantages of the new framework and how the classical parareal paradigm can be formulated from the new standpoint.
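To fix ideas, the classical (non-adaptive) parareal iteration that this construction builds on can be sketched as follows for a scalar ODE, with a cheap coarse propagator G and an accurate fine propagator F. The correction U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) − G(U_n^k) is the standard one; the solver choices below (Euler steps, the test problem y' = −y) are ours for illustration only.

```python
import math

def f(y):
    # illustrative test problem: y' = -y
    return -y

def coarse(y, t0, t1):
    # cheap coarse propagator G: one explicit Euler step per slice
    return y + (t1 - t0) * f(y)

def fine(y, t0, t1, substeps=100):
    # accurate fine propagator F: many small Euler steps
    h = (t1 - t0) / substeps
    for _ in range(substeps):
        y += h * f(y)
    return y

def parareal(y0, T, N, iterations):
    ts = [T * n / N for n in range(N + 1)]
    # initial guess: one serial coarse sweep
    U = [y0]
    for n in range(N):
        U.append(coarse(U[n], ts[n], ts[n + 1]))
    for _ in range(iterations):
        # the fine solves are independent across slices:
        # this is the part that runs in parallel
        F_vals = [fine(U[n], ts[n], ts[n + 1]) for n in range(N)]
        G_old = [coarse(U[n], ts[n], ts[n + 1]) for n in range(N)]
        new_U = [y0]
        for n in range(N):
            new_U.append(coarse(new_U[n], ts[n], ts[n + 1]) + F_vals[n] - G_old[n])
        U = new_U
    return U

# after N iterations the parareal solution coincides with the serial fine solution
print(parareal(1.0, 1.0, 10, 10)[-1], math.exp(-1.0))
```

The adaptive variant discussed in the text would replace the fixed `substeps` of F by a per-iteration accuracy target.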
VI. CONCLUSION
In this paper, one solves for the first time a more general and interesting inverse design problem (formulated by (10)) than those already solved in , . The solutions obtained using our new algorithm IBBA+NUMT now satisfy the constraint on the torque (which is fixed by the schedule of conditions) in a numerical way. Thus, the generated solutions are directly validated numerically. Of course, problems of type (10) are much more complicated than the corresponding problem considering the analytical equation in place of the numerical constraint (1). Thus, in this first work, only small-sized problems have been solved. However, some differences are noted when comparing the solutions found for (1) and (10). Regarding the CPU times in Table I, we can expect to solve, using IBBA+NUMT, more general design problems with more parameters, such as those presented in , . This emphasizes the interest of this first work. In the field of global optimization, this is, to our knowledge, the first time that problems with a black-box constraint are solved by an exact algorithm.
Note that components are never empty since we have x −→*_r x for all x.
Formalising Kosaraju’s algorithm has been an interesting exercise. We have chosen a direct formalisation with almost no abstraction. The key part of the formalisation has been to define the notion of well-formed pairs. It gives us a direct way to derive the correctness of the algorithm. Other formalisations of algorithms that compute strong components already exist [1, 2]; we believe ours is one of the more concise.
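For reference, an (unverified, plain-Python) rendering of the algorithm that was formalised might look as follows: a first DFS records a finishing order, and a second DFS on the reversed graph, processed in reverse finishing order, collects the strong components. Note that, as remarked above, every component is non-empty since every node reaches itself.

```python
def kosaraju_scc(graph):
    """Strongly connected components of a directed graph given as
    {node: [successors]}; every node must appear as a key."""
    order, seen = [], set()

    def dfs1(u):                      # first pass: record finishing order
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)

    for u in graph:
        if u not in seen:
            dfs1(u)

    rev = {u: [] for u in graph}      # build the reversed graph
    for u in graph:
        for v in graph[u]:
            rev[v].append(u)

    comps, assigned = [], set()

    def dfs2(u, comp):                # second pass, on the reversed graph
        assigned.add(u)
        comp.append(u)
        for v in rev[u]:
            if v not in assigned:
                dfs2(v, comp)

    for u in reversed(order):         # reverse finishing order
        if u not in assigned:
            comp = []
            dfs2(u, comp)
            comps.append(comp)
    return comps

# 1 -> 2 -> 3 -> 1 forms a cycle; 4 only reaches it
print(kosaraju_scc({1: [2], 2: [3], 3: [1], 4: [3]}))
```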
Problem 7.5. Generalize Roberts’ observations on the geography of the Katz algorithm  to the nonrigid case.
It would be good to compare explicitly what is happening in our presentation, which basically follows Kostov’s notation and setup, with the notation and setup used by Crawley-Boevey. Note that in Crawley-Boevey’s point of view, the Katz operations are root reflections, and he uses several reflections in a row to get into a positive Weyl chamber before giving an explicit construction. This is obviously basically the same procedure as what we are doing here. It would be good to compare the numbers, and also to recover Roberts’ results and observations  in the Crawley-Boevey formulation.
The operator of state mutation is applied, which possibly changes only the state of solutions. If
a solution changes from search space k to search space l, the solution is moved with
the state change operator stateMutation_kl. Remember that the fitness of solutions is maintained by a state change. It is possible to have state mutation operators which change the state of solutions in the same way for all solutions in the same state, independently of the solutions in other states. In that case, it is possible to parallelize this stage. Otherwise, it is not possible, for example when the operator changes the state of a fixed number of solutions. The operator of replacement is the classical operator which creates a new population for the next iteration according to the population from the EA and the old population of solutions. Again, the replacement does not take into account the states of solutions. Algorithm 1 describes the state-based EA.
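The loop structure described above can be sketched as follows. Everything concrete here (the toy objective, the two states selecting a mutation step size, the 10% flip probability) is our own illustrative choice, not taken from the text; the point is the separation of the three operators: classical variation changes the value (and hence the fitness), state mutation changes only the state while fitness is maintained, and replacement ignores states entirely.

```python
import random

def state_based_ea(pop_size=20, generations=50, seed=1):
    rng = random.Random(seed)
    # a solution is a pair (value, state); the state selects the mutation
    # step size -- a purely illustrative interpretation of "state"
    steps = {0: 1.0, 1: 0.05}           # state 0: explore, state 1: exploit

    def fitness(x):
        return -x * x                   # toy objective, maximum at x = 0

    pop = [(rng.uniform(-10, 10), 0) for _ in range(pop_size)]
    for _ in range(generations):
        # classical variation: mutate the value (fitness changes)
        off = [(x + rng.gauss(0, steps[s]), s) for (x, s) in pop]
        # state mutation: change only the state (fitness is maintained)
        off = [(x, 1 if rng.random() < 0.1 else s) for (x, s) in off]
        # replacement: ignores states, keeps the best of old + new
        pop = sorted(pop + off, key=lambda p: fitness(p[0]), reverse=True)[:pop_size]
    return pop

best_value, best_state = state_based_ea()[0]
print(best_value)
```

Because this state mutation treats each solution independently, it falls in the parallelizable case discussed above.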
The simulated residual stresses have never been confirmed by an assessment of the plastic deformation fields which are the basis of their generation.
For these reasons, we propose, in this study, a numerical approach to predict the surface residual stress and strain gradients resulting from the material cutting process. This approach is based on the ALE formulation using the commercial finite element code Abaqus/Explicit and pre-defined experimental material behavior laws and friction models. The finite element model is calibrated by residual stresses
The general idea of MSA is that the search process for each dimension successively discovers candidates for the global k-NN result. An approximation parameter controls the moment when the MSA algorithm stops. The returned result is the set of the k best candidates at that moment. As shown below, MSA provides a monotone approximation; increasing the parameter's value leads to a later stop, thus adding new candidates and improving the quality of the approximate result. We first characterize the approximate k-NN result produced by MSA, then we present the MSA index structure and algorithm.
In other respects, the various subspace trackers do not have the same behavior regarding the orthonormality of the estimated signal subspace basis. The need for orthonormality only depends on the post-processing method which uses the signal subspace estimate to extract the desired signal information. For instance, in the context of DOA or frequency estimation, the MUSIC  and the minimum-norm  estimators require an orthonormal basis, whereas this is not the case for the ESPRIT algorithm .
The Unadjusted Langevin Algorithm (ULA), first introduced in the physics literature by [Par81] and popularised in the computational statistics community by [Gre83] and [GM94], is a technique to sample complex and high-dimensional probability distributions. This issue has far-reaching consequences in Bayesian statistics and machine learning [And+03], [Cot+13], aggregation of estimators [DT12] and molecular dynamics [LS16]. More precisely, let π be a probability distribution on R^d which has density (also denoted by π) with respect to the Lebesgue measure given for all x ∈ R^d by,
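The ULA iteration itself is the Euler-Maruyama discretisation of the Langevin diffusion: x_{k+1} = x_k + γ ∇log π(x_k) + √(2γ) Z_{k+1}, with Z_{k+1} standard Gaussian. A one-dimensional sketch (our own, targeting a standard Gaussian purely for illustration; function names are ours):

```python
import math
import random

def ula(grad_log_pi, x0, gamma, n_samples, burn_in=1000, seed=0):
    """Unadjusted Langevin Algorithm in one dimension:
    x_{k+1} = x_k + gamma * grad_log_pi(x_k) + sqrt(2*gamma) * Z_{k+1}."""
    rng = random.Random(seed)
    x, out = x0, []
    for k in range(burn_in + n_samples):
        x = x + gamma * grad_log_pi(x) + math.sqrt(2 * gamma) * rng.gauss(0.0, 1.0)
        if k >= burn_in:
            out.append(x)
    return out

# target pi = N(0, 1), so grad log pi(x) = -x
samples = ula(lambda x: -x, 0.0, 0.01, 50_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)   # roughly 0 and 1 ("unadjusted": ULA carries an O(gamma) bias)
```

The algorithm is "unadjusted" in that no Metropolis accept/reject step corrects the discretisation, so the chain's stationary distribution is a step-size-dependent perturbation of π.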