Now, by large-order analysis through exponential asymptotics, we found that if RH is false, u_n will also admit a clear-cut asymptotic form, but of a wholly different nature, dominated by the individual terms F_n(ρ) contributed by every zero ρ = 1
from [13] for straight cylindrical scatterers with arbitrary cross-sections of small area. The asymptotic representation formula for the scattered field, together with this pointwise description of the polarization tensors, yields an efficient simplified model for scattering by thin tubular structures.
For the special case when the cross-section of the thin tubular scattering object is an ellipse, explicit formulas for the two-dimensional polarization tensors of the cross-section are available, which then gives a completely explicit asymptotic representation formula for the scattered field. We will exemplify how to use this asymptotic representation formula in possible applications by discussing an inverse scattering problem with thin tubular scattering objects with circular cross-sections. The goal is to recover the center curve of such a scatterer from far field observations of a single scattered field. We make use of the asymptotic representation formula to develop an inexpensive iterative reconstruction scheme that does not require solving a single Maxwell system during the reconstruction process. A similar method for electrical impedance tomography has been considered in [29] (see also [13] for a related inverse problem with thin straight cylinders). Further applications of asymptotic representation formulas for electrostatic potentials as well as elastic and electromagnetic fields with thin objects in inverse problems, image processing, or shape optimization can be found, e.g., in [2, 3, 16, 23, 27, 38].
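To illustrate the kind of explicit formula alluded to above, here is a minimal sketch of the classical polarization tensor of an axis-aligned ellipse with semi-axes a, b and (hypothetical) contrast k; the function name and the choice of parametrization are ours, not the paper's:

```python
import math

def ellipse_polarization_tensor(a, b, k):
    """Diagonal entries of the classical 2-d polarization tensor of an
    axis-aligned ellipse with semi-axes a, b and contrast k (sketch)."""
    area = math.pi * a * b  # |B|, the area of the ellipse
    m11 = (k - 1.0) * area * (a + b) / (a + k * b)
    m22 = (k - 1.0) * area * (a + b) / (b + k * a)
    return m11, m22
```

As a sanity check, for a = b the two entries coincide and reduce to the well-known disk value 2|B|(k-1)/(k+1), which corresponds to the circular cross-sections used in the inverse problem above.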
LSA results are compared with the impedance-based predictions in the next section, showing good agreement where the asymptotic theory is valid.
One possible cause of these differences is the computational resolution. On one hand, NM16 used a stabilised finite element method based on streamline-upwind/Petrov–Galerkin (SUPG) and pressure-stabilizing/Petrov–Galerkin (PSPG) techniques (Navrose & Mittal 2016). On the other hand, ZLYJ15 used a reduced-order model of the unsteady Navier–Stokes equations in a finite-volume formulation coupled with the structural motion equations. Both methods differ from the one implemented here and could be the cause of the discrepancy. Since the influence of mesh refinement, boundary conditions and domain size was already explored in the present investigation (see Fabre et al. 2018), these do not seem to be the source of error.
the subject of a new fast orthogonalization step. From the estimation accuracy point of view, the choice of the weighting matrix crucially affects the algorithm's performance, as has been shown by simulations in . Hence, it would be of interest to analyse the effect of this weighting matrix on the algorithm's performance and to derive, if possible, its optimal value in some sense. This point will be an application of our analysis.
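To make concrete how a weighting matrix enters an estimation step, here is a minimal weighted least-squares sketch; the linear model y = Hθ + noise, the names H, W, and the heteroscedastic data are all hypothetical illustrations, not taken from the paper. With W equal to the inverse noise covariance one recovers the classical Gauss–Markov-optimal choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear model y = H @ theta + noise with unequal noise levels.
H = rng.standard_normal((100, 2))
theta_true = np.array([1.0, -2.0])
sigma = np.where(np.arange(100) < 50, 0.1, 2.0)  # heteroscedastic noise std
y = H @ theta_true + sigma * rng.standard_normal(100)

def weighted_ls(H, y, W):
    """Weighted least squares: minimizes (y - H theta)^T W (y - H theta)."""
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

theta_id = weighted_ls(H, y, np.eye(100))               # unweighted, W = I
theta_opt = weighted_ls(H, y, np.diag(1.0 / sigma**2))  # W = inverse covariance
```

Varying W between these two extremes is exactly the kind of trade-off whose effect on estimation accuracy the analysis above would quantify.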
Although many efforts have been oriented towards translating Vershik's theory of decreasing sequences of measurable partitions into a theory of filtrations written in the language of stochastic processes (, , , ), many papers in the ergodic-theoretic literature dealing with standard filtrations still remain difficult to read for probabilists outside the class of experts in this topic. The difficulties do not lie in basic concepts of ergodic theory, such as the ones presented in introductory books on measure-preserving systems, but rather in the language of the theory of measurable partitions initiated by Rokhlin (see ). Rokhlin's correspondence (see ) between measurable partitions and complete σ-fields is not a complicated thing, but the approach to filtrations is somewhat geometrical in the language of partitions, whereas probabilists are more comfortable with considering a filtration as the history of a stochastic process whose dynamics are clearly described.
August 22, 2005
This paper considers the problem of determining the optimal sequence of stopping times for a diffusion process subject to regime switching decisions. It is motivated, in the economics literature, by the investment problem under uncertainty for a multi-activity firm involving opening and closing decisions. We use a viscosity solutions approach, and explicitly solve the problem in the two-regime case when the state process is a geometric Brownian motion.
fixed-case cylinder, when the mass ratio parameter is small. Subsequently, Buffoni (2003) carried out an experimental study showing that the unsteady vortex shedding could be triggered at Reynolds numbers as low as Re = 25 by forcing the cylinder displacement in the crosswise direction (1DOF) at certain frequencies and amplitudes. Later, a transverse and in-line vibration (2DOF) was investigated by Mittal & Singh (2005), showing that vortex shedding and VIV could be found for Reynolds numbers as low as Re = 20 for certain values of mass ratio and natural frequency of the coupled spring system. Furthermore, similar conclusions were reached by Meliga & Chomaz (2011) using a 2DOF asymptotic expansion approach and by Zhang et al. (2015) using a 1DOF reduced-order model approach. Subsequently, a study of the lock-in regime (match of the wake vortex mode and body oscillation frequencies) was conducted by Navrose & Mittal (2016) using a 1DOF global LSA, and the subcritical regime of the 1DOF cylinder was studied by Kou et al. (2017) using a dynamic mode decomposition.
Currently, statistics deals with problems where data are described by many variables. In principle, the more information we have about each individual, the better a clustering method is expected to perform. Nevertheless, some variables can be useless, or even harmful, for obtaining a good data clustering. Thus, it is important to take the role of each variable into account in the clustering process. To this aim, Gaussian mixtures with a specific form are considered. On the irrelevant variables, the data are assumed to behave homogeneously around the null mean (centered data), so that no clustering can be distinguished. Hence, on these variables, the data density is modelled by a spherical Gaussian joint law with null mean vector. On the contrary, the component mean vectors are free on the relevant variables. Moreover, the variance matrices restricted to the relevant variables are either taken completely free or chosen in a specified set of positive definite matrices. This idea is now formalized.
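The density just described can be sketched in a few lines; the following is a minimal illustration under our own naming conventions (a K-component mixture on the relevant coordinates multiplied by a centered spherical Gaussian N(0, s2·I) on the irrelevant ones), not the paper's actual implementation:

```python
import numpy as np

def log_gauss(x, mean, cov):
    """Log-density of a multivariate Gaussian with dense covariance."""
    d = len(mean)
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    quad = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + quad)

def mixture_log_density(x, relevant, weights, means, covs, s2):
    """Variable-selection mixture: free Gaussian mixture on the relevant
    variables times a centered spherical Gaussian on the irrelevant ones."""
    rel = np.asarray(relevant)
    irr = np.setdiff1d(np.arange(len(x)), rel)
    x_r, x_i = x[rel], x[irr]
    comps = [np.log(w) + log_gauss(x_r, m, c)
             for w, m, c in zip(weights, means, covs)]
    log_rel = np.logaddexp.reduce(comps)            # mixture on relevant vars
    d_i = len(irr)
    log_irr = -0.5 * (d_i * np.log(2 * np.pi * s2) + x_i @ x_i / s2)
    return log_rel + log_irr
```

Restricting the covariance matrices covs to a specified family of positive definite matrices, as in the text, would simply constrain the inputs to this density.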
numerical examples, the critical time step is easy to determine, and the number of time steps is minimal while accuracy remains maximal.
The application of the proposed method to dynamic crack growth in mixed mode reveals its ability to simulate moving interfaces in a fully explicit X-FEM framework with a standard critical time step. In this particular case, it can be noticed that asymptotic enrichments with a moving crack have been used here for the first time within an explicit X-FEM strategy. This is of great importance for future applications such as three-dimensional dynamic crack growth simulations. Indeed, in such a case, very good accuracy is required along the crack front, both in terms of shape modeling and in terms of the discontinuous and asymptotic behavior along the crack front. In a next step, extensions of this approach to non-linear behavior, both in the bulk and inside the interface, will be considered in an explicit X-FEM framework. This is of great interest for transient, highly nonlinear dynamics simulations with a large number of time steps, which also require taking moving interfaces into account.
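For readers unfamiliar with the "critical time step" constraint mentioned above, a generic CFL-type estimate for explicit dynamics can be sketched as follows; the formula (stable step bounded by the dilatational-wave transit time across the smallest element) is standard, but the function, safety factor, and steel-like material values are our own illustrative assumptions, not the paper's computation:

```python
import math

def critical_time_step(E, rho, h_min, safety=0.9):
    """CFL-type estimate for explicit dynamics: the stable time step is
    bounded by the time for a dilatational wave of speed c = sqrt(E/rho)
    to cross the smallest element of size h_min."""
    c = math.sqrt(E / rho)          # dilatational wave speed
    return safety * h_min / c

# Hypothetical steel-like material with a 1 mm smallest element.
dt_crit = critical_time_step(E=210e9, rho=7800.0, h_min=1e-3)
```

The point made in the text is that the enrichment strategy preserves this standard estimate, so no extra step-size reduction is needed when the crack moves.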
associated either with different training conditions, or with different training phases. The cerebral regions that are specifically linked to either process therefore remain uncertain.
Moreover, these and other behavioral studies have tended to use absolute measures of awareness. Such measures, however, are not immune from possible contamination by implicit influences. Indeed, in the absence of a clear operational criterion for awareness, it appears premature to consider that there exist tasks that exclusively involve either conscious or unconscious processes. In a memory task, for instance, after studying a list of words, participants may use these words in a stem completion task either because they recollect them explicitly, or simply because of a feeling of familiarity that may not be associated with conscious recollection—a distinction between recollective experiences that are referred to as remembering and knowing in the memory literature. This example suggests that studies on implicit learning and memory should involve more sensitive methods—methods that allow us to disentangle conscious from unconscious processing within a single task. The 'contamination problem' also arises in brain imaging studies that attempt to identify the cerebral correlates of explicit and implicit processes, for many such studies have relied on identifying discrete regions involved in either process.
where the tensor product is thus seen as a linear kind of conjunction. Note that, for clarity's sake, we use the same notation for a formula A and for its interpretation (or denotation) in the monoidal category.
This linearity policy on proofs is far too restrictive to reflect traditional forms of reasoning, where it is accepted to repeat or to discard a hypothesis in the course of a logical argument. This difficulty is nicely resolved by providing linear logic with an exponential modality, whose task is to strengthen every formula A into a formula !A which may be repeated or discarded. From a semantic point of view, the formula !A is most naturally interpreted as a comonoid of the monoidal category. Recall that a comonoid (C, d, u) in a monoidal category 𝒞 is defined as an object C equipped with two morphisms
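The two morphisms, together with the laws they must satisfy, can be spelled out as follows (standard definitions, with I the monoidal unit and ⊗ the tensor product):

```latex
d : C \longrightarrow C \otimes C
\qquad\qquad
u : C \longrightarrow I
```

subject to coassociativity and the counit laws, up to the associativity and unit isomorphisms of the monoidal category:

```latex
(d \otimes \mathrm{id}_C) \circ d \;=\; (\mathrm{id}_C \otimes d) \circ d
\qquad\qquad
(u \otimes \mathrm{id}_C) \circ d \;=\; \mathrm{id}_C \;=\; (\mathrm{id}_C \otimes u) \circ d
```

The morphism d is what allows the hypothesis !A to be duplicated, and u is what allows it to be discarded, matching the informal description above.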
f(τ, X(τ; t + ∆t, x)) dτ. (1.3')
There exist two main strategies, depending on the overall problem.
① If only the transport equation (1.1) is concerned, it is possible to work with the (1.3) or (1.3') formulations. However, the (1.3)-based method induces the propagation of numerical diffusion, since one has to go back upstream to the origin of time t = 0, while in (1.3') there is only a local calculation between t + ∆t and t. The integral is then computed by means of a numerical integration formula (Euler, Gauss). The method consists of two steps: the construction of the characteristic to provide the foot X(t; t + ∆t, x) of the curve passing through x at time t + ∆t (as well as other values required by the integration formula), and the evaluation of the computed solution at time t and position X(t; t + ∆t, x). We remark that this algorithm only requires values of the solution of ODE (1.2) with s = t + ∆t over the interval [t, t + ∆t]. See for instance  or the method we shall describe in § 2.
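The two steps just described can be sketched as follows, as a minimal one-dimensional illustration assuming a velocity field a(t, x), a source f(t, x), linear interpolation on a uniform grid, and midpoint rules for both the characteristic backtracking and the quadrature; none of these particular choices comes from the text:

```python
import numpy as np

def semi_lagrangian_step(u, grid, a, f, t, dt):
    """One characteristics-based step in the spirit of (1.3'): backtrack the
    foot X(t; t+dt, x), interpolate the previous solution there, and add the
    source integral over [t, t+dt] by the midpoint rule."""
    x = grid
    # Midpoint (RK2) backtracking of the characteristic through (t+dt, x).
    x_mid = x - 0.5 * dt * a(t + dt, x)       # approximates X(t + dt/2)
    foot = x - dt * a(t + 0.5 * dt, x_mid)    # approximates the foot X(t)
    u_foot = np.interp(foot, grid, u)         # u(t, X(t)) by linear interp.
    integral = dt * f(t + 0.5 * dt, x_mid)    # midpoint quadrature of source
    return u_foot + integral
```

Note that, as the text remarks, the characteristic is only traced over the local interval [t, t + ∆t], so no information older than the previous time level is needed.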
Our aim is to demonstrate the effectiveness of the matched asymptotic expansion method in obtaining a simplified model for the influence of small identical heterogeneities periodically distributed on an internal surface on the overall response of a linearly elastic body. The results of several numerical experiments corroborate the precise identification of the different steps, in particular of the outer/inner regions with their normalized coordinate systems and the scale separation, leading to the model.
unramified. The representations have the same central
quasicharacter det QK, so XP3m = 1, as required.
Step 2. We now prove (i) in the case σ ∈ . If p ∤ [K : F], the result is given by [5, Th. 2.3(vi)]. If K/F is unramified of degree p, it is given by Step 1. We therefore need only consider the case where K/F is totally ramified of degree p. If we in fact have σ ∈ , then [19]. In general, let E/F be the normal closure of a defect field for σ. Then E/F is tame Galois, and σ_E ∈ . The metacyclic base change operations
dynamics of the pandemic is accelerating or decelerating. If, with an increasing number of tests, ever more positive cases are found, the harm of the pandemic is accelerating. If, with an increasing number of tests, the number of detected positive cases declines on a daily basis, the pandemic starts to be contained and harm decelerates. We represented this in terms of a scatter plot of the total number of positive cases over the total number of tests and pointed to the curvature of the underlying functional relationship. The pandemic ends when further tests do not find any new positive cases and the functional relationship becomes a flat line. The test-distribution criterion uses the information of the tangent to this functional form at the endpoint, which we claim is a useful measure of the marginal benefit of testing. The policy implication of our criterion is that more tests will be needed in regions where the tangent is steeper, indicating an acceleration of the pandemic, than in regions where the tangent is flatter, meaning that the pandemic is spreading less quickly. Our criterion could be combined with group or pool testing (see  and ) and drive-in testing ( and ) so as to use the limited test capacity in the best way in each region or province. Pool testing is currently being developed and implemented in Germany () and Israel 6 . We show how the criterion can be applied to Italy and its regions where the
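As an illustration, the endpoint tangent can be estimated from the cumulative series by a finite difference; this is only a minimal sketch (the one-step difference is one of several possible estimators, and the data below are invented):

```python
def endpoint_tangent(total_tests, total_positives):
    """Slope of cumulative positives vs. cumulative tests at the endpoint:
    a finite-difference estimate of the marginal benefit of one more test."""
    d_tests = total_tests[-1] - total_tests[-2]
    d_pos = total_positives[-1] - total_positives[-2]
    return d_pos / d_tests

# Invented cumulative series for one region.
tests = [1000, 2500, 4000, 6000]
positives = [150, 300, 390, 450]
slope = endpoint_tangent(tests, positives)
```

A steeper slope signals acceleration and hence, under the criterion above, a larger claim on the available testing capacity.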
5. An instructive counterexample: finiteness spaces
We have just established that the very same limit formula enables us to compute the free exponential modality in the coherence space model as well as in the Conway game model. Interestingly, this does not mean that the formula works in every model of linear logic. This is precisely the purpose of this section: we explain why the formula does not work in the finiteness space model of linear logic, an important relational model introduced by Thomas Ehrhard (Ehrhard 2005). Our purpose is not only to analyze the reasons for the defect, but also to pave the way for the solution based on configuration spaces developed in the subsequent Sections 6 and 8.
In Figure 2, each line represents an Italian region, whose population n_i appears in parentheses under the name of the region in the first column. In the second column we report the population-weighted and population-unweighted shares, A_i and a_i respectively, as well as the actual share, denoted x_i. The third column helps to visualise those shares. As can be seen in Figure 2, the differences between the optimal shares, as computed using our (population-weighted and unweighted) criterion, and the actual shares of tests are significant in all regions. A striking conclusion is that certain regions currently receive too many tests compared to others, which have received too few. In particular, a substantial share of the tests conducted in regions such as Lombardia, Veneto and Emilia-Romagna, where the pandemic spread more rapidly at the beginning, should be allocated to other regions. This is particularly true if we consider the population-unweighted share a_i of tests, but also if we take the population-weighted share A_i into account. However, what also becomes apparent is that Lombardia and Veneto currently have about the same share of total tests allocated (about 20% each), despite the fact that Veneto has only about half the population of Lombardia. Our population-weighted criterion A_i takes this into account and allocates more tests to Lombardia than to Veneto given its population size. If we continue to focus on the population-weighted share A_i, another striking example is the region of Lazio, but also Sicily, which should receive a much larger share of tests, given the number of inhabitants in those regions, than they currently do. Indeed, Lazio, the second most populated region, should receive twice as many tests as it currently does, that is about 13% of the total number of Italian tests. This is because both its marginal benefit of testing and its regional population are among the largest across Italy.
Sicily is in a similar situation and should also get twice as many tests. Generally, in view of Figure 2, it is hard to conclude that tests have been allocated across the Italian regions where they are most efficient and most needed. Obviously, to achieve the goal of an efficient distribution of tests, it is important that regional governments and administrations act as team players and that nation-wide coordination is possible.
Surprisingly little work has been published on the necessity of making modeling assumptions a full part of a system model. Conversely, this paper advocates for an explicit inclusion of modeling assumptions into the system model and discusses solutions in the context of the OMG's Systems Modeling Language (SysML). The solution, illustrated on the definition of so-called "Modeling Assumptions Diagrams", is not restricted to SysML and may be reused, e.g., for UML or AADL. An important issue discussed by the paper is versioning. Indeed, our approach proposes to explicitly set or release modeling assumptions along an incremental modeling process.