4.3 The case of N-point measures
In this section, we develop an algorithm specific to the projection on the set of N-point measures defined in (2). This algorithm generates stippling results such as in Fig. 1. In stippling, the measure is supported by a union of discs, i.e., a sum of Diracs convolved with a disc indicator. To include stippling in the framework of N-point measures, we simply consider as π the image deconvolved with this disc indicator. We will generalize this algorithm to arbitrary sets of measures in the next section. We assume without further mention that Ĥ(ξ) is real and positive for all ξ. This implies that H is real and even. Moreover, Proposition 7 implies that problems (6) and (14) yield the same solution sets. We let p = (p_1, . . . , p_N) and set
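The Fourier-side assumption can be unpacked by a standard argument (sketched here, using additionally that H is real-valued, so that Ĥ is even as well as real): writing the inverse transform,

```latex
\[
H(x)=\int \hat H(\xi)\,e^{2i\pi\,x\cdot\xi}\,d\xi
     =\int \hat H(\xi)\,\cos(2\pi\,x\cdot\xi)\,d\xi ,
\]
```

the sine part vanishes by odd symmetry since Ĥ is even, and the remaining expression is manifestly real and even in x.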

2.2.2 Investigated semantic similarity measures. Pesquita et al. proposed a survey of semantic similarity measures for comparing terms in the context of GO [10]. The authors categorized these measures into the following three classes: (i) "node-based", which relies on the features of the GO terms; (ii) "edge-based", which relies on the number of edges that exist between two GO terms; and (iii) hybrid, which mixes the methods used in the two previous classes. More recently, Guzzi et al. conducted a survey of existing semantic similarity measures [11]. The authors displayed these measures in a Venn diagram, which facilitates the identification of the features used by many measures as well as those used by only a few. In addition, measures that harness multiple features can easily be identified at the intersection of multiple ovals. Mazandu et al. recently carried out an additional review of semantic similarity measures (referred to in their paper as "term semantic similarity") [12]. These authors provided an exhaustive list of the different ICs that have been proposed in the literature and refined the classification of Pesquita et al. by adding the following subcategory to the node-based measures: graph-based measures. This subcategory had previously been introduced by Mazandu and Mulder [21] to specify measures based on the ancestors and/or descendants of the terms to be compared. The authors emphasized that the older measures only used the MICA or simply counted the number of descendants of the terms, which is more limited. The purpose of this paper is not to propose a new categorization of existing semantic similarity measures for comparing GO terms, but rather to evaluate their varying impact when analyzing gene sets. Thus, we selected nine pairwise semantic similarity measures according to the classifications provided by Pesquita et al. [10], Guzzi et al. [11] and Mazandu et al. [12], choosing at least one measure belonging to each category. The measures are listed above with a description of the feature(s) that they use.
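As a minimal illustration of the node-based family mentioned above (a Resnik-style measure: the information content of the most informative common ancestor, MICA), here is a sketch on a toy DAG. The terms, edges and annotation frequencies below are hypothetical, not the measures or data evaluated in the paper:

```python
import math

# Toy GO-like DAG: term -> list of parents (hypothetical, for illustration).
parents = {
    "A": [],          # root
    "B": ["A"],
    "C": ["A"],
    "D": ["B", "C"],
    "E": ["C"],
}

# Hypothetical annotation counts, cumulative up the DAG.
freq = {"A": 10, "B": 6, "C": 5, "D": 2, "E": 3}
total = freq["A"]

def ancestors(t):
    """Return t together with all of its ancestors in the DAG."""
    seen, stack = {t}, [t]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def ic(t):
    """Information content: -log of the term's annotation probability."""
    return -math.log(freq[t] / total)

def resnik(t1, t2):
    """Node-based similarity: IC of the most informative common ancestor."""
    common = ancestors(t1) & ancestors(t2)
    return max(ic(t) for t in common)

print(round(resnik("D", "E"), 3))  # MICA of D and E is C, with IC = ln 2
```

Edge-based measures would instead count path lengths in the same DAG, and hybrid measures combine both kinds of feature.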

A convergence result for this iteration has been given by Bertsekas [1] and Goldstein [2] for particular types of convex sets X.

In this paper, we have introduced a simple yet powerful data structure for storing and reusing the results of partial projections of finite automata, i.e., projection operations carried out from a given state. We have applied our results to the problem of visualizing a part of the set of reachable configurations produced by a state-space exploration tool. In this setting, the advantage of our approach is not to reduce the inherent cost of projection, but rather to avoid performing redundant computations. This makes the procedure efficient when only slight modifications are applied to the values of parameters, such as the zoom factor or the coordinates of the visualization window. A prototype implementation of the proposed method has been developed, showing that the approach provides clear benefits. Although the focus of this paper was on sets of integers represented by finite-word automata, it is worth mentioning that our technique straightforwardly generalizes to mixed integer and real sets represented by weak infinite-word automata [BJW05]. Future work will address the problem of making PPA compatible with efficient representations of automata, such as [Cou04].

6 Conclusion
This work was motivated by two facts: first, numerous binary similarity measures have been used in various scientific fields; second, model-based mixtures offer a coherent response to the classification problem by providing classification probabilities and natural multi-class support. Based on these remarks, our main contribution is the proposal of a new classification method combining mixture models and binary similarity measures. The method provides good classification performance on challenging data sets (high numbers of variables and classes). We believe that this method can prove useful in a wide variety of classification problems with binary predictors. As a by-product of this work, some new similarity measures are proposed to unify the existing literature.

Fitting the inferred photospheric displacement and observed angular diameter variations, we adjust three parameters: the mean angular diameter θ, a free phase shift φ0 and the projection factor p (see Fig. 1). The mean angular diameter is found to be 1.475 ± 0.004 mas (milliarcseconds) for both radial velocity data sets. Assuming a distance of 274 ± 11 pc (Benedict et al. 2002), this leads to a linear radius of 43.3 ± 1.7 solar radii. The fitted phase shift is very small in both cases (of the order of 0.01). We used the same parameters (Moffett & Barnes 1985) to compute the phase from both observation sets and, considering that they were obtained more than ten years apart, this phase shift corresponds to an uncertainty in the period of approximately five seconds. We thus consider the phase shift to be reasonably the result of uncertainty in the ephemeris. The two different radial velocity data sets lead to a consolidated value of p = 1.27 ± 0.06, once again assuming a distance of 274 ± 11 pc. The final reduced χ2 is 1.5. The error bars account for three independent contributions: uncertainties in the radial velocities, the angular diameters and the distance. The first was estimated using a bootstrap approach, while the others were estimated analytically (taking into account calibration correlation for interferometric errors): for p, the detailed error is p = 1.273 ± 0.007 (Vrad.) ± 0.020 (interf.) ± 0.050 (dist.). The error is
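The quoted linear radius follows from the angular diameter and the distance by standard small-angle geometry; a quick numerical check (the conversion constants are the IAU nominal values, an assumption of this sketch):

```python
import math

# Reproduce the quoted linear radius from the angular diameter and distance.
theta_mas = 1.475            # mean angular diameter, milliarcseconds (from text)
distance_pc = 274.0          # distance, parsecs (from text)

AU_PER_PC = 648000 / math.pi         # 1 pc = 648000/pi AU, by definition
R_SUN_AU = 6.957e8 / 1.495978707e11  # solar radius in AU (IAU nominal values)

theta_arcsec = theta_mas / 1000.0
# Small-angle geometry: linear diameter in AU = theta[arcsec] * distance[pc].
diameter_au = theta_arcsec * distance_pc
radius_rsun = diameter_au / 2 / R_SUN_AU

print(f"linear radius = {radius_rsun:.1f} R_sun")  # close to the quoted 43.3 ± 1.7
```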

It is also worth noting that the method also provides a convergent numerical scheme to approximate, as closely as desired, any (a priori fixed) number of moments of the measure µΩ, the restriction of µ to Ω (µ(Ω) being only the mass of µΩ).
• The acceleration technique described in [13] also applies to our context of the Gaussian measure on non-compact sets Ω. But again the monotone convergence is lost. Therefore we also provide another technique, of independent interest, to accelerate the convergence of the numerical scheme. It uses Stokes' theorem for integration, which permits one to obtain linear relations between the moments of µΩ.
Computational remarks. As we did in [13] for the compact case and the Lebesgue measure, the problem is formulated as an instance of the generalized problem of moments with polynomial data, and we approximate µ(Ω) as closely as desired by solving a hierarchy of semidefinite programs of increasing size. This procedure is implemented in the software package GloptiPoly ([12], a software package for solving the Generalized Problem of Moments with polynomial data), which for modelling convenience uses the standard basis of monomials. As the monomial basis is well known to be a source of numerical ill-conditioning, only limited control on the output accuracy is possible, and so in this case only bounds (ωd, ω d) with d ≤ 10 for n = 2, 3 are meaningful. Therefore for simple sets like rectangles (1.4) and ellipsoids (with n = 3), in its present form our technique does not compete in terms of accuracy with ad hoc procedures like those in, e.g., Genz [10] (rectangles) and the recent [22] (for ellipsoids). However, in view of the growing interest in semidefinite programming and its use in many applications, it is expected that more efficient packages will be available soon. For instance, the semidefinite package SDPA [9] has now been provided with a double precision variant. Moreover, a much better accuracy could be obtained if one uses bases for polynomials other than the standard monomial basis.
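The Stokes'-theorem idea can be seen in one dimension, where it reduces to integration by parts (this is a sketch of the principle under that simplifying assumption, not the paper's multivariate construction): for the standard Gaussian density φ and Ω = [a, b], integrating d/dx (x^(n-1) φ(x)) over [a, b] gives the linear relation m_n = (n−1) m_(n−2) − [x^(n−1) φ(x)]_a^b between the moments m_n = ∫_a^b x^n φ(x) dx:

```python
import math

def phi(x):
    """Standard Gaussian density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def moment_quad(n, a, b, steps=20000):
    """Brute-force midpoint-rule quadrature of m_n, for checking only."""
    h = (b - a) / steps
    return sum((a + (i + 0.5) * h) ** n * phi(a + (i + 0.5) * h)
               for i in range(steps)) * h

def moment_stokes(n, a, b):
    """m_n via the linear (integration-by-parts) relations.

    Base cases: m_0 from erf, m_1 from d/dx phi = -x phi.
    """
    if n == 0:
        return 0.5 * (math.erf(b / math.sqrt(2)) - math.erf(a / math.sqrt(2)))
    if n == 1:
        return phi(a) - phi(b)
    boundary = b ** (n - 1) * phi(b) - a ** (n - 1) * phi(a)
    return (n - 1) * moment_stokes(n - 2, a, b) - boundary

a, b = -0.5, 2.0
print(moment_stokes(4, a, b), moment_quad(4, a, b))  # the two should agree closely
```

Only two base cases are integrated directly; every higher moment then comes for free from the linear relations, which is what makes such relations useful for accelerating a moment-based hierarchy.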

[14], compressed sensing [2], classification [1] and so on. Minimizing the number of non-zero components of a given vector, also called ℓ0 norm minimization, is generally a very difficult problem. The ℓ0 minimization problem is known to be a non-convex NP-hard combinatorial problem [11]. Hence, the common solution is to minimize the ℓ1 norm of the vector [6, 13]. To this purpose, it is crucially important to have a simple algorithm to project a vector onto the ℓ1 ball or, equivalently, onto the probabilistic simplex [3]. In many applications, for instance in machine learning, a large number of projections is needed and it is especially crucial to use an algorithm whose worst-case complexity is as small as possible.
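One classical algorithm for this projection (a standard O(n log n) sort-based scheme; the paper may study a different or faster one) thresholds the entries at a level τ chosen so that the result sums to one:

```python
def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}."""
    u = sorted(v, reverse=True)   # entries in decreasing order
    css, tau = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui                  # cumulative sum of the i largest entries
        t = (css - 1.0) / i
        if ui > t:                 # index i is still "active"
            tau = t
    return [max(x - tau, 0.0) for x in v]

p = project_simplex([0.2, 0.8, 0.4])
print([round(x, 4) for x in p])    # sums to 1, componentwise nonnegative
```

Projection onto the ℓ1 ball reduces to the same routine: when the ℓ1 norm of v exceeds the radius, project the vector of absolute values onto the (scaled) simplex and restore the signs.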

tool to rigorously prove results obtained by Riemann through a minimization principle (see [?]). Renewed interest in these methods was sparked by the arrival of parallel computers, and variants of the method have been introduced and analyzed (for a historical presentation of these kinds of methods, see [?]). In this paper, we use domain decomposition methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulations, to parallelize the problem of mould filling in iron foundry in the context of overlapping methods. The problem is discretized by the finite element method in space and by the method of characteristics in time. This discretization in time allows us to relax the CFL condition and to use a relatively large time step in comparison with other methods, resulting in a significant decrease in the number of iterations needed to simulate the filling of the moulds, cutting down simulation time tremendously. However, applying this method next to the boundaries of each subdomain in the domain decomposition method creates small numerical errors that accumulate and add unwanted numerical diffusion, especially with a time step greater than the space mesh step.

Thus, the theorem says that every risk measure satisfying (1), (2) and (3) corresponds to an integration over a set of measures, but integration is in the sense of Choquet. Clearly, in the special case where ν is a measure, integration is Lebesgue integration and one obtains risk measures that are linear, i.e. ρ(f + g) = ρ(f) + ρ(g) for all f, g ∈ B(Σ). The proof of the theorem is based on the following two results. The first was proved in [1, Theorem 2 and Corollary 1]. The second was essentially proved in [4]. We include its proof here for completeness.
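To make the distinction concrete, here is a finite-universe sketch of the Choquet integral (a toy illustration on two points, not the paper's B(Σ) setting): sort the values in decreasing order and weight each by the capacity increment of the corresponding upper-level set. When ν is additive the formula collapses to the ordinary (Lebesgue) weighted sum:

```python
# Choquet integral of a nonnegative function on a finite set with respect to
# a capacity nu (a monotone set function with nu(empty set) = 0).
def choquet(f, nu):
    """f: dict point -> value; nu: dict frozenset -> capacity value."""
    pts = sorted(f, key=f.get, reverse=True)   # points by decreasing value
    total, prev = 0.0, 0.0
    chain = set()
    for x in pts:
        chain.add(x)                           # grow the upper-level set
        total += f[x] * (nu[frozenset(chain)] - prev)
        prev = nu[frozenset(chain)]
    return total

f = {"a": 3.0, "b": 1.0}

# Additive nu (a measure): the Choquet integral IS the Lebesgue integral.
nu_add = {frozenset(): 0.0, frozenset({"a"}): 0.4,
          frozenset({"b"}): 0.6, frozenset({"a", "b"}): 1.0}
print(choquet(f, nu_add))   # 3*0.4 + 1*0.6 = 1.8

# A genuinely non-additive capacity: nu({a}) + nu({b}) != nu({a,b}).
nu_cap = {frozenset(): 0.0, frozenset({"a"}): 0.2,
          frozenset({"b"}): 0.3, frozenset({"a", "b"}): 1.0}
print(choquet(f, nu_cap))   # 3*0.2 + 1*(1.0 - 0.2) = 1.4
```

In the non-additive case the functional is no longer linear in f, which is exactly the point of the theorem.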

Given a convex subset A ⊆ R^p, the relative interior of A, ri A, is the interior which results when A is regarded as a subset of its affine hull aff A.
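For instance (a standard example, not taken from the text), a segment in the plane has empty interior but nonempty relative interior:

```latex
\[
A=\{(t,0): 0\le t\le 1\}\subseteq\mathbb{R}^2,\qquad
\operatorname{int}A=\emptyset,\qquad
\operatorname{aff}A=\mathbb{R}\times\{0\},\qquad
\operatorname{ri}A=\{(t,0): 0<t<1\}.
\]
```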
3.1 Heath and Ku’s Pareto equilibrium condition
Heath and Ku [13] introduced the HKPE condition for a subclass of risk measures and called it the Pareto equilibrium condition. They showed the equivalence between HKPE and the non-emptiness of the intersection of the relative interiors of the agents' sets of priors (see their Proposition 4.2). They did not, however, address the question of the existence of Pareto optima and equilibria in the sense of Definitions 2 and 3. The next theorem, which contains the main result of the paper, may be viewed as an elaboration of Heath and Ku's [13] Proposition 4.2. It establishes that HKPE is a sufficient condition for the existence of a Pareto allocation or, equivalently, of an equilibrium for monetary utilities.

1 Introduction
The problem of the existence and characterization of Pareto optima and equilibria in markets with short-selling, an old problem in the economic literature, has recently been addressed by Barrieu and El Karoui [4], Jouini et al. [15], Filipovic and Kupper [9] and Burgert and Rüschendorf [6] for convex measures of risk in infinite markets. The existence of an equilibrium for finite markets where short sales are allowed was first considered in the early seventies by Grandmont [11], Hart [13] and Green [12]. Debreu's standard theorems on the existence of equilibrium, which assume that the sets of portfolios that investors may hold are bounded below, could not be applied. In these early papers, investors were assumed to hold a single homogeneous or heterogeneous probabilistic belief and to be von Neumann-Morgenstern (vNM), risk-averse utility maximizers. Two sufficient conditions for the existence of an equilibrium were given:

red red blue blue white red white red black red. Writing a chemical reaction equation: to describe precisely what happens during a chemical transformation,

Received: 27 June 2015 / Accepted: 23 December 2015 / Published online: 01 January 2016
ABSTRACT
The catastrophic floods in semi-arid areas are often caused by storm floods that can occur at any time during the year, including the hot season. These floods could be mitigated by the construction of small hill dams. This requires mastery of the theoretical concepts of hydrological sizing, especially of the hydrological structures that evacuate floods. We suggest a method to calculate the optimal regulation flow of the flood, together with a direct formula for calculating the laminated maximum flow. Analyzing the analogy between the hydrographs at the input and the output of the dam allows us to search for dependencies between their characteristics. Knowing the characteristics of the design flood hydrograph and the capacity reserved for attenuating the flood, we can directly determine the laminated maximum flow and construct the hydrograph of the laminated flood.

Different from previous CIIR techniques, which project each point of the integral image II onto the reconstructed plane pixel by pixel, the proposed method reconstructs the 3D image by mapping a

Image compression through a projection onto a polyhedral set
F. Malgouyres ∗
Abstract
In image denoising, many researchers have tried for several years to combine wavelet-like approaches and optimization methods (typically based on total variation minimization). However, despite the well-known links between image denoising and image compression when solved with wavelet-like approaches, these hybrid image denoising methods have not found counterparts in image compression. This is the gap that this paper aims at filling. To do so, we provide a generalization of the standard image compression model. However, important numerical limitations still need to be addressed in order to make such models practical.

$\int_{x_{j-1/2}}^{x_{j+1/2}} u_0(x)\,dx$, where $u_0(x)$ is the initial condition.
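These cell integrals of the initial condition are easy to approximate numerically; a small sketch on a uniform grid (the grid and the choice u0 = sin are illustrative assumptions, not from the text):

```python
import math

# Cell averages (1/dx) * int_{x_{j-1/2}}^{x_{j+1/2}} u0(x) dx on a uniform grid.
def cell_averages(u0, a, b, ncells, nsub=100):
    """Approximate each cell average with a midpoint rule on nsub subintervals."""
    dx = (b - a) / ncells
    avgs = []
    for j in range(ncells):
        left = a + j * dx          # left edge x_{j-1/2} of cell j
        h = dx / nsub
        s = sum(u0(left + (k + 0.5) * h) for k in range(nsub)) * h
        avgs.append(s / dx)
    return avgs

avgs = cell_averages(math.sin, 0.0, math.pi, 4)
print([round(v, 4) for v in avgs])  # symmetric about the interval midpoint
```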
3.1 The Two-Flux method revisited
The aim of this section is to review the Two-Flux method proposed by Abgrall and Karni [2]. Let us first recall that pressure oscillations do not systematically appear in single-fluid computations. Abgrall and Karni [2] therefore propose to replace any conservative multi-fluid strategy by a non-conservative approach based on the definition of two single-fluid numerical fluxes at each interface. We first recall the algorithm in detail and then suggest a slight modification in order to lessen the conservation errors. This strategy will be used as a reference to assess the validity of the Lagrangian strategies proposed in the next subsection.
