
Keywords: Gromov hyperbolicity; Hyperbolic space; Cop and Robber games.
1 Introduction
In a seminal paper [12], Kleinberg proved that every graph can be embedded in Hyperbolic space in such a way that, between any two vertices s and t, there exists an st-path in G along which the Hyperbolic distance to t is monotonically decreasing. This fact has paved the way to an in-depth study of greedy routing in Hyperbolic space. In particular, Verbeek and Suri proved in [14] that for any embedding of G = (V, E) in Hyperbolic space, the multiplicative distortion of the distances is Ω(δ(G)/ log δ(G)), with δ(G) being the hyperbolicity of the graph. Roughly, Gromov hyperbolicity is a parameter which gives bounds on the least distortion of the distances in a graph when its vertices are mapped to points in some “tree-like” metric space such as: (weighted) trees, Hyperbolic space, and more generally speaking spaces with negative curvature (formal definitions are postponed to Section 2). So far, tight upper bounds on the hyperbolicity have been proved for various graph classes (e.g., see [15]). For general graphs, it has been observed that this parameter is linearly upper-bounded by the diameter [3]. However, it is not clear when this bound is close to optimal. More generally, there is a need to better understand the cases where the hyperbolicity is large. As an example, lower bounds on the hyperbolicity can be helpful in order to decide whether, given a graph, we should use greedy routing in Hyperbolic space or another routing method from the literature.
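To make the hyperbolicity parameter concrete, here is a minimal brute-force sketch (our own illustration, not from the paper) that computes δ(G) of a small unweighted graph via the standard four-point condition; the function names and the O(n⁴) quadruple scan are illustrative choices, and much faster exact algorithms exist:

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from `source` via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def hyperbolicity(adj):
    """Gromov hyperbolicity via the four-point condition: for each
    quadruple, form the three pairwise distance sums and take half the
    gap between the two largest; delta(G) is the maximum over quadruples."""
    nodes = list(adj)
    d = {u: bfs_distances(adj, u) for u in nodes}
    delta = 0.0
    for x, y, u, v in combinations(nodes, 4):
        s = sorted([d[x][y] + d[u][v], d[x][u] + d[y][v], d[x][v] + d[y][u]])
        delta = max(delta, (s[2] - s[1]) / 2)
    return delta

# A tree is 0-hyperbolic; the 4-cycle C4 has hyperbolicity 1.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

The two sample graphs illustrate the "tree-likeness" reading of the parameter: the path (a tree) achieves δ = 0, while the shortest cycle already forces δ = 1.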

the level set w_ε ≃ 0 obtained by fixing a threshold value α = 0.5
8. Conclusion
In this work, a new variational method for point detection in biological images has been proposed and tested. We emphasize that, to our knowledge, this is the first method which makes it possible to isolate the spots from a filament in the observed image. Moreover, it also permits fixing a threshold value in a noisy image in a simple and direct way. We believe that a suitable generalization of this method for the detection of spots and even filaments in 3-D biological images can be provided; this is a subject of our current investigation. Certainly there is much room for improvement, both from a theoretical and a numerical point of view, such as a deeper investigation of the Γ-convergence approximation, as well as a significant acceleration of the algorithm.

The restriction ε ≤ 1.6 and the requirement of no plastic hinges between column end-points are necessary to preclude local instability of highly compressed slender columns. The above restriction (c) makes the commonly termed P − ∆ effect negligible.
In order to help the designer in his daily work, several studies have aimed at giving even simpler limits within which second-order effects can be neglected in plastic analysis of plane single-

1 Introduction
Our motivation for this work is twofold. On one hand, in a recent paper [1], two of the co-authors of the present paper studied a certification method for approximate roots of exact overdetermined and singular polynomial systems, and wanted to extend the method to certify the multiplicity structure at the root as well. Since all these problems are ill-posed, in [1] a hybrid symbolic-numeric approach was proposed, which included the exact computation of a square polynomial system that had the original root with multiplicity one. In certifying singular roots, this exact square system was obtained from a deflation technique that iteratively added subdeterminants of the Jacobian matrix to the system. However, the multiplicity structure is destroyed by this deflation technique, which is why it remained an open question how to certify the multiplicity structure of singular roots of exact polynomial systems.

Most of the methods developed for multiple change-point detection assume the time series to be piecewise Gaussian. In the case of detecting one change-point, the Gaussian assumption makes it possible to derive exact approaches based on maximum likelihood (Hawkins, 2001). Other Bayesian methods have also been developed using a Bernoulli-Gaussian modeling (Dobigeon et al., 2007; Bazot et al., 2010). In the case of simple Gaussian models, Fearnhead (Fearnhead, 2006) builds a Bayesian approach based on independent parameters that avoids the need for simulations. However, from a practical point of view, real datasets often exhibit non-Gaussian behavior, and the classical approaches may fail. In order to propose a model-free approach, we choose to build our model on an inference function based on the p-values of a statistical test computed on the data. The choice of the statistical test introduces a crucial free parameter in the model: choosing a t-test or Welch’s t-test is similar to the Bernoulli-Gaussian model developed previously (Dobigeon et al., 2007). In this paper, we propose to use the Wilcoxon test so as to be free of the Gaussian assumption and to be robust to outliers.
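As an illustration of a p-value-based inference function, the following sketch (our own, not the authors' code) computes a two-sided Wilcoxon rank-sum p-value with the usual normal approximation and midranks for ties, then locates the single split with the smallest p-value; the `margin` parameter and the omission of the tie correction in the variance are simplifying assumptions:

```python
import math

def ranksum_pvalue(a, b):
    """Two-sided p-value of the Wilcoxon rank-sum test,
    normal approximation, average (mid) ranks for ties."""
    pooled = sorted((x, i) for i, x in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based midrank of the tie block
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[i] for i in range(n1))  # rank sum of sample a
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    z = (w - mean) / math.sqrt(var)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def best_split(series, margin=5):
    """Candidate change point = split minimizing the rank-sum p-value."""
    return min(range(margin, len(series) - margin),
               key=lambda t: ranksum_pvalue(series[:t], series[t:]))
```

On a series with a level shift at index 20, the p-value is minimized exactly at the true change point, with no Gaussian assumption on the data.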

In order to illustrate the validity of the adopted CAD/CAM approach for AFM machining, a pattern already reported by Johannes et al. in [14] was designed as shown in Figure 2. In addition, the particular G-code that was fed to the developed post-processor is given in Table 1. It should be noted that the controller of the AFM instrument used in this study could only generate the lateral displacements of the AFM stage along four axes, namely in the directions perpendicular, parallel and at ±45° angles with respect to the orientation of the long axis of the cantilever. Thus, to execute G-code instructions between points as accurately as possible, curves or lines representing planned tip trajectories have to be discretised into smaller segments oriented along one of these four constrained directions. In this study, such a discretisation step was achieved with Bresenham’s line algorithm, which is commonly used in computer graphics applications. Finally, based on preliminary trial experiments, it was decided to process the designed pattern with a normal applied force set at 12 μN and with a Bresenham discretisation step of 100 nm. Figure 3 shows an AFM image of the obtained result.
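A minimal sketch of the discretisation step, assuming the standard integer Bresenham variant in which every step between consecutive grid points is axis-aligned or diagonal, i.e. along one of the four constrained stage directions (grid units stand in for the 100 nm step; this is our illustration, not the study's post-processor code):

```python
def bresenham(x0, y0, x1, y1):
    """Discretise the segment (x0,y0)-(x1,y1) into grid points whose
    consecutive steps are horizontal, vertical, or at 45 degrees."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    points = [(x, y)]
    while (x, y) != (x1, y1):
        e2 = 2 * err
        if e2 > -dy:   # step in x
            err -= dy
            x += sx
        if e2 < dx:    # step in y (combined with x -> 45-degree step)
            err += dx
            y += sy
        points.append((x, y))
    return points
```

Each G-code line or curve segment would be fed through such a routine so that every elementary move the stage executes lies along one of the four admissible axes.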

∞ also makes it possible to determine the controllability singular points, at which the generically accessible system is not weakly controllable.
Finally, in (Hanba, 2017) it has been proven that controllability to the origin is equivalent to state-feedback stabilizability. Necessity of controllability to the origin can, for instance, be checked using the results of this paper. As for the other results on reachability and controllability, see (Kawano, Ohtsuka, 2016) and (Kawano, Ohtsuka, 2013). These papers developed conditions for reachability (controllability) from an

TABLE V
The Jacobian matrix includes hand translation, hand orientation and gaze set-points in task space. In the case of captured movements, the motion adaptation method is used to compute joint values; thereafter the singular values of the Jacobian matrix can be computed using the global scheme. Singular values belong to a 9-dimensional space. Results obtained for the captured and simulated movements are given in table VI. The beginning times of each movement of the sequence are given in table VII. Patterns must be compared for each movement and thus over each ad hoc extracted period. Since the whole set of tasks is considered, it is not possible to decouple the singular values corresponding to the hand translation, hand orientation or gaze tasks, but the patterns are very similar and 3 groups of 3 values are observed: one around 2.1 (2.4, 2.1 and 1.9), one around 1.3 (1.4, 1.3 and 1.1) and one around 0.4 (0.5, 0.4 and 0.3) for both captured and simulated movements. Thus, singular value patterns are an efficient way to characterize complex motion, since they simultaneously capture information relative to the configuration and to the task.
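As a toy illustration of how singular values summarize a Jacobian (the Jacobian above lives in a 9-dimensional task space and would be handled by a full SVD; the 2×2 case below is only a didactic stand-in), the singular values are the square roots of the eigenvalues of JᵀJ, which for a 2×2 matrix have a closed form in terms of trace and determinant:

```python
import math

def singular_values_2x2(J):
    """Singular values of a 2x2 Jacobian J, given as nested lists:
    square roots of the eigenvalues of J^T J (trace/determinant form)."""
    a, b = J[0]
    c, d = J[1]
    # trace(J^T J) = sum of squared entries; det(J^T J) = det(J)^2
    tr = a * a + b * b + c * c + d * d
    det = (a * d - b * c) ** 2
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (math.sqrt((tr + disc) / 2), math.sqrt((tr - disc) / 2))
```

For a diagonal Jacobian the singular values are simply the absolute diagonal entries, which makes the formula easy to sanity-check.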

Fig. 4 shows an example of output for an input image with 9 organs to annotate and thus 9 explanations to provide.
We evaluate our model using the accuracy, which is the ratio over all organs of the number of correct annotations to the total number of annotations. We obtained an accuracy of 100% for a model containing only directional relations. The outer cross-validation is actually a 3-fold cross-validation (23/24 training examples for 12/11 test examples in each iteration) and the inner one is a 4-fold cross-validation. As there are 9 organs to annotate, there are 9 hyperparameters that need to be set for extracting frequent relations (Table 1). Constraints could be added to the hyperparameter optimization process to make explanations longer or shorter. We observe that the explanations rightfully rely on the relations that have been extracted and later turned into constraints. For example, in Fig. 4, the set of constraints associated with the right kidney is:
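A sketch of the nested cross-validation splitting described above (contiguous folds and the function names are our own assumptions, not the authors' pipeline): with 35 examples, a 3-fold outer loop yields exactly the 23/24-training, 12/11-test splits mentioned, and each outer training set is further cut by a 4-fold inner loop for hyperparameter tuning:

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous folds of near-equal size."""
    base, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def nested_cv_splits(n, outer_k=3, inner_k=4):
    """Yield (outer_train, outer_test, inner_splits): the outer loop
    estimates accuracy, the inner loop tunes hyperparameters."""
    for test in kfold_indices(n, outer_k):
        test_set = set(test)
        train = [i for i in range(n) if i not in test_set]
        inner = []
        for fold in kfold_indices(len(train), inner_k):
            val = [train[j] for j in fold]
            val_set = set(val)
            fit = [i for i in train if i not in val_set]
            inner.append((fit, val))
        yield train, test, inner
```

Keeping tuning strictly inside the inner loop is what makes the 100% outer accuracy an unbiased estimate rather than an artifact of hyperparameter leakage.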

The two-step method is more accurate in terms of average error QPS, even though it produces extra signals. The difference between the one-step and two-step methods in both average QPS and average FPS is negligible, so the average performance of the two methods is comparable. However, taking into account that the one-step method results are free of extra signals (because we managed to select the series so), this means that for the OECD recessions the two-step method is more precise. Actually, the two-step method is much more accurate with respect to the beginning and the end of recessions, with a tendency to indicate the beginning of a recession on average one quarter of a month earlier; the one-step method is late on the beginning by 2.5 months and precipitates the end by 2.6 months. In general, both methods produce early warning signals. For the one-step method, with the exception of the combination #9982, the data in each of the retained combinations are updated with a 1-month or even 0-month lag (case #6089). This means that the current phase of the GDP growth cycle in January 2015 can be determined either in February or March 2015, with no need to wait for the release of quarterly GDP growth data in April 2015. Though the gain in time is not very big, it may still be of great importance for policy makers. For the two-step method, the estimates are available in two months. This is still less than the timing of OECD, although one month more than the timing of the one-step method. In this respect, the estimation of the factor in the first step with the help of one of the procedures proposed by Doz

Problem: most of these models do not take into account both synaptic plasticity and neural dynamics. Adding dynamics on the weights makes the analysis more difficult, which explains why most models consider either a neural dynamic [2, 3, 4] or a synaptic weight dynamic [5, 6, 7, 8]. We decided to study the binary synapses model of [9], a model we wish to complete with a neural network afterwards in order to get closer to biology.



S values to drift and made them unreliable.
4 | CONCLUSIONS
We present a simple and reliable reducing method, modified from the literature, for the conversion of sulfate into sulfide for four-sulfur-isotope analysis. This system is simple to set up, easy to replace and cheap to acquire, being made from sealed test tubes and PEEK flow lines (no metal parts, e.g. needles, may be in contact with the hot reducing solution). This method uses a reducing solution made of 100 mL 57% HI and 13 g NaH2PO2, and a very small amount (1 mL) of reducing

Pierre R. BERTRAND and Mehdi FHIMA
3 Application to real data
Recent measurement methods give us access to electrocardiograms (ECG) for healthy people over a long period of time: marathon runners, daily (24-hour) records, etc. These large data sets allow us to characterize the variation of the heartbeat rate in the parasympathetic frequency band (0.15 Hz, 0.5 Hz). According to the recommendations of the Task Force of Cardiologists [14], this frequency band corresponds to the parasympathetic system of control of the heartbeat. Moreover, the spectral density of the heartbeat time series follows a power law; thus, after subtracting its mean, this series can be modeled by (2). Figure 7 provides an example of an interbeat time series recorded on a healthy subject during 24 hours. We have calculated its wavelet coefficients and used the filtered derivative algorithm with A = 500 and α_critic = 10^−11. We obtain the following
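A minimal sketch of the basic filtered derivative statistic (our own simplification: the actual study applies it to wavelet coefficients with A = 500 and a p-value-based threshold, while here a raw series and a fixed threshold illustrate the mechanism):

```python
def filtered_derivative(series, A):
    """Filtered derivative D(k): mean of the A points after k minus the
    mean of the A points before k; a change point appears as a peak of |D|."""
    D = []
    for k in range(A, len(series) - A):
        right = sum(series[k:k + A]) / A
        left = sum(series[k - A:k]) / A
        D.append((k, right - left))
    return D

def detect_change(series, A, threshold):
    """Return the positions where |D(k)| exceeds the threshold."""
    return [k for k, d in filtered_derivative(series, A) if abs(d) > threshold]
```

On a piecewise-constant series the statistic peaks exactly at the change point, and the width of the above-threshold region shrinks as the window A narrows.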

A simple and reliable method reducing sulfate to sulfide for multiple sulfur isotope analysis. Lei Geng, Joel Savarino, Clara Savarino, Nicolas Caillon, Pierre Cartigny, Shohei Hattori, et al.

These questions appear particularly relevant for online settings, which occur more commonly in practice than batch settings. Currently, most computer engineers rely on simulations, not theoretical analyses, to gain confidence in backoff algorithms. As a consequence, systems employing backoff are generally nonalgorithmic, in the sense that performance is not guaranteed, not even statistically. Consequently, systems can exhibit wildly unpredictable performance, making it difficult or impossible to meet real-time constraints. We are optimistic that further research on backoff algorithms, using techniques such as adversarial queueing theory, will lead to more stable and higher-performing computer systems.
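For concreteness, a toy simulation of binary exponential backoff, the baseline that such analyses target (the 2^10 cap on the window, the seed, and the slotted-contention model are our own illustrative choices; real systems and the adversarial analyses above are considerably richer):

```python
import random

def simulate(n_stations, seed=0, max_exp=10):
    """Slotted contention with binary exponential backoff: after its i-th
    collision a station picks a uniform slot in [0, 2^min(i, max_exp));
    a station leaves once it transmits alone. Returns rounds needed."""
    rng = random.Random(seed)
    attempts = {s: 0 for s in range(n_stations)}
    pending, rounds = set(attempts), 0
    while pending:
        rounds += 1
        chosen = {}
        for s in pending:
            slot = rng.randrange(2 ** min(attempts[s], max_exp))
            chosen.setdefault(slot, []).append(s)
        for stations in chosen.values():
            if len(stations) == 1:          # lone transmitter succeeds
                pending.discard(stations[0])
            else:                           # collision: everyone backs off
                for s in stations:
                    attempts[s] += 1
    return rounds
```

Running this for growing n gives exactly the kind of empirical evidence the text says engineers currently rely on, with no statistical guarantee attached.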

Abstract
We propose a numerical approach to study the invasion fitness of a mutant and to determine evolutionary singular strategies in evolutionary structured models in which the competitive exclusion principle holds. Our approach is based on a dual representation, which consists of the modelling of the small-size mutant population by a stochastic model and the computation of its corresponding deterministic model. The use of the deterministic model greatly facilitates the numerical determination of the feasibility of invasion as well as the convergence-stability of the evolutionary singular strategy. Our approach combines standard adaptive dynamics with the link between the mutant survival criterion in the stochastic model and the sign of the eigenvalue in the corresponding deterministic model. We present our method in the context of a mass-structured individual-based chemostat model. We exploit a previously derived mathematical relationship between stochastic and deterministic representations of the mutant population in the chemostat model to derive a general numerical method for analyzing the invasion fitness in the stochastic models. Our method can be applied to the broad class of evolutionary models for which a link between the stochastic and deterministic invasion fitnesses can be established.
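A toy sketch of the stochastic/deterministic link invoked above, for the simplest possible case of a linear birth-death mutant lineage (the rates b and d are hypothetical and the chemostat model is far richer): the deterministic growth rate s = b − d, i.e. the eigenvalue, is positive exactly when the stochastic lineage survives with positive probability 1 − d/b.

```python
import random

def survival_probability(b, d):
    """A single mutant lineage in a linear birth-death process with birth
    rate b and death rate d survives with probability 1 - d/b when b > d,
    and goes extinct almost surely otherwise (sign of the eigenvalue b - d)."""
    return max(0.0, 1.0 - d / b)

def simulate_survival(b, d, n_runs=5000, cap=200, seed=1):
    """Monte Carlo check: fraction of lineages, started from one mutant,
    that reach `cap` individuals (treated as established) before dying out."""
    rng = random.Random(seed)
    p_birth = b / (b + d)  # next event is a birth with this probability
    survived = 0
    for _ in range(n_runs):
        n = 1
        while 0 < n < cap:
            n += 1 if rng.random() < p_birth else -1
        survived += (n >= cap)
    return survived / n_runs
```

The agreement between the closed-form probability and the simulation is the elementary analogue of using the deterministic eigenvalue sign to decide invasion feasibility without simulating the full stochastic model.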
