A Hybrid hyperquadric model for 2-D and 3-D data fitting



An Anycast Communication Model for Data Offloading in Intermittently-Connected Hybrid Networks


In their inventory of offloading techniques in cellular networks [3], Passarella et al. underline the same issues and conclude that non-cooperative models relying exclusively on cellular base stations or Wi-Fi access points, such as [4,5], do not work efficiently in sparse networks where devices may be disconnected for a long time; offloading solutions that exploit both the available infrastructure and opportunistic device-to-device communications, and that implement a cooperative data diffusion, must instead be used. Opportunistic communications help tolerate the absence of end-to-end connectivity in an intermittently-connected network [6]. These communications rely on the "store, carry and forward" principle: radio contacts between devices are exploited to exchange messages, while the mobility of these devices carries messages between different parts of the network. Two devices can thus communicate even if no contemporaneous end-to-end path ever exists between them. Recent experiments conducted in real conditions have shown that applications such as voice-messaging, e-mail, or data sharing can indeed perform quite satisfactorily in networks that rely on the "store, carry and forward" principle [7,8,9,10]. Based on this observation, we argue that an interesting and cost-effective alternative may be to resort to a new kind of hybrid network combining an infrastructure part with intermittently or partially connected parts formed by mobile devices communicating in ad hoc mode. Figure 1 illustrates a worthwhile hybrid configuration that involves 1) mobile ad hoc networks formed spontaneously by the devices carried by people, 2) mesh routers deployed by mobile operators in locations where significant data transfers are expected (e.g., transportation hubs, shopping malls, campuses, homes, offices), and 3) Wi-Fi access points installed and operated by volunteers.
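The "store, carry and forward" principle can be sketched as a toy simulation. This is an illustration only: the node names, the contact trace, and the epidemic-style flooding strategy are assumptions for the example, not the paper's offloading protocol.

```python
def epidemic_relay(contacts, source, destination, message):
    """Flood a message along a time-ordered trace of radio contacts.

    contacts: list of (time, node_a, node_b) device encounters.
    Returns the delivery time, or None if the message never reaches
    the destination.
    """
    buffers = {source: {message}}  # node -> set of carried messages
    for time, a, b in sorted(contacts):
        # On contact, both devices exchange the messages they carry.
        merged = buffers.get(a, set()) | buffers.get(b, set())
        buffers[a] = set(merged)
        buffers[b] = set(merged)
        if message in buffers.get(destination, set()):
            return time
    return None

# A never meets D directly; B and C carry the message between them.
trace = [(1, "A", "B"), (5, "B", "C"), (9, "C", "D")]
print(epidemic_relay(trace, "A", "D", "m1"))  # delivered at time 9
```

Even though A and D are never connected at the same instant, mobility lets intermediate devices bridge the partitions, which is exactly the property the text relies on.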

Variance analysis for model updating with a finite element based subspace fitting approach


Keywords: Stochastic system identification, Subspace fitting, Uncertainty bounds, Finite element model

1. Introduction

Linear system identification methods are of interest in mechanical engineering for modal analysis. Using output-only vibration measurements from structures, Operational Modal Analysis (OMA) has been successfully used as a complementary technique to the traditional Experimental Modal Analysis (EMA) methods [1–3]. With methods originating from stochastic system realization theory for linear systems, estimates of the modal parameters of interest (natural frequencies, damping ratios and observed mode shapes) can be obtained from vibration data. Among these methods, the stochastic subspace identification (SSI) techniques [4, 5] identify the system matrices of a state-space model, from which the modal parameters are retrieved. Subspace methods are well-suited for the vibration analysis

3D Building Model Fitting Using A New Kinetic Framework


2.3. Kinetic Data Structure

The generic kinetic framework developed in [6,7] appears to be a good solution for our problem. The idea is to continuously move the geometry of the data from the initial geometry at time t = 0 to the target geometry at t = 1. In a kinetic framework, a global geometric property is maintained by constructing and maintaining a kinetic data structure (KDS) throughout the time evolution. The purpose of the KDS is to track the list of geometric tests, or predicates, required to prove the global property. This data structure is responsible for ensuring that this finite set of local tests, called certificates, which together prove the global geometric property, remains valid. These certificates are typically polynomial or rational functions of the interpolation time t, which are respectively valid, degenerate or invalid when their signs are respectively positive, null or negative. Roots (and poles) of these certificates are called failure times, because the continuity of the certificates ensures that the sign of a certificate remains constant between failure times. During the simulation, time does not evolve continuously: the KDS computes the failure times of each of its certificates and orders them in a priority queue. The interpolation time t is then iteratively advanced to the closest failure time in the future. At that time, a certificate fails, and the KDS makes minimal changes to the topology of the object and updates its internal proof accordingly. These updates reestablish a set of valid certificates, so that time can then be advanced either to the next failure time of one of the certificates or to the evolution ending time t = 1.
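The event loop described above (priority queue of failure times, jump to the closest one, repair, requeue) can be sketched as follows. This is a minimal illustration: representing each certificate by a precomputed list of roots and the `repair` callback are assumptions for the example, not the cited framework's actual API.

```python
import heapq

def kinetic_simulation(certificates, repair, t_end=1.0):
    """Advance time from 0 to t_end by jumping between failure times.

    certificates: dict name -> list of failure times (roots of the
    certificate polynomial inside [0, t_end]).
    repair: callback(name, t) invoked when certificate `name` fails
    at time t; returns (time, name) pairs for any new certificates
    created while reestablishing a valid proof.
    """
    queue = [(t, name) for name, times in certificates.items() for t in times]
    heapq.heapify(queue)
    t = 0.0
    while queue:
        t_fail, name = heapq.heappop(queue)
        if t_fail > t_end:
            break
        t = t_fail  # time jumps directly to the next failure
        for new_t, new_name in repair(name, t):
            if new_t > t:  # only future failures matter
                heapq.heappush(queue, (new_t, new_name))
    return t

# Toy run: two certificates whose polynomials fail at known times.
events = []
def repair(name, t):
    events.append((t, name))   # a real KDS would retopologize here
    return []                  # and return new certificate failures
kinetic_simulation({"c1": [0.25, 0.8], "c2": [0.5]}, repair)
# events is now [(0.25, 'c1'), (0.5, 'c2'), (0.8, 'c1')]
```

The priority queue is what makes the simulation event-driven: nothing happens between two consecutive failure times because every certificate keeps a constant sign there.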

A Hybrid Data Mining Approach for the Identification of Biomarkers in Metabolomic Data


3 LORIA, B.P. 239, F-54506 Vandoeuvre-lès-Nancy, France

Abstract. In this paper, we introduce an approach for analyzing complex biological data obtained from metabolomic analytical platforms. Such platforms generate massive and complex data that need appropriate methods for discovering meaningful biological information. The datasets to analyze consist of a limited set of individuals and a large set of attributes (variables). In this study, we are interested in mining metabolomic data to identify predictive biomarkers of metabolic diseases, such as type 2 diabetes. Our experiments show that a combination of numerical methods, e.g. SVM, Random Forests (RF), and ANOVA, with a symbolic method such as FCA, can be successfully used for discovering the best combination of predictive features. Our results show that RF and ANOVA seem to be the methods best suited for feature selection and discovery. We then use FCA for visualizing the markers in a suggestive and interpretable concept lattice. The output of our experiments is a short list of the 10 best potential predictive biomarkers.

Keywords: hybrid knowledge discovery, random forest, SVM, ANOVA, formal concept analysis, feature selection, biological data analysis, lattice-based visualization

Constraint fitting of experimental data with a jet quenching model embedded in a hydrodynamical bulk medium


The relative contribution of charm and beauty to the final yield of electrons is not known precisely, as it suffers from large theoretical uncertainties due to the variation of the actual value of the quark masses and unknown higher orders [40]; see [51] for recent experimental efforts. This uncertainty is particularly relevant for the medium effects, as we will discuss below (see also [52]). Besides single-particle inclusive cross sections, we are interested in the case of two-particle correlations. The corresponding expression is similar to Eq. (3.1), but now the two partons created in the hard scattering fragment independently. The fragmentation of each parton is again described by D_{k→h}^{vac} or D_{k→h}^{med} for the vacuum and the medium, respectively. Notice that here we will be interested in two-particle correlations that are well separated in azimuthal angle (back-to-back correlations), so that correlations from the fragmentation of a single parton into two final hadrons are negligible [53, 54].

3D model fitting for facial expression analysis under uncontrolled imaging conditions


Two principal classes of approaches have been developed: image-based [2, 9] and model-based approaches [5, 6]. Image-based methods extract features from images without relying on elaborate knowledge about the object of interest. Their principal qualities are their speed and simplicity. However, if the data images are very diverse (e.g., variations in illumination, view, or head pose), image-based approaches can become erratic and unsatisfactory. On the other hand, model-based methods use models that maintain the essential characteristics of the face (the position of the eyes relative to the nose, for example), but that can deform to fit a range of possible facial shapes and expressions.

A Flexible Model and a Hybrid Exact Method for Integrated Employee Timetabling and Production Scheduling


3 ILP Models of Integrated Employee Timetabling and Machine Scheduling Problems

The model of integration proposed by [20] is centered on the concept of configuration, which is a partition of the machines at a given time period such that each subset is managed by a single operator. In this paper we propose to perform the integration through the concept of activity, which is widely used in the employee timetabling literature. We first provide a model with a common time representation for timetabling and scheduling (Section 3.1). We then extend this model to the case where there is one time representation for employee timetabling and another for job scheduling (Section 3.2). We show how these models can be extended to tackle variability in job durations and machine assignment through the concept of modes (Section 3.3). These three models are based on time-indexed and assignment-variable formulations. In Section 3.4 we show how the set covering formulation usually used in efficient employee scheduling methods can also be used in the production scheduling context.

An hybrid model for radiative transfer


Since our main objective is to perform fully coupled simulations, we have to consider cheaper models, i.e., models that are somehow integrated over directions and frequencies. There are two categories of such models: flux-limited diffusion and moment models (see [12] and the references therein for a list of the classic choices). Our choice is the M1 model, which belongs to the category of moment models. It was introduced in [2] and has several variations, among which [14], [3], [13] and [4]. To build it, the first step consists in obtaining the moment equations from (0.1). To do so, an integration over directions and inside each frequency group Q_q is

2012 — Hybrid visualizations for data exploration


We can also compare our implemented FlowVizMenu with the control menu used in GraphDice (Bezerianos et al., 2010a). Both are used to transition between dimensions in a scatterplot, and both allow for scrubbing. However, because GraphDice's control menu only makes use of outward motions, it contains two copies of every dimension: one for the scatterplot's horizontal axis, and another for the vertical axis. Furthermore, the menu items in GraphDice's control menu only cover half a circle. The end result is that each dimension in our FlowVizMenu covers an angle four times larger, enabling easier and faster selection. Our FlowVizMenu also differs in that it contains a visualization, whereas GraphDice's control menu is used to control an underlying visualization. As shown in Figure 4.2, this means the user can perform brushing and linking for coordination with other views without leaving the FlowVizMenu. In our FlowVizMenu, each repeated outward-inward motion will normally replace the two dimensions in the scatterplot with new dimensions. Hence, a repeated figure-8 motion (Figure 4.3, right) cycles between two scatterplots. However, it also occurred to us that repeated outward-inward motions could be useful for accumulating dimensions. In our implementation, holding down the Shift key during motions causes the dimensions to be accumulated along the axes using principal component analysis (PCA). For example, holding down Shift, and moving out through dimension A, in through B, out through C, and in through D, will cause the system to compute a PCA projection from A × C to the horizontal axis, and a separate PCA projection from B × D to the vertical axis.
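The Shift-accumulation step can be sketched as follows. This is a minimal illustration assuming plain PCA computed via SVD; the `pca_axis` helper and the random data are hypothetical, not the FlowVizMenu implementation.

```python
import numpy as np

def pca_axis(columns):
    """Project several accumulated data dimensions onto one axis.

    columns: (n_points, k) array holding the k dimensions accumulated
    on one scatterplot axis. Returns the first-principal-component
    score, i.e. a single derived coordinate per point.
    """
    X = columns - columns.mean(axis=0)           # center each dimension
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[0]                             # scores on the 1st PC

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 4))                 # dimensions A, B, C, D
x = pca_axis(data[:, [0, 2]])                    # A x C -> horizontal axis
y = pca_axis(data[:, [1, 3]])                    # B x D -> vertical axis
```

Each repeated outward-inward motion with Shift held just appends one more column to the set passed to `pca_axis` for that axis, so the scatterplot axis becomes a one-dimensional PCA projection of all accumulated dimensions.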

Hybrid Data Reduction Technique for Classification of Transaction Data


3.2 Data Size Reduction

Reducing the number of samples has been implemented with several methods, such as sampling procedures (e.g., simple random sampling and stratified or cluster sampling). These methods are based on statistical sampling, which views data as an expensive resource and assumes that it is practically impossible to collect population data. This approach does not suit data reduction in databases, where the population data is assumed to be known. Other methods are adaptive sampling [1], adaptive sampling with a Genetic Algorithm [10], and Discernibility [9]. Moreover, approaches such as Wavelets [11] and Clustering [12] have been used for reducing the number of instances. The adaptive sampling approach, which employs a chi-square criterion, is in our view simple and adaptive in nature. It segments data into categories to ease computation, but it is intractable with very large and high-dimensional data. The approaches of [1] and [10] are only based on dimensionality reduction.

A new WPAN Model for NS-3 simulator


The comparison of existing propagation models is out of the scope of this paper. However, our WPAN model was successfully simulated and tested with all of them. The aim of this scenario is to verify data transmission through the WPAN physical channel. In this scenario, we set up two nodes: a sender, which is a fixed node that continuously sends 50-byte UDP packets every 100 ms (4 kbps), and a destination mobile node moving away from the first node's position at a constant speed (0.1 m/s). The MAC sub-layer operates in beaconless mode in order to avoid any interruptions due to the association time, beacon transmissions or sleep time. We use the 2.4 GHz PHY with a receiver sensitivity of -85 dBm and a transmit power of 1 mW (0 dBm). Figure 9 shows the evolution of the received signal strength for different distances. The measured RSS continues to decrease as the receiver moves away from the sender. The receiver continues to receive the packets until nearly 82 meters. Beyond this distance, the RSS drops below -85 dBm and the signal is considered undetectable. This distance represents the transmission range, which can vary depending on different factors such as the transmit power and the receiver sensitivity.
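For intuition about how RSS decays with distance, here is a sketch using the free-space (Friis) model, one of several propagation models available in simulators such as NS-3. This is not the model behind Figure 9: the paper's measured 82 m range depends on the propagation model actually configured, and free space gives an optimistic (longer) range.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def friis_rss_dbm(tx_dbm, freq_hz, dist_m):
    """Received signal strength under free-space (Friis) path loss."""
    wavelength = C / freq_hz
    path_loss_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)
    return tx_dbm - path_loss_db

def max_range_m(tx_dbm, freq_hz, sensitivity_dbm):
    """Distance at which the RSS falls to the receiver sensitivity."""
    wavelength = C / freq_hz
    margin_db = tx_dbm - sensitivity_dbm
    return wavelength / (4 * math.pi) * 10 ** (margin_db / 20)

# 0 dBm transmitter, -85 dBm sensitivity, 2.4 GHz PHY (as in the scenario)
print(friis_rss_dbm(0, 2.4e9, 10))     # RSS at 10 m, in dBm
print(max_range_m(0, 2.4e9, -85))      # free-space range, in m
```

Because `max_range_m` simply inverts the path-loss formula, swapping in a different propagation model (e.g., log-distance with a higher exponent) only changes the loss expression, not the structure of the computation.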

Fitting experimental dispersion data with a simulated annealing method for nano-optics applications


ω² − ω_l² + iωγ_l. This method shows good agreement with experimental data for noble metals, and has been widely implemented, often with four or more Lorentz poles [HN07]. However, this model fails to properly fit materials such as transition or post-transition metals, because of the electronic correlation existing in such materials, which introduces a retardation effect [WROB13]. During the last decade, advanced dispersion models were designed to tackle dispersive materials more efficiently. Among them, the Critical Points (CP) model [VLDC11] and the Complex-Conjugate Pole-Residue Pairs (CCPRP) model [HDF06] have shown great improvements over the standard Drude-Lorentz model in fitting metals and compounds on broad frequency ranges with a limited number of poles. As briefly sketched in [VLDC11], these two models are mathematically equivalent at the continuous level. While both of these models use coupled first-order poles with complex coefficients, in [Viq15] we introduced a generalized model exploiting real coefficients only:
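Assuming the usual multi-pole Lorentz form with the denominator quoted above, the permittivity can be evaluated as follows. The pole values here are hypothetical, chosen only to make the sketch runnable; they do not correspond to any fitted material from the paper.

```python
import numpy as np

def lorentz_permittivity(omega, eps_inf, poles):
    """Relative permittivity of a multi-pole Lorentz model.

    poles: list of (delta_eps, omega_l, gamma_l) tuples, with each
    pole's denominator written as omega^2 - omega_l^2 + i*omega*gamma_l,
    matching the fragment quoted in the text.
    """
    omega = np.asarray(omega, dtype=complex)
    eps = np.full_like(omega, eps_inf)
    for delta_eps, omega_l, gamma_l in poles:
        eps -= delta_eps * omega_l**2 / (omega**2 - omega_l**2
                                         + 1j * omega * gamma_l)
    return eps

# Hypothetical pole parameters (rad/s), for illustration only.
poles = [(5.0, 6.0e15, 1.0e14), (2.0, 9.0e15, 2.0e14)]
w = np.linspace(1e15, 1.2e16, 500)
eps = lorentz_permittivity(w, eps_inf=1.0, poles=poles)
```

A fitting procedure (simulated annealing in the paper's case) would adjust `eps_inf` and the `(delta_eps, omega_l, gamma_l)` triples so that `eps` matches tabulated experimental dispersion data.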

A probabilistic model for data cube compression and query approximation


Table 2: Features of the three data cubes. "Density" is the ratio of non-zero cells.

The first data set was extracted from the Food mart data delivered with Analysis Services of Microsoft SQL Server. From the Customer table (10281 records) we constructed a data cube of five dimensions: Status, Income, Child, Occupation and Education. Status ∈ {1, 2} indicates whether the customer is single (value equal to 1) or not. Income takes eight possible values indicating the level of income (e.g., 1 for income between 10K and 30K, and 8 for income ≥ 150K). Child ∈ {0, 1, 2, 3, 4, 5} represents the number of children. Occupation takes five possible values indicating the customer's occupation (e.g., 1 for a manual worker and 5 for a manager). Education refers to the customer's education level and can take five possible values (e.g., 1 for partial high school studies and 5 for graduate studies). The other data set was also extracted from the Food mart data and concerns a data cube of Sales according to product category (44 values), time (in quarters) and country (USA, Canada and Mexico). This cube has a relatively small set of dimensions, but one of them has a large number of modalities (members).

A Hybrid Approach for Mining Metabolomic Data


Accordingly, we propose a knowledge discovery process based on a combination of numeric-symbolic techniques used for different purposes: noise filtering to avoid overfitting, which occurs when the analysis describes random error or noise instead of the underlying relationships; feature selection for reducing the dimension; and checking the relevance of the selected features w.r.t. prediction. FCA [3] is then applied to the resulting reduced dataset for visualization and interpretation purposes. More precisely, this hybrid data mining process combines FCA with several numerical classifiers, including Random Forest (RF) [1], Support Vector Machine (SVM) [8], and Analysis of Variance (ANOVA) [2]. RF, SVM and ANOVA are used to discover discriminant biological patterns, which are then organized and visualized thanks to FCA.

Data fitting with geometric-programming-compatible softmax functions


1.1 Overview of geometric programming

First introduced in 1967 by Duffin, Peterson, and Zener [10], a geometric program (GP) is a specific type of constrained, nonlinear optimization problem that becomes convex after a logarithmic change of variables. Despite significant work on early applications in structural design, network flow, and optimal control [3, 25], reliable and efficient numerical methods for solving GPs were not available until the 1990s [21]. GP has recently undergone a resurgence as researchers have discovered promising applications in statistics [5], digital circuit design [6], antenna optimization [2], communication systems [7], aircraft design [13], and other engineering fields.
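The logarithmic change of variables can be illustrated on a toy GP (a sketch, not the paper's formulation): minimize x + y subject to xy ≥ 1, x, y > 0. Substituting x = e^u, y = e^v turns the log of the posynomial objective into the convex log-sum-exp function log(e^u + e^v), and the monomial constraint xy ≥ 1 into the linear inequality u + v ≥ 0, so any generic convex solver applies.

```python
import numpy as np
from scipy.optimize import minimize

# GP: minimize x + y  subject to  x*y >= 1,  x, y > 0.
# In log variables u = log x, v = log y, the problem is convex.

def log_objective(z):
    # log(e^u + e^v): log-sum-exp of affine functions, hence convex.
    return np.logaddexp(z[0], z[1])

res = minimize(log_objective, x0=[1.0, -1.0],
               constraints=[{"type": "ineq",
                             "fun": lambda z: z[0] + z[1]}])  # u + v >= 0
x, y = np.exp(res.x)
print(x, y)  # both should be close to the optimum x = y = 1
```

The known optimum is x = y = 1 with objective value 2 (by symmetry and the active constraint), which gives an easy sanity check on the transformed solve.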

A model free hybrid algorithm for real time tracking


1. INTRODUCTION

The elaboration of real-time object tracking algorithms for image sequences is an important issue for numerous applications related to computer vision, robotics, etc. For the time being, most of the available tracking techniques that do not require a 3D model can be divided into two main classes: edge-based and texture-based tracking. Edge-based tracking relies on the high spatial gradients outlining the contour of the object or some geometrical features of its pattern (points, lines, circles, distances, splines, ...). When 2D tracking is considered, such edge points make it possible to define the parameters of some geometrical features (such as lines, splines, ...), and the position of the object is defined by the parameters of these features [6]. Snakes, or active contours, are also based on high gradients and can be used to outline a complex shape [3]. These edge-based techniques have proved to be very effective for applications that require a fast tracking approach. On the other hand, they have also often proved to fail in the presence of highly textured environments. The previous approaches rely mainly on the analysis of intensity gradients in the images. When the scene is too complex, other approaches are required. Another possibility is to directly consider the image intensity and to perform 2D matching on a part of the image, without any feature extraction, by minimizing a given correlation criterion: we then refer to template-based tracking or motion estimation (according to the problem formulation). It is possible to solve the problem using efficient minimization techniques that are able to consider quite complex 2D transformations [2, 5, 9]. Let it be noted that these methods are closely related to classi-

A Data-Driven Regularization Model for Stereo and Flow


Figure 4: Visual comparison on examples from the KITTI stereo and flow datasets; error rates (baseline, ours): (a) (16.8%, 10.7%), (b) (12.2%, 5.8%), (c) (10.0%, 7.1%), (d) (19.6%, 11.3%). We initialize our model with [29] (ranked 3rd) for stereo and [26] (ranked 4th) for optical flow. For each example, we show (top row) its left-camera image, (middle row) the color-coded disparity map and (bottom row) the error map scaled between 0 (black) and 5 (white). We mark regions with big improvements with cyan rectangles. Note that those regions often correspond to semantic objects like cars, buildings, trees, and roads, where our data-driven regularizer can use the good matches from training examples to make better guesses.

A differential evolution-based approach for fitting a nonlinear biophysical model to fMRI BOLD data


Abstract—Physiological and biophysical models have been proposed to link neuronal activity to the Blood Oxygen Level-Dependent (BOLD) signal in functional MRI (fMRI). Those models rely on a set of parameter values that cannot always be extracted from the literature. In some applications, interesting insight into the brain physiology or physiopathology can be gained from an estimation of the model parameters from measured BOLD signals. This estimation is challenging because there are more than 10 potentially interesting parameters involved in nonlinear equations whose interactions may result in identifiability issues. However, the availability of statistical prior knowledge about these parameters can greatly simplify the estimation task. In this work we focus on the extended Balloon model and propose the estimation of 15 parameters using two stochastic approaches: an Evolutionary Computation global search method called Differential Evolution (DE) and a Markov Chain Monte Carlo version of DE. To combine both the ability to escape local optima and to incorporate prior knowledge, we derive the target function from Bayesian modeling. The general behavior of these algorithms is analyzed and compared with the de facto standard Expectation Maximization Gauss-Newton (EM/GN) approach, providing very promising results on challenging real and synthetic fMRI data sets involving rats with epileptic activity. These stochastic optimizers provided a better performance than EM/GN in terms of distance to the ground truth in 3 out of 6 synthetic data sets, and a better signal fit in 11 out of 12 real data sets. Non-parametric statistical tests showed the existence of statistically significant differences between the real-data results obtained by DE and EM/GN. Finally, the estimates obtained from DE for these parameters seem both more realistic and at least as stable across sessions as the estimates from EM/GN.
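A minimal DE/rand/1/bin loop, the kind of global search the abstract builds on, can be sketched as follows. This is a generic illustration on a toy quadratic cost, not the paper's Balloon-model estimator or its Bayesian target function.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct other population members.
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with the current individual.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True  # keep at least one mutant gene
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection: keep whichever has the lower cost.
            trial_cost = f(trial)
            if trial_cost <= cost[i]:
                pop[i], cost[i] = trial, trial_cost
    best = int(np.argmin(cost))
    return pop[best], cost[best]

# Recover "model parameters" with a known optimum at (1, -2).
x_best, c_best = differential_evolution(
    lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
    bounds=[(-5, 5), (-5, 5)])
```

In a parameter-estimation setting, `f` would be the (negative log) posterior of the biophysical model given the measured BOLD signal, and `bounds` would encode the physiological priors.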

LITpro: a model fitting software for optical interferometry


The solution came from applying a photometric constraint. A synthetic K magnitude was determined for each (T⋆, T_L) in the χ² map, and the intersection of the temperature trough (see Fig. 2) with the photometry-compliant area yielded the best set of temperatures and their uncertainties. The values of the parameters for this solution are summarized in Table 1. A synthetic spectrum computed from the parameters fitted on the interferometric data was successfully compared to photometric data established from the measurements of Whitelock et al. [12] in the J, H, K and L bands, after the K band was used to determine the temperatures. The same comparison is shown in Fig. 3. The synthetic spectrum (continuous line) is computed from the parameters fitted to the interferometric data when assuming a layer optical depth of 1 at any wavelength. This is not the place to discuss the validity of this gross simplification, nor to justify the presented modeling from an astrophysical point of view; our aim is only to evaluate the fitting procedures themselves. On the same graph, we have also plotted (squares) the flux computed with the actual values of the optical depths fitted in the 4 K sub-bands.
