High and low fidelity models

When High Fidelity Matters: AR and VR Improve the Learning of a 3D Object

…this is likely to improve object manipulation performance [20, 23]. Few studies to date have managed to show a benefit of AR in a teaching environment. Implementing a robust and convincing AR system (e.g. accurate, high resolution, high frame rate, and low latency) is still a challenge compared to VR setups, which could explain this scarcity of studies. In 2011, Chen et al. tested the use of tangible models and AR models in an engineering graphics course [8]. While students enthusiastically welcomed the use of AR models, Chen et al. observed only a slight improvement in students' ability to map 3D objects to 2D images. The use of a tangible model, on the other hand, significantly increased their performance. However, the AR setup did not allow all rotations of the model, and the authors reported that it was sometimes used inappropriately. The use of a more natural AR device, i.e. one offering control over the virtual model closer to the way we manipulate physical objects [4], may have yielded performance more in line with that of the tangible model. Shin et al. studied how an observer perceives a complex 3D object in an AR scene when changing viewpoints either through observer movement or through object rotation [31]. They showed that moving around the object was more effective for perceiving it than rotating the object itself, highlighting the benefits of head coupling for interacting with 3D scenes.

Egoshots, an ego-vision life-logging dataset and semantic fidelity metric to evaluate diversity in image captioning models

1 Introduction. Humans have a great ability to comprehend any new scene captured by their eyes. With the recent advancement of deep learning, the same ability has been shared with machines. This ability to describe any image in the form of a sentence, known as image captioning, has been at the forefront of research in both computer vision and natural language processing (Vinyals et al., 2014; Karpathy & Fei-Fei, 2014; Venugopalan et al., 2016; Selvaraju et al., 2019; Vedantam et al., 2017). The combination of Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) has played a major role in achieving close to human-like performance, where the former maps the high-dimensional image to efficient low-dimensional features, and the latter uses these low-dimensional features to generate captions. However, most image captioning models have been trained on MSCOCO (Lin et al., 2014) or Pascal-VOC (Everingham et al.), which consist of 80 and 20 object classes respectively. All the images are captioned taking into consideration only these classes. Thus, even though current models have been successful in generating grammatically correct sentences, they still give a poor interpretation of a scene because of the lack of knowledge about the many other kinds of objects present in the world beyond those seen in the dataset. The Egoshots dataset has a wide variety of images ranging from indoor to outdoor scenes and takes into consideration diverse situations encountered in real life which are hardly found in…
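To make the encoder–decoder split described above concrete, here is a minimal sketch of a CNN-encoder / RNN-decoder captioner in PyTorch. It is purely illustrative and not the architecture of any work cited in the excerpt; the ResNet-18 backbone, embedding and hidden sizes, and the trick of prepending the image feature as the first token are all assumptions.

```python
# Minimal, illustrative CNN-encoder / RNN-decoder captioning sketch (assumptions:
# ResNet-18 backbone, 256-d embeddings, 512-d LSTM; not any cited model's setup).
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)                    # CNN: image -> features
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])
        self.project = nn.Linear(512, embed_dim)               # 512 = ResNet-18 feature size
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # RNN: features -> words
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)                # (B, 512) low-dimensional features
        feats = self.project(feats).unsqueeze(1)               # (B, 1, E)
        words = self.embed(captions)                           # (B, T, E)
        seq = torch.cat([feats, words], dim=1)                 # image feature as first "token"
        hidden, _ = self.rnn(seq)
        return self.out(hidden)                                # (B, T+1, vocab) word logits
```

At inference time the decoder would instead be unrolled one word at a time (greedily or with beam search), starting from the projected image feature.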

Optimal airfoil's shapes by high fidelity CFD

Available numerical methods. Modelling the laminar-to-turbulent transition is one of the key challenges of CFD and, as shown in the previous section, it is of paramount importance to correctly predict the aerodynamic forces at transitional Re. Among the methods that have been used for modelling transition, from the least computationally expensive to those that resolve more physics, are linear stability theory, low-Reynolds-number turbulence models, local correlation based transition models (LCTMs), large eddy simulations (LESs), detached eddy simulations (DESs) and direct numerical simulations (DNSs). A critical comparison between these methods is available in, for instance, Pasquale et al., 2009. The methods based on linear stability theory, such as the e^N method (Smith, 1956; Mack, 1977; Ingen, 2008), are incompatible with large free-stream turbulence levels and cannot predict bypass transition. Low-Reynolds-number turbulence models rely on the wall-induced damping of turbulent viscosity and are unable to predict the growth of natural instabilities along streamlines. On the other hand, LES (Sagaut and Deck, 2009), DES (Squires, 2004; Spalart, 2009) and, even more so, DNS (Moin and Mahesh, 1998; Wu and Moin, 2009) can resolve transition mechanisms, but their computational costs (Celik, 2003; Sagaut and Deck, 2009) are currently incompatible with optimization algorithms that require the evaluation of a large number of candidates.
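For reference, the e^N criterion mentioned above can be summarized as follows (a standard textbook formulation, not taken from this excerpt): the linear amplification of the most unstable disturbance is integrated downstream of its neutral point, and transition is predicted where that amplification factor reaches a critical value,

$$N(x) = \max_{f}\int_{x_0(f)}^{x} -\alpha_i(\xi; f)\, d\xi, \qquad \text{transition where } N(x) \ge N_{\mathrm{crit}} \approx -8.43 - 2.4\,\ln(Tu),$$

where $\alpha_i$ is the local spatial growth rate at frequency $f$ and $x_0(f)$ the corresponding neutral point. Because the usual threshold correlation (Mack) ties $N_{\mathrm{crit}}$ to the free-stream turbulence intensity $Tu$ and the method only tracks modal growth, it degrades at large $Tu$ and cannot represent bypass transition, as noted above.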

Low-Order Modeling and High-Fidelity Simulations for the Prediction of Combustion Instabilities in Liquid Rocket Engines and Gas Turbines

To conclude, the example considered in this section demonstrates the modularity of the proposed LOM, which can combine in the same thermoacoustic network active flames, one-dimensional subdomains (the burners), 2D subdomains (the plenum), and complex 3D subdomains of arbitrary shape (the chamber) for which the modal basis is numerically computed. Obviously, the approach is not limited to azimuthal eigenmodes, but is also able to capture any other form of thermoacoustic eigenmode. It is also worth comparing the cost associated with the over-complete frame expansion LOM to that of existing low-order models. As shown above, N = 10 modes were sufficient to achieve a satisfactory resolution (with error below 10%) of the first 20 modes of the combustors (not all shown in Tab. 3.1). The state-space of the whole system comprises 248 DoF: 2 × 24 for the plenum and the chamber, 4 × 20 for the straight ducts, 8 × 3 for the cross-section changes, and 4 × 24 for the active flames. After the preliminary computation of the chamber modal basis with AVSP (160 CPU seconds for 12 modes), the LOM computation of all the eigenmodes was performed in only a few CPU seconds. This is comparable to the 300 DoF necessary to treat a similar annular configuration in the work of Schuermans et al. [82, 122]. However, unlike that method, the present example did not assume acoustically compact injectors represented as lumped elements, and the acoustic field is fully resolved within the burners. LOMs relying on direct discretization of the flow domain, although more straightforward to put into application, result in more DoF and higher cost: Emmert et al. [84] required 10^5 DoF and 38 seconds…
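As a quick check, the degree-of-freedom tally quoted above does sum to the stated total:

$$2 \times 24 + 4 \times 20 + 8 \times 3 + 4 \times 24 = 48 + 80 + 24 + 96 = 248 \ \text{DoF}.$$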

Distinct domain switching in Nd0.05Ce0.95CoIn5 at low and high fields

Scenario A yields an equal domain population for ψ = 0° and a maximal difference for ψ = 45°. This is consistent with the neutron diffraction results at μ₀H = 2 T, where two populated domains are found that feature a comparable intensity for H ∥ [0 1 0] and an 80% suppression of Q2 for H ∥ [1 1 0]. Our data in the SDW phase can thus be explained by spin-orbit couplings mediating a field-induced repopulation of the Q-domains. The much sharper switching at high fields, however, is not consistent with scenario A and suggests qualitatively different behaviour more consistent with scenario B. This means that the Landau theory by Kim et al. [24] can describe the SDW and Q-phase separately, but not both phases simultaneously, and different phenomenological parameters would be needed for the low- and high-field phases. This points towards a qualitative change of the electronic structure at H*.

Gate and drain low frequency noise of AlGaN/GaN HEMTs featuring high and low gate leakage currents

II. DEVICES UNDER TEST AND EXPERIMENTAL CONDITIONS. A. Devices under test. The AlGaN/GaN HEMTs under test are fabricated at United Monolithic Semiconductors (UMS) [13]. They are grown on a SiC substrate and feature an Al content of 18%. The surface is SiN passivated and the Schottky contact is formed by deposition of Ni/Pt/Au transition metals. The devices feature four gate fingers (4 × 400 µm × 0.5 µm), as shown in Fig. 1. The set selected from the same wafer is composed of devices presenting high gate leakage currents (T1 with an I_G of…

Synchrosqueezing transforms: From low- to high-frequency modulations and perspectives

Many works in the past decades have tackled this limitation by using, for instance, quadratic TFRs, e.g., Wigner–Ville distributions [3], which are not constrained by the uncertainty principle but exhibit strong interference hampering the representation and are not invertible. Another attempt, called the reassignment method (RM), dating back to the work by Kodera et al. [4] in the 1970s and further developed in [5], essentially proposed a means of improving the TFR readability. Unfortunately, the reassigned representation was also no longer invertible. This in particular means that, when applied to the TFR of a multicomponent signal (MCS) made of AM/FM modes, RM does not allow for an easy retrieval of the components of the MCS. Other works, like the Empirical Mode Decomposition (EMD) [6], focused precisely on this latter aspect: EMD is a simple algorithm that adaptively decomposes an MCS into modulated AM/FM waves. While it has proved interesting in many practical applications, it lacks mathematical foundations and behaves like a filter bank, resulting in mode mixing [7]. To partially overcome these drawbacks, recent works have tried to mimic EMD within a more stable framework, based either on wavelet transforms [8] or on convex optimization [9, 10].

Analysis and comparisons of various models in cold spray simulations: towards high fidelity simulations

The U shape in the convergent is clearly visible in Fig. 7. Near the throat, the turbulent kinetic energy starts to increase as the flow becomes supersonic. Then, in the diverging part, it is approximately constant, indicating that there are no additional dissipative phenomena such as shocks. For each configuration, after the diverging part, there is a sharp increase of turbulent kinetic energy. According to Fig. 5, these sharp increases occur at the location of the first oblique shock. Continuing along the axis of the nozzle, the fluctuations of the turbulent kinetic energy occur exactly where there is a shock. Finally, near the substrate, there is a strong increase of the turbulent kinetic energy caused by the bow shock at the impact. Considering the values of k, this bow shock has an important effect on dissipation and on turbulence. It therefore has a non-negligible influence on the particle impact velocity, because the particles are slowed down by this discontinuity. This confirms the conclusion drawn in the literature that lighter particles are more easily slowed down than heavier particles with larger inertia. However, for the same reason, these heavier particles gain less speed during their acceleration. An optimum must be found to reach the highest impact velocity for a given configuration. Comparing configurations A to C, configuration B shows more fluctuations along the flow, which can be problematic for the nozzle in use and the machinery nearby. We would therefore prefer configuration C, whose variations of turbulent kinetic energy are less steep, to avoid damaging the material in use.
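To illustrate the inertia argument made above, here is a rough sketch (not the paper's simulation) that integrates a simple drag law along the nozzle axis for two particle sizes; the gas velocity profile, densities, drag coefficient and injection velocity are all made-up assumptions. Smaller particles couple more strongly to the gas, so they accelerate faster in the nozzle but also lose more speed across the bow shock near the substrate.

```python
# Rough illustration of particle-gas velocity coupling in a cold-spray-like nozzle.
# All numbers (gas profile, Cd, densities, injection velocity) are assumptions.
import math

rho_g = 1.0        # gas density [kg/m^3], taken constant for simplicity
Cd = 0.5           # constant drag coefficient (assumption)
rho_p = 8900.0     # particle density [kg/m^3], copper-like
L = 0.1            # axial distance from throat to substrate [m]

def gas_velocity(x):
    """Toy axial gas profile with a sharp drop near the substrate (the 'bow shock')."""
    u = 300.0 + 500.0 * x / L
    return 0.3 * u if x > 0.95 * L else u

def impact_velocity(d_p, steps=20000):
    """March dv/dx = a/v with a = 0.5*rho_g*Cd*A*(u - v)*|u - v|/m from throat to substrate."""
    m = rho_p * math.pi * d_p ** 3 / 6.0     # particle mass
    A = math.pi * d_p ** 2 / 4.0             # frontal area
    x, v, dx = 0.0, 50.0, L / steps          # 50 m/s injection velocity (assumption)
    for _ in range(steps):
        u = gas_velocity(x)
        a = 0.5 * rho_g * Cd * A * (u - v) * abs(u - v) / m
        v += a / v * dx
        x += dx
    return v

for d in (5e-6, 50e-6):                      # 5 um vs 50 um particles
    print(f"d = {d * 1e6:.0f} um: impact velocity ~ {impact_velocity(d):.0f} m/s")
```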

Achieving high CPU efficiency and low tail latency in datacenters

Shenango's fast core reallocations enable it to match the tail latency of state-of-the-art kernel-bypass network stacks while linearly trading throughput for latency-sens…

Estimation of structured tensor models and recovery of low-rank tensors

This relatively recent surge of interest in tensor methods is mainly explained by their ability to exploit additional problem structure in comparison with more traditional matrix-based ones. The estimation of excitation/emission spectra from fluorescence data in chemometrics by means of high-order tensor decomposition techniques [25, 24] is a stereotypical example of such superiority, which stems in this case from the uniqueness of these quantities under much milder conditions than are needed when matrix decompositions are used instead. More generally, the same idea applies to inverse problems whose sought quantities constitute multilinear models, such as the estimation of directions of arrival in antenna array processing [174]. Tensor models are equally useful in many other problems because they often provide accurate and parsimonious representations of real-world multidimensional data, a fact that can be exploited to develop efficient storage, computation and estimation techniques. This line of thought is followed, for instance, in data compression [56, 7], nonlinear system modeling [79, 26] and low-rank tensor recovery [43, 167].
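As a concrete, hedged illustration of the kind of low-rank decomposition invoked above, the snippet below builds a synthetic third-order tensor as a sum of rank-one terms (loosely mimicking excitation × emission × sample fluorescence data) and refactors it with CP/PARAFAC using the TensorLy library. The rank, array sizes and random factors are assumptions, not the pipeline of the cited works.

```python
# Hedged illustration of a CP/PARAFAC decomposition on synthetic data (TensorLy).
# Sizes, rank and the random factors are assumptions, not data from the cited works.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
rank = 3
# Synthetic rank-3 tensor: sum of outer products a_r o b_r o c_r.
A = rng.random((40, rank))   # e.g. excitation spectra
B = rng.random((50, rank))   # e.g. emission spectra
C = rng.random((20, rank))   # e.g. concentration profiles
X = tl.tensor(np.einsum('ir,jr,kr->ijk', A, B, C))

cp = parafac(X, rank=rank)               # recover a rank-3 CP model
X_hat = tl.cp_to_tensor(cp)              # rebuild the tensor from the factors
print("relative reconstruction error:", float(tl.norm(X - X_hat) / tl.norm(X)))
```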

Low Delay Transform for High Quality Low Delay Audio Coding

Chapter 1: Introduction. In recent years there has been a phenomenal increase in the number of products and applications making use of audio coding formats. Among the most successful audio coding schemes, MPEG-1 Layer III [ISO 92][Brandenburg 99], MPEG-2 Advanced Audio Coding (AAC) [ISO 09][Grill 99] and its evolution MPEG-4 High Efficiency-Advanced Audio Coding (HE-AAC and HE-AACv2) [ISO 09][Dietz 02] can be listed. These codecs are based on the perceptual audio coding paradigm. Usually, perceptual audio codecs find their applications in broadcasting services, streaming or storage. Indeed, historically few delay constraints were imposed on those audio coding standards and they are consequently not suitable for conversational applications. As opposed to broadcast applications, communication services are usually based on speech coding formats such as Algebraic Code Excited Linear Prediction (ACELP). The ACELP coding scheme [Schroeder 85][Adoul 87] is used in the most widely deployed communication codecs such as AMR [3GPP 99], 3GPP AMR-WB [3GPP 02] or ITU-T G.729 [ITU-T G.729 96]. This coding algorithm is based on the source-filter model of speech production and provides good quality for speech signals with a limited delay, which makes it compatible with conversational applications.

Learning low-dimensional models of microscopes

Valentin Debarnot, Paul Escande, Thomas Mangeat, Pierre Weiss. Abstract—We propose accurate and computationally efficient procedures to calibrate fluorescence microscopes from micro-bead images. The designed algorithms present many original features. First, they make it possible to estimate space-varying blurs, which is a critical feature for large fields of view. Second, we propose a novel approach to calibration: instead of describing an optical system through a single operator, we suggest varying the imaging conditions (temperature, focus, active elements) to get indirect observations of its different states. Our algorithms then allow the microscope responses to be represented as a low-dimensional convex set of operators. This approach is deemed an essential step towards the effective resolution of blind inverse problems. We illustrate the potential of the methodology by designing a procedure for blind image deblurring of point sources and show a massive improvement compared to alternative deblurring approaches on both synthetic and real data.
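For intuition only (this is not the authors' calibration algorithm), the sketch below shows one elementary way to represent a family of blur kernels in a low-dimensional basis: a set of synthetic Gaussian PSFs, standing in for bead images acquired under different microscope states, is compressed by PCA/SVD so that each state is described by a handful of coefficients. The Gaussian PSF family, the image size and the number of components are assumptions.

```python
# Toy sketch (not the authors' method): low-dimensional PCA basis for a family of PSFs.
import numpy as np

def gaussian_psf(size, sigma_x, sigma_y):
    """Normalized anisotropic Gaussian kernel, standing in for a measured bead image."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2] + 0.5
    psf = np.exp(-(x ** 2 / (2 * sigma_x ** 2) + y ** 2 / (2 * sigma_y ** 2)))
    return psf / psf.sum()

# "Measurements": PSFs for different (focus, temperature, ...) states of the microscope.
rng = np.random.default_rng(1)
psfs = np.stack([gaussian_psf(32, sx, sy).ravel()
                 for sx, sy in rng.uniform(1.0, 3.0, size=(200, 2))])

# Keep the first k principal components as the low-dimensional representation.
mean = psfs.mean(axis=0)
U, S, Vt = np.linalg.svd(psfs - mean, full_matrices=False)
k = 5
basis = Vt[:k]                        # k basis kernels
coeffs = (psfs - mean) @ basis.T      # coordinates of each measured PSF in the basis
recon = coeffs @ basis + mean
err = np.linalg.norm(recon - psfs, axis=1) / np.linalg.norm(psfs, axis=1)
print(f"max relative PSF error with {k} components: {err.max():.3e}")
```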

High-fidelity copying is not necessarily the key to cumulative cultural evolution: a study in monkeys and children

Furthermore, transmission chain studies in humans have shown that fundamental properties of CCE can be reproduced with social learning mechanisms that exist in non-human animals, suggesting that CCE is not dependent on special cognitive capacities unique to humans [24–26]. Claidière et al. [26], for instance, performed a transmission chain study in which baboons observed and reproduced visual patterns on touchscreen computers. The baboons were organized into chains of transmission, where each baboon was provided with the patterns produced by the previous individual in their chain. As in some human transmission chain experiments ([27] for instance), the baboons had no visual access to the behaviour of other individuals, only to the products of those behaviours. With this procedure, transmission led to the emergence of cumulative culture, as indicated by three fundamental aspects of human cultural evolution: (i) a progressive increase in performance, (ii) the emergence of systematic structure and (iii) the presence of lineage specificity [26]. Surprisingly, these results were achieved with an extremely low fidelity of pattern reproduction during the first generation of transmission (only 37% of the patterns were reproduced without errors). This initially low level of fidelity did not prevent the accumulation of modifications, and they observed a sharp increase in fidelity as patterns were passed on from generation to generation (reaching 72% in the 12th generation). Similar results have been found in transmission experiments with human participants, for example where the transmission of miniature languages results in the emergence of languages which can be easily learned, even if the initial languages in each chain of transmission are transmitted only with very low fidelity (e.g. [28,29]). Together, these results suggest that high-fidelity transmission may not always be the cause of cumulative culture and may in fact itself be a product of CCE. Individuals may transform input variants in accordance with their prior biases, and if those biases are shared at the population level, we expect transformations in the same direction to accumulate at each transmission step. This could thus lead to the evolution of variants which are more faithfully transmitted because they match the prior biases more and more closely over generations, giving a misleading impression of high-fidelity transmission.

Asteroid 21 Lutetia: Low Mass, High Density

Citation: Pätzold, M., T. P. Andert, S. W. Asmar, J. D. Anderson, J.-P. Barriot, M. K. Bird, B. Häusler, et al. "Asteroid 21 Lutetia: Low Mass, High Density." Science 334, no. 6055 (October 27, 2011): 491–492.

Computational and statistical challenges in high dimensional statistical models

Towards understanding computational-statistical gaps, and specifically identifying the fundamentally hard region for various inference problems, a couple of approaches have been considered. One approach seeks to identify the algorithmic limit "from above", in the sense of identifying the fundamental limits in the statistical performance of various families of known computationally efficient algorithms. Some of the families that have been analyzed are (1) the Sum of Squares (SOS) hierarchy, a family of convex relaxation methods [Par00], [Las01]; (2) the family of local algorithms inspired by Belief Propagation, with the celebrated example of Approximate Message Passing [DMM09], [DJM13]; (3) the family of statistical query algorithms [Kea98]; and (4) several Markov Chain Monte Carlo algorithms such as Metropolis–Hastings and Glauber dynamics [LPW06]. Another approach offers an average-case complexity-theory point of view [BR13], [CLR17], [WBP16], [BBH18]. In this line of work, the hard regimes of the various inference problems are linked by showing that solving certain high-dimensional statistical problems in their hard regime reduces in polynomial time to solving other high-dimensional statistical problems in their own hard regime.

Developing high-frequency equities trading models

In particular, we want to build a predictive model to estimate future returns on US equities in a high-frequency environment, with approximate holding periods on t…

Charge Pumps for Implantable Microstimulators in Low and High-Voltage Technologies

…the LGN. These layers retransmit the signals received from layer 4 to the surrounding areas for more advanced processing. On top of this layered arrangement, the primary visual cortex adds an organization in columns, as shown in Figure 1.5. First, layer 4C is divided evenly into ocular dominance columns, alternating bands about 0.5 mm wide. Second, the orientation columns spread in the direction orthogonal to that of the ocular dominance columns. In each of these cortical columns, the electrical activity depends on a fixed orientation of the light stimulation. It has been shown experimentally that 180° are covered over 1 mm on average. Finally, the last column elements are the blobs. These cylindrical pillars collect and process colour information. Figure 1.5a schematizes a cortical module, that is, a 2 mm × 2 mm block containing two complete ocular dominance groups, two full 180° orientation sequences and 16 blobs. As a summary, Figure 1.5b represents a detailed V1 column on which the various layers and the signal inputs are identified. The magnocellular and parvocellular inputs come from layers 1 and 2, and layers 3 to 6, of the LGN, respectively. Different types of retinal ganglion cells project onto these layers.

Determinants of high, median and low rates of caesarean deliveries in Belgium

8. Organization of pre-caesarean section consultations. Finally, these recommendations were implemented among the trainees, hospital staff members and private practitioners with an obstetrical activity in an academic centre during the year 2010. The rate of Cs delivery decreased from 26.0 to 20.2%. The Cs rate associated with the MIC unit was not modified. The decrease resulted almost exclusively from a significant reduction in the number of caesarean deliveries performed in women presenting with a low-risk pregnancy, demonstrating the efficacy of such measures when collectively implemented.

Courtship behaviour at low and high water temperatures in the Alpine newt

Gecko setae interact with locomotor surfaces through van der Waals forces to produce an adhesive bond (Autumn et al., 2002). There is a direct relationship between setal geometry and force generation. Setal characteristics such as spatula diameter and number of tips apparently dictate adhesive capabilities. Estimates of maximal adhesive force have assumed that setae are identical throughout subdigital pads. There has been no examination of setal variation throughout the subdigital region of any gecko species, even though some aspects of interspecific variation are well documented. Here we investigate the form and distribution of subdigital Oberhäutchen elaborations throughout pedal digit IV of the tokay gecko (Gekko gecko) as an exemplar of structural variation in this taxon. This digit is subdivided into three zones according to morphology and function: the distal region encompassing scansors associated with the penultimate phalanx; the intermediate region that includes lamellae associated with the short intermediate phalanges; and the basal region including lamellae underlying the proximal phalanx. Differences in distribution, length of epidermal outgrowths, basal diameter and tip diameter are reported for each zone of the digit and are related to the gross morphology of the digit. Setal length decreases from distal to proximal along the length of the digit, as does basal diameter. For each individual lamella or scansor, setal length also decreases from distal to proximal, but branching pattern appears to remain constant. The distribution of elaborations, shape and dimensions of setal tips from the distal region of the digit differ greatly from those of elaborations on more proximal zones. We relate form and distribution of elaborations to their function in relation to the locomotor kinematics of Gekko gecko, and to the evolution of van der Waals-type interaction of setae from less elaborate structures.

Effects of low and high temperature plasma nitriding on electrochemical corrosion of steel

…steel. The nitrogen solubility in the crystal lattice seems to be higher when the nitriding temperature is raised to 550 °C for the same duration of 10 h. Scanning electron microscopic studies: To understand the effects of nitriding on the microstructure of the nitrided steel, a sample nitrided at 450 °C for 10 h was selected as a representative sample (Fig. 2). First the sample was cross-sectioned and mirror polished, and then etched with Villela's reagent. Following this, the sample was examined by SEM. The cross-sectional microstructure shows a very thin white layer, which is a good indication of mechanical integrity. It is expected that the thinner the white layer, the better the wear resistance. The nitrided layer/diffusion layer follows the thin white layer. The EDS analyses reveal that the concentration of nitrogen in the steel after nitriding at 550 °C for 10 h is higher than that of the steel nitrided at 450 °C for the same duration, as evidenced by Fig. 2(b and c). This means that nitrogen diffusion is greater when the nitriding temperature is raised to 550 °C than when nitriding is performed at 450 °C. The higher nitrogen content may affect the nitride concentration and/or the nitrogen supersaturation in the crystal lattice. This will have an effect on hardness and corrosion resistance. It is expected that the improvement in hardness and corrosion resistance is caused by the presence of iron nitrides and nitrogen supersaturation. As the focus of this study is plasma nitriding for the improvement of corrosion resistance, the following section concentrates on the assessment of the corrosion resistance of nitrided steels.
