[PDF] Top 20 Location models for visual place recognition

10000 documents matching "Location models for visual place recognition" were found on our website. The top 20 are listed below.

Location models for visual place recognition

... observation models, a number of methods have been investigated. For example, the work of [27] incorporates learned distributions of 3D distances between visual words into the generative model in ...

The time course of visual influences in letter recognition

... letters. For each electrode/time frame pair (e, tf), taken independently, amplitudes collected on the group were sorted, and the lowest 20% and the highest 20% of the distribution were ...trimmed. For ...

Resources and Methods for the Automatic Recognition of Place Names in Alsatian

... specifically location detection in historical data, Borin et ...86.4%. For the Arabic language, which also presents high variation, a knowledge- and rule-based approach is described in ...class for ...

Place Recognition via 3D Modeling for Personal Activity Lifelog Using Wearable Camera

... thresholding the distance between the camera and the predefined reference point delineates areas defined as: close (manipulation zone), intermediate (approaching) and far (seeing the place, but too far to do ...

Generating Unsupervised Models for Online Long-Term Daily Living Activity Recognition

... stands for True Positive, False Positive and False Negative, respectively ...]. To do this, we train the classifier on clipped videos and perform the testing using sliding ...appropriate for our ...

Semantic Event Fusion of Different Visual Modality Concepts for Activity Recognition

... low-level visual feature representations for actions [35] by aggregating other modalities commonly present in video recordings, such as audio and text [27], [29] ...that models the joint patterns ...

Probabilistic Place Recognition with Covisibility Maps

... room for improvement in terms of how locations are ...image location models has been addressed in the work of [2], [3], [11], ...[12]. Location models built using specific poses in ...

Refining visual activity recognition with semantic reasoning

... successful recognition rates, the robot seems to have difficulty recognizing the “remote controlling” ...third place, behind the “opening door” and “applauding” ...vision-based recognition in some ...

Spatiotemporal Dynamics of Morphological Processing in Visual Word Recognition

... takes place when there is a true semantic relationship between the complex word and its ...priming for semantically transparent pairs (farmer – FARM) is identical to that of semantically opaque or ...

Using Markov Logic Network for On-line Activity Recognition from Non-Visual Home Automation Sensors

... used for recognition. Indeed, information provided by the sensor for activity recognition is indirect (no worn sensors for localisation), heterogeneous (from numerical to categorical), ...

Late Fusion of Bayesian and Convolutional Models for Action Recognition

... Fig. 2: An action sequence from the CAD-120 [8] dataset: actor 1, video 2305260828, action microwaving-food. From left to right: reach, open, reach, move, place. In blue: human pose detected by OpenPose. In yellow: ...

Training and evaluation of the models for isolated character recognition

... the recognition property of the NN is better when we have “enough” samples, and the SVM results get higher when the number of samples is ...to place (to separate) the image object into one ...

Bayesian models for visual information retrieval

... First, it is based on a universal recognition language (the language of probabilities) that provides a computational basis for the integration of information from mult ...

Context-based Visual Feedback Recognition

... trained for each gesture class. Since HMMs are designed for segmented data, we trained each HMM with segmented subsequences where the frames of each subsequence all belong to the same gesture ...used ...

Semi-Supervised Learning for Location Recognition from Wearable Video

... fast location recognition from structure-from-motion point clouds [4] and efficiently finding loop closures in monocular SLAM ...approach for image representation relies on BoF visual ...

Dynamic reshaping of functional brain networks during visual object recognition

... module during the task). Integration and occurrence were greater for meaningless than for meaningful images. Our findings also revealed that the occurrence within the right frontal regions and the left ...

Comparison of Visual Registration Approaches of 3D Models for Orthodontics

... To validate the two previous steps, tests have been carried out on both virtual and real images. From a “perfect” virtual case, we evaluate the robustness against noise and the increase in performance using several ...

On the usage of visual saliency models for computer generated objects

... typical visual acuity at standardized viewing distance is around 60 ...Typical visual acuity at the standardized viewing distance of a common VR HMD device is around 15 pixels/degree for the HTC Vive and ...

Automaticity of phonological and semantic processing during visual word recognition

... the visual and auditory systems (Booth et ...corrected for multiple comparisons across the ROIs using the False Discovery Rate (FDR) method as proposed in the spm_ss toolbox (...the visual, ...

Water sound recognition based on physical models

... To cite this version: Guyot, Patrice and Pinquier, Julien and André-Obrecht, Régine. Water sound recognition based on physical models. Any correspondence concerning this service should be ...
