Rachid El Khoury, Jean-Philippe Vandeborre, Mohamed Daoudi Institut Mines-Télécom; Télécom Lille1; LIFL (UMR 8022 Lille1/CNRS), France
The bag-of-features technique is a popular approach in computer vision and pattern recognition. Recently, it has come to play an important role in the shape analysis community, especially in 3D-model retrieval. We present an approach for partial 3D-model retrieval that applies this technique to closed curves. We define an invariant scalar function on the surface based on the commute-time distance. Our mapping function respects important properties that allow robust closed curves to be computed. Each scale of the scalar function detects a small region, and the shape of these regions is encoded in the form of closed curves. We generate a collection of closed curves from an automatically detected source point. From the collection of all extracted closed curves, we construct our bag-of-features, which we then cluster to obtain an accurate categorization. The centres of the classes are defined as keyshapes. This method is particularly interesting in that it quantifies a 3D-model by its keyshapes, which are accumulated into a histogram. The results show the robustness of our method (BOF) compared to a method based on indexed closed curves (ICC) on various 3D-models with different poses. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation—Line and curve generation
Searching for images of the same object or scene in a large number of images has recently received increasing attention [9, 4, 11, 5]. The most popular approach today, initially proposed in , relies on a bag-of-features (BOF) representation of the image. The idea is to quantize local invariant descriptors, for example obtained by an affine invariant interest point detector  and a description with SIFT , into a set of visual words. The frequency vector of the visual words then represents the image, and an inverted file system is used for efficient comparison of such BOFs. Recent extensions of this method improve the quantization and its speed [9, 11], the post-processing based on a global spatial geometric verification , the matching distance of descriptors , as well as the efficiency and compactness of the representation [1, 2].
$P(C_k \mid B_n) = \frac{P(B_n \mid C_k)\,P(C_k)}{P(B_n)}$ (2)

where $P(C_k \mid B_n)$ is the probability of the category $C_k$ given $B_n$ (the BoF vector of an image $I_n$). $P(C_k)$ and $P(B_n)$ are respectively the prior probability of the class $C_k$ and the prior probability of obtaining the signature $B_n$ for an image. The probability $P(B_n)$ is the same for all classes, and therefore it can be ignored without affecting the relative values of the class probabilities. Finally, we take the largest a posteriori score as the class prediction. This prediction is made possible by a strong independence assumption, the naïve assumption: the visual words of the vocabulary are conditionally independent given the class. The reason why the NBN works well with the BoF approach is that this conditional independence assumption is quite reasonable: knowing that an image belongs to a category is sufficient to specify what kind of visual words we will find in that image. Moreover, the BoF approach uses high-dimensional attribute spaces in which it is very difficult to estimate the correlation between attributes. In practice, attributes are seldom independent given the class, but it has been verified that the NBN performs well even when strong attribute dependencies are present . The other important aspect that motivated our classifier choice is its ability to learn parameters from the different data types generated by different weighting schemes. In existing works, we have seen the use of txx  and binary (bxx)  weights for image classification. To compare the performance of the weighting schemes, we train two instances of the NBN: the first learns its parameters from data produced by applying bxx, while the second uses txx data. Further, we use a Gaussian NBN to learn from the fuzzy weights.
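As a minimal sketch of the prediction rule above, the following NumPy implementation trains a multinomial naïve Bayes on toy txx (count) BoF histograms and predicts the class with the largest a-posteriori score. The data, vocabulary size and Laplace smoothing constant are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Sketch of the NBN decision rule: P(C_k | B_n) ∝ P(C_k) * prod_w P(w | C_k)^count(w),
# with visual words assumed conditionally independent given the class.

def train_nbn(histograms, labels, n_classes, alpha=1.0):
    """Estimate log priors and per-class log word likelihoods from txx counts."""
    n_words = histograms.shape[1]
    log_prior = np.zeros(n_classes)
    log_like = np.zeros((n_classes, n_words))
    for c in range(n_classes):
        rows = histograms[labels == c]
        log_prior[c] = np.log(len(rows) / len(histograms))
        counts = rows.sum(axis=0) + alpha          # Laplace smoothing
        log_like[c] = np.log(counts / counts.sum())
    return log_prior, log_like

def predict_nbn(hist, log_prior, log_like):
    """Largest a-posteriori score; P(B_n) is dropped as it is class-independent."""
    scores = log_prior + log_like @ hist
    return int(np.argmax(scores))

# Toy BoF vectors over a 4-word visual vocabulary (hypothetical data).
X = np.array([[5, 1, 0, 0], [4, 2, 0, 1], [0, 0, 6, 2], [1, 0, 5, 3]], float)
y = np.array([0, 0, 1, 1])
prior, like = train_nbn(X, y, n_classes=2)
print(predict_nbn(np.array([3, 1, 0, 0], float), prior, like))  # → 0
```

The same code covers the bxx variant by binarizing the histograms before training.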
5.2. Experimentation on KTH dataset
The dataset contains 25 people performing 6 actions (running, walking, jogging, boxing, hand clapping and hand waving) in 4 different scenarios (indoors, outdoors, outdoors with scale change and outdoors with different clothes). Different people perform the same action at different orientations and speeds. It contains 599 videos, of which 399 are used for training and the rest for testing. As designed by , the test set contains the actions of 9 people, and the training set corresponds to the 16 remaining persons. Table 1 shows the confusion matrix obtained by our method on the KTH dataset. The ground truth is read row by row. The average recognition rate is 95%, which is comparable to state-of-the-art approaches. The main source of error is confusion between jogging and running, which is a typical problem in reported methods.
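The average recognition rate quoted above is the mean of the diagonal of the row-normalised confusion matrix. A minimal sketch, with an illustrative 3-class matrix rather than the paper's Table 1:

```python
import numpy as np

# Average recognition rate from a row-normalised confusion matrix
# (ground truth read row by row, as in the text). Values are illustrative.
conf = np.array([
    [0.88, 0.12, 0.00],   # running (confused mostly with jogging)
    [0.10, 0.90, 0.00],   # jogging
    [0.00, 0.00, 1.00],   # walking
])
avg_rate = conf.diagonal().mean()   # mean of the per-class recognition rates
print(round(100 * avg_rate, 1))     # → 92.7
```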
Abstract Time-Series Classification (TSC) has attracted a lot of attention in pattern recognition, because a wide range of applications from different domains, such as finance and health informatics, deal with time-series signals. The Bag-of-Features (BoF) model has achieved great success in the TSC task by summarizing signals according to the frequencies of "feature words" from a data-learned dictionary. This paper proposes embedding Recurrence Plots (RP), a visualization technique for the analysis of dynamical systems, in the BoF model for TSC. While the traditional BoF approach extracts features from 1D signal segments, this paper uses the RP to transform time-series into 2D texture images and then applies the BoF model to them. The image representation of time-series enables us to explore visual descriptors that are not available for 1D signals and to treat the TSC task as a texture recognition problem. Experimental results on the UCI time-series classification archive demonstrate a significant accuracy boost by the proposed Bag-of-Recurrence patterns (BoR), compared not only to existing BoF models, but also to state-of-the-art algorithms.
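A minimal sketch of the RP transformation described above, thresholding the pairwise distance matrix of the raw 1D signal. Note that full RPs usually first embed the series in phase space with a time delay; the signal and threshold here are illustrative.

```python
import numpy as np

# Recurrence Plot sketch: the 1D series becomes a 2D binary texture image
# R[i, j] = 1 iff |x_i - x_j| < eps (an unthresholded variant keeps raw distances).
def recurrence_plot(x, eps=0.2):
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances between samples
    return (d < eps).astype(np.uint8)     # binary texture image

t = np.linspace(0, 4 * np.pi, 64)
rp = recurrence_plot(np.sin(t))
print(rp.shape, rp[0, 0])                 # (64, 64) 1 -- the diagonal is always 1
```

The resulting image can then be fed to any texture descriptor and the standard BoF pipeline.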
8.1 Architecture and functionalities of a safety bag
Figure 21 presents a schematic view of the architectures of the case studies presented in this report. According to the definition of a safety bag in the IEC 61508 standard (see Section 1), the architecture must satisfy certain requirements. In particular, it must be external to the system from a hardware point of view ("a monitoring system implemented on an independent computer, according to a different specification"). In this figure, one can note that only Elektra and the Ranger have a processor dedicated to monitoring. For the other systems, the safety bag lies at the heart of the system it monitors, either as an agent (guardian agent), a component (R2C), or even a service (SPAAS). This contradicts the definition in the IEC 61508 standard. However, these approaches, which exhibit different degrees of independence from the monitored system, will characterize for us a scale of independence for the definition of a safety bag, which we present later (cf. Section 8.3). Another important aspect is the independence of the main system from the safety bag (so far we have mostly considered the reverse). Indeed, depending on the chosen architecture, a failure of the safety bag may be more or less well tolerated by the overall system. For example, the system can be designed such that stopping the safety bag causes a complete halt of the system. This choice is made when safety is considered more important than availability. Conversely, some systems may be designed to carry on their mission despite failures of the safety modules (such as probes exploring deep space). A design point correlated with this problem is the activation or deactivation by humans of the safety rules, or even of the safety bag itself.
It is difficult to say which approach is best, since the history of technological accidents shows that the possibility of deactivating safety systems has led to significant damage. However, for autonomous systems, it seems important to us that the online manipulation of safety rules (modification, deletion, addition) be possible, especially when the device is remote or on a mission, such as a space shuttle.
Self-inflating bags (SIB) remain widely used for neonatal resuscitation. Insufflation pressures from a SIB are difficult to assess and can be inadequate. Ventilation monitoring improves pressure control, but is not accessible to most resuscitators. A small spring manometer, or a pressure line leading to a needle-and-dial manometer, can be connected through a side port on the SIB. These devices are cheap and easily available, but their efficacy needs to be assessed. Observation of the manometer could also be considered a distraction, with an increased risk of leaks or an inadequate insufflation rate. We therefore aimed to evaluate the effect of mechanical manometers on the quality of insufflations with a SIB.
II SICT of Incaper (2017) Program: PIBIC – Control no. 029
IMPORTANCE OF THE ACTIVE GERMPLASM BANK (BAG) OF THE GENUS PIPER (PIPERACEAE)
CERRI NETO, B. (undergraduate research student), ARANTES, L. O. (advisor), LAVANHOLE, D. F., ARANTES, S. D., CONCEIÇÃO, S. S., CALATRONI, D., CORREIA, L. Z., SANT'ANA, C. Plant Physiology and Post-Harvest Laboratory – Incaper, firstname.lastname@example.org
Both the ABClass and ABSim approaches provide good overall accuracy results compared to those obtained using the naive approach (see Figure 3). This shows that our proposed approaches are efficient. The results provided by ABSim using the SMS aggregation method are slightly better than those obtained using WAMS. The best result is reached using the ABClass approach with the motif extraction settings S1, S2 and S3. Under these three settings, a minimum threshold of frequency and/or discrimination must be reached when extracting motifs. Figures 3(a) and 3(b) show the impact of the motif extraction settings on the prediction results using the naive approach and ABClass. For example, using the MISVM classifier, the accuracy varies from 53.5% using S1 to 82.1% using S3. Although the motifs extracted using S1 are discriminative, the naive approach does not provide good accuracy results for most multiple instance classifiers. For some classifiers, the results using S1 are the lowest compared with the other motif extraction settings. However, using this setting, ABClass provides good results, since it reaches 100% accuracy using the SVM, SMO and IBk classifiers, 96.4% using Logistic and 93.3% using J48. This could be explained by the fact that the naive approach loses the advantage of representing the instances with discriminative motifs when it uses the union of all motifs in the data encoding step. Using S4, ABClass does not reach 100% accuracy, although it does reach it with some classifiers under the other three settings S1, S2 and S3. No constraints related to frequency (α = 0) or discrimination (β = 1) were imposed when extracting motifs using S4.
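One plausible reading of the extraction constraints can be sketched as below, under the assumption that α thresholds a motif's in-class frequency and β bounds its out-of-class frequency (so that α = 0 and β = 1 disable both constraints, as in S4). The motif names and statistics are hypothetical.

```python
# Hedged sketch of motif filtering under the extraction settings: a motif is
# kept when its in-class frequency is at least alpha and its out-of-class
# frequency is at most beta. The interpretation of beta is an assumption.
def filter_motifs(motif_stats, alpha=0.0, beta=1.0):
    """motif_stats maps motif -> (in-class frequency, out-of-class frequency)."""
    return [m for m, (f_in, f_out) in motif_stats.items()
            if f_in >= alpha and f_out <= beta]

# Hypothetical per-motif statistics.
stats = {"ACGT": (0.9, 0.1), "TTAA": (0.2, 0.8), "GGCC": (0.7, 0.6)}
print(filter_motifs(stats, alpha=0.5, beta=0.3))       # strict, S1-like setting
print(len(filter_motifs(stats, alpha=0.0, beta=1.0)))  # S4: no constraints → 3
```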
been introduced. The Normalized Cuts method is used to separate the graph of an image, composed of fully connected dense grid points, into several subgraphs. Each subgraph is then described by a bag-of-words model. Our approach incorporates color and limited spatial information into the image representation for image retrieval. The results obtained are encouraging, especially when the partitions obtained are stable across the images of the same category. A criterion for measuring the stability of subgraphs has been defined in the experimental section. The stability is nevertheless not always ensured with the aforementioned graph weight functions. In the future, we will consider embedding more image descriptors in the graph weights and in the bipartite subgraph matching process in order to reach more stability.
4. QMUL: recorded in a wide variety of Greater London (UK) locations over Summer and Autumn 2012 by 3 members of Queen Mary University of London (Giannoulis et al., 2013a). The dataset consists of 100 30-second recordings equally distributed among 10 classes: "bus", "busy street", "office", "open air market", "park", "quiet street", "restaurant", "supermarket", "tube", and "tube station". To ensure that no systematic variation in the recordings covaried with scene type, all recordings were made in moderate weather conditions, at varying times of day and week, and each operator recorded occurrences of every scene type. The public part of the dataset considered in this study is available on the C4DM Research Data Repository:
“Beauty is bought by judgment of the eye” (Shakespeare, Love’s Labour’s Lost), but the bodily features governing this critical biological choice are still debated. Eye movement studies have demonstrated that males sample coarse body regions expanding from the face, the breasts and the midriff while making female attractiveness judgements with natural vision. However, the visual system ubiquitously extracts diagnostic extra-foveal information in natural conditions, so the visual information actually used by men is still unknown. We therefore used a parametric gaze-contingent design while males rated the attractiveness of female front- and back-view bodies. Males used extra-foveal information when available. Critically, when bodily features were only visible through restricted apertures, fixations strongly shifted to the hips, potentially to extract hip width and curvature, then to the breasts and face. Our hierarchical mapping suggests that the visual system primarily uses hip information to compute the waist-to-hip ratio and the body mass index, the crucial factors in determining sexual attractiveness and mate selection.
The comparison of methods presented here has been conducted in the context of the Semantic INdexing (SIN) task of TRECVid . It differs from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in many respects. Indeed, the indexed units are video shots instead of still images. The quality and resolution of the videos are quite low (512 and 64 kbit/s for the image and audio streams respectively). The target concepts are different: 346 non-exclusive concepts with generic-specific relations. Many of them are very infrequent in both the training and test data. The way the collection has been built is also very different. In ImageNet, a given number of sample images have been selected and checked for each target concept, resulting in high quality and comparable example set sizes for all concepts. In TRECVid SIN, videos have been randomly selected from the Internet Archive completely independently of the target concepts; the target concepts have been annotated a posteriori, resulting in a very variable number of positive and negative examples for the different concepts. Most of the concepts are very infrequent and also not very visible. Compared to ImageNet, the positive samples are much less typical, much less centered, of smaller size and of much lower image quality. The task is therefore much more difficult than the ILSVRC one, but it may also be more representative of indexing and retrieval tasks “in the wild”. An active learning method was used to drive the annotation process, trying to reduce the class-imbalance effect in the training data and to ensure a minimum number of positive samples for each target concept . The resulting annotation is sparse (about 15% on average) and consists of 28,864,844 concept × shot judgements.
All of these differences probably explain why training DCNNs directly on TRECVid SIN data gives much poorer results than on ImageNet data and why the two considered adaptation strategies are needed (or perform much better) in this case.
learning an ensemble of classifiers, provides superior performance.

5. Conclusion
Spatio-temporal feature identification was addressed. An analysis of Global Field Power highlighted time periods of interest where effects are likely to be the most robust, yielding a data-driven temporal feature extraction. For each temporal feature, a spatial filter was learned jointly with a classifier in the SVM theoretical framework. Spatial filters were learned to optimize classification performance. A weighted averaging over the resulting ensemble of classifiers yielded a robust final decision function. Experimental results on Error-related Potentials illustrate the efficiency of the method from both physiological and machine-learning points of view. Further research may extract all relevant aspects of the brain's post-stimulus dynamics recorded in EEG (spatio-temporal-frequential).
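The final decision function described above can be sketched as a weighted average of per-window linear classifier outputs. The filters, weights and data below are random stand-ins, not the ones learned in the paper.

```python
import numpy as np

# Sketch: one linear classifier (spatial filter + bias) per temporal window,
# combined by weighted averaging of the decision values, then thresholded.
def ensemble_decision(x_windows, spatial_filters, biases, weights):
    """x_windows: list of per-window (channels,) feature vectors."""
    scores = np.array([w @ x + b
                       for x, w, b in zip(x_windows, spatial_filters, biases)])
    return np.sign(weights @ scores)        # weighted average, then sign

rng = np.random.default_rng(0)
xs = [rng.normal(size=8) for _ in range(3)]   # 3 windows, 8 EEG channels (toy)
ws = [rng.normal(size=8) for _ in range(3)]   # stand-ins for learned filters
decision = ensemble_decision(xs, ws, biases=[0.0] * 3,
                             weights=np.array([0.5, 0.3, 0.2]))
print(decision in (-1.0, 1.0))                # binary decision
```

In the actual method the window weights would themselves be learned; uniform or fixed weights are the simplest baseline.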
Epiphyseal edema is an MRI feature that does not reflect a real histopathological situation ; it is not observed with other imaging modalities such as radiography, computed tomography (CT), or ultrasound. On MR images, bone edema is defined by the presence of hypointense infiltration on T1-weighted images and clearly high signal intensity on fat-saturated T2-weighted sequences, which is enhanced following injection of gadolinium chelates . The edema is not delimited by a radiolucent demarcation line, and the signal abnormalities have unclear contours  (Fig. 1). Epiphyseal edema is generally associated with one of the four following underlying situations: bone bruising, subchondral fracture, complex regional pain syndrome and degenerative disease. Epiphyseal edema is a reversible condition; the length of time before restitutio ad integrum is variable .
local description of the image, visual vocabulary construction and image indexation. Therefore, each image is represented by a Bag-of-Visual-Words signature, which is traditionally a histogram of its patches: a bin of the histogram represents a visual word and contains the associated information, which depends on the chosen weighting scheme. We have seen the use of presence or absence information, the count in the image (the number of keypoints assigned to the corresponding visual word), or a weighted count . In effect, these are the most used term weighting techniques in text retrieval . In the Bag-of-Visual-Words approach, an image is described by its visual words just as a document is described by its terms in automatic text retrieval. However, the visual words are not quite as meaningful. For example, when describing a text document by a bag-of-words signature, each word is counted in the corresponding entry of the vocabulary, e.g. "walks", "walking" and "walked" would all be counted in the entry for "walk". But when mapping an image's keypoints to visual words, each keypoint is counted in the nearest entry of the visual vocabulary, based on a distance measure. This may introduce a loss of fidelity in the image signature: two keypoints associated with the same vocabulary entry contribute in the same way to the construction of the signature, whether they are identical or appreciably different. Furthermore, two similar keypoints may be assigned to two different entries. Certainly, increasing the vocabulary size attenuates this disadvantage, but it involves a longer processing time when responding to a query. The aim of this work is to propose a method that keeps the simplicity of the Bag-of-Visual-Words approach while minimizing the effects of the vocabulary size choice and of the similarity between visual words. The proposed weighting scheme is based on a fuzzy modeling of the distribution of the keypoints.
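To illustrate the fidelity loss of hard assignment and the fuzzy alternative, here is a sketch contrasting the two weighting schemes on a toy 2-D vocabulary. The Gaussian membership function is one plausible fuzzy model, not necessarily the one proposed in this work.

```python
import numpy as np

# Hard assignment: each keypoint votes for its single nearest visual word.
def hard_histogram(desc, vocab):
    d = np.linalg.norm(desc[:, None] - vocab[None, :], axis=2)
    hist = np.zeros(len(vocab))
    for w in d.argmin(axis=1):
        hist[w] += 1
    return hist

# Fuzzy assignment: each keypoint spreads a unit of mass over all words
# according to a Gaussian membership of its distance to each word centre.
def fuzzy_histogram(desc, vocab, sigma=1.0):
    d = np.linalg.norm(desc[:, None] - vocab[None, :], axis=2)
    memb = np.exp(-d**2 / (2 * sigma**2))
    memb /= memb.sum(axis=1, keepdims=True)   # per-keypoint memberships sum to 1
    return memb.sum(axis=0)                   # soft counts accumulated per word

vocab = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])  # toy 3-word vocabulary
desc = np.array([[0.1, 0.0], [1.9, 0.0]])   # 2nd keypoint is near a boundary
print(hard_histogram(desc, vocab))           # → [2. 0. 0.]
print(fuzzy_histogram(desc, vocab))          # boundary keypoint splits its vote
```

The fuzzy histogram still sums to the number of keypoints, but nearby words now share the contribution of ambiguous keypoints.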
This paper is organized as follows: Section 2 describes the development of the indexation system based on the Bag-of-Visual-Words approach and reviews the existing weighting schemes. In Section 3, the proposed approach for visual-word weighting is presented. Section 4 provides detailed experimental results. Finally, Section 5 concludes the paper.
achieved remarkable success in text and sound recognition for modeling temporal dependencies in sequences . Nevertheless, a few proposed methods use variations of RNN  and LSTM networks  with skeleton data and have achieved acceptable results. One of the main challenges of using neural networks and deep learning for 3D action recognition is the lack of training data. Moreover, the computational complexity of these networks makes them unsuitable for real-time and online tasks [20,38]. Generative (state-based) methods such as HMMs have produced acceptable results for modeling actions with pre-defined poses . But the main disadvantage of these methods is their sensitivity to training data, where only an abundance of data in the training phase may lead to performance enhancement . Moreover, training HMMs is expensive in terms of computational and memory cost and requires manual parameter tuning. Therefore, using HMMs with noisy skeleton data generally does not produce excellent results, since it is difficult to determine a correct state when there are variations among the candidate actions. On the other hand, instead of generative models, discriminative methods such as kernel machines or metric learning, which have been developed for the classification of vector data, are more suitable for working with high-dimensional spaces . These methods have generally achieved better results than HMMs  and have been used for the recognition of single actions in pre-segmented video clips. Conversely, generative methods have been used for parsing
Diva-4.2.1: presentation of the new features
C. Troupin 1,4,⋆, F. Machín 2, M. Ouberdous 1, M. Rixen 3, D. Sirjacobs 1 and J.-M. Beckers 1
1 GHER - MARE, Sart-Tilman B5, University of Liège, Allée du 6-Août 17, 4000 Liège, BELGIUM
2 Institut de Ciències del Mar (CSIC), Passeig Marítim de la Barceloneta, 37-49, 08003 Barcelona, SPAIN
task of inverting songs from the Million Song Dataset. What makes this task harder is twofold. First, the features are irregularly spaced in the temporal domain according to an onset-based segmentation. Second, the exact method used to compute these features is unknown, although the features for new audio can be computed using their API as a black box. In this paper, we detail these difficulties and present a framework for nonetheless attempting such synthesis by concatenating audio samples from a training dataset, whose features have been computed beforehand. Samples are
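The concatenative idea can be sketched as nearest-neighbour matching in feature space followed by concatenation of the matched segments' audio. The features and "audio" below are synthetic stand-ins, and the Euclidean distance measure is an assumption.

```python
import numpy as np

# Sketch of concatenative synthesis: for each target feature vector, pick the
# training segment whose precomputed features are closest, then concatenate
# the corresponding audio samples.
def match_segments(target_feats, train_feats):
    """Return, for each target segment, the index of the nearest training one."""
    d = np.linalg.norm(target_feats[:, None] - train_feats[None, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(1)
train_feats = rng.normal(size=(50, 12))                   # e.g. 12-D timbre features
train_audio = [rng.normal(size=1024) for _ in range(50)]  # stand-in segment audio
# Targets are lightly perturbed copies of three training segments,
# so matching should recover indices 3, 17 and 42.
target = train_feats[[3, 17, 42]] + 0.01 * rng.normal(size=(3, 12))
idx = match_segments(target, train_feats)
synth = np.concatenate([train_audio[i] for i in idx])
print(synth.shape)                                        # → (3072,)
```

Real systems would add continuity costs between consecutive segments; plain per-segment nearest neighbour is the simplest baseline.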