By contrast, the modeling of our data allows us to suggest that memory mechanisms were responsible for performance differences between the interference conditions. The value of the first memory parameter, i.e., the c parameter, was indeed significantly higher in the interference than in the no-interference condition. According to the SET timing model, the reference duration is represented in the form of a distribution of values with a mean equal to the reference duration, when K is equal to 1.0, and a variance that increases as the square of the mean (Gibbon, 1991). The representation of time in reference memory is thus “inherently noisy” (Church and Gibbon, 1982, p. 116). Delgado and Droit-Volet (2007) demonstrated that introducing noise in the reference memory increased the coefficient of variation of the memory representation of the reference duration (c parameter), thus flattening the generalization gradient. Therefore, disrupting the consolidation of the newly learned duration by means of an interference task could introduce noise into the representation of this duration. The second memory parameter, the K parameter, is a memory distortion parameter, such that when K < 1.0, the reference duration is remembered as shorter than it really was, and when K > 1.0, it is remembered as longer than it was. In line with the results on p(yes), the K-value differed significantly between the interference conditions, being greater when the interference task was administered than when it was not. Our modeling thus tends to suggest that presenting an interference task shortly after training distorts the memorized duration in the direction of a lengthening effect.
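The roles of the two memory parameters can be made concrete with a small simulation. This is our own illustrative sketch, not the authors' fitting code: it simply draws a remembered reference duration with mean K·S (K > 1.0 lengthens the memory, K < 1.0 shortens it) and coefficient of variation c (larger c means noisier memory and a flatter generalization gradient).

```python
import numpy as np

def sample_reference_memory(S, K, c, n=100_000, seed=None):
    """Samples from the SET-style reference-memory distribution for duration S.

    Mean is K * S (memory distortion); the standard deviation is proportional
    to the mean (scalar variability), with coefficient of variation c.
    """
    rng = np.random.default_rng(seed)
    mean = K * S
    sd = c * mean
    return rng.normal(mean, sd, size=n)

# A 4 s reference remembered with K = 1.1 and c = 0.2:
mem = sample_reference_memory(S=4.0, K=1.1, c=0.2, seed=0)
# mem.mean() is close to 4.4 s (lengthening), and mem.std()/mem.mean()
# is close to c = 0.2 (the memory coefficient of variation).
```

Raising c widens this distribution without moving its center, which is exactly the flattening of the generalization gradient described above, while K shifts the center and hence p(yes).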
precisely, Noether's theorem refers to differentiable symmetries of the system's Lagrangian. For instance, if a mechanical system behaves in the same way regardless of where it is located, the system is symmetric under translations in space. By Noether's theorem, the linear momentum of the system is conserved. Further, if a system is rotationally symmetric, Noether's theorem states that its angular momentum is conserved. As far as we know, Noether's theorem and its proof, in DM as well as in conventional mechanics, require using differential calculus, and, as such, they rely on real numbers. In order to better appreciate the importance of real numbers in DM, we now present the limitations that are inherent to an older version of DM whose project was precisely to dispense with real numbers. It was indeed Donald Greenspan's purpose to develop a discrete physical theory that would appeal to real numbers in none of its descriptions or computations. During the 1970s and 1980s, Greenspan developed what he called a “discrete mechanics” (Greenspan 1973, p. 10) that is different from the theory we focus on in this paper. His discrete mechanics consists in reformulating traditional Newtonian mechanics with a discrete representation of time. In Greenspan's discrete mechanics, time is represented by a discrete parameter t_k where t_k = kh, k being a natural number and h a rational number. The differential equation expressing Newton's second law is replaced by a difference equation with a discrete acceleration and a discrete force, m a_k = F_k. The discrete acceleration a_k is defined as the ratio (v_{k+1} − v_k)/h, with v_k the discrete velocity vector, and the discrete force
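The difference-equation scheme can be sketched in a few lines. This is our own simplified illustration, not Greenspan's actual method (his scheme uses averaged velocities so that energy is conserved exactly): time advances only through the discrete parameter t_k = kh, and m a_k = F_k with a_k = (v_{k+1} − v_k)/h replaces Newton's differential equation.

```python
def discrete_trajectory(F, m, r0, v0, h, steps):
    """Advance position r_k and velocity v_k using difference equations only."""
    r, v = r0, v0
    rs = [r]
    for k in range(steps):
        a = F(r) / m        # discrete acceleration a_k = F_k / m
        v = v + h * a       # rearranged from a_k = (v_{k+1} - v_k) / h
        r = r + h * v       # position update using the freshly updated velocity
        rs.append(r)
    return rs

# Unit-mass oscillator F(r) = -r, released from r = 1 at rest, step h = 0.01:
path = discrete_trajectory(lambda r: -r, m=1.0, r0=1.0, v0=0.0, h=0.01, steps=1000)
# path oscillates between roughly +1 and -1, with no real-valued calculus
# anywhere in the update rule: only finitely many arithmetic operations per step.
```

Updating the position with the already-updated velocity (the semi-implicit ordering above) keeps the oscillation bounded; it is a design choice of this sketch, not a claim about Greenspan's formulation.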
Abstract. Time-Series Classification (TSC) has attracted a lot of attention in pattern recognition, because a wide range of applications from different domains such as finance and health informatics deal with time-series signals. The Bag of Features (BoF) model has achieved great success in the TSC task by summarizing signals according to the frequencies of “feature words” from a data-learned dictionary. This paper proposes embedding Recurrence Plots (RP), a visualization technique for the analysis of dynamic systems, in the BoF model for TSC. While the traditional BoF approach extracts features from 1D signal segments, this paper uses the RP to transform time-series into 2D texture images and then applies the BoF on them. The image representation of time-series enables us to explore different visual descriptors that are not available for 1D signals and to treat the TSC task as a texture recognition problem. Experimental results on the UCI time-series classification archive demonstrate a significant accuracy boost by the proposed Bag of Recurrence patterns (BoR), compared not only to the existing BoF models, but also to state-of-the-art algorithms.
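The first step of the pipeline, turning a 1D series into a 2D texture image, can be sketched directly from the definition of a recurrence plot (our illustration; the paper's exact embedding parameters may differ): R[i, j] = 1 whenever samples i and j are closer than a threshold eps.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence image of a 1D series: R[i, j] = 1 iff |x_i - x_j| <= eps."""
    x = np.asarray(x, dtype=float)
    D = np.abs(x[:, None] - x[None, :])   # pairwise distances between samples
    return (D <= eps).astype(np.uint8)

# Two periods of a sine produce the characteristic diagonal-line texture:
x = np.sin(np.linspace(0, 4 * np.pi, 64))
R = recurrence_plot(x, eps=0.1)
# R is symmetric with an all-ones main diagonal; periodicity appears as
# lines parallel to that diagonal, which texture descriptors can then pick up.
```

The BoF stage then treats R like any other texture image, extracting visual words from local patches instead of from 1D segments.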
7. Conclusion and Future Work
We have proposed in this paper a geometric approach for effectively modeling and classifying dynamic facial sequences. Based on Gramian matrices derived from the facial landmarks, our representation consists of an affine-invariant shape representation and a spatial covariance of the landmarks. We have exploited the geometry of the space to define a closeness between static and dynamic (trajectory) representations. We have then derived computational tools to align, re-sample and compare these trajectories, giving rise to a rate-invariant analysis. Finally, facial expressions are learned from these trajectories using a variant of SVM, called ppfSVM, which allows us to deal with the nonlinearity of the space of representations. Our experiments on four publicly available datasets showed that the proposed approach gives results that are competitive with or better than the state of the art. In the future, we will extend this approach to handle smaller variations of facial expressions. Another direction could be adapting our approach to other applications that involve landmark sequence analysis, such as action recognition.
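The static building block of the representation can be sketched as follows (our simplified illustration; the paper's exact normalization and the covariance part are omitted): center the landmark configuration and form its Gram matrix, which discards translation and retains the pairwise inner products of the landmarks.

```python
import numpy as np

def landmark_gram(P):
    """Gram matrix of a centered landmark configuration.

    P has shape (n_landmarks, 2); the result is an n x n symmetric matrix
    of rank at most 2, invariant under translations of the configuration.
    """
    P = np.asarray(P, dtype=float)
    P = P - P.mean(axis=0)    # remove translation
    return P @ P.T

# Four landmarks on a unit square:
P = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
G = landmark_gram(P)
# Translating every landmark by the same offset leaves G unchanged.
```

A facial sequence then becomes a trajectory of such matrices, and the comparison, alignment and re-sampling tools mentioned above operate on those trajectories.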
Nowadays in signal processing, spectrum analysis plays a key role in characterizing and understanding many phenomena, especially for Radar systems such as the Inverse Synthetic Aperture Radar (ISAR). However, for a non-stationary signal, the frequency components can appear or disappear. To deal with these temporal evolutions, a Time-Frequency Analysis must be considered [2, 3]. The reconstruction process of an ISAR image exploits the target's motions. Thus, ISAR images are usually obtained by the range-Doppler algorithm, based on the 2-D Fourier Transform, which converts the data in the spatial frequency domain to reflectivity information in the spatial domain. However, because of the target maneuvering, the Doppler spectrum becomes time-varying and the image is blurred. Instead of the Fourier Transform, TFR techniques can be adopted to improve the resolution of ISAR images [3–5]. In recent years, a great deal of interest has been paid
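The range-Doppler principle reduces to a 2-D Fourier transform of the raw data. The toy sketch below (our illustration, not an ISAR processing chain) shows the ideal, stationary case the text describes: a single scatterer with constant range and Doppler frequencies focuses to a single pixel; when the Doppler frequency drifts in time, this peak smears, which is the blurring that motivates TFR techniques.

```python
import numpy as np

def range_doppler_image(data):
    """2-D FFT magnitude of the raw (fast-time x slow-time) data, zero-centered."""
    return np.fft.fftshift(np.abs(np.fft.fft2(data)))

# Synthetic single scatterer: one pure frequency along each axis.
n = 64
fast = np.exp(2j * np.pi * 10 * np.arange(n) / n)   # range-frequency axis
slow = np.exp(2j * np.pi * 5 * np.arange(n) / n)    # Doppler axis
data = np.outer(fast, slow)

img = range_doppler_image(data)
# All the energy concentrates in a single pixel: the focused scatterer.
```

A time-varying Doppler frequency breaks the pure-tone assumption along the slow-time axis, spreading that pixel into a streak.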
But what is important to notice is that all these elementary “machines” are always determined by some controlling parameter exterior to them. (In fig. I the control arrow originates from the exterior of the system.) This aspect of “automata” is often neglected in their descriptions, mainly when they take the form of a black box. In other words, the degrees of freedom allowed to the processing device are not a property of the states themselves, nor of the processing device, but are always imposed on them by an exterior factor or agent. In symbolic systems, the controlling devices are rules and grammars, and they originate neither from the states nor from the processing device. They have to be either stipulated or learned. This is even more obvious in non-symbolic systems, where controls take the form of parameters, thresholds, limits, etc. Here too, they are exterior to the states themselves and to the processor. It is because of this “externality” that these elementary systems will have to be in relation with other systems.
Safran Tech, Signal and Information Technologies † Telecom Paris, Institut Polytechnique de Paris
Abstract. Finding an understandable and meaningful feature representation of multivariate time series (MTS) is a difficult task, since information is entangled in both the temporal and spatial dimensions. In particular, MTS can be seen as the observation of simultaneous causal interactions between dynamical variables. A standard way to model these interactions is vector linear autoregression (VAR). The parameters of VAR models can be used as an MTS feature representation. Yet, VAR cannot generalize to new samples, hence independent VAR models must be trained to represent different MTS. In this paper, we propose to use the inference capacity of neural networks to overcome this limitation. We propose to associate a relational neural network with a VAR generative model to form an encoder-decoder of MTS. The model is denoted Seq2VAR, for Sequence-to-VAR. We use recent advances in relational neural networks to build our MTS encoder by explicitly modeling interactions between variables of MTS samples. We also propose to leverage reparametrization tricks for binomial sampling in neural networks in order to build a sparse version of Seq2VAR and recover the notion of Granger causality defined in sparse VAR models. We illustrate the interest of our approach through experiments on synthetic datasets.
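The baseline the abstract contrasts against can be sketched in a few lines (our illustration, not the Seq2VAR code): fit a VAR(1) model x_t = A x_{t-1} + e_t to one MTS by least squares, and use the learned transition matrix A as that sample's feature representation. Because A is re-estimated per sample, this baseline cannot generalize, which is what the amortized neural encoder addresses.

```python
import numpy as np

def fit_var1(X):
    """Least-squares VAR(1) fit. X: (T, d) series; returns (d, d) matrix A."""
    past, future = X[:-1], X[1:]
    A_T, *_ = np.linalg.lstsq(past, future, rcond=None)  # future ≈ past @ A.T
    return A_T.T                                         # so that x_t ≈ A @ x_{t-1}

# Simulate a 2D VAR(1) process with a known transition matrix:
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
X = np.zeros((500, 2))
for t in range(1, 500):
    X[t] = A_true @ X[t - 1] + 0.01 * rng.standard_normal(2)

A_hat = fit_var1(X)   # recovers a matrix close to A_true
```

In the sparse variant, zero entries of A encode the absence of Granger-causal influence between the corresponding variables.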
Rhys Goldstein1, Azam Khan1, Olivier Dalle2 and Gabriel Wainer3
To better support multiscale modeling and simulation, we present a multiscale time representation consisting of data types, data structures, and algorithms that collectively support the recording of past events and the scheduling of future events in a discrete-event simulation. Our approach addresses the drawbacks of conventional time representations: limited range in the case of 32- or 64-bit fixed-point time values; problematic rounding errors in the case of floating-point numbers; and the lack of a universally acceptable precision level in the case of brute-force approaches. The proposed representation provides both extensive range and fine resolution in the timing of events, yet it stores and manipulates the majority of event times as standard 64-bit numbers. When adopted for simulation purposes, the representation allows a domain expert to choose a precision level for his/her model. This time precision is honored by the simulator even when the model is integrated with other models of vastly different time scales. Making use of C++11 programming language features and the Discrete Event System Specification (DEVS) formalism, we implemented a simulator to test the time representation and inform a discussion on its implications for collaborative multiscale modeling efforts.
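The core idea of pairing an ordinary integer count with a per-model precision can be sketched as follows. This is our own much-simplified illustration, not the paper's C++ data types: the class name, the power-of-1000 scale convention, and the truncation behavior are all our assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MultiscaleTime:
    """An event time as an integer count at a chosen precision level.

    The represented time is count * 1000**scale seconds, so e.g. scale = -1
    means milliseconds and scale = -2 means microseconds. The count stays an
    ordinary integer, as in the 64-bit fast path described in the abstract.
    """
    count: int
    scale: int

    def to_scale(self, scale):
        """Re-express this time at another precision level."""
        shift = self.scale - scale
        if shift >= 0:
            return MultiscaleTime(self.count * 1000**shift, scale)
        # Coarsening truncates: detail below the target precision is lost.
        return MultiscaleTime(self.count // 1000**(-shift), scale)

ms = MultiscaleTime(1500, -1)   # 1500 ms, in a model that works in milliseconds
us = ms.to_scale(-2)            # the same instant, as 1_500_000 microseconds
```

When models of different time scales are composed, converting every event time to the finest scale in play preserves each model's chosen precision, which is the guarantee the abstract describes.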
Superpixels were originally designed [RM03] for object detection. The principle was to group pixels according to several criteria: texture, brightness, geometry. Each group of pixels, the superpixel, is then described and used to train a two-class classifier. Several superpixel methods exist in the literature, and we refer to [Ach+12] for a survey. We focus on one popular class of superpixels, proposed by Achanta [Ach+10], exclusively based on color features and pixel positions. In that paper, Achanta proposed a method to group pixels by, on the one hand, describing all pixels in a 5D space: three dimensions from the Lab color space, and the two pixel coordinates. On the other hand, he initializes the superpixels' seeds by drawing them on a regular grid. Then, each pixel is associated with one seed according to a Euclidean-based distance (in the 5D space). SLIC superpixels have the advantage of requiring only very low-level operations: Lab color space and Euclidean distance computations. Furthermore, in terms of usability, the only parameter to set is the number of seeds. While the main application of SLIC superpixels is segmentation (and it has indeed been used for medical image segmentation), by describing each superpixel with multiple features, Achanta was able to perform object recognition. Fig. 2.5 illustrates SLIC superpixel computation.
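The assignment step described above can be sketched directly (our simplified illustration: real SLIC weights the color and spatial terms against each other and restricts each pixel's search to a local window around nearby seeds): each pixel is a 5D vector (L, a, b, x, y) and goes to the seed closest in plain Euclidean distance.

```python
import numpy as np

def assign_to_seeds(features, seeds):
    """Nearest-seed labels. features: (n_pixels, 5); seeds: (k, 5)."""
    # Pairwise Euclidean distances in the 5D (L, a, b, x, y) space:
    d = np.linalg.norm(features[:, None, :] - seeds[None, :, :], axis=2)
    return d.argmin(axis=1)

# Four pixels and two seeds, in (L, a, b, x, y) coordinates:
feats = np.array([[50.0, 0, 0, 1, 1],
                  [52.0, 0, 0, 2, 1],
                  [90.0, 5, 5, 9, 9],
                  [88.0, 5, 5, 8, 8]])
seeds = np.array([[51.0, 0, 0, 1, 1],
                  [89.0, 5, 5, 9, 9]])
labels = assign_to_seeds(feats, seeds)
# The two dark pixels near (1, 1) join seed 0; the two bright pixels
# near (9, 9) join seed 1.
```

Iterating this assignment and re-centering each seed on its assigned pixels yields the SLIC clustering loop.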
of obstacles even in the event of partial occlusion or errors committed during the matching process. Furthermore, it provides semi-global matching and reveals the matches which are the most coherent globally in the 3D road scene. The longitudinal profile of the road is extracted precisely. All other information concerning obstacles is then deduced in a straightforward manner: obstacle areas, free space on the road surface, and the position of tyre-contact points. This detection is performed without any explicit extraction of specific structures (road edges, lane-markings, etc.) but exploits all the relevant information in the stereo image pair. Computational time does not exceed 40 ms for images of size 380×289 on a current-day PC with no special hardware.
The need for a generic, efficient, broadband TDIBC has led to the introduction of a new family of models, known as “multi-pole.” They consist of a discrete sum of elementary first- or second-order low-pass systems. The number N of systems, as well as their respective gains and poles, are degrees of freedom (DoF) of the TDIBC, which translates into considerable versatility. Moreover, admissibility conditions are straightforwardly verified, which is not the case for rational fractions expressed with polynomials (especially of higher degree). The drawback is that they lead to N elementary convolutions. Borrowed from the computational electromagnetics community [13], so-called “recursive” (recurrent) convolution techniques have been employed by many authors [8, 14–17]. Bin et al. [18] used an alternative implementation, relying instead on N additional differential equations. A comprehensive study by Dragna et al. [19] showed the benefit of this technique, known as the auxiliary differential equations method, over recursive convolution.
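Why recursive convolution makes the N elementary convolutions cheap can be seen on a single pole. The sketch below is our illustration (assuming a piecewise-constant input over each time step, one common variant among the recursive-convolution schemes cited): the convolution of an input u with a first-order kernel g(t) = A·exp(−p·t) obeys a one-step recurrence, so each pole costs O(1) per time step instead of a growing convolution sum.

```python
import math

def recursive_convolution(u, A, p, dt):
    """y_n = (g * u)(n*dt) for g(t) = A*exp(-p*t), piecewise-constant input u."""
    decay = math.exp(-p * dt)
    gain = A * (1.0 - decay) / p      # exact integral of g over one step
    y, out = 0.0, []
    for u_n in u:
        y = decay * y + gain * u_n    # O(1) recursive update per step
        out.append(y)
    return out

# Unit-step input: y converges to the kernel's total integral A/p = 0.5.
ys = recursive_convolution([1.0] * 2000, A=2.0, p=4.0, dt=0.01)
```

A multi-pole TDIBC simply runs N such recurrences in parallel and sums their outputs; the auxiliary-differential-equations method reaches the same cost by time-stepping one ODE per pole instead.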
a Irstea, UMR TETIS, Montpellier, France b LIRMM, Montpellier, France
Nowadays, remote sensing technologies produce huge amounts of satellite images that can be helpful to monitor geographical areas over time. A satellite image time series (SITS) usually contains spatio-temporal phenomena that are complex and difficult to understand. Conceiving new data mining tools for SITS analysis is challenging since we need to manage the spatial and the temporal dimensions simultaneously. In this work, we propose a new clustering framework specifically designed for SITS data. Our method first detects spatio-temporal entities, then it characterizes their evolutions by means of a graph-based representation, and finally it produces clusters of spatio-temporal entities sharing similar temporal behaviors. Unlike previous approaches, which mainly work at the pixel level, our framework exploits a purely object-based representation to perform the clustering task. Object-based analysis involves a segmentation step where segments (objects) are extracted from an image and constitute the elements of analysis. We experimentally validate our method on two real-world SITS datasets by comparing it with standard techniques employed in remote sensing analysis. We also use a qualitative analysis to highlight the interpretability of the results obtained.
Université de Montréal et CIREQ
Abstract. We provide a representation theorem for risk measures satisfying (i) monotonicity; (ii) positive homogeneity; and (iii) translation invariance. As a simple corollary to our theorem, we obtain the usual representation of coherent risk measures (i.e., risk measures that are, in addition, sub-additive; see Artzner et al.).
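For context, the coherent-case representation referred to here is classically stated as follows; this is the standard form from Artzner et al., not a quotation of the present paper's theorem:

```latex
% Coherent risk measures admit a dual representation over a set
% \mathcal{Q} of probability measures ("generalized scenarios"):
\rho(X) \;=\; \sup_{Q \in \mathcal{Q}} \mathbb{E}_{Q}\!\left[-X\right].
```

Dropping sub-additivity, as the paper does, weakens what can be said about the structure of the representing family, which is precisely what the announced theorem characterizes.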
Moreover, Mestre gives examples of Abelian varieties of dimension g > 4 for which the determinant of the Verschiebung does not characterise its isogeny class: as a consequence, it is not even possible to recover the characteristic polynomial of the Frobenius morphism from this data alone.
In this article, we make the trivial but crucial remark that this ambiguity can be avoided if, instead of modular forms, we compute the equation of the isogeny induced by the Verschiebung directly, so that we can compute its action on the differentials (or, by duality, its action on the tangent space at 0), to recover not only the determinant but the full matrix M of the Verschiebung (up to conjugation). It is then straightforward to recover the characteristic polynomial of the Frobenius as the characteristic polynomial of the matrix M + qM^{−1}. Strangely, it seems that this obvious idea was not considered in the literature, although it raises no difficulty: Satoh already lifts the kernel, and Vélu's formulas give the equations of the isogeny along with the equations of the normalised isogenous elliptic curve. Mestre uses the duplication formula between theta constants θ_i(0, τ), but it is well known that this duplication formula extends to a duplication formula between theta functions θ_i(z, τ), which gives an explicit equation for the 2-isogeny. The extensions of Mestre's algorithm [CKL06; CL09; FLR11] to characteristic p > 2 use a p-multiplication formula between theta constants of level 2p (or 4p), which readily extends to a
Given the recent results suggesting difficulties in time prediction, it was hypothesized that patients with schizophrenia would have difficulties in planning their actions (taps) in anticipation of the predicted moment of sound occurrence. The typical tapping task was modified slightly in order to make it more sensitive to possible anticipation impairments, by including a spatial aspect in the SMS task. Indeed, our previous studies suggested that patients were more impaired at planning motor sequences than at performing simple one-element motor actions (Delevoye-Turrell et al., 2007, 2012). In sum, the present study was designed to provide the means to distinguish between the roles of distinct temporal difficulties when planning a series of voluntary movements through space in synchrony with an auditory rhythmic pattern. A difficulty in estimating durations should lead to imprecise and variable time intervals between successive taps. In contrast, a difficulty in anticipating the moment of occurrence of the external auditory event should mainly lead to tap-tone asynchronies. The manipulation of the type of tone sequence (isochronous or not) allowed us to assess to what extent the difficulties observed in patients with schizophrenia are a function of the necessity to extract a complex pattern of time regularities within the perceptual world.
was normalized to 1 for L = 0.
SH decomposition is widely used in domains of physics described by a second-order PDE, such as the Laplace, Schrödinger or wave equation. For example, stellar oscillations are often described in terms of standing waves whose angular part in spherical coordinates is an SH (see chapter 8 of Collins, 1989 for a review of stellar pulsations). Another application is to use SH decomposition to decompose functions defined over the sphere. It is an image processing technique analogous to the 2D Fourier decomposition of an image defined on a rectangle. For example, in cosmology, the brightness distribution over the whole sky of the Cosmic Microwave Background (CMB) is analyzed in terms of SH series. The angular power spectrum provides information about the statistical properties of the CMB. Hinshaw et al. (2007) present an extensive analysis of data from the WMAP spacecraft.
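The decomposition works exactly like a Fourier analysis on the sphere: a coefficient is the inner product of the function with one harmonic, integrated over the full solid angle. The sketch below is our self-contained illustration using the closed forms of the two lowest real harmonics, Y_0^0 = 1/√(4π) and Y_1^0 = √(3/(4π))·cos θ, and a simple midpoint quadrature.

```python
import numpy as np

def project(f, Y, n=400):
    """Coefficient <f, Y> = ∫ f·Y dΩ over the sphere (real-valued Y)."""
    theta = (np.arange(n) + 0.5) * np.pi / n        # colatitude midpoints
    phi = (np.arange(2 * n) + 0.5) * np.pi / n      # azimuth midpoints (0..2π)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    dOmega = np.sin(T) * (np.pi / n) ** 2           # solid-angle element
    return np.sum(f(T, P) * Y(T, P) * dOmega)

Y00 = lambda t, p: np.full_like(t, 1 / np.sqrt(4 * np.pi))
Y10 = lambda t, p: np.sqrt(3 / (4 * np.pi)) * np.cos(t)

# A function with a known expansion: 2*Y00 + 5*Y10.
f = lambda t, p: 2 * Y00(t, p) + 5 * Y10(t, p)
c0, c1 = project(f, Y00), project(f, Y10)   # recovers the coefficients 2 and 5
```

In CMB analysis, the angular power spectrum mentioned above is then built from the squared magnitudes of such coefficients, averaged over each degree l.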
In this paper we develop the formal model of dynamic planning presented in  and present several algorithmic aspects. Dynamic planning concerns the planning and execution of actions in a dynamic, real-world environment. Its goal is to take into account changes generated by unpredicted events that occur during the execution of actions . According to this approach, changes can come both from a dynamic environment and from the agent himself. Several works have been proposed in the so-called "reactive planning" field in order to address planning in a dynamic environment under different approaches (see , , , , ). Such works propose different techniques for reacting to environmental changes which may occur during the execution process. We adopt a more general approach, since we consider that, in addition, any change may occur in the agent's behavior (for any reason, e.g. following a possible user suggestion) during the execution process, pushing him to change his preferences and consequently his actions or his method for evaluating these preferences. Changes in the agent's preferences and in his evaluation methods are taken into account as revisions of three specific structures called possible plans, efficient plans and best plans. To model these structures, we use graphs inspired by the ones described in . Preferences are modeled as criteria in the multi-criteria planning problem we consider. This formalism allows us to present this planning problem as a multi-objective dynamic programming problem. Using dynamic programming in planning problems dates back to Bellman , but its use in agency theory has been limited to search algorithms (see ) or to the frame of "universal planning" algorithms (see ). Under such a perspective, the model we propose allows an agent, based on the set of possible actions to achieve a fixed goal, to express his preferences about the benefit he desires to take out (for example,