An HMM represents the dynamics of a system using a set of hidden states [11]. The transition to a state depends on the previous state and the transition probabilities. At each time instant, the system transits to a state and generates an observation according to a probability density function (Fig. 1(a)). Several generalizations of the HMM have been proposed. For example, the behavior of a system can be explained by a hidden semi-Markov model (HSMM) [12,13], in which the system can rest in a state for several time instants (resting time) (Fig. 1(b)). Alternatively, systems with several interacting components can be modeled by a coupled hidden Markov model (CHMM) [14,15]. In a CHMM, it is assumed that each observation recorded from the system is generated by a component, which can be modeled by a set of hidden states. In each
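The HMM/HSMM distinction described above can be illustrated with a toy sampler: after entering a state, an HSMM rests there for an explicit random sojourn (resting time) before transitioning, instead of re-drawing the state at every step. This is only a sketch; the state count, sojourn laws and emission densities below are invented for illustration.

```python
import random

def sample_hsmm(pi, A, sojourn, emit, T, seed=0):
    """Sample a hidden semi-Markov chain of length T: after entering a
    state, the chain rests there for a random sojourn time, emitting one
    observation per time step, before jumping to the next state."""
    rng = random.Random(seed)
    states, obs = [], []
    s = rng.choices(range(len(pi)), weights=pi)[0]
    while len(states) < T:
        d = sojourn[s](rng)                    # resting time in state s
        for _ in range(min(d, T - len(states))):
            states.append(s)
            obs.append(emit[s](rng))           # one observation per instant
        s = rng.choices(range(len(A)), weights=A[s])[0]
    return states, obs

# Two states; state 0 rests 3-5 steps, state 1 rests 1-2 steps.
pi = [1.0, 0.0]
A = [[0.0, 1.0], [1.0, 0.0]]                   # sojourns replace self-transitions
sojourn = [lambda r: r.randint(3, 5), lambda r: r.randint(1, 2)]
emit = [lambda r: r.gauss(0.0, 1.0), lambda r: r.gauss(5.0, 1.0)]
states, obs = sample_hsmm(pi, A, sojourn, emit, 20)
```

Setting every sojourn law to a geometric distribution recovers an ordinary HMM, which is precisely the implicit assumption the HSMM relaxes.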

In this work, to overcome the weaknesses of TMC-HIST, we focus on developing a new parametric TMC model that can recognize lower-limb locomotion activities using a single IMU sensor. The proposed algorithm should also be adaptive and applicable on-line, i.e., able to adjust its parameters at run-time to suit the user. Introducing a sojourn hidden state process to form a semi-Markov structure allows the hidden states X and U to remain unchanged for a while, which is consistent with the activity and gait transitions that occur during motion. The semi-Markov structure is embedded into the TMC to better mimic the real state transition properties. A multi-dimensional Gaussian mixture model (GMM) is introduced to represent the non-Gaussian conditional observation densities while also capturing the correlation of observations among the sensor axes. With the introduction of the semi-Markov structure and Gaussian mixture densities, the specific TMC model will be referred to as SemiTMC-GMM in the remainder of this paper. Because the densities are parametric, an on-line EM-based parameter learning algorithm is applied. Our claimed contributions in this paper are therefore:
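A multi-dimensional GMM observation density of the kind mentioned above can be sketched generically. This is not the paper's SemiTMC-GMM implementation, only a two-dimensional mixture density with full covariance matrices (all parameters invented), showing how correlation between sensor axes enters the density through the off-diagonal covariance terms.

```python
import math

def mvn_pdf_2d(x, mean, cov):
    """Density of a 2-D Gaussian with a full covariance matrix
    (off-diagonal terms encode correlation between sensor axes)."""
    dx = [x[0] - mean[0], x[1] - mean[1]]
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    quad = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
            + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return math.exp(-0.5 * quad) / (2.0 * math.pi * math.sqrt(det))

def gmm_pdf(x, weights, means, covs):
    """Mixture density b(x) = sum_k w_k N(x; mu_k, Sigma_k)."""
    return sum(w * mvn_pdf_2d(x, m, c)
               for w, m, c in zip(weights, means, covs))

# Two-component mixture with correlated axes (hypothetical parameters).
weights = [0.6, 0.4]
means = [[0.0, 0.0], [3.0, 3.0]]
covs = [[[1.0, 0.5], [0.5, 1.0]], [[1.0, -0.3], [-0.3, 1.0]]]
p = gmm_pdf([0.0, 0.0], weights, means, covs)
```

In an EM fit, such densities serve as the state-conditional observation laws, and the mixture weights, means and covariances are re-estimated at each M-step.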

activity chain are often neglected. Moreover, the heterogeneity across the population in travel/activity pattern formation is generally difficult to determine and hence less studied.
The methods most widely used for analyzing the effects of explanatory factors on activity durations are based on hazard models (Bhat 2000). However, most studies have focused on the duration of single activity episodes. Although these studies attempt to examine the factors affecting activity duration, they neglect the dependency between the travels/activities conducted in the activity chain. Recently, the effects of dependency on activity durations have been increasingly studied. Popkowski Leszczyc and Timmermans (2002) used conditional and unconditional parametric competing risk models to investigate the effects of sociodemographic covariates on activity duration. Their study showed that activity durations depend not only on the activity type but also on the duration of the previously conducted activity. Joly (2006) applied duration models to analyze the stability of individuals' daily travel time and found that individuals' activity patterns have significant effects on daily travel time. Ma et al. (2009) applied a multistate non-homogeneous semi-Markov model to estimate the influence of covariates on travel and activity duration sequences, finding significant dependency effects between adjoining travel and activity episodes over an individual's travel-activity chain. Regarding the correlation between activity type choice and duration, Bhat (1996b) proposed a generalized multiple-durations proportional hazard model to capture endogenously the influence of entrance/exit activity type choice on activity durations. Pendyala and Bhat (2004) applied a discrete-continuous simultaneous equation model to investigate the causal structure of activity timing and duration. Ettema et al. (1995) applied a parametric competing risk model to examine the effects of temporal constraints on activity choice, timing and duration.
They found that spatiotemporal constraints are important determinants of individuals' activity type choice, timing and the durations of activities in the activity chain.

We introduce a new model for describing the fluctuations of a tick-by-tick single asset price. Our model is based on Markov renewal processes. We consider a point process associated with the timestamps of the price jumps, and marks associated with the price increments. By modeling the marks with a suitable Markov chain, we can reproduce the strong mean reversion of price returns known as microstructure noise. Moreover, by using Markov renewal processes, we can model the presence of spikes in the intensity of market activity, i.e. volatility clustering, and account for the dependence between price increments and jump times. We also provide simple parametric and nonparametric statistical procedures for the estimation of our model. We obtain a closed-form formula for the mean signature plot and show the diffusive behavior of our model in the large-scale limit. We illustrate our results by numerical simulations and find that our model is consistent with empirical data on the Euribor future.
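The mechanism the abstract describes can be sketched with a minimal Markov renewal simulation: price increments (marks) form a two-state Markov chain whose strong tendency to alternate produces mean reversion, and waiting times between jumps may depend on the last mark. The intensities, tick size and reversal probability below are hypothetical, not the paper's estimates.

```python
import random

def simulate_price(T, p_revert=0.8, seed=1):
    """Markov renewal sketch of a tick price path on [0, T]: marks
    (+/- one half-tick) alternate with probability p_revert, which
    reproduces microstructure-noise-style mean reversion; the waiting
    time to the next jump depends on the current mark."""
    rng = random.Random(seed)
    t, price, mark = 0.0, 100.0, rng.choice([-1, 1])
    times, prices = [t], [price]
    while t < T:
        # duration until the next jump, conditional on the current mark
        rate = 2.0 if mark == 1 else 1.0       # hypothetical intensities
        t += rng.expovariate(rate)
        # mean reversion: the next increment flips sign with prob p_revert
        mark = -mark if rng.random() < p_revert else mark
        price += 0.5 * mark                    # half-tick moves
        times.append(t)
        prices.append(price)
    return times, prices

times, prices = simulate_price(50.0)
```

Estimating the mark-transition probabilities and the conditional jump-time distributions from data gives the kind of parametric procedure the abstract alludes to; a signature plot can then be computed from the simulated path for comparison.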

A major drawback of hidden Markov models is their inflexible description of the time spent in a given state, as sojourn time (state occupancy) distributions are implicitly geometric. To overcome this limitation, a semi-Markovian framework may be considered in which parametric sojourn time distributions are incorporated into the model, or in which states are replaced by series-parallel networks of states with a common observation distribution; see Guedon (2005), Langrock and Zucchini (2011) and references therein. The sojourn time distributions of the macro-states defined in this way are built from the implicit geometric sojourn time distributions of the elementary Markovian states. These geometric distributions are combined either by convolution for states in series or by mixture for (series of) states in parallel. Guedon (2005) showed that hidden Markov models with macro-states are not a valid alternative to hidden semi-Markov models because of higher algorithmic space complexity and strong constraints in the
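The convolution and mixture constructions described above can be computed directly on truncated geometric pmfs; the sketch below (with invented parameters) shows that two identical geometric states in series yield a negative-binomial-shaped sojourn law whose mode is no longer at 1.

```python
def geometric_pmf(p, kmax):
    """Implicit sojourn pmf of a Markovian state whose exit
    probability is p: P(D = k) = p * (1-p)**(k-1), k >= 1."""
    return [0.0] + [p * (1 - p) ** (k - 1) for k in range(1, kmax + 1)]

def convolve(f, g):
    """Sojourn pmf of two states in series: D = D1 + D2."""
    h = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

def mixture(f, g, w):
    """Sojourn pmf of two branches in parallel, entered with prob w / 1-w."""
    n = max(len(f), len(g))
    f = f + [0.0] * (n - len(f))
    g = g + [0.0] * (n - len(g))
    return [w * a + (1 - w) * b for a, b in zip(f, g)]

# Two identical geometric states in series (negative binomial sojourn).
series = convolve(geometric_pmf(0.5, 30), geometric_pmf(0.5, 30))
# Parallel branches: a short-sojourn and a long-sojourn alternative.
parallel = mixture(geometric_pmf(0.6, 30), geometric_pmf(0.3, 30), 0.4)
```

For p = 0.5, the series sojourn pmf puts equal mass 0.25 on durations 2 and 3 and none on duration 1, a shape no single geometric distribution can produce.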

Comparison of the estimated Gaussian hidden semi-Markov chain (GHSMC) parameters (i.e. where the influence of covariates and the inter-individual heterogeneity are not taken into account) with the estimated semi-Markov switching linear mixed model (SMS-LMM) parameters (state occupancy distributions and marginal observation distributions). The regression parameters, the cumulative rainfall effect and the variability decomposition are given.

The estimation algorithms proposed in this paper can be directly transposed to other families of hidden Markov models, such as hidden Markov tree models; see Durand et al. (2005) and references therein. Another interesting direction for further research would be to develop the statistical methodology for semi-Markov switching generalized linear mixed models in order to handle non-normally distributed response variables (for instance, the number of growth units, apex death/life, or the non-flowering/flowering character in the plant architecture context). Since the conditional expectation of the random effects given the state sequences cannot be derived analytically, the proposed MCEM-like algorithm for the semi-Markov switching linear mixed model cannot be transposed to the case of non-normally distributed observed data, and other conditional restoration steps, for instance based on a Metropolis-Hastings algorithm, have to be derived for the random effects.

Chaubert-Pereira, F. et al.
the observations are assumed to be conditionally independent given the non-observable states and the random effects. The proposed MCEM-like algorithm can therefore be directly transposed to the SMS-LMM. Given the random effects, the state sequences are sampled using the "forward-backward" algorithm proposed by Guédon (2007). Given a state sequence, the random effects are predicted as previously described. The underlying semi-Markov chain parameters and the linear mixed model parameters are obtained by maximizing the Monte Carlo approximation of the complete-data log-likelihood.

foliar organs of an offspring shoot). The structure of the estimated hidden semi-Markov chain is represented in Figure 10: only the transitions whose probability is greater than 0.03 are represented. The dotted edges correspond to the less probable transitions, while the dotted vertices correspond to the less probable states. The underlying semi-Markov chain is composed of two transient states followed by a five-state recurrent class. An interpretation is associated with each state, summarizing the combination of the estimated observation probabilities. The first transient state corresponds to the initial transient phases for both variables (before rank 11), while the second transient state corresponds to the end of the transient phase for the flowering variable (see Figure 11). The two less probable states in the recurrent class are the direct expression of biological hypotheses and were defined a priori at the specification stage by appropriate constraints on the model parameters: the 'resting' state (unbranched, non-flowered) corresponds to zones of slowdown in the growth of the parent shoot. The immediate branching state corresponds to a rare event in this context; since immediate branching follows very different rules compared to one-year-delayed branching, these two types of branching should not be mixed in a given state.

Brice Olivier, Anne Guérin-Dugué, Jean-Baptiste Durand
topics. In MR texts, the concept of a trigger word is not clearly defined, since texts may be more or less related to the topic and may contain both incongruent and target words. Thus, the two words considered were those with the highest and the lowest cosine. Since HSMC states are random and hidden, the times of transitions are uncertain. Thus, instead of considering whether or not a transition occurs at "trigger words", the effect of the distance of transitions to "trigger words" was measured in numbers of fixations, focusing on "trigger words" with the lowest distance to transitions. Its effect on transition probabilities was assessed using regression models. Firstly, the frequencies of the distances associated with each incoming phase (among every possible distance for that phase) were modelled with linear mixed regressions, using distance, text type and phase as predictors, with subjects as random effects. Secondly, the binary random variable corresponding to the occurrence or not of a transition at each possible distance of a fixation to a "trigger word" was modelled with generalized linear mixed regressions. Binomial distributions were considered, using the canonical link function and the same three predictors as above. In both approaches, models with interactions of order 2 and 3 between predictors were estimated, in addition to models without interaction. Models were compared using BIC. The model with minimal BIC (referred to as M1) was then used to assess the significance of the random subject effects, by comparing its BIC with that of a model without random effects. M1 was also compared with the model obtained by removing distance as a predictor (referred to as M0). The justification for using both approaches (linear models on frequencies or GLMMs on binary variables) was twofold: firstly, GLMMs easily suffer from lack of convergence for high-order interactions, and thus some of these models cannot be compared.
Secondly, the linear assumptions on frequencies seemed reasonable given the shape of the cloud of points (see Figure 6).
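The BIC comparison used above follows the standard definition, BIC = k ln(n) − 2 ln(L), with lower values preferred. The sketch below uses invented log-likelihoods and parameter counts, not the study's fits, purely to show the mechanics of comparing M1 (with distance as a predictor) against M0 (without it).

```python
import math

def bic(loglik, n_params, n_obs):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L); lower is better."""
    return n_params * math.log(n_obs) - 2.0 * loglik

# Hypothetical fits: M1 includes distance as a predictor, M0 does not.
n_obs = 500
bic_m1 = bic(loglik=-612.4, n_params=8, n_obs=n_obs)
bic_m0 = bic(loglik=-655.0, n_params=7, n_obs=n_obs)
best = "M1" if bic_m1 < bic_m0 else "M0"
```

With these illustrative numbers, M1's better fit outweighs its extra parameter, so BIC selects M1; the penalty term k ln(n) is what keeps the higher-order interaction models from winning by overfitting.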

On-line apnea-bradycardia detection using hidden semi-Markov models
Miguel Altuve, Student Member, IEEE, Guy Carrault, Alain Beuchée, Patrick Pladys and Alfredo I. Hernández
Abstract— In this work, we propose a detection method that exploits not only the instantaneous values but also the intrinsic dynamics of the RR series for the detection of apnea-bradycardia episodes in preterm infants. A hidden semi-Markov model is proposed to represent and characterize the temporal evolution of the observed RR series, and different pre-processing methods for these series are investigated. This approach is quantitatively evaluated on synthetic and real signals, the latter acquired in neonatal intensive care units (NICU). Compared to two conventional detectors used in the NICU, our best detector shows an improvement of around 13% in sensitivity and 7% in specificity. Furthermore, a reduced detection delay of approximately 3 seconds is obtained with respect to conventional detectors.

The "Forest" state therefore requires special treatment. In the near future we will develop a semi-Markov model in which the sojourn time in state F better matches the data set and is thus no longer geometric.
The long-time behavior of the inferred model is dubious, as the present data set is relatively limited in time (22 years). This data set implies a relatively short time scale on which some rare transitions, such as forest regeneration, are not observed. Note that the Bayesian approach has an advantage over the likelihood approach in that it allows prior knowledge about these rare, unobserved transitions to be incorporated. The likelihood approach sets their probabilities to zero, whereas the Bayesian approach incorporates a priori knowledge and assigns them positive probabilities. A new database is currently being developed by the IRD. It will cover a longer period of time and a greater number of parcels, and it will also allow a more detailed state space comprising more than four states to be considered. On a longer time scale, it is reasonable to suppose that F and B have long sojourn time distributions, the one associated with F being longer than the one associated with B. Moreover, B will no longer be absorbing, and forest regeneration will become possible, i.e. the transition from J to F will occur. The associated model will present multi-scale properties, namely slow and fast components in the dynamics, which will be of interest.
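The contrast drawn above between the likelihood and Bayesian treatment of unobserved transitions can be made concrete with transition counts and a Dirichlet prior: the maximum-likelihood estimate zeroes out any transition never seen in the data, while the Dirichlet posterior mean keeps it at a small positive probability. The counts and prior weights below are hypothetical, not the paper's data.

```python
def mle_row(counts):
    """Maximum likelihood estimate of one transition-matrix row:
    unobserved transitions get probability exactly zero."""
    total = sum(counts)
    return [c / total for c in counts]

def bayes_row(counts, alpha):
    """Posterior mean under a Dirichlet(alpha) prior: prior knowledge
    keeps rare, unobserved transitions at small positive probabilities."""
    total = sum(counts) + sum(alpha)
    return [(c + a) / total for c, a in zip(counts, alpha)]

# Hypothetical counts of transitions out of state J; the J -> F
# transition (forest regeneration) was never observed in 22 years.
counts = [40, 10, 0]                         # J->J, J->B, J->F
p_mle = mle_row(counts)                      # J->F estimated as 0
p_bayes = bayes_row(counts, [1.0, 1.0, 0.5]) # J->F stays positive
```

The pseudo-count 0.5 on J → F encodes the a priori belief that regeneration is possible but rare; as more data accrue, the posterior mean converges to the empirical frequencies anyway.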

The current approach provides a starting framework upon which to build an advanced assessment of resilience in water systems. First, the current approach employs a binary fail/repair status for each network element; further work should explore the representation of partially degraded states to more fully represent the operation of the system. The current application uses static network demands to evaluate resilience well into the future. A model that incorporates dynamically changing demand and future growth scenarios will contribute to the understanding of how efficiency and end-user programs may affect system resilience. Additionally, the characterization of specific outages and failures needs to be introduced into the framework. For example, if an extreme event could cause all desalination plants to be shut down simultaneously, the

6 Conclusion and discussions
In this paper, we introduced Hidden-Semi-Markov-Mode Markov Decision Processes (HS3MDPs), a new generalization of Hidden-Mode Markov Decision Processes (HM-MDPs) that handles non-stationary environments in a more natural and efficient way. We proposed to use the Partially Observable Monte-Carlo Planning (POMCP) algorithm as a solving method for HS3MDPs. As a subclass of our model, HM-MDPs can be solved efficiently using the same methods. However, this algorithm does not solve large-sized problems modeled with HS3MDPs in the most efficient way. We therefore developed two adaptations of POMCP to improve its performance. The first adaptation exploits the structure of HS3MDPs to alleviate particle deprivation. The second adaptation uses an exact representation of the belief state to reach better results with fewer simulations than the other two methods. Experimental results on various domains from the literature show that these adaptations significantly improve performance.

LCV incurs a very slight additional risk (of order 10⁻⁴) compared to knowing the true model, but a lower risk compared to choosing the wrong model; in the latter case the additional risk is of order 10⁻².
This result must not be misinterpreted. First, the discrimination properties of LCV depend on many parameters, and particularly on the quantity of information available in the samples. Second, and even more importantly, the aim of estimator choice is not to choose the right model but to choose the best estimator. The choice between the two structures depends on how "far" apart the two models are. If the models are "close", it is of course more difficult to discriminate between them, but at the same time it becomes less important to choose the right one. For instance, the homogeneous Markov model belongs to both structures, so by small perturbations of this model it is possible to construct two models, one Markov and one semi-Markov, which are very near in terms of, for instance, the Kullback-Leibler divergence.
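The closing remark above can be illustrated numerically: perturbing the geometric sojourn law of a homogeneous Markov model by a small amount yields a semi-Markov sojourn law at tiny Kullback-Leibler divergence from it. The truncation length, exit probabilities and perturbation size below are arbitrary choices for the sketch.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) between two pmfs."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def geometric(p, kmax):
    """Truncated geometric sojourn law of a homogeneous Markov state."""
    return [p * (1 - p) ** (k - 1) for k in range(1, kmax + 1)]

def normalize(f):
    s = sum(f)
    return [x / s for x in f]

# Sojourn law of a Markov state vs. a semi-Markov law obtained by a
# small perturbation of it (eps = 0 recovers the Markov model exactly).
eps = 0.05
markov = normalize(geometric(0.3, 50))
semi_markov = normalize([(1 - eps) * a + eps * b
                         for a, b in zip(markov, normalize(geometric(0.35, 50)))])
divergence = kl(markov, semi_markov)
```

Since the divergence shrinks with eps, the two structures become arbitrarily hard to discriminate; but by the same token, the risk incurred by picking the "wrong" one also vanishes, which is exactly the point made in the excerpt.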

These potential limitations and needed improvements to the HDP-HMM motivate this investigation into explicit-duration semi-Markov modeling, which has a history of success in the parametric (and usually non-Bayesian) setting. We combine semi-Markovian ideas with the HDP-HMM to construct a general class of models that allows for both Bayesian nonparametric inference of state complexity and general duration distributions. In addition, the sampling techniques we develop for the Hierarchical Dirichlet Process Hidden semi-Markov Model (HDP-HSMM) provide new approaches to inference in HDP-HMMs that can avoid some of the difficulties resulting in slow mixing rates. We demonstrate the applicability of our models and algorithms on both synthetic and real data sets.

Relation to Discretized Approximations and Model Approximations
Our lower cost approximation approach for average cost POMDPs in fact grows out of the same approach for discounted POMDPs. There, several discretized or continuous lower approximation schemes are known. The first was proposed by Lovejoy [Lov91] as a measure of convergence for the subgradient-based cost approximation proposed in the same paper. Lovejoy's lower bound was later improved by Zhou and Hansen [ZH01], who also proposed it as the approximate cost-to-go function for suboptimal controls. These lower bounds are based on the concavity of the optimal discounted cost functions. To reduce the computational complexity of incremental pruning, an LP-based value iteration algorithm for discounted POMDPs ([LCK96, Cas98, ZL97, CLZ97]), Zhang and Liu [ZL97] proposed a continuous approximation scheme, the "region-observable" POMDP. To derive approximation schemes in the "region-observable" POMDP, one assumes that a subset of the state space containing the true state would be revealed to the controller by a fictitious "information oracle." A different design of the partition of the state space, called "region systems", gives a different approximating POMDP, and the class of approximating processes can range from the completely observable MDP to the POMDP itself. Prior to Zhang and Liu's work, the approximation based on the value of the completely observable MDP had also been proposed by Littman, Cassandra, and Kaelbling [LCK95] to tackle large problems.


7 Concluding remarks
Macro-states should not be considered a valid alternative to semi-Markovian states for the modeling of short or medium-size homogeneous zones, as shown in Section 3. For long zones, Markovian states are mandatory because of algorithmic complexity constraints. Nevertheless, the shape of the implicit geometric state occupancy distribution may be too constraining, and to remedy this shortcoming, macro-states combining Markovian states with semi-Markovian states may be included in hidden hybrid Markov/semi-Markov chains. A zone of highly variable length, for instance corresponding to introns in DNA sequences (see Kulp et al. (1996) and Burge and Karlin (1997)), can be modeled by a series-parallel network of Markovian and semi-Markovian states with a common observation distribution. This point is illustrated by the example in Fig. 8, where the macro-state is composed of a 'degenerated' semi-Markovian state with a fixed sojourn time (to model the minimum sojourn time spent in the macro-state) followed by two elementary states in parallel: a Markovian state for long zones and a semi-Markovian state for shorter zones. Hence, Markovian states, semi-Markovian states and macro-states (combining Markovian states with semi-Markovian states) are the building blocks of flexible state processes, with precise guidelines and algorithmic solutions for their combination. The algorithms described in Sections 4.1 and 5 still apply in the case of macro-states, the only minor modification being the management of tying constraints within macro-states for the re-estimation of the observation distributions. This point of view is in accordance with the development of very flexible hidden Markov models that can also incorporate various sub-models as output processes; see Burge (1997) and Burge and Karlin (1997).

In this section, we specialize our results to a particular instance of accumulation. Our application is inspired by fluid embedding [8], a technique commonly used with MAPs to eliminate phase-type jumps in one direction and resort to the theory of MAPs with one-sided jumps. More precisely, as shown in Figure 1, if the jumps in one direction are phase-type, they can be replaced by linear stretches of slope one. If E is the state space of the original MAP, the auxiliary MAP (after fluid embedding) has an augmented state space E′ = E ∪ {1, . . . , m}, where m is the number of phases of the phase-type jumps. Thus, for an arbitrary fixed time τ in the original model, it holds that τ = ∫ τ

5 Conclusion and Discussion
We have introduced a model of a probabilistic process allowing concurrency of local components. We have distinguished two properties: the Markov property, which extends the Markov property of usual, sequential processes, and the local independence property. The latter property is specific to our model, since it is a condition on the relative independence of local components and is thus not applicable to sequential processes, which lack concurrency. The model we consider has the same basic ingredients as any probabilistic model: a space of trajectories, on top of which we construct a probability measure with particular properties. However, the meaning of "trajectory" is quite different in our case: instead of considering sequences of successive states, a trajectory consists of a partial order of local states. Hence, insisting on a state space that takes into account the distributed character of the system induces a distributed property for time as well. We detail this point below.
