
[PDF] Top 20 Applications of Markov Decision Processes in Communication Networks : a Survey

10000 documents matching "Applications of Markov Decision Processes in Communication Networks : a Survey" were found on our website. Below are the top 20 most relevant of these documents.

Applications of Markov Decision Processes in Communication Networks : a Survey

... 101 - 54602 Villers-lès-Nancy Cedex France. Unité de recherche INRIA Rennes : IRISA, Campus universitaire de Beaulieu - 35042 Rennes Cedex France. Unité de recherche INRIA Rhône-Alpes : 65 ... View full document

55

Collision Avoidance for Unmanned Aircraft using Markov Decision Processes

... to a POMDP is a policy, or way of behaving, that selects actions in a way that takes into account both the current uncertainty about the underlying state of the system ... View full document (sketch below)

23
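
The excerpt above summarises what a POMDP solution is: a policy that picks actions from the agent's current uncertainty about the hidden state rather than from the state itself. As a minimal illustration (not the paper's method), the sketch below maintains a belief vector with a discrete Bayes filter and feeds it to a simple decision rule; the two-state transition model T, observation model Z and the 0.5 threshold are made-up numbers.

```python
import numpy as np

# Illustrative two-state POMDP (all numbers are assumptions for the sketch).
T = np.array([[[0.9, 0.1],   # T[a, s, s'] : transition probabilities given action a
               [0.2, 0.8]],
              [[0.6, 0.4],
               [0.3, 0.7]]])
Z = np.array([[0.8, 0.2],    # Z[s', o] : probability of observation o in state s'
              [0.3, 0.7]])

def belief_update(b, a, o):
    """Bayes filter: fold the action model and the observation likelihood into b."""
    predicted = b @ T[a]              # predicted next-state distribution
    updated = predicted * Z[:, o]     # weight by the observation likelihood
    return updated / updated.sum()    # renormalise

def policy(b):
    """A hypothetical decision rule acting on the belief, not on the true state."""
    return 0 if b[0] >= 0.5 else 1

b = np.array([0.5, 0.5])              # start fully uncertain
for o in [0, 1, 1]:                   # a stream of observations
    a = policy(b)
    b = belief_update(b, a, o)
    print(a, b)
```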

Constrained Markov Decision Processes with Total Expected Cost Criteria

... select a path between a source S and a destination R, one often has several ...criteria. In road traffic problems it may be the minimization of the delay as well as the ...tolls. ... View full document (sketch below)

3
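
The excerpt above motivates constrained MDPs with route choice: minimise delay between a source S and a destination R while keeping another criterion, such as tolls, under control. The sketch below illustrates that multi-criteria flavour on a made-up five-edge network, enumerating simple paths and keeping the fastest one whose total toll stays within a budget; it is not the paper's constrained-MDP formulation, and the edge weights and TOLL_BUDGET are assumptions.

```python
# Toy road network: edge -> (delay, toll). All numbers are illustrative assumptions.
edges = {
    ("S", "A"): (2.0, 1.0), ("A", "R"): (3.0, 0.0),
    ("S", "B"): (1.0, 3.0), ("B", "R"): (1.0, 2.0),
    ("A", "B"): (1.0, 0.5),
}
graph = {}
for (u, v), _ in edges.items():
    graph.setdefault(u, []).append(v)

def simple_paths(node, goal, seen=()):
    """Enumerate all simple paths from node to goal (fine for a tiny graph)."""
    if node == goal:
        yield (*seen, node)
        return
    for nxt in graph.get(node, []):
        if nxt not in seen:
            yield from simple_paths(nxt, goal, (*seen, node))

def cost(path):
    """Total (delay, toll) along a path."""
    delay = sum(edges[(u, v)][0] for u, v in zip(path, path[1:]))
    toll = sum(edges[(u, v)][1] for u, v in zip(path, path[1:]))
    return delay, toll

TOLL_BUDGET = 3.0   # constraint: total tolls must stay within this bound
feasible = [(cost(p), p) for p in simple_paths("S", "R") if cost(p)[1] <= TOLL_BUDGET]
(best_delay, best_toll), best_path = min(feasible)
print(best_path, best_delay, best_toll)
```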

The steady-state control problem for Markov decision processes

... Conclusion: In this paper, we have defined the steady-state control problem for MDP, and shown that this question is decidable for (ergodic) MDP in polynomial time, and for labeled MDP in polynomial ... View full document (sketch below)

17
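
The steady-state control problem mentioned above asks whether some policy makes the MDP's long-run state distribution hit a given target. One ingredient behind the polynomial-time claim is that, for a fixed memoryless policy on an ergodic MDP, the induced stationary distribution is the solution of a linear system. The sketch below computes it for a made-up two-state, two-action MDP; the matrices in P and the policy [0, 1] are illustrative assumptions.

```python
import numpy as np

# Transition matrices of a toy two-action MDP (illustrative numbers, not from the paper).
P = {0: np.array([[0.5, 0.5], [0.2, 0.8]]),
     1: np.array([[0.9, 0.1], [0.6, 0.4]])}

def stationary(P_pi):
    """Solve pi P = pi, sum(pi) = 1 for an ergodic chain (a plain linear system)."""
    n = P_pi.shape[0]
    A = np.vstack([P_pi.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# A fixed memoryless policy: the action taken in each state.
policy = [0, 1]
P_pi = np.array([P[policy[s]][s] for s in range(2)])
print(stationary(P_pi))   # long-run state frequencies induced by this policy
```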

Aggregating Optimistic Planning Trees for Solving Markov Decision Processes

... from A = {−3V, 0V, 3V} to segments of a piecewise control signal u, each ...0.05 s in duration, and then numerically integrating the differential equation on the constant segments using ... View full document (sketch below)

9
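
The excerpt above describes how the planner's discrete actions A = {−3V, 0V, 3V} are turned into a piecewise-constant control signal held for 0.05 s segments, with the system's differential equation integrated numerically on each constant segment. The sketch below reproduces that recipe with an explicit Euler integrator and a hypothetical second-order linear system f(x, u); the paper's actual dynamics and integration scheme are not given in the excerpt, so both are assumptions.

```python
import numpy as np

DT_SEGMENT = 0.05          # each action holds the input constant for 0.05 s
DT_INT = 0.001             # inner integration step (assumption for the sketch)
ACTIONS = (-3.0, 0.0, 3.0) # the discrete voltage set from the excerpt

def f(x, u):
    """Hypothetical dynamics dx/dt = f(x, u); stands in for the paper's model."""
    return np.array([x[1], -2.0 * x[1] - 5.0 * x[0] + u])

def step_segment(x, u):
    """Integrate the ODE over one constant-input segment with explicit Euler."""
    for _ in range(round(DT_SEGMENT / DT_INT)):
        x = x + DT_INT * f(x, u)
    return x

# Simulate a short piecewise-constant control signal.
x = np.array([0.0, 0.0])
for a in [2, 2, 0, 1]:           # indices into ACTIONS
    x = step_segment(x, ACTIONS[a])
    print(ACTIONS[a], x)
```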

Dealing with uncertainty : a comparison of robust optimization and partially observable Markov decision processes

... This means the solution to the linear program will be feasible for the robust optimization problem, because the correct number of squadrons is still available to complete ... View full document

132

Representing Markov processes as dynamic non-parametric Bayesian networks

... developments in the copula-based graph field, models accounting for time-dynamic systems generated in a systematic way are ...example, a data-driven dynamic NPBN was developed by ... View full document

14

Approximate solution methods for partially observable Markov and semi-Markov decision processes

... POMDPs in fact grows out from the same approach for discounted ...as a measure of convergence for the subgradient-based cost approximation proposed in the same ...concavity of the ... View full document

169

Efficient Policies for Stationary Possibilistic Markov Decision Processes

... described in Figure 2. It appears that BL-VI provides a very good approximation especially when increasing (l, ...I in about 90% of cases, with an (l, c) = (200, ...rate of BL-VI ... View full document

11

Efficient Policies for Stationary Possibilistic Markov Decision Processes

... Keywords: Markov Decision process, Possibility theory, lexicographic comparisons, possibilistic qualitative utilities. 1 Introduction. The classical paradigm for sequential decision making under ... View full document

12

Large Markov Decision Processes based management strategy of inland waterways in uncertain context

... to a smooth evolution with small ...conditions of navigation, there is no difficulty for the network to recover from events that lead the reaches to their HNL (see Figure 16) or to their LNL (see Figure ... View full document

12

Non-Stationary Markov Decision Processes a Worst-Case Approach using Model-Based Reinforcement Learning

... evolution of the NSMDP. They build a learning algorithm for solving POMDPs where time dependency is taken into account by weighting recently experienced transitions more than older ...consists in ... View full document (sketch below)

19
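
The excerpt above mentions handling non-stationarity by weighting recently experienced transitions more heavily than older ones. A simplified, fully observed analogue of that idea is to keep exponentially decayed transition counts, so the estimated model tracks drift; in the sketch below the DECAY factor, the state/action sizes and the sample transitions are all assumptions, not values from the paper.

```python
import numpy as np

N_STATES, N_ACTIONS = 3, 2
DECAY = 0.9   # forgetting factor: older transitions count geometrically less (assumption)

# Decayed pseudo-counts of observed transitions (s, a) -> s'.
counts = np.full((N_ACTIONS, N_STATES, N_STATES), 1e-3)

def record(s, a, s_next):
    """Down-weight every past transition, then add the new one at full weight."""
    global counts
    counts = DECAY * counts
    counts[a, s, s_next] += 1.0

def estimated_model():
    """Normalise the decayed counts into transition probabilities P[a, s, s']."""
    return counts / counts.sum(axis=2, keepdims=True)

for (s, a, s_next) in [(0, 0, 1), (1, 1, 2), (0, 0, 2), (0, 0, 2)]:
    record(s, a, s_next)
print(estimated_model()[0, 0])   # the recent (0,0)->2 transitions dominate the estimate
```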

Stochastic numerical methods for Piecewise Deterministic Markov Processes. Applications in Neuroscience

... ones in the way we construct an approximation of our original ...one of the characteristics of a PDMP is a family of vector fields indexed by its discrete component, we ... View full document (sketch below)

125
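
The excerpt above points to the defining structure of a PDMP: one vector field per value of the discrete component, with random jumps between them. The sketch below simulates a toy PDMP by flowing along the current mode's field with Euler steps and switching modes at exponentially distributed times; the two fields, the jump rates and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# One vector field per discrete mode, as the excerpt describes (illustrative dynamics).
FIELDS = {0: lambda x: -1.0 * x,        # mode 0: decay toward 0
          1: lambda x: 2.0 - x}         # mode 1: relax toward 2
JUMP_RATE = {0: 0.5, 1: 1.0}            # mode-switching intensities (assumptions)
DT = 0.01

def simulate(x, mode, t_end):
    """Euler flow along the current mode's field, with exponential mode-switch times."""
    t, path = 0.0, [(0.0, x, mode)]
    next_jump = rng.exponential(1.0 / JUMP_RATE[mode])
    while t < t_end:
        x = x + DT * FIELDS[mode](x)            # deterministic motion between jumps
        t += DT
        if t >= next_jump:                      # random jump: switch the discrete mode
            mode = 1 - mode
            next_jump = t + rng.exponential(1.0 / JUMP_RATE[mode])
        path.append((t, x, mode))
    return path

print(simulate(1.0, 0, 2.0)[-1])   # final (time, position, mode) of one trajectory
```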

On the Use of Non-Stationary Policies for Infinite-Horizon Discounted Markov Decision Processes

... significant in the usual situation when γ is close to ...ε, a surprising consequence of this result is that the problem of “computing an approximately optimal non-stationary policy” is much ... View full document

5

Decentralized Control of Partially Observable Markov Decision Processes Using Belief Space Macro-Actions

... Control of Partially Observable Markov Decision Processes using Belief Space Macro-actions. Shayegan Omidshafiei, Ali-akbar Agha-mohammadi, Christopher Amato, Jonathan ...focus of this ... View full document

9

Some Applications of Markov Additive Processes as Models in Insurance and Financial Mathematics

... for a Markov-modulated exponential Lévy model [50], written in collaboration with Romuald Hervé Momeya and published in the journal Asia-Pacific Financial ... View full document

148

Pathwise uniform value in gambling houses and Partially Observable Markov Decision Processes

... part of the literature investigates long-term MDPs, that is, MDPs which are repeated a large number of ...times. In the n-stage problem ...small). A first approach is to determine ... View full document

25

Strong Uniform Value in Gambling Houses and Partially Observable Markov Decision Processes

... contribution of this paper is to show that any finite POMDP has a strong uniform value, and consequently has a uniform value in pure ...strategies. In fact, we prove this result ... View full document

25


Incorporating Bayesian networks in Markov Decision Processes

... are a special class of Bayesian networks that can be used for modeling time series data and represent stochastic processes ...consist of a sequence of time slices (which are often ... View full document (sketch below)

11
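
The excerpt above characterises dynamic Bayesian networks as a sequence of time slices sharing the same local structure. The sketch below encodes that idea in its simplest form, a two-slice template (a prior for the first slice and a transition distribution shared by every later slice) unrolled by sampling; the probabilities are made-up numbers for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-slice DBN over a single binary variable X (illustrative numbers):
# a prior P(X_0) and a transition P(X_t | X_{t-1}) shared by every slice.
prior = np.array([0.6, 0.4])
transition = np.array([[0.7, 0.3],
                       [0.2, 0.8]])

def unroll(n_slices):
    """Sample a trajectory by repeating the two-slice template n_slices times."""
    x = rng.choice(2, p=prior)
    trajectory = [x]
    for _ in range(n_slices - 1):
        x = rng.choice(2, p=transition[x])
        trajectory.append(x)
    return trajectory

print(unroll(10))   # one sampled time series from the unrolled network
```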
