
[PDF] Top 20 results for "The approach in Markov decision processes revisited"

10,000 documents matching "The approach in Markov decision processes revisited" were found on our website. Listed below are the 20 most relevant.

The approach in Markov decision processes revisited

... The multidisciplinary open archive HAL is intended for the deposit and dissemination of research-level scientific documents, whether published or not, originating from teaching and research institutions ...


Collision Avoidance for Unmanned Aircraft using Markov Decision Processes

... of the flight dynamics, intruder behavior, and sensor characteristics and attempt to optimize the avoidance strategy so that a predefined cost function is minimized. The cost function could take ...
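The excerpt above describes casting collision avoidance as an MDP whose avoidance strategy minimizes a predefined expected cost. As a minimal illustrative sketch only (a generic finite MDP with made-up transition and cost arrays, not the paper's aircraft model), cost-minimizing value iteration in Python looks like this:

import numpy as np

# Hypothetical finite MDP: P[a, s] is a distribution over next states, C[s, a] an immediate cost.
S, A, gamma = 4, 2, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(A, S))        # illustrative transition kernel
C = rng.uniform(0.0, 1.0, size=(S, A))            # illustrative cost function

V = np.zeros(S)
for _ in range(1000):                             # value iteration on expected discounted cost
    Q = C + gamma * np.einsum("ast,t->sa", P, V)  # Q[s, a] = C[s, a] + gamma * sum_t P[a, s, t] V[t]
    V_new = Q.min(axis=1)                         # minimize over actions
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
policy = Q.argmin(axis=1)                         # cost-minimizing stationary policy

In the paper's setting the cost function would encode collision risk and maneuvering penalties; here it is a random placeholder.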


Constrained Markov Decision Processes with Total Expected Cost Criteria

... compute the optimal value and an optimal stationary policy for ... available in [1], but required the strong assumption that s(β, u) is finite for any ... excludes the shortest path problem ...
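A standard way to compute the optimal value and an optimal stationary policy of a constrained MDP is a linear program over occupation measures. The sketch below only illustrates that generic technique on a made-up discounted instance, not the formulation of this paper (which treats the total expected cost criterion; dropping the discount requires exactly the kind of transience assumption the excerpt alludes to):

import numpy as np
from scipy.optimize import linprog

# Toy constrained MDP: minimize expected discounted cost c subject to a bound on cost d.
S, A, gamma = 3, 2, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a]: distribution over next states (illustrative)
c = rng.uniform(size=(S, A))                 # objective cost
d = rng.uniform(size=(S, A))                 # constrained cost
mu0 = np.full(S, 1.0 / S)                    # initial state distribution
budget = d.max() / (1.0 - gamma)             # loose budget so this toy instance is feasible

# Variables: occupation measure x[s, a] >= 0, flattened to length S * A.
# Flow constraints: sum_a x[s', a] - gamma * sum_{s, a} P[s, a, s'] x[s, a] = mu0[s'].
A_eq = np.zeros((S, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = (1.0 if s == sp else 0.0) - gamma * P[s, a, sp]

res = linprog(c.reshape(-1), A_ub=d.reshape(1, -1), b_ub=[budget],
              A_eq=A_eq, b_eq=mu0, bounds=(0, None))
x = res.x.reshape(S, A)
policy = x / x.sum(axis=1, keepdims=True)    # optimal stationary (possibly randomized) policy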


A Learning Design Recommendation System Based on Markov Decision Processes

... after the transition a, ‖TS(Teacher), TS(s′)‖ is a distance factor between the teacher's teaching styles and the learning object s′ teaching ... between the learning styles of a learner or a ...


DetH*: Approximate Hierarchical Solution of Large Markov Decision Processes

... treat the MDP as a single problem, but find more compact [Sanner and McAllester, 2005; Sanner et ... ours in that it decomposes a large MDP based on the connectivity of the ... and, in ...


Smart Sampling for Lightweight Verification of Markov Decision Processes

... [6] the authors present learning algorithms to bound the maximum probability of reachability properties of ... MDPs. The algorithms work by refining upper and lower bounds associated to individual ...
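The bound-refinement idea in the excerpt can be illustrated with a bounded (interval) value iteration for maximum reachability on a toy MDP. This is a generic sketch of that technique, not the authors' smart-sampling algorithm; the transition kernel and target set are placeholders:

import numpy as np

S, A = 5, 2
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a]: distribution over next states (illustrative)
target = {4}                                 # goal states for the reachability property

L = np.zeros(S)                              # lower bounds on Pr_max(reach target)
U = np.ones(S)                               # upper bounds
for s in target:
    L[s] = U[s] = 1.0

for _ in range(500):
    # Apply the max-reachability Bellman operator to both bound vectors.
    L_new = np.einsum("sat,t->sa", P, L).max(axis=1)
    U_new = np.einsum("sat,t->sa", P, U).max(axis=1)
    for s in target:
        L_new[s] = U_new[s] = 1.0
    L, U = np.maximum(L, L_new), np.minimum(U, U_new)
    if np.max(U - L) < 1e-6:                 # bounds have (numerically) met
        break
# L converges to Pr_max from below; U is a valid upper bound at every iteration
# (exact convergence of U in general needs end-component preprocessing, omitted here).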


Strong Uniform Value in Gambling Houses and Partially Observable Markov Decision Processes

... When the state space and action sets are finite, Blackwell [6] has proved the existence of a pure strategy that is optimal for every discount factor close to 0, and one can deduce that the strong ...
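In the convention suggested by the excerpt (the discount factor λ weights the current stage, so "close to 0" means patient evaluation), Blackwell's result for finite state and action sets can be paraphrased as follows, where γ_λ(s, σ) denotes the λ-discounted payoff of strategy σ from initial state s. This is an illustrative restatement, not a quotation from the paper:

\exists\, \sigma^{*} \text{ pure},\ \exists\, \lambda_{0} \in (0,1] \ \text{such that}\ \forall \lambda \in (0,\lambda_{0}],\ \forall s:\qquad
\gamma_{\lambda}(s,\sigma^{*}) \;=\; v_{\lambda}(s) \;:=\; \sup_{\sigma}\ \mathbb{E}^{\sigma}_{s}\Big[\sum_{t\ge 1} \lambda\,(1-\lambda)^{t-1}\, r(s_{t},a_{t})\Big].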


Approximate solution methods for partially observable Markov and semi-Markov decision processes

... Algorithms. In the last part of this thesis (Chapters 10 and 11) we consider approximation algorithms for finite space POMDPs and MDPs under the reinforcement learning ... from the previous ...


Non-Stationary Markov Decision Processes a Worst-Case Approach using Model-Based Reinforcement Learning

... to the right since MDP 0 does not capture this risk. As a result, the  = 0 case reflects a favorable evolution for DP-snapshot and a bad one for ... RATS. The opposite occurs with  = 1, where ...


Aggregating Optimistic Planning Trees for Solving Markov Decision Processes

... on the inverted pendulum benchmark problem, showing the sum of discounted rewards for simulations of 50 time steps. The algorithms are compared for several budgets. In the cases of ...
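The performance measure mentioned here, the sum of discounted rewards over 50 time steps, is the truncated discounted return; as a trivial sketch (the discount factor and rewards below are placeholders):

import numpy as np

gamma = 0.95                                           # illustrative discount factor
rewards = np.random.default_rng(4).uniform(size=50)    # rewards observed over 50 simulated steps
discounted_return = float(np.sum(gamma ** np.arange(50) * rewards))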


Efficient Policies for Stationary Possibilistic Markov Decision Processes

... and the possibility degree of the other one is uniformly fired in ... generated. The two algorithms are compared ... Success, the percentage of optimal solutions provided by Bounded value ...


Markov Decision Petri Net and Markov Decision Well-Formed Net Formalisms

... i.e., in our framework, many components may have a similar ... define Markov Decision Well-formed nets (MDWN) similarly as we do for ... MDPNs. The semantics of a model is then easily obtained by ...


Large Markov Decision Processes based management strategy of inland waterways in uncertain context

... to the emission of greenhouse gas (GHG). The latest IPCC report [1] indicates that anthropogenic GHG emissions "came by 11% from transport" from 2000 to ... measures in the transport sector. ...


On the fastest finite Markov processes

... of the paper is as follows. The above results (A) and (B) are proved in the next section via a dynamic programming approach, which also provides an alternative proof of the ...


Markov concurrent processes

... The approach we consider in this paper is based on a treatment of concurrency in a more structural ... evolves in the usual way, and is thus rendered as a sequence of random ...


Approximate Policy Iteration for Generalized Semi-Markov Decision Processes: an Improved Algorithm

... To summarize, the improvement of the current policy is performed online: for each visited state starting in s0 we perform one Bellman backup using the value function evaluation from the ...
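The excerpt describes online policy improvement by a single Bellman backup per visited state, using the value function of the previously evaluated policy. A minimal generic sketch of that step (a plain finite MDP with placeholder arrays; not the paper's GSMDP setting or simulation machinery):

import numpy as np

S, A, gamma = 6, 3, 0.95
rng = np.random.default_rng(3)
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel (illustrative)
R = rng.uniform(size=(S, A))                 # expected rewards (illustrative)
V_prev = rng.uniform(size=S)                 # stand-in for the previous policy's value function

def improved_action(s):
    # One Bellman backup at state s: argmax_a R[s, a] + gamma * E[V_prev(next state)]
    q = R[s] + gamma * P[s] @ V_prev
    return int(np.argmax(q))

s = 0                                        # start in s0 and improve online along the visited states
for _ in range(20):
    a = improved_action(s)
    s = rng.choice(S, p=P[s, a])             # simulated environment transition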


Pilot Allocation and Receive Antenna Selection: A Markov Decision Theoretic Approach

... model in this work consists of a transmitter with a single antenna and a receiver with N antenna elements. The receiver has a single RF chain, so it needs to decide on the antenna with which it ...


Decentralized Control of Partially Observable Markov Decision Processes Using Belief Space Macro-Actions

... Observable Markov Decision Processes using Belief Space Macro-actions. Shayegan Omidshafiei, Ali-akbar Agha-mohammadi, Christopher Amato, Jonathan ... Abstract: The focus of this paper is on ...



Pathwise uniform value in gambling houses and Partially Observable Markov Decision Processes

... situations, the decision-maker may not be perfectly informed of the current state ... if the state variable represents a resource stock (like the amount of oil in an oil field), ...

