
[PDF] Top 20 Lexicographic refinements in stationary possibilistic Markov Decision Processes

10,000 documents matching "Lexicographic refinements in stationary possibilistic Markov Decision Processes" were found on our website. Below are the top 20 most relevant results.

Lexicographic refinements in stationary possibilistic Markov Decision Processes

... of lexicographic refinements to finite horizon possibilistic Markov decision processes and proposes a value iteration algorithm that looks for policies optimal with respect to ... View full document

22

Lexicographic refinements in possibilistic decision trees and finite-horizon Markov decision processes

... to decision under uncertainty have been considered by [15–20]. In particular, possibilistic DTs and possibilistic MDPs (see [1, 21–25]) use a common ordinal scale to model both the ... View full document

27

Lexicographic refinements in possibilistic decision trees and finite-horizon Markov decision processes

... for possibilistic MDPs, since no unique stochastic transition function corresponds to a possibility distribution [42] ... the possibilistic decision tree (provided that both transition possibilities ... View full document

26

Lexicographic refinements in possibilistic sequential decision-making models

... exposed in Chapter 3, is how to overcome the drowning effect of qualitative utilities when comparing ... two lexicographic criteria that compare policies based on their corresponding matrices of ... define ... View full document

131

Efficient Policies for Stationary Possibilistic Markov Decision Processes

... Bounded lexicographic value iteration vs. unbounded lexicographic value iteration and the possibility degree of the other one is uniformly fired in ... criterion in its full ... View full document

12

Efficient Policies for Stationary Possibilistic Markov Decision Processes

... Bounded lexicographic value iteration vs. unbounded lexicographic value iteration and the possibility degree of the other one is uniformly fired in ... criterion in its full ... View full document

11

Lexicographic refinements in possibilistic decision trees

... solve possibilistic Markov Decision ... of decision trees (one for each ... the lexicographic approach to possibilistic MDPs may lead to algorithms which are exponential in ... View full document

8

Lexicographic refinements in possibilistic decision trees

... solve possibilistic Markov Decision ... of decision trees (one for each ... the lexicographic approach to possibilistic MDPs may lead to algorithms which are exponential in ... View full document

9

Constrained Markov Decision Processes with Total Expected Cost Criteria

... criteria. In road traffic problems it may be the minimization of the delay as well as the ... tolls. In communication networks it may be the minimization of delays, of loss probabilities of packets, of ... View full document

3

A Learning Design Recommendation System Based on Markov Decision Processes

... Usually in MDP, a policy π is defined based on the reward function to help the decision maker, in our case our prediction method, to make the right ... View full document

9

The steady-state control problem for Markov decision processes

... In this paper, we are interested in control problems for Markov decision processes (MDP) and partially observable Markov decision processes (POMDP) with respect to ... View full document

17

Collision Avoidance for Unmanned Aircraft using Markov Decision Processes

... can in fact generate non-trivial (if not superior) collision avoidance strategies that can compete with hand-crafted ... sacrifice in terms of more maneuvering, but it results in low risk ratios that ... View full document

23

Aggregating Optimistic Planning Trees for Solving Markov Decision Processes

... planning in Markov decision processes using a randomized simulator, under a budget ... exploration). In the decision-making step of the algorithm, the individual trees are ... View full document

9

Smart Sampling for Lightweight Verification of Markov Decision Processes

... not in general converge to the true maximum (the number of state-actions does not actually indicate scheduler probability), but is sometimes successful because the outer loop randomly explores local ... View full document

14

DetH*: Approximate Hierarchical Solution of Large Markov Decision Processes

... However, we found that splitting macro-states only on i-reachability created far too many macro-states if there were many variables in the domain whose values were not well-connected. We use the check on line 10 ... View full document

9

Approximate solution methods for partially observable Markov and semi-Markov decision processes

... Algorithms In the last part of this thesis (Chapters 10 and 11) we consider approximation algorithms for finite space POMDPs and MDPs under the reinforcement learning ... – in that a model of the problem is ... View full document

169

Strong Uniform Value in Gambling Houses and Partially Observable Markov Decision Processes

... exists. In many situations, the decision-maker may not be perfectly informed of the current state ... oil in an oil field), the quantity left, which represents the state, can be evaluated, but is not ... View full document

25

Large Markov Decision Processes based management strategy of inland waterways in uncertain context

... 17). In both cases, the recovery is pretty quick in only 2 time ... than in the normal conditions, as the traffic is more important, it is easier to move large volumes of water from one reach to ... View full document

12

Pathwise uniform value in gambling houses and Partially Observable Markov Decision Processes

... of Markov Decision Process (or Controlled Markov chain) was introduced by Bellman [4] and has been extensively studied since then ... In this model, at the beginning of every stage, a ... View full document

25

Strong Uniform Value in Gambling Houses and Partially Observable Markov Decision Processes

... value in pure strategies. In fact, we prove this result in a much more general framework, as we shall see ... value in behavior strategies in POMDPs) has been generalized in ... View full document

26
