
[PDF] Top 20 results for "Lightweight Verification of Markov Decision Processes with Rewards"

10,000 documents matching "Lightweight Verification of Markov Decision Processes with Rewards" were found on our website. Below are the top 20 most relevant matches.

Lightweight Verification of Markov Decision Processes with Rewards

... collection of Monte Carlo methods that approximate the results of probabilistic model ... exponentially with the number of interacting variables in the model ... cost of (effectively) ... View full document

16

Scalable Verification of Markov Decision Processes

... Introduction Markov decision processes (MDP) describe systems that interleave nondeterministic actions and probabilistic transitions, possibly with rewards or costs assigned to the ... View full document

13

Smart Sampling for Lightweight Verification of Markov Decision Processes

... half of all schedulers for a given MDP and property are “near optimal”, ... probability of satisfying the property that is deemed adequately close to the true ... half of the enumeration, it will be ... View full document

14

Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes

... best of our knowledge, the only regret guarantees available in the literature for this setting are [17, 18, ... counter-example of Osband and Roy [20] seems to invalidate the result of Abbasi-Yadkori ... View full document

28

Optimization of Probabilistic Argumentation With Markov Decision Models

... 1 Introduction Argumentation is by essence a dialectical process, which involves different parties exchanging pieces of information. In a persuasion dialogue, agents have conflicting goals and try to convince ... View full document

8

Adaptive Planning for Markov Decision Processes with Uncertain Transition Models via Incremental Feature Dependency Discovery

... problem of finding a good representation to a certain degree since the expressiveness of the representation grows as more samples are ... class of motion ... Gaussian processes and kernel ... View full document

17

Large Markov Decision Processes based management strategy of inland waterways in uncertain context

... evolution with small ... conditions of navigation, there is no difficulty for the network to recover from events that lead the reaches to their HNL (see Figure 16) or to their LNL (see Figure ... volumes ... View full document

12

Collision Avoidance for Unmanned Aircraft using Markov Decision Processes

... compete with hand-crafted ones. The EO/IR sensor, with its limited field-of-view and lack of horizontal localization ability, provides us a good example where the POMDP strategy scores a lower ... View full document

23

A Learning Design Recommendation System Based on Markov Decision Processes

... styles of a learner or a group of learners and the learning styles associated to the learning object s′ ... the decision maker, in our case our prediction method, to make the right ... View full document

9

Aggregating Optimistic Planning Trees for Solving Markov Decision Processes

... The inverted pendulum is described by the state variables (α, α̇) ∈ [−π, π] × [−15, 15] and the differential equation α̈ = (mgl sin(α) − bα̇ − K(Kα̇ + u)/R) / J, where J = 1.91 · 10⁻⁴ kg·m², m = 0.055 kg, g = ... View full document

9
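The pendulum dynamics quoted in this snippet are concrete enough to roll out numerically. A minimal sketch, using only the values the snippet actually states (J and m); the remaining parameters (g, l, b, K, R) are truncated in the snippet, so the values below are assumptions for illustration only:

```python
import math

# Given in the snippet:
J = 1.91e-4   # inertia [kg·m²]
m = 0.055     # mass [kg]
# Assumed (truncated in the snippet), for illustration only:
g = 9.81      # gravity [m/s²]
l = 0.042     # pendulum length [m]
b = 3.0e-6    # viscous damping [N·m·s/rad]
K = 0.0536    # motor torque constant [N·m/A]
R = 9.5       # rotor resistance [ohm]

def alpha_ddot(alpha, alpha_dot, u):
    """Angular acceleration per the snippet's differential equation."""
    return (m * g * l * math.sin(alpha) - b * alpha_dot
            - K * (K * alpha_dot + u) / R) / J

def simulate(alpha0, alpha_dot0, u, dt=1e-3, steps=100):
    """Forward-Euler roll-out under a constant control u, keeping the
    state inside the snippet's state space [−π, π] × [−15, 15]."""
    a, ad = alpha0, alpha_dot0
    for _ in range(steps):
        ad = max(-15.0, min(15.0, ad + dt * alpha_ddot(a, ad, u)))
        a = ((a + dt * ad + math.pi) % (2.0 * math.pi)) - math.pi
    return a, ad
```

Any MDP discretization of this benchmark would sample such roll-outs; the Euler step and constant control are simplifications, not the paper's method.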

The steady-state control problem for Markov decision processes

... servation of states is ... our decision algorithm is an interesting next step to establish the feasibility of our approach on case ... case of ergodic ... lem with a finite horizon: given ... View full document

17

On the Use of Non-Stationary Policies for Infinite-Horizon Discounted Markov Decision Processes

... computed with some error ε, a surprising consequence of this result is that the problem of “computing an approximately optimal non-stationary policy” is much simpler than that of “computing an ... View full document

5

Decentralized Control of Partially Observable Markov Decision Processes Using Belief Space Macro-Actions

... controller with 13 nodes, resulting in a policy space with cardinality ... presence of different sources of uncertainty (wind, actuator, sensor), obstacles and constraints in the environment, ... View full document

9

Distributed Markov Processes

... Construction of Synchronization Products Lemma ... respectively, with global states X[1,n−1] = X₁ × ··· × Xₙ₋₁ and Xₙ ... associated with Markov chain M has all its coefficients positive, ... View full document

19

Logical modelling of cellular decision processes with GINsim

... result of a simulation is compressed into a hierarchical graph, where the nodes represent connected sets of states or components, each symbolically represented by a decision ... basins of ... View full document

3

Markov concurrent processes

... treatment of concurrency in a more structural ... sequence of random variables taking values in the local state ... action of sites is taken into account by considering that local state spaces share ... View full document

21

Strong Uniform Value in Gambling Houses and Partially Observable Markov Decision Processes

... contribution of this paper is to show that any finite POMDP has a strong uniform value, and consequently has a uniform value in pure ... result of Rosenberg, Solan and Vieille [20] (existence of the ... View full document

25

Decentralized control of Partially Observable Markov Decision Processes using belief space macro-actions

... Control of Partially Observable Markov Decision Processes using Belief Space Macro-actions Shayegan Omidshafiei, Ali-akbar Agha-mohammadi, Christopher Amato, Jonathan ... focus of this ... View full document

9

Parsimonious Markov Modeling of Processes with Long Range Dependence

... In this paper, we show that a fractal model can be accurately approximated over a finite range of time scales by parsimonious multi-stage Markov models where the transition rates form a [r] ... View full document

20

Lightweight verification of control flow policies on Java bytecode

... proof of the code’s behavior with respect to a given property is pre-computed off-device so that the code’s receiver only needs to verify that the proof is correct for the given ... need of a trusted ... View full document

26
