(1)

Introduction to Artificial Intelligence and Game Theory

Ahmed Bouajjani

abou@irif.fr

Solving a Problem by Search

Local search

Online search

(2)

Local search

In the 8-queens problem we want to find a solution (the goal state), not the path that leads to a solution.

The same situation arises for other problems, e.g. in optimization.


(4)

Local search

For this type of problem, local search:

keeps only the current node, not the path that leads to it

moves to a neighbor of the current node

(5)

Local search

a cost function is defined for each node

solution: the node of minimal cost

This is an optimization problem for the cost function.

Example:

1. 8 queens: the cost of a configuration of 8 queens = the number of pairs of queens in conflict

2. the solution: a configuration with the minimal cost 0
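To make this cost function concrete, here is a minimal Python sketch. The board representation (a list where board[c] is the row of the queen in column c, one queen per column) is an assumption of this sketch, not something fixed by the slides:

# Cost of an 8-queens configuration: the number of pairs of queens in
# conflict (same row or same diagonal).
def conflict_pairs(board):
    n = len(board)
    cost = 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = board[c1] == board[c2]
            same_diag = abs(board[c1] - board[c2]) == abs(c1 - c2)
            if same_row or same_diag:
                cost += 1
    return cost

# A solution is any configuration with cost 0.
print(conflict_pairs([0, 4, 7, 5, 2, 6, 1, 3]))  # a known solution: prints 0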

(6)

Local search

Section 4.1, Local Search Algorithms and Optimization Problems:

If the path to the goal does not matter, we might consider a different class of algorithms, ones that do not worry about paths at all. Local search algorithms operate using a single current node (rather than multiple paths) and generally move only to neighbors of that node. Typically, the paths followed by the search are not retained. Although local search algorithms are not systematic, they have two key advantages: (1) they use very little memory, usually a constant amount; and (2) they can often find reasonable solutions in large or infinite (continuous) state spaces for which systematic algorithms are unsuitable.

In addition to finding goals, local search algorithms are useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function. Many optimization problems do not fit the "standard" search model introduced in Chapter 3. For example, nature provides an objective function, reproductive fitness, that Darwinian evolution could be seen as attempting to optimize, but there is no "goal test" and no "path cost" for this problem.

To understand local search, we find it useful to consider the state-space landscape (as in Figure 4.1). A landscape has both "location" (defined by the state) and "elevation" (defined by the value of the heuristic cost function or objective function). If elevation corresponds to cost, then the aim is to find the lowest valley, a global minimum; if elevation corresponds to an objective function, then the aim is to find the highest peak, a global maximum. (You can convert from one to the other just by inserting a minus sign.) Local search algorithms explore this landscape. A complete local search algorithm always finds a goal if one exists; an optimal algorithm always finds a global minimum/maximum.

[Figure: a one-dimensional landscape with the state space on the x-axis and the objective function on the y-axis, marking the current state, a local maximum, a "flat" local maximum, a shoulder, and the global maximum.]

Figure 4.1 A one-dimensional state-space landscape in which elevation corresponds to the objective function. The aim is to find the global maximum. Hill-climbing search modifies the current state to try to improve it, as shown by the arrow. The various topographic features are defined in the text.

(7)

Local search: example

[Figure 4.3: two 8-queens boards, (a) annotated with the cost h of each successor; see the caption below.]

Figure 4.3 (a) An 8-queens state with heuristic cost estimate h = 17, showing the value of h for each possible successor obtained by moving a queen within its column. The best moves are marked. (b) A local minimum in the 8-queens state space; the state has h = 1 but every successor has a higher cost.

[…] concretely, the state in Figure 4.3(b) is a local maximum (i.e., a local minimum for the cost h); every move of a single queen makes the situation worse.

Ridges: a ridge is shown in Figure 4.4. Ridges result in a sequence of local maxima that is very difficult for greedy algorithms to navigate.

Plateaux: a plateau is a flat area of the state-space landscape. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which progress is possible. (See Figure 4.1.) A hill-climbing search might get lost on the plateau.

In each case, the algorithm reaches a point at which no progress is being made. Starting from a randomly generated 8-queens state, steepest-ascent hill climbing gets stuck 86% of the time, solving only 14% of problem instances. It works quickly, taking just 4 steps on average when it succeeds and 3 when it gets stuck, not bad for a state space with 8^8 ≈ 17 million states.

The algorithm in Figure 4.2 halts if it reaches a plateau where the best successor has the same value as the current state. Might it not be a good idea to keep going, to allow a sideways move in the hope that the plateau is really a shoulder, as shown in Figure 4.1? The answer is usually yes, but we must take care. If we always allow sideways moves when there are no uphill moves, an infinite loop will occur whenever the algorithm reaches a flat local maximum that is not a shoulder. One common solution is to put a limit on the number of consecutive sideways moves allowed. For example, we could allow up to, say, 100 consecutive sideways moves in the 8-queens problem. This raises the percentage of problem instances solved by hill climbing from 14% to 94%. Success comes at a cost: the algorithm averages roughly 21 steps for each successful instance and 64 for each failure.

A configuration with cost = 17

Costs of the successors obtained by moving a queen within its column

(8)

Local search


The problem of local optima

The problem of plateaux

(cf. the excerpt from Section 4.1 and Figure 4.1 above)

(9)

Local minimum

Cost = 1 and every successor has a higher cost (the state of Figure 4.3(b) above)


(10)

Hill-climbing search

Steepest descent (gradient descent)

function STEEPEST-DESCENT(problem) returns a local minimum
  current = new node(problem.initial_state)
  loop do
    neighbor = the neighbor of current with minimal cost
    if neighbor.cost >= current.cost then return current
    current = neighbor
  done
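A possible Python rendering of this pseudocode for 8 queens, reusing conflict_pairs from the earlier sketch; a neighbor is obtained by moving one queen within its column, as in Figure 4.3(a):

import random

def neighbors(board):
    # All boards obtained by moving a single queen within its column.
    for col in range(len(board)):
        for row in range(len(board)):
            if row != board[col]:
                nb = list(board)
                nb[col] = row
                yield nb

def steepest_descent(board):
    current = list(board)
    while True:
        best = min(neighbors(current), key=conflict_pairs)
        if conflict_pairs(best) >= conflict_pairs(current):
            return current        # local minimum (a solution iff its cost is 0)
        current = best

start = [random.randrange(8) for _ in range(8)]
result = steepest_descent(start)
print(result, conflict_pairs(result))   # cost 0 in roughly 14% of runs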

(11)

Steepest descent applied to 8 queens (starting with 8 queens, one per row):

gets stuck in 86% of cases without a solution being found

4 iterations on average when a solution is found

3 iterations on average when no solution is found and steepest descent gets stuck

the number of states: 8^8 ≈ 17 million

(12)

Steepest descent: variants

Steepest descent stops when we are on a plateau (no neighbor with a lower cost, but there exist neighbors with equal cost).

To avoid getting stuck in this case: allow sideways moves (if there is no neighbor of lower cost but there are neighbors whose cost equals that of the current node, then move the current node to such a neighbor). Limit the number of consecutive sideways moves (to avoid a long, useless walk on a plateau).

(13)

Steepest descent: variants

For 8 queens: steepest descent with at most 100 consecutive sideways moves finds a solution in 94% of cases.

But the average number of moves before steepest descent finds a solution rises to 21, and the average number of moves before the algorithm gets stuck rises to 64.
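One way the sideways-move variant could look, building on the previous snippets; the limit of 100 consecutive sideways moves is the one quoted on the slide:

def steepest_descent_sideways(board, max_sideways=100):
    current = list(board)
    sideways = 0
    while True:
        if conflict_pairs(current) == 0:
            return current                # solution found
        best = min(neighbors(current), key=conflict_pairs)
        best_cost, cur_cost = conflict_pairs(best), conflict_pairs(current)
        if best_cost > cur_cost:
            return current                # strict local minimum
        if best_cost == cur_cost:
            sideways += 1                 # sideways move on the plateau
            if sideways > max_sideways:
                return current            # give up: probably not a shoulder
        else:
            sideways = 0                  # strict improvement resets the count
        current = best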

(14)

Stochastic steepest descent

- At each step we select a neighbor with a certain probability; the probability may depend on the slope (the cost): higher for neighbors with a smaller cost (e.g., probability proportional to 1/cost). This makes it possible to pass over the neighboring hills and escape from a local minimum surrounded by hills.

- We can also restart and run several searches from several initial points chosen at random.

==> we make jumps toward other regions of the search space.
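A sketch of both ideas in Python, again building on the earlier helpers. The weight 1/(1 + cost) is an adaptation of the slide's 1/cost, chosen here only so that a cost of 0 does not divide by zero:

import random

def stochastic_descent_step(current):
    # Pick a neighbor at random, with higher probability for cheaper ones.
    nbs = list(neighbors(current))
    weights = [1.0 / (1.0 + conflict_pairs(nb)) for nb in nbs]
    return random.choices(nbs, weights=weights, k=1)[0]

def random_restarts(k=10):
    # Run several searches from random initial boards; keep the best result.
    best = None
    for _ in range(k):
        result = steepest_descent([random.randrange(8) for _ in range(8)])
        if best is None or conflict_pairs(result) < conflict_pairs(best):
            best = result
    return best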


(16)

Local beam search

local search with a beam

At each moment we maintain k nodes n1, ..., nk

We generate all the neighbors V of these nodes

If one of these neighbors is a solution, we stop

Otherwise, for the next step we pick from V the k nodes of minimal cost

Note that it is possible that for the next step we keep several neighbors of one node and no neighbor of another (we pick k nodes from the set of all the neighbors of {n1, ..., nk}).

==> we search in the most promising places
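A minimal sketch of this loop for 8 queens, under the same assumptions as the earlier snippets (the max_steps budget is an arbitrary addition for the sketch):

import random

def local_beam_search(k=4, max_steps=1000):
    beam = [[random.randrange(8) for _ in range(8)] for _ in range(k)]
    for _ in range(max_steps):
        pool = []
        for node in beam:
            pool.extend(neighbors(node))     # all neighbors of all k nodes
        for cand in pool:
            if conflict_pairs(cand) == 0:
                return cand                  # a solution was generated
        pool.sort(key=conflict_pairs)
        beam = pool[:k]                      # the k cheapest neighbors overall
    return min(beam, key=conflict_pairs)     # best found within the budget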

(17)

Local beam search

local search with a beam

[Illustration: a run of local beam search with k = 4.]

(18)

Genetic algorithms

A successor state is obtained by combining two parent states

Initially, k states generated at random (the population)

A state is represented by a string of characters over a finite alphabet (often 0, 1)

The evaluation function (fitness function) measures the quality of a state. The higher the value of the function, the better "adapted" the state is.

Produce the new generation of states by selection, crossover and mutation.

(19)

Genetic algorithms

• 24748552: the position of each queen, column by column; for example, in the first column the queen is on the second row, in the second column on the fourth row, etc.

• Fitness function: the number of pairs of queens that are not in conflict (min = 0, max = 8 × 7/2 = 28)

• 24/(24+23+20+11) = 31%

• 23/(24+23+20+11) = 29%, etc.
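The fitness function and the selection percentages can be checked with a short script. The three extra population strings below are, to the best of my knowledge, the ones of AIMA Figure 4.6, from which the quoted percentages come; treat them as an assumption of this sketch:

def fitness(individual):
    # Number of non-attacking pairs; digits give the row (1..8) per column.
    rows = [int(ch) for ch in individual]
    ok = 0
    for c1 in range(8):
        for c2 in range(c1 + 1, 8):
            same_row = rows[c1] == rows[c2]
            same_diag = abs(rows[c1] - rows[c2]) == abs(c1 - c2)
            if not (same_row or same_diag):
                ok += 1
    return ok

# Fitness-proportional selection probabilities, as on the slide.
population = ["24748552", "32752411", "24415124", "32543213"]
scores = [fitness(p) for p in population]          # 24, 23, 20, 11
total = sum(scores)
for ind, s in zip(population, scores):
    print(ind, s, f"{100 * s / total:.0f}%")       # 31%, 29%, 26%, 14%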

(20)

Genetic algorithms (crossover of two configurations in the 8-queens problem)

(21)

function GENETIC-ALGORITHM(population, FITNESS-FN) returns an individual
  inputs: population, a set of individuals
          FITNESS-FN, a function that measures the fitness of an individual

  repeat
    new_population ← empty set
    for i = 1 to SIZE(population) do
      x ← RANDOM-SELECTION(population, FITNESS-FN)
      y ← RANDOM-SELECTION(population, FITNESS-FN)
      child ← REPRODUCE(x, y)
      if (small random probability) then child ← MUTATE(child)
      add child to new_population
    population ← new_population
  until some individual is fit enough, or enough time has elapsed
  return the best individual in population, according to FITNESS-FN

function REPRODUCE(x, y) returns an individual
  inputs: x, y, parent individuals

  n ← LENGTH(x); c ← random number from 1 to n
  return APPEND(SUBSTRING(x, 1, c), SUBSTRING(y, c + 1, n))

Figure 4.8 A genetic algorithm. The algorithm is the same as the one diagrammed in Figure 4.6, with one variation: in this more popular version, each mating of two parents produces only one offspring, not two.

[…] component is likely to be good in a variety of different designs. This suggests that successful use of genetic algorithms requires careful engineering of the representation.

In practice, genetic algorithms have had a widespread impact on optimization problems, such as circuit layout and job-shop scheduling. At present, it is not clear whether the appeal of genetic algorithms arises from their performance or from their æsthetically pleasing origins in the theory of evolution. Much work remains to be done to identify the conditions under which genetic algorithms perform well.

4.2 LOCAL SEARCH IN CONTINUOUS SPACES

In Chapter 2, we explained the distinction between discrete and continuous environments, pointing out that most real-world environments are continuous. Yet none of the algorithms we have described (except for first-choice hill climbing and simulated annealing) can handle continuous state and action spaces, because they have infinite branching factors. This section provides a very brief introduction to some local search techniques for finding optimal solutions in continuous spaces. The literature on this topic is vast; many of the basic techniques […]

Genetic algorithm
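A compact Python sketch of the Figure 4.8 loop for 8-queens strings, using the fitness function above; the mutation probability p_mut = 0.1 and the generation budget are arbitrary choices made for this sketch:

import random

def reproduce(x, y):
    c = random.randint(1, len(x) - 1)        # single crossover point
    return x[:c] + y[c:]

def mutate(child):
    i = random.randrange(len(child))         # replace one digit at random
    return child[:i] + str(random.randint(1, 8)) + child[i + 1:]

def genetic_algorithm(population, fitness_fn, generations=1000, p_mut=0.1):
    for _ in range(generations):
        new_population = []
        for _ in range(len(population)):
            # fitness-proportional random selection of two parents
            x, y = random.choices(population,
                                  weights=[fitness_fn(p) for p in population],
                                  k=2)
            child = reproduce(x, y)
            if random.random() < p_mut:      # small random probability
                child = mutate(child)
            new_population.append(child)
        population = new_population
        best = max(population, key=fitness_fn)
        if fitness_fn(best) == 28:           # fit enough: a solution
            return best
    return max(population, key=fitness_fn)

print(genetic_algorithm(["24748552", "32752411", "24415124", "32543213"],
                        fitness))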

(22)

Recherche "online"

Les algorithmes de recherche précédents était "offline", le calcul de la solution se fait avant d'exécuter la solution.

Les algorithmes "online" :

le calcul entrelacé avec l'exécution

à chaque moment on peut visiter uniquement un voisin du

sommet courant (recherche d'un chemin dans un labyrinthe)

l'exploration -- il faut utiliser les actions pour découvrir

l’environnement : l’agent se déplace physiquement dans l’environnement et le découvre progressivement

adapté à l'environnement dynamique qui change avec le temps ou un environnement inconnu (on ne connait pas quels sont les états)

(23)

Recherche "online"

La qualité de la recherche online mesurée par competitive ratio=(la complexité en temps de

l'algorithme online) / (la complexité en temps de l'algorithme le plus efficace si l'environnement

complètement était connu)

S'il y a des états sans issu l'algorithme peut rester bloqué (concerne le graphe orienté).

(24)

Online search: dead-end situations

[Figure 4.19: a simple maze problem on a 3 × 3 grid. The agent starts at S and must reach G but knows nothing of the environment.]

[Figure 4.20: (a) Two state spaces that might lead an online search agent into a dead end. Any given agent will fail in at least one of these spaces. (b) A two-dimensional environment that can cause an online search agent to follow an arbitrarily inefficient route to the goal. Whichever choice the agent makes, the adversary blocks that route with another long, thin wall, so that the path followed is much longer than the best possible path.]

Finally, the agent might have access to an admissible heuristic function h(s) that estimates the distance from the current state to a goal state. For example, in Figure 4.19, the agent might know the location of the goal and be able to use the Manhattan-distance heuristic.

Typically, the agent's objective is to reach a goal state while minimizing cost. (Another possible objective is simply to explore the entire environment.) The cost is the total path cost of the path that the agent actually travels. It is common to compare this cost with the path cost of the path the agent would follow if it knew the search space in advance, that is, the actual shortest path (or shortest complete exploration). In the language of online algorithms, this is called the competitive ratio; we would like it to be as small as possible.

Any online agent will fail to find the goal in one of the two subgraphs.

(25)

Online Depth-First Search

- delta(s, a): the state obtained by applying action a in state s

- For every action a and every state s, if delta(s, a) = t then there exists an action b such that delta(t, b) = s (for every action a there exists an action b that "undoes" a). Moreover, b can be found if we know s, a and t.

==> we can retrace our steps

- Online depth-first search is a natural algorithm for searching a maze

(26)

online depth-first search

s: a global variable giving the previous state
a: the action executed in s

result[state, action]: a table indexed by state and action that gives the value of delta(state, action); initially the table is empty

untried[state]: gives, for each state, the set of actions not yet executed in that state

unbacktracked[state]: gives, for each visited state, the list of "back" actions not yet executed; initially empty

(27)

online depth-first search

function ONLINE-DFS-AGENT(s′) returns an action
  inputs: s′, a percept that identifies the current state
  persistent: result, a table indexed by state and action, initially empty
              untried, a table that lists, for each state, the actions not yet tried
              unbacktracked, a table that lists, for each state, the backtracks not yet tried
              s, a, the previous state and action, initially null

  if GOAL-TEST(s′) then return stop
  if s′ is a new state (not in untried) then untried[s′] ← ACTIONS(s′)
  if s is not null then
      result[s, a] ← s′
      add s to the front of unbacktracked[s′]
  if untried[s′] is empty then
      if unbacktracked[s′] is empty then return stop
      else a ← an action b such that result[s′, b] = POP(unbacktracked[s′])
  else a ← POP(untried[s′])
  s ← s′
  return a

Figure 4.21 An online search agent that uses depth-first exploration. The agent is applicable only in state spaces in which every action can be "undone" by some other action.

The unbacktracked table lists, for each state, the predecessor states to which the agent has not yet backtracked. If the agent has run out of states to which it can backtrack, then its search is complete.

We recommend that the reader trace through the progress of ONLINE-DFS-AGENT when applied to the maze given in Figure 4.19. It is fairly easy to see that the agent will, in the worst case, end up traversing every link in the state space exactly twice. For exploration, this is optimal; for finding a goal, on the other hand, the agent's competitive ratio could be arbitrarily bad if it goes off on a long excursion when there is a goal right next to the initial state. An online variant of iterative deepening solves this problem; for an environment that is a uniform tree, the competitive ratio of such an agent is a small constant.

Because of its method of backtracking, ONLINE-DFS-AGENT works only in state spaces where the actions are reversible. There are slightly more complex algorithms that work in general state spaces, but no such algorithm has a bounded competitive ratio.

4.5.3 Online local search

Like depth-first search, hill-climbing search has the property of locality in its node expansions. In fact, because it keeps just one current state in memory, hill-climbing search is already an online search algorithm! Unfortunately, it is not very useful in its simplest form because it leaves the agent sitting at local maxima with nowhere to go. Moreover, random restarts cannot be used, because the agent cannot transport itself to a new state.

Instead of random restarts, one might consider using a random walk to explore the environment. A random walk simply selects at random one of the available actions from the […]

(28)

online depth-first search

- Termine sur un espace d’état fini

- Peut être inefficace : explore inutilement des partie de l’espace d’états

- La version online DFS borné itéré atténue ce problème
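A Python transcription of ONLINE-DFS-AGENT (Figure 4.21 above), offered as a sketch: the callbacks actions(state) and goal_test(state) are assumptions of this sketch, and, as in the pseudocode, actions must be reversible:

class OnlineDFSAgent:
    def __init__(self, actions, goal_test):
        self.actions = actions
        self.goal_test = goal_test
        self.result = {}          # (state, action) -> resulting state
        self.untried = {}         # state -> actions not yet tried there
        self.unbacktracked = {}   # state -> predecessors to revisit (front first)
        self.s = None             # previous state
        self.a = None             # previous action

    def __call__(self, s1):      # s1 is the current state (a percept)
        if self.goal_test(s1):
            return None          # stop
        if s1 not in self.untried:
            self.untried[s1] = list(self.actions(s1))
        if self.s is not None:
            self.result[(self.s, self.a)] = s1
            self.unbacktracked.setdefault(s1, []).insert(0, self.s)
        if not self.untried[s1]:
            if not self.unbacktracked.get(s1):
                return None      # nowhere left to backtrack: stop
            back = self.unbacktracked[s1].pop(0)
            # choose an action b already known to lead back to `back`
            # (all actions from s1 have been tried, so result is filled in)
            self.a = next(b for b in self.actions(s1)
                          if self.result.get((s1, b)) == back)
        else:
            self.a = self.untried[s1].pop()
        self.s = s1
        return self.a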


(31)

Hill climbing - Random walk

- Hill climbing (gradient descent) is also "local"

- But it is not practical: it gets stuck in local minima, and it is not possible to restart from another random point

- Random walk: draw a successor at random

- If the goal is reachable, the walk will eventually find it

- But it may take an exponential number of steps

Figure 4.22 An environment in which a random walk will take exponentially many steps to find the goal.

[…] current state; preference can be given to actions that have not yet been tried. It is easy to prove that a random walk will eventually find a goal or complete its exploration, provided that the space is finite.14 On the other hand, the process can be very slow. Figure 4.22 shows an environment in which a random walk will take exponentially many steps to find the goal because, at each step, backward progress is twice as likely as forward progress. The example is contrived, of course, but there are many real-world state spaces whose topology causes these kinds of "traps" for random walks.

Augmenting hill climbing with memory rather than randomness turns out to be a more effective approach. The basic idea is to store a "current best estimate" H(s) of the cost to reach the goal from each state that has been visited. H(s) starts out being just the heuristic estimate h(s) and is updated as the agent gains experience in the state space. Figure 4.23 shows a simple example in a one-dimensional state space. In (a), the agent seems to be stuck in a flat local minimum at the shaded state. Rather than staying where it is, the agent should follow what seems to be the best path to the goal given the current cost estimates for its neighbors. The estimated cost to reach the goal through a neighbor s′ is the cost to get to s′ plus the estimated cost to get to a goal from there, that is, c(s, a, s′) + H(s′). In the example, there are two actions, with estimated costs 1 + 9 and 1 + 2, so it seems best to move right. Now, it is clear that the cost estimate of 2 for the shaded state was overly optimistic. Since the best move cost 1 and led to a state that is at least 2 steps from a goal, the shaded state must be at least 3 steps from a goal, so its H should be updated accordingly, as shown in Figure 4.23(b). Continuing this process, the agent will move back and forth twice more, updating H each time and "flattening out" the local minimum until it escapes to the right.

An agent implementing this scheme, which is called learning real-time A* (LRTA*), is shown in Figure 4.24. Like ONLINE-DFS-AGENT, it builds a map of the environment in the result table. It updates the cost estimate for the state it has just left and then chooses the "apparently best" move according to its current cost estimates. One important detail is that actions that have not yet been tried in a state s are always assumed to lead immediately to the goal with the least possible cost, namely h(s). This optimism under uncertainty encourages the agent to explore new, possibly promising paths.

An LRTA* agent is guaranteed to find a goal in any finite, safely explorable environment. Unlike A*, however, it is not complete for infinite state spaces; there are cases where it can be led infinitely astray. It can explore an environment of n states in O(n²) steps in the worst case.

14 Random walks are complete on infinite one-dimensional and two-dimensional grids. On a three-dimensional grid, the probability that the walk ever returns to the starting point is only about 0.3405 (Hughes, 1995).

- In each state, the probability of moving backward is twice as large as that of moving forward

- Again and again, the walk goes back and revisits everything it has already visited
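A toy simulation of the Figure 4.22 environment; the chain lengths and trial counts are arbitrary choices made for this sketch:

import random

def random_walk_steps(n):
    # States 0..n with the goal at n; from each interior state, moving
    # backward (prob. 2/3) is twice as likely as moving forward (prob. 1/3).
    state, steps = 0, 0
    while state != n:
        if state == 0 or random.random() < 1/3:
            state += 1
        else:
            state -= 1
        steps += 1
    return steps

for n in (5, 10, 15):
    trials = [random_walk_steps(n) for _ in range(20)]
    print(n, sum(trials) // len(trials))   # average steps grows roughly like 2^n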

(32)

Recherche "online"

LRTA* - learning real time A*

Pour chaque sommet s, une heuristique admissible h(s).

L'idée : on améliore la valeur de la heuristique en cours de l’exploration.

On peut alors sortir des pièges des min locaux

Si l'environnement est sûr (il existe une solution accessible à

partir de chaque sommet) alors LRTA* est complet

(33)

Recherche "online"

LRTA* - learning real time A*

Variables globales:

résultat — un tableau dynamique (un map) initialement vide indexé par les couples [état,action]

Si résultat[s,a] défini alors on a déjà visité s, on a exécuté a dans s et résultat[s,a] — donne l'état obtenu en appliquant a dans s.

H — un tableau dynamique (un map) indexé par les états, initialement vide.

H[s] donne la valeur de heuristique pour l'état s. Pendant la première visite dans s on initialise H[s]=h(s), où h une fonction heuristique admissible

connue.

(34)

LRTA* - intuition

when we enter a state t: if H[t] is not defined, this is the first visit to t and we initialize H[t] = h(t)

when we leave a state s: if all the neighbors of s have already been visited, we compute

w = min over b in Actions(s) of cost(s, b, result[s, b]) + H[result[s, b]]

if H[s] < w then H[s] = w

(35)

Recherche "online"

LRTA* - learning real time A*

a t

résultat[t,a] inconnu pour une action a de Actions(t)

H[t] = h(t)

résultat[t,a] connu pour toute action a de Actions(t)

a t

b

d c

w = mina2Actions(t) cout(t, a, resultat[t, a]) + H[resultat(t, a)]

resultat[t,a] = ?

mise à jour de H[t]

H [t] = max{H [t], w}

(36)

Recherche "online"

LRTA* - learning real time A*

a t

résultat[t,a] inconnu pour une action a de Actions(t)

résultat[t,a] connu pour toute action a de Actions(t)

a t

b

d c

resultat[t,a] = ?

on préfère une action a

telle que resultat[s,a] inconnu (exploration)

on préfère l'action a telle que

cout(t,a,resultat[t,a])+H[resultat[t,a]]

soit minimal

(37)

Recherche "online"

LRTA* - learning real time A*

variable globales : s - état précédent, a - action exécutée dans s pour aller vers l'état courant. Initialement s et a sont null.

La fonction suivante met à jour l'estimation du coût minimal fonction LRTA*coût(s,a,t) retourne réel

if t undefined then return h(s) else return coût(s,a,t)+H[t]

(38)

Recherche "online"

LRTA* - learning real time A*

variable globales : s - état précédent, a - action exécutée dans s pour aller vers l'état courant t. Initialement s et a sont null.

Faire tourner en boucle jusqu'à stop:

fonction LRTA*(état t) retourne action if t état final then stop

if H[t] undefined then H[t]=h(t)

if s is not null then /*mis à jour de H[s]*/

resultat[s,a]=t

w=min { LRTA*coût(s,b,résultat[s,b]) | b in Actions(s) } H[s] = max { H[s], w }

a = action b qui minimise LRTA*coût(t,b,résultat[t,b]) s=t 


retourner a
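A Python transcription of this pseudocode (the slides' max-based update of H), offered as a sketch; actions(state), cost(s, a, t), h(state) and goal_test(state) are assumed callbacks:

class LRTAStarAgent:
    def __init__(self, actions, cost, h, goal_test):
        self.actions, self.cost, self.h = actions, cost, h
        self.goal_test = goal_test
        self.result = {}   # (state, action) -> next state
        self.H = {}        # state -> current cost estimate to the goal
        self.s = None      # previous state
        self.a = None      # previous action

    def lrta_cost(self, s, b, t):
        # Untried actions are optimistically assumed to reach a goal at cost h(s).
        if t is None:
            return self.h(s)
        return self.cost(s, b, t) + self.H[t]

    def __call__(self, t):           # t is the current state (a percept)
        if self.goal_test(t):
            return None               # stop
        if t not in self.H:
            self.H[t] = self.h(t)     # first visit: initialize the estimate
        if self.s is not None:
            self.result[(self.s, self.a)] = t
            # update H[s] from the best estimate through its neighbors
            w = min(self.lrta_cost(self.s, b, self.result.get((self.s, b)))
                    for b in self.actions(self.s))
            self.H[self.s] = max(self.H[self.s], w)
        self.a = min(self.actions(t),
                     key=lambda b: self.lrta_cost(t, b, self.result.get((t, b))))
        self.s = t
        return self.a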

(39)


Figure 4.23 Five iterations of LRTA* on a one-dimensional state space. Each state is labeled with H(s), the current cost estimate to reach a goal, and each link is labeled with its step cost. The shaded state marks the location of the agent, and the updated cost estimates at each iteration are circled.

function LRTA*-AGENT(s′) returns an action
  inputs: s′, a percept that identifies the current state
  persistent: result, a table, indexed by state and action, initially empty
              H, a table of cost estimates indexed by state, initially empty
              s, a, the previous state and action, initially null

  if GOAL-TEST(s′) then return stop
  if s′ is a new state (not in H) then H[s′] ← h(s′)
  if s is not null then
      result[s, a] ← s′
      H[s] ← min over b in ACTIONS(s) of LRTA*-COST(s, b, result[s, b], H)
  a ← an action b in ACTIONS(s′) that minimizes LRTA*-COST(s′, b, result[s′, b], H)
  s ← s′
  return a

function LRTA*-COST(s, a, s′, H) returns a cost estimate
  if s′ is undefined then return h(s)
  else return c(s, a, s′) + H[s′]

Figure 4.24 LRTA*-AGENT selects an action according to the values of neighboring states, which are updated as the agent moves about the state space.

LRTA*: escaping a local minimum
