4.2.2 The essence of PSPACE: Optimum strategies for game playing

Recall that the central feature of NP-complete problems is that a yes answer has a short certificate (see Definition 2.1). The analogous concept for PSPACE-complete problems seems to be that of a winning strategy for a two-player game with perfect information.

A good example of such a game is Chess: Two players alternately make moves, and the moves are made on a board visible to both, hence the term perfect information.

What does it mean for a player to have a “winning strategy”? The first player has a winning strategy iff there is a first move for player 1 such that for every possible first move of player 2 there is a second move of player 1 such that … (and so on) such that at the end player 1 wins. Deciding whether or not the first player has a winning strategy seems to require searching the tree of all possible moves. This is reminiscent of NP, for which we also seem to require exponential search. But the crucial difference is the lack of a short “certificate” for the statement “Player 1 has a winning strategy,” since the only certificate we can think of is the winning strategy itself, which, as noted, requires exponentially many bits to even describe. Thus we seem to be dealing with a fundamentally different phenomenon than the one captured by NP.

The interplay of existential and universal quantifiers in the description of the winning strategy motivates us to invent the following game.

4.3. Completeness 87

EXAMPLE 4.15 (The QBF game)

The “board” for the QBF game is a Boolean formula ϕ whose free variables are x1, x2, . . . , x2n. The two players alternately make moves, which involve picking values for x1, x2, . . . , in order. Thus player 1 will pick values for the odd-numbered variables x1, x3, x5, . . . (in that order), and player 2 will pick values for the even-numbered variables x2, x4, x6, . . .. We say player 1 wins iff at the end ϕ(x1, x2, . . . , x2n) is true.

In order for player 1 to have a winning strategy he must have a way to win for all possible sequences of moves by player 2; namely, it must hold that

∃x1 ∀x2 ∃x3 ∀x4 · · · ∀x2n ϕ(x1, x2, . . . , x2n),

which is just saying that this quantified Boolean formula is true.

Thus deciding whether player 1 has a winning strategy for a given board in the QBF game is PSPACE-complete.
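The alternation of ∃ and ∀ in this formula translates directly into a simple recursive search of the game tree, which is also the standard way to see that deciding such formulas is possible in polynomial space: the recursion has depth 2n and reuses the same space at every level. The following sketch is not from the text; the predicate representation phi, the helper name player1_wins, and the tiny example board are illustrative assumptions.

```python
def player1_wins(phi, num_vars, assignment=()):
    """Does player 1 (who picks the odd-numbered variables) have a winning
    strategy for the QBF game on `phi`? Here `phi` is any predicate taking
    a tuple of `num_vars` Boolean values.

    The recursion mirrors the quantifier prefix ∃x1 ∀x2 ∃x3 ∀x4 ...:
    at odd positions SOME choice must work, at even positions EVERY choice
    must work. The recursion depth is num_vars and each level keeps only a
    partial assignment, so the space used is polynomial in num_vars even
    though the running time may be as large as 2^num_vars.
    """
    i = len(assignment)                      # index of the next variable (0-based)
    if i == num_vars:
        return phi(assignment)               # game over: player 1 wins iff phi holds
    moves = (player1_wins(phi, num_vars, assignment + (b,)) for b in (False, True))
    return any(moves) if i % 2 == 0 else all(moves)   # ∃ for player 1, ∀ for player 2

# Toy board with 2n = 4 variables: phi = x1 ∧ (x2 ∨ x3); x4 is irrelevant.
phi = lambda x: x[0] and (x[1] or x[2])
print(player1_wins(phi, 4))   # True: play x1 = True, then answer x2 = False with x3 = True
```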

At this point, the reader is probably thinking of familiar games such as Chess, Go, and Checkers and wondering whether complexity theory may help differentiate between them—for example, to justify the common intuition that Go is more difficult than Chess. Unfortunately, formalizing these issues in terms of asymptotic complexity (i.e., using an infinite language) is tricky because these are finite games, and as far as the existence of a winning strategy is concerned, there are at most three choices: Player 1 has a winning strategy, Player 2 does, or neither does (they can play to a draw).

However, one can study generalizations of these games to an n × n board where n is arbitrarily large—this may involve stretching the rules of the game since the definition of chess is tailored to an 8 × 8 board. After generalizing this way, one gets an infinite sequence of game situations, and then one can show that for most common games, including chess, determining which player has a winning strategy in the n × n version is PSPACE-complete (see [Pap94] or [GJ79]). Thus if NP ≠ PSPACE, then there is no short certificate for exhibiting that either player in such games has a winning strategy.

Proving PSPACE-completeness of games may seem like a frivolous pursuit, but similar ideas lead to PSPACE-completeness of some practical problems. Usually, these problems involve repeated moves by an agent who faces an adversary with unlimited computational power. For instance, many computational problems of robotics involve a robot navigating in a changing environment. If we wish to be pessimistic about the environment, then its moves may be viewed as the moves of an adversary. With this assumption, solving many problems of robotics is PSPACE-complete.

Some researchers feel that the assumption that the environment is adversarial is unduly pessimistic. Unfortunately, even assuming a benign or “indifferent” environment still leaves us with a PSPACE-complete problem; see the reference to Games against nature in the chapter notes.

4.3 NL COMPLETENESS

Now we consider problems that form the “essence” of nondeterministic logarithmic space computation, in other words, problems that are complete for NL.

What kind of reduction should we use? When choosing the type of reduction to define completeness for a complexity class, we must keep in mind the complexity phenomenon we seek to understand. In this case, the complexity question is whether or not NL = L. We cannot use polynomial-time reductions since L ⊆ NL ⊆ P (see Exercise 4.3). The reduction should not be more powerful than the weaker class, which is L. For this reason we use logspace reductions, which, as the name implies, are computed by a deterministic TM running in logarithmic space. To define these, we must tackle the tricky issue that a logspace machine might not even have the memory to write down its output. The way out is to require that the reduction be able to compute any desired bit of the output in logarithmic space. In other words, the reduction f is implicitly computable in logarithmic space, in the sense that there is an O(log|x|)-space machine that on input ⟨x, i⟩ outputs f(x)_i provided that i ≤ |f(x)|.

Definition 4.16 (logspace reduction and NL-completeness)

A function f : {0, 1}* → {0, 1}* is implicitly logspace computable if f is polynomially bounded (i.e., there is some c such that |f(x)| ≤ |x|^c for every x ∈ {0, 1}*) and the languages L_f = {⟨x, i⟩ : f(x)_i = 1} and L'_f = {⟨x, i⟩ : i ≤ |f(x)|} are in L.

A language B is logspace reducible to a language C, denoted B ≤_l C, if there is a function f : {0, 1}* → {0, 1}* that is implicitly logspace computable and x ∈ B iff f(x) ∈ C for every x ∈ {0, 1}*.

We say that C is NL-complete if it is in NL and for every B ∈ NL, B ≤_l C.

Another way (used by several texts) to think of logspace reductions is to imagine that the reduction is given a separate “write-once” output tape, on which it can either write a bit or move to the right, but never move left or read the bits it wrote down previously. The two notions are easily proved to be equivalent (see Exercise 4.8).
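To make Definition 4.16 concrete, the following sketch computes individual bits of f(x) without ever writing f(x) down, which is exactly the bit-by-bit access the definition and the write-once-tape view both describe. It is not from the text: the toy function f(x) = x·reverse(x) and the helper names f_bit and f_len are assumptions chosen for illustration.

```python
def f_len(x: str) -> int:
    """|f(x)| for the toy function f(x) = x followed by reverse(x).
    Only arithmetic on a counter is needed, i.e. O(log|x|) bits of storage."""
    return 2 * len(x)

def f_bit(x: str, i: int) -> str:
    """The i-th symbol (0-indexed) of f(x) = x·reverse(x), computed without
    materializing f(x). Deciding whether this bit is 1 is the language L_f of
    Definition 4.16; the bounds check below corresponds to L'_f."""
    n = len(x)
    if not (0 <= i < 2 * n):                      # membership test for L'_f
        raise IndexError("i exceeds |f(x)|")
    return x[i] if i < n else x[2 * n - 1 - i]    # second half reads x backwards

# A stronger machine can recover all of f(x) one queried bit at a time:
x = "0011"
print("".join(f_bit(x, i) for i in range(f_len(x))))   # prints 00111100
```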

The next lemma shows that logspace reducibility satisfies the usual properties one expects. It also implies that an NL-complete language is in L iff NL = L.

Lemma 4.17

1. If B ≤_l C and C ≤_l D, then B ≤_l D.

2. If B ≤_l C and C ∈ L, then B ∈ L. #

Proof: We prove that if f, g are two functions that are implicitly logspace computable, then so is the function h where h(x) = g(f(x)). Then part 1 of the lemma follows by letting f be the reduction from B to C and g be the reduction from C to D. Part 2 follows by letting f be the reduction from B to C and g be the characteristic function of C (i.e., g(y) = 1 iff y ∈ C).

Let M_f, M_g be the logspace machines that compute the mappings ⟨x, i⟩ ↦ f(x)_i and ⟨y, j⟩ ↦ g(y)_j, respectively. We construct a machine M_h that, given input ⟨x, j⟩ with j ≤ |g(f(x))|, outputs g(f(x))_j. Machine M_h will pretend that it has an additional (fictitious) input tape on which f(x) is written, and that it is merely simulating M_g on this input (see Figure 4.3). Of course, the true input tape has ⟨x, j⟩ written on it. To maintain its fiction, M_h always maintains on its work tape the index, say i, of the cell on the fictitious tape that M_g is currently reading; this requires only log|f(x)| space.

Figure 4.3. Composition of two implicitly logspace computable functions f, g. The machine M_g uses calls to f to implement a “virtual input tape.” The overall space used is the space of M_f + the space of M_g + O(log|f(x)|) = O(log|x|).

To compute for one step, M_g needs to know the contents of this cell, in other words, f(x)_i. At this point M_h temporarily suspends its simulation of M_g (copying the contents of M_g's work tape to a safe place on its own work tape) and invokes M_f on input ⟨x, i⟩ to get f(x)_i. Then it resumes its simulation of M_g using this bit. The total space M_h uses is O(log|g(f(x))| + s(|x|) + s(|f(x)|)), where s(n) = O(log n) bounds the space used by M_f and M_g. Since |f(x)| ≤ poly(|x|), this expression is O(log|x|).
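The construction of M_h can be mirrored in a few lines of code: given only bit/length oracles for f and g, one builds the bit oracle for h = g ∘ f by letting g read a “virtual input” whose cells are served, on demand, by calls to f. The sketch below is illustrative only; the names compose_implicit, f_bit, f_len, g_bit, g_len and the toy choices of f and g are assumptions, and a real logspace machine would recompute inner bits rather than rely on objects. Still, it captures the fictitious-tape idea of the proof.

```python
def compose_implicit(f_bit, f_len, g_bit, g_len):
    """Bit/length oracles for h(x) = g(f(x)), given only bit/length oracles
    for f and g. Mirrors the proof of Lemma 4.17: g runs on a 'virtual input
    tape' whose cells are produced on demand by calls to f, so f(x) is never
    written down in full."""

    class _VirtualInput:
        # Behaves like the string f(x): reading cell i triggers one call to
        # f_bit(x, i), just as M_h re-invokes M_f for each requested cell.
        def __init__(self, x):
            self._x = x
        def __len__(self):
            return f_len(self._x)
        def __getitem__(self, i):
            return f_bit(self._x, i)

    def h_bit(x, j):
        return g_bit(_VirtualInput(x), j)

    def h_len(x):
        return g_len(_VirtualInput(x))

    return h_bit, h_len

# Toy inner function f(x) = x·x and outer function g(y) = bitwise complement
# of y, both given only through their oracles.
def f_bit(x, i): return x[i % len(x)]
def f_len(x):    return 2 * len(x)
def g_bit(y, j): return "1" if y[j] == "0" else "0"
def g_len(y):    return len(y)

h_bit, h_len = compose_implicit(f_bit, f_len, g_bit, g_len)
print("".join(h_bit("0011", j) for j in range(h_len("0011"))))   # prints 11001100
```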

Now we exhibit an NL-complete language. Recall from Section 4.1.2 the language PATH of triples ⟨G, s, t⟩ such that vertex t can be reached from s in the directed graph G. We have the following result.

Theorem 4.18

PATHisNL-complete.

Proof: We have already seen that PATH is in NL. Let L be any language in NL and M be a machine that decides it in space O(log n). We describe a logspace implicitly computable function f that reduces L to PATH. For any input x of size n, f(x) will be the configuration graph G_{M,x} whose nodes are all possible 2^{O(log n)} configurations of the machine on input x, along with the start configuration C_start and the accepting configuration C_acc. In this graph there is a path from C_start to C_acc iff M accepts x. The graph is represented as usual by an adjacency matrix that contains a 1 in the ⟨C, C'⟩-th position (i.e., in the C-th row and C'-th column, if we identify the configurations with numbers between 0 and 2^{O(log n)}) iff there is an edge from C to C' in G_{M,x}. To finish the proof, we need to show that this adjacency matrix can be computed by a logspace reduction, in other words, to describe a logspace machine that can compute any desired bit in it. This is easy since, given ⟨C, C'⟩, a deterministic machine can in space O(|C| + |C'|) = O(log|x|) examine C, C' and check whether C' is one of the (at most two) configurations that can follow C according to the transition function of M.
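The heart of this reduction is that a single entry of the adjacency matrix only requires looking at the two configurations in question. The toy sketch below shows the pattern for a graph given implicitly by a successor function; the integer encoding of configurations, the successor rule, and the names successors and adjacency_bit are illustrative assumptions, not the book's construction.

```python
def successors(C, num_configs):
    """Toy nondeterministic 'transition function': each configuration C has at
    most two successors. A genuine G_{M,x} would obtain these by decoding C
    (state, head positions, work-tape contents) and applying M's transition
    function; this stand-in is purely illustrative."""
    return {(2 * C) % num_configs, (2 * C + 1) % num_configs}

def adjacency_bit(C, C_prime, num_configs):
    """The (C, C')-entry of the adjacency matrix of the configuration graph:
    1 iff C' follows from C in one step. Only C and C' themselves are
    examined, i.e. O(log num_configs) bits of working storage, even though the
    full matrix has polynomially many entries, more than a logspace machine
    could hold at once. Computing any single bit on demand is exactly what
    Definition 4.16 asks of the reduction."""
    return 1 if C_prime in successors(C, num_configs) else 0

# The reduction maps x to <G_{M,x}, C_start, C_acc>; here we just probe a few
# entries of the toy graph on 2^10 configurations.
N = 1 << 10
print(adjacency_bit(3, 6, N), adjacency_bit(3, 7, N), adjacency_bit(3, 8, N))   # 1 1 0
```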
