
ARKIV FÖR MATEMATIK, Band 3, nr 32. Communicated 12 September 1956 by HARALD CRAMÉR and OTTO FROSTMAN

A prediction problem in game theory

BY ULF GRENANDER


1. Introduction

Two persons, X and P, play the following game. X controls a random mechanism producing a stationary stochastic process x = {x_t}. The mean value E x(t) = 0 and the variance E x²(t) = 1 are given, but the form of the spectrum of the process can be determined completely by X. The player P observes the values x(t) for t < 0 and wants to predict the value of

z = ∫_0^1 a(t) x_t dt = Ax,

where a(t) is a real and continuous function of t in (0, 1). For this purpose P has available all the linear predictors formed from the stochastic variables x_t, t < 0. Let us denote the chosen predictor by p and the predicted value by px.

The payoff function of the game is (Ax - px)². X wishes to maximize the quantity ‖Ax - px‖² = E[Ax - px]² and P tries to minimize the same expression.

In the next section we study the value of max min ‖Ax - px‖² and how this value is attained. In section 3 we show that this value coincides with min max ‖Ax - px‖², the game is definite, and the pair of pure strategies is obtained that forms a solution of the game. It may be a matter of convention whether the strategy of the player X is considered as pure (a single choice of g(t)) or randomized (a choice of the stochastic process x). Sections 4 and 5 are devoted to some applications of the theorem in section 3.

The author was led to this result when working on an applied problem. The reader may be interested in consulting the paper of M. C. Yovits and J. L. Jackson [1] for a different approach to a similar problem.

2. Derivation of max min

When P first chooses his strategy p to minimize ‖Ax - px‖² and then X chooses x to maximize the minimum value, it is clear that we can immediately limit the set of strategies of X to processes that are completely non-deterministic.


To see this we note first that for given x we minimize ‖Ax - px‖² by choosing
px = ∫_0^1 a(t) x_t* dt,
where x_t* is the best prediction of the value x_t when x_v, v ≤ 0, have been observed. Second, if x_t is an arbitrary stationary process it is well known that it can be decomposed x_t = y_t + z_t into two terms, one, y_t, completely non-deterministic and the other, z_t, deterministic and such that y_s ⊥ z_t for all s, t. But then for the optimal predictor corresponding to x_t we have ‖Ax - px‖² = ‖Ay - py‖² and ‖y‖ ≤ ‖x‖ = 1 with equality only if z = 0. If z ≠ 0 we would have reached a larger value by choosing the process ‖y‖⁻¹ y_t instead of x_t, which proves the statement.

But if x_t is completely non-deterministic it can be represented as
x_t = ∫_{-∞}^t g(t - u) dξ(u),
where ξ(u) is a process with orthogonal increments and normed ‖Δξ(u)‖² = Δu, and
∫_0^∞ g²(t) dt = 1.   (1)

There are in general many different representations of this form, but we are especially interested in the one (which always exists) such that the linear manifolds spanned by x_s, s ≤ t, and ξ(s), s ≤ t, coincide for all values of t. Then the optimal predictor is simply

x_t* = ∫_{-∞}^0 g(t - u) dξ(u),

and we have
min_p ‖Ax - px‖² = ‖∫_0^1 a(t)(x_t - x_t*) dt‖² = ∫_0^1 ∫_0^1 a(s) a(t) r(s, t) ds dt,
where
r(s, t) = E(x_s - x_s*)(x_t - x_t*) = ∫_0^{min(s,t)} g(s - u) g(t - u) du.

Hence the minimum value is given by


min_p ‖Ax - px‖² = ∫_0^1 ∫_0^1 a(s) a(t) ∫_0^{min(s,t)} g(s - u) g(t - u) du ds dt
= ∫_0^1 ∫_0^1 K(x, y) g(x) g(y) dx dy
with
K(x, y) = ∫_0^{min(1-x, 1-y)} a(x + u) a(y + u) du.   (2)
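To make formula (2) concrete, here is a small numerical sketch (mine, not from the paper). It fixes one admissible choice for the player X, namely g(t) = √2 e^{-t}, which satisfies the side condition ∫_0^∞ g²(t) dt = 1, takes the weight a(t) = 1 - t of Section 4 as an example, and evaluates the minimum risk through the covariance r(s, t) derived above; all function names are illustrative.

```python
import numpy as np

a = lambda t: 1.0 - t                        # example weight function (the one used in Section 4)
g = lambda t: np.sqrt(2.0) * np.exp(-t)      # one admissible g: integral of g^2 over (0, inf) is 1

def r(s, t, m=300):
    """r(s, t) = integral_0^{min(s,t)} g(s - u) g(t - u) du (trapezoidal rule)."""
    upper = min(s, t)
    u = np.linspace(0.0, upper, m)
    return np.trapz(g(s - u) * g(t - u), u)

def min_risk(n=200):
    """min_p ||Ax - px||^2 = double integral of a(s) a(t) r(s, t) over the unit square."""
    grid = (np.arange(n) + 0.5) / n          # midpoint rule on (0, 1)
    R = np.array([[r(s, t) for t in grid] for s in grid])
    A = a(grid)
    return float((A[:, None] * A[None, :] * R).sum() / n**2)

if __name__ == "__main__":
    print("minimum risk for this particular g:", min_risk())
    # By the argument that follows, this number cannot exceed the largest
    # eigenvalue nu of the kernel K(x, y) in (2), which for a(t) = 1 - t
    # is about 0.08 (see Section 4).
```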

Now the player X tries to make this minimum value as large as possible by choosing x in an appropriate way. Because of the side condition (1) it is clear that

max_g min_p ‖Ax - px‖² ≤ ν

if ν is the largest eigenvalue of the continuous and symmetric kernel K(x, y) in (2). On the other hand, if γ(t) is the normed eigenfunction corresponding to the eigenvalue ν (or one of the functions if ν is a multiple value), then

min_p ‖Ax - px‖² = ∫_0^1 ∫_0^1 K(x, y) γ(x) γ(y) dx dy = ν.

For this inequality and other results on stationary processes used in this section we refer the reader to J. L. Doob [1].

We have now proved that max min = the largest eigenvalue ν of the kernel K(x, y). We now proceed to the derivation of the value of min max.
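The characterization of max min as an eigenvalue lends itself to a direct numerical check. The following sketch (not part of the paper; the names and the choice a(t) = 1 - t are mine, the latter borrowed from Section 4) discretizes K(x, y) on a uniform grid and approximates ν by the largest eigenvalue of the resulting symmetric matrix.

```python
import numpy as np

def kernel_K(x, y, a, m=400):
    """K(x, y) = integral_0^{min(1-x, 1-y)} a(x + u) a(y + u) du, by the trapezoidal rule."""
    upper = min(1.0 - x, 1.0 - y)
    u = np.linspace(0.0, upper, m)
    return np.trapz(a(x + u) * a(y + u), u)

def game_value(a, n=200):
    """Largest eigenvalue of the integral operator with kernel K (midpoint Nystrom scheme)."""
    t = (np.arange(n) + 0.5) / n
    K = np.array([[kernel_K(x, y, a) for y in t] for x in t])
    return float(np.linalg.eigvalsh(K / n)[-1])   # symmetric matrix, eigenvalues in ascending order

if __name__ == "__main__":
    a = lambda t: 1.0 - t                          # the weight function of Section 4
    print("estimated value of the game:", game_value(a))
    # For a(t) = 1 - t this should come out close to 0.08, in agreement with Section 4.
```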

3. Derivation of min max

Let us approximate a(t) uniformly with a step-function
a_n(t) = a(ν/n) = a_ν for ν/n ≤ t < (ν + 1)/n, ν = 0, 1, ..., n - 1.
As
‖∫_0^1 [a(t) - a_n(t)] x_t dt‖ ≤ ‖x_t‖ ∫_0^1 |a(t) - a_n(t)| dt,
we see that we commit an arbitrarily small error if we deal with
A_n x = ∫_0^1 a_n(t) x_t dt
instead of Ax. Let us denote the stochastic variables
x_ν = n ∫_{ν/n}^{(ν+1)/n} x_t dt, ν = 0, ±1, ±2, ...,
and by C the class of all predictors formed from the variables x_ν with ν < 0. Then it is clear that
min_p max_x ‖A_n x - px‖ ≤ min_{p∈C} max_x ‖A_n x - px‖.

The x_ν form a stationary process with a variance less than but close to one when n is large. Let us denote by F the class of all discrete stochastic processes that can be represented as x_ν above in terms of a stationary stochastic process with a continuous time parameter. Then

min_{p∈C} max_x ‖A_n x - px‖ = min_{p∈C} max_{x∈F} ‖A_n x - px‖ ≤ min_{p∈C} max_{x∈D} ‖A_n x - px‖,
where D stands for the class of all discrete stationary processes with variance one.

Now the maximum can be expressed conveniently in another way. Using the well-known spectral representation we can write

x_ν = ∫_{-π}^{π} e^{iνλ} dZ(λ)
with ‖ΔZ(λ)‖² = ΔF(λ) and F(π) - F(-π) = 1. We also have

A_n x - px = (1/n) Σ_{ν=0}^{n-1} a_ν x_ν - Σ_{ν=-∞}^{-1} c_ν x_ν = ∫_{-π}^{π} [α(λ) - φ(λ)] dZ(λ),
where
α(λ) = (1/n) Σ_{ν=0}^{n-1} a_ν e^{iνλ}
and
φ(λ) = Σ_{ν=-∞}^{-1} c_ν e^{iνλ}.

Then
max_{x∈D} ‖A_n x - px‖² = max_λ |α(λ) - φ(λ)|².

We can now apply the following theorem: Consider the class of all power series f(ζ) which are regular in |ζ| < 1 and begin with the given terms Σ_{k=0}^{n} α_k ζ^k. We denote by μ_n the highest eigenvalue of the matrix with elements
h_pq = α_p α_q + α_{p-1} α_{q-1} + ... + α_0 α_{q-p}  for p ≤ q,  p, q = 0, 1, ..., n.
Then if μ_n > 0,
max_{|ζ|=1} |f(ζ)|² ≥ μ_n,
and equality holds for a function of the form
f(ζ) = γ Π_{k=1}^{n} (ζ + w_k)/(1 + w̄_k ζ),  γ real, |w_k| < 1.

For a fuller discussion and proof of this theorem see G. Szegő and U. Grenander [1].

In the present case we have to derive the largest eigenvalue of the symmetric matrix with the elements
h_pq = (1/n²) [a(1 - p/n) a(1 - q/n) + a(1 - (p-1)/n) a(1 - (q-1)/n) + ... + a(1) a(1 - (q-p)/n)]
for p ≤ q, p, q = 0, 1, ..., n - 1.

As a(t) is a continuous function of t it follows that for p/n → x, q/n → y, x ≤ y, we have
lim_{n→∞} n h_pq = ∫_0^{min(x,y)} a(1 - x + u) a(1 - y + u) du = K(1 - x, 1 - y).

It now follows that lim_{n→∞} μ_n = ν, the largest eigenvalue of the kernel K(x, y).
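One can also watch this limit numerically (again a sketch of mine, using the example a(t) = 1 - t from Section 4): build the matrix (h_pq) from samples of a and compute its largest eigenvalue μ_n for increasing n.

```python
import numpy as np

def mu_n(a, n):
    """Largest eigenvalue of the matrix h_pq built from the step-function approximation a_n."""
    h = np.zeros((n, n))
    for p in range(n):
        for q in range(p, n):
            # h_pq = (1/n^2) * sum_{k=0}^{p} a(1 - (p-k)/n) a(1 - (q-k)/n),  p <= q
            s = sum(a(1.0 - (p - k) / n) * a(1.0 - (q - k) / n) for k in range(p + 1))
            h[p, q] = h[q, p] = s / n**2
    return float(np.linalg.eigvalsh(h)[-1])

if __name__ == "__main__":
    a = lambda t: 1.0 - t
    for n in (10, 40, 160):
        print(n, mu_n(a, n))      # the values should creep toward nu, about 0.08 for this a
```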

Hence we have shown that
min_p max_x ‖Ax - px‖² ≤ ν = max_x min_p ‖Ax - px‖²,
where only the equality sign is possible. This completes the proof of the

Theorem: The zero-sum two-person game defined above with the payoff function

(Ax - px)²
is definite. A solution of the game is given by the pure strategies
x_t = ∫_{-∞}^t γ(t - u) dξ(u),
px = ∫_0^1 a(t) x_t* dt,


where γ(t) is an eigenfunction corresponding to the largest eigenvalue ν of the symmetric kernel

K(x, y) = ∫_0^{min(1-x, 1-y)} a(x + u) a(y + u) du.

The value of the game is ν. When ν is a simple eigenvalue the solution of the game is unique.

4. First application

When a(x) = 1 - x we have to solve the integral equation

∫_0^1 K(x, y) g(x) dx = λ g(y)

with K(x, y) = K(y, x) and
K(x, y) = ∫_0^{1-x} (1 - u - y)(1 - u - x) du for y < x.

This elementary problem will be solved by an argument of the type used in connection with Green's functions.

We have the partial derivatives with respect to y
K_y' = -∫_0^{1-x} (1 - x - u) du,  K_y'' = K_y''' = 0  for y < x,
and
K_y' = -∫_0^{1-y} (1 - x - u) du,  K_y'' = y - x,  K_y''' = 1  for y > x.

Except for the point y = x the function K(x, y) has continuous derivatives of all orders with respect to y. In y = x the first and second derivatives are continuous,
(K_y')_{y=x-} = (K_y')_{y=x+} = -∫_0^{1-x} (1 - x - u) du,
(K_y'')_{y=x-} = (K_y'')_{y=x+} = 0,

but the third derivative has a discontinuity. Let us differentiate the integral equation twice with respect to y,

λ g''(y) = ∫_0^1 K_y''(x, y) g(x) dx = ∫_0^y (y - x) g(x) dx.

Differentiating this equation twice more we get λ g^{IV}(y) = g(y).

The solutions of this differential equation are of the form
g(t) = A e^{iκt} + B e^{-iκt} + C e^{κt} + D e^{-κt}
with κ = λ^{-1/4}. To determine A, B, C and D we have to observe the boundary conditions that g(t) has to satisfy. Indeed, as K(x, 1) = K_y'(x, 1) = 0 and because of g''(0) = g'''(0) = 0 we get

A e^{iκ} + B e^{-iκ} + C e^{κ} + D e^{-κ} = 0,
κ (i A e^{iκ} - i B e^{-iκ} + C e^{κ} - D e^{-κ}) = 0,
κ² (-A - B + C + D) = 0,
κ³ (-i A + i B + C - D) = 0.

In order that this system of linear equations have non-trivial solutions it is necessary that its determinant is zero,
| e^{iκ}      e^{-iκ}      e^{κ}     e^{-κ}   |
| i e^{iκ}    -i e^{-iκ}   e^{κ}     -e^{-κ}  |
| -1          -1           1         1        |  = 0.
| -i          i            1         -1       |

Expanding this determinant after its first row we have
D(κ) = [2i e^{-iκ} + (i - 1) e^{κ} + (i + 1) e^{-κ}] e^{iκ}
- [-2i e^{iκ} + (-1 - i) e^{κ} + (1 - i) e^{-κ}] e^{-iκ}
+ [(1 + i) e^{iκ} + (-1 + i) e^{-iκ} + 2i e^{-κ}] e^{κ}
- [(1 - i) e^{iκ} + (-1 - i) e^{-iκ} - 2i e^{κ}] e^{-κ}
= 8i [1 + cos κ cosh κ].

The largest value of λ corresponds to the smallest κ satisfying cos κ cosh κ = -1.


We find approximately κ ≈ 1.88, so that the value of the game is
ν = λ_max = κ^{-4} ≈ 0.08.
To determine the corresponding eigenfunction we start from
A e^{iκ} + B e^{-iκ} + C e^{κ} + D e^{-κ} = 0,
-A - B + C + D = 0,
-i A + i B + C - D = 0.
Putting
α = A + B = C + D,  β = i(A - B) = C - D,
or
A = (α - iβ)/2,  B = (α + iβ)/2,  C = (α + β)/2,  D = (α - β)/2,
we get
γ(t) = α (cos κt + cosh κt) + β (sin κt + sinh κt),
where α and β can be determined from
γ(1) = 0,  ∫_0^1 γ²(t) dt = 1.
The form of γ(t) is shown in Diagram 1.
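These numbers are easy to reproduce (a sketch of mine, not in the paper): locate the smallest root of cos κ cosh κ = -1 by bisection, form ν = κ^{-4}, and fix α, β from γ(1) = 0 together with the normalization ∫_0^1 γ²(t) dt = 1.

```python
import numpy as np

def f(k):
    return np.cos(k) * np.cosh(k) + 1.0

# Smallest positive root of cos(k) cosh(k) = -1, by bisection on [1, 3] where f changes sign.
lo, hi = 1.0, 3.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
kappa = 0.5 * (lo + hi)
nu = kappa ** -4
print("kappa =", kappa)                     # about 1.875
print("value of the game nu =", nu)         # about 0.081

# gamma(t) = alpha (cos kt + cosh kt) + beta (sin kt + sinh kt),
# with gamma(1) = 0 and integral_0^1 gamma^2 dt = 1.
t = np.linspace(0.0, 1.0, 2001)
phi1 = np.cos(kappa * t) + np.cosh(kappa * t)
phi2 = np.sin(kappa * t) + np.sinh(kappa * t)
ratio = -phi1[-1] / phi2[-1]                # beta/alpha, from gamma(1) = 0
shape = phi1 + ratio * phi2                 # gamma up to a constant factor
alpha = 1.0 / np.sqrt(np.trapz(shape * shape, t))
beta = alpha * ratio
gamma = alpha * phi1 + beta * phi2
print("gamma(1) =", gamma[-1], " integral of gamma^2 =", np.trapz(gamma * gamma, t))
```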

5. Second application

If a(x) ≡ 1 we have to study instead the simple integral equation

∫_0^1 K(x, y) g(x) dx = λ g(y)

with K(x, y) = min(1 - x, 1 - y),
λ g(y) = (1 - y) ∫_0^y g(x) dx + ∫_y^1 (1 - x) g(x) dx.

With a familiar argument we get
λ g''(y) + g(y) = 0
with g(1) = g'(0) = 0.

Diagram 1. γ(t)

Hence
g(t) = A e^{iκt} + B e^{-iκt}
with κ = λ^{-1/2}. As
A e^{iκ} + B e^{-iκ} = 0,  A - B = 0,
we should have cos κ = 0, so that κ = π/2, ν = λ_max = 4/π², and γ(t) = √2 cos(πt/2).
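A corresponding numerical check for this case (my sketch): discretize K(x, y) = min(1 - x, 1 - y) and compare the top eigenvalue and eigenvector with ν = 4/π² ≈ 0.405 and γ(t) = √2 cos(πt/2).

```python
import numpy as np

n = 400
t = (np.arange(n) + 0.5) / n                 # midpoints of a uniform grid on (0, 1)
K = np.minimum.outer(1.0 - t, 1.0 - t)       # K(x, y) = min(1 - x, 1 - y)
vals, vecs = np.linalg.eigh(K / n)           # Nystrom approximation of the integral operator
nu = vals[-1]
print("nu =", nu, " 4/pi^2 =", 4.0 / np.pi ** 2)

gamma = vecs[:, -1] * np.sqrt(n)             # rescale so that the integral of gamma^2 is about 1
gamma *= np.sign(gamma[0])                   # fix the sign
print("max deviation from sqrt(2) cos(pi t / 2):",
      float(np.max(np.abs(gamma - np.sqrt(2.0) * np.cos(np.pi * t / 2.0)))))
```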

REFERENCES
Doob, J. L. [1], Stochastic Processes. New York, 1953.
Szegő, G. and Grenander, U. [1], Toeplitz Forms and Their Applications. (To appear.)
Yovits, M. C. and Jackson, J. L. [1], Linear Filter Optimization with Game Theory Considerations. Inst. Radio Engin. Convention Record, 1955, Part 4.

Department of Mathematical Statistics, Stockholm, Sweden.

Printed 27 December 1956. Uppsala 1956. Almqvist & Wiksells Boktryckeri AB
