Forward estimation for ergodic time series

Gusztáv Morvai^a, Benjamin Weiss^b,*

^a Research Group for Informatics and Electronics of the Hungarian Academy of Sciences, 1521 Goldmann György tér 3, Budapest, Hungary
^b Hebrew University of Jerusalem, Jerusalem 91904, Israel

Received 29 August 2003; accepted 15 June 2004. Available online 5 February 2005.

Abstract

The forward estimation problem for stationary and ergodic time series $\{X_n\}_{n=0}^{\infty}$ taking values from a finite alphabet $\mathcal{X}$ is to estimate the probability that $X_{n+1}=x$ based on the observations $X_i$, $0 \le i \le n$, without prior knowledge of the distribution of the process $\{X_n\}$. We present a simple procedure $g_n$ which is evaluated on the data segment $(X_0,\dots,X_n)$ and for which $\mathrm{error}(n) = |g_n(x) - P(X_{n+1}=x \mid X_0,\dots,X_n)| \to 0$ almost surely for a subclass of all stationary and ergodic time series, while for the full class the Cesàro average of the error tends to zero almost surely and, moreover, the error tends to zero in probability.

© 2004 Elsevier SAS. All rights reserved.

Résumé

The forward estimation problem for a stationary and ergodic time series $\{X_n\}_{n=0}^{\infty}$ taking its values in a finite alphabet $\mathcal{X}$ is to estimate the probability that $X_{n+1}=x$, given the $X_i$ for $0 \le i \le n$ but without prior knowledge of the distribution of the process $\{X_i\}$. We present a simple procedure $g_n$, evaluated on the data $(X_0,\dots,X_n)$, for which $\mathrm{error}(n) = |g_n(x) - P(X_{n+1}=x \mid X_0,\dots,X_n)| \to 0$ almost surely for a subclass of all stationary and ergodic time series, while for the full class the Cesàro average of the error tends to zero almost surely.

Moreover, the error tends to zero in probability.

© 2004 Elsevier SAS. All rights reserved.

MSC: 62G05; 60G25; 60G10

Keywords: Nonparametric estimation; Stationary processes

* Corresponding author.

E-mail addresses: morvai@szit.bme.hu (G. Morvai), weiss@math.huji.ac.il (B. Weiss).

0246-0203/$ – see front matter © 2004 Elsevier SAS. All rights reserved.

doi:10.1016/j.anihpb.2004.07.002


1. Introduction

T. Cover [6] posed two fundamental problems concerning estimation for stationary and ergodic binary time series $\{X_n\}_{n=-\infty}^{\infty}$. (Note that a stationary time series $\{X_n\}_{n=0}^{\infty}$ can be extended to a two-sided stationary time series $\{X_n\}_{n=-\infty}^{\infty}$.) Cover's first problem was on backward estimation.

Problem 1. Is there an estimation scheme $f_n$ for the value $P(X_1=1 \mid X_{-n},\dots,X_0)$ such that $f_n$ depends solely on the observed data segment $(X_{-n},\dots,X_0)$ and
$$ \lim_{n\to\infty} \left| f_n(X_{-n},\dots,X_0) - P\left( X_1=1 \mid X_{-n},\dots,X_0 \right) \right| = 0 $$
almost surely for all stationary and ergodic binary time series $\{X_n\}_{n=-\infty}^{\infty}$?

This problem was solved by Ornstein [20] by constructing such a scheme. (See also Bailey [5].) Ornstein’s scheme is not a simple one and the proof of consistency is rather sophisticated. For an even more general case, a much simpler scheme and proof of consistency were provided by Morvai, Yakowitz and Györfi [19]. (See also Algoet [1] and Weiss [24].) Note that none of these schemes are reasonable from the data consumption point of view.

Cover’s second problem was on forward estimation.

Problem 2. Is there an estimation scheme $f_n$ for the value $P(X_{n+1}=1 \mid X_0,\dots,X_n)$ such that $f_n$ depends solely on the data segment $(X_0,\dots,X_n)$ and
$$ \lim_{n\to\infty} \left| f_n(X_0,\dots,X_n) - P\left( X_{n+1}=1 \mid X_0,\dots,X_n \right) \right| = 0 $$
almost surely for all stationary and ergodic binary time series $\{X_n\}_{n=-\infty}^{\infty}$?

This problem was answered by Bailey [5] in a negative way, that is, he showed that there is no such scheme.

(Also see Ryabko [22], Györfi, Morvai and Yakowitz [11] and Weiss [24].) Bailey used the technique of cutting and stacking developed by Ornstein [21] and Shields [23]. Ryabko's construction was based on a function of an infinite-state Markov chain.

Morvai [16] addressed a modified version of Problem 2. There one is not required to predict at all time instances; rather, one may refuse to predict for certain values of $n$, but is expected to predict infinitely often. Morvai [16] proposed a sequence of stopping times $\lambda_n$ and managed to estimate the conditional probability $P(X_{\lambda_n+1}=1 \mid X_0,\dots,X_{\lambda_n})$ in the pointwise sense, that is, for his estimator along the proposed stopping time sequence the error tends to zero as $n$ increases, almost surely. Another estimator was proposed for this modified Problem 2 by Morvai and Weiss [17], for which the $\lambda_n$ grow more slowly, but the consistency holds only for a certain subclass of all stationary binary time series.

In this paper we consider the original Problem 2, but we shall impose an additional restriction on the possible time series. The conditional probability $P(X_1=1 \mid \dots, X_{-1}, X_0)$ is said to be continuous if a version of it is continuous with respect to the metric
$$ d(x, y) = \sum_{i=0}^{\infty} 2^{-i-1} \left| x_{-i} - y_{-i} \right|, \quad \text{where} \ x_{-i}, y_{-i} \in \{0,1\}. $$

Problem 3. Is there an estimation scheme $f_n$ for the value $P(X_{n+1}=1 \mid X_0,\dots,X_n)$ such that $f_n$ depends solely on the data segment $(X_0,\dots,X_n)$ and
$$ \lim_{n\to\infty} \left| f_n(X_0,\dots,X_n) - P\left( X_{n+1}=1 \mid X_0,\dots,X_n \right) \right| = 0 $$
almost surely for all stationary and ergodic binary time series $\{X_n\}_{n=-\infty}^{\infty}$ with continuous conditional probability $P(X_1=1 \mid \dots, X_{-1}, X_0)$?


We will answer this question in the affirmative. This class includes all $k$-step Markov chains. It is not known whether the schemes proposed by Bailey [5], Ornstein [20], and Morvai, Yakowitz and Györfi [19] solve Problem 3.

Problem 4. Is there an estimation scheme $f_n$ for the value $P(X_{n+1}=1 \mid X_0,\dots,X_n)$ such that $f_n$ depends solely on the data segment $(X_0,\dots,X_n)$ and
$$ \lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} \left| f_i(X_0,\dots,X_i) - P\left( X_{i+1}=1 \mid X_0,\dots,X_i \right) \right| = 0 $$
almost surely for all stationary and ergodic binary time series $\{X_n\}_{n=-\infty}^{\infty}$?

Bailey [5] (cf. Algoet [2] also) showed that any scheme that solves Problem 1 can be easily modified to solve Problem 4 (indeed, just exchange the data segment $(X_{-n},\dots,X_0)$ for $(X_0,\dots,X_n)$), but apparently not all solutions of Problem 4 arise in this fashion. For further reading cf. Algoet [1,3], Morvai, Yakowitz and Györfi [19], Györfi et al. [8], Györfi, Lugosi and Morvai [10], Györfi and Lugosi [9] and Weiss [24].

Problem 5. Is there an estimation scheme $f_n$ for the value $P(X_{n+1}=1 \mid X_0,\dots,X_n)$ such that $f_n$ depends solely on the data segment $(X_0,\dots,X_n)$ and, for arbitrary $\epsilon > 0$,
$$ \lim_{n\to\infty} P\left( \left| f_n(X_0,\dots,X_n) - P\left( X_{n+1}=1 \mid X_0,\dots,X_n \right) \right| > \epsilon \right) = 0 $$
for all stationary and ergodic binary time series $\{X_n\}_{n=-\infty}^{\infty}$?

By stationarity, for any scheme that solves Problem 1, the shifted version of it solves Problem 5. (Just replace the data segment $(X_{-n},\dots,X_0)$ by $(X_0,\dots,X_n)$.)

There are existing schemes that solve Problem 4 (e.g. Bailey [5], Ornstein [20], and, even for a more general case, Morvai, Yakowitz and Györfi [19], Algoet [1], Györfi and Lugosi [9]), and there are schemes that solve Problem 5 (e.g. Bailey [5], Ornstein [20] and, for an even more general case, Morvai, Yakowitz and Györfi [19], Algoet [1], Morvai, Yakowitz and Algoet [18]). In this paper we propose a reasonable, very simple algorithm that simultaneously solves Problems 3, 4 and 5. Note that the schemes given by Bailey [5], Ornstein [20], Morvai, Yakowitz and Györfi [19], Algoet [1] and Weiss [24] are not reasonable at all: they consume data extremely rapidly (cf. Morvai [15]), and it is not known whether they solve Problem 3.

2. Preliminaries and main results

Let $\{X_n\}_{n=-\infty}^{\infty}$ be a stationary time series taking values from a finite alphabet $\mathcal{X}$. (Note that any stationary time series $\{X_n\}_{n=0}^{\infty}$ can be regarded as a two-sided time series $\{X_n\}_{n=-\infty}^{\infty}$.) For notational convenience, let $X_m^n = (X_m,\dots,X_n)$, where $m \le n$. Note that if $m > n$ then $X_m^n$ is the empty string.

Let $g : \mathcal{X} \to (-\infty,\infty)$ be arbitrary.

Our goal is to estimate the conditional expectation $E(g(X_{n+1}) \mid X_0^n)$ from the samples $X_0^n$.

For $k \ge 1$ define the stopping times $\tau_i^k(n)$, which indicate where the $k$-block $X_{n-k+1}^n$ occurs earlier in the time series $\{X_n\}$. Formally, we set $\tau_0^k(n)=0$ and for $i \ge 1$ let
$$ \tau_i^k(n) = \min\left\{ t > \tau_{i-1}^k(n) : X_{n-t-k+1}^{\,n-t} = X_{n-k+1}^{\,n} \right\}. \tag{1} $$

Let $K_n \ge 1$ and $J_n \ge 1$ be sequences of nondecreasing positive integers tending to $\infty$, which will be fixed later.

Define $\kappa_n$ as the largest $1 \le k \le K_n$ such that there are at least $J_n$ occurrences of the block $X_{n-k+1}^n$ in the data segment $X_0^n$, that is,
$$ \kappa_n = \max\left\{ 1 \le k \le K_n : \tau_{J_n}^k(n) \le n-k+1 \right\} \tag{2} $$
if there is such a $k$, and $0$ otherwise.

Define $\lambda_n$ as the number of occurrences of the block $X_{n-\kappa_n+1}^n$ in the data segment $X_0^n$, that is,
$$ \lambda_n = \max\left\{ j \ge 1 : \tau_j^{\kappa_n}(n) \le n-\kappa_n+1 \right\} \tag{3} $$
if $\kappa_n > 0$, and zero otherwise. Observe that if $\kappa_n > 0$ then $\lambda_n \ge J_n$.

Our estimate $g_n$ for $E(g(X_{n+1}) \mid X_0^n)$ is defined as $g_0 = 0$ and, for $n \ge 1$,
$$ g_n = \frac{1}{\lambda_n} \sum_{i=1}^{\lambda_n} g\left( X_{n-\tau_i^{\kappa_n}(n)+1} \right) \tag{4} $$
if $\kappa_n > 0$, and zero otherwise.
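To make the construction concrete, the following minimal Python sketch evaluates the stopping times (1), the quantities (2) and (3), and the estimate (4) on a data segment. It is our illustration (the function names, the brute-force recurrence search, and the inference of the alphabet size from the data are assumptions), not code from the paper.

```python
import math

def estimate(xs, g, K=None, J=None):
    """Sketch of the estimator g_n of eq. (4) on the data segment xs = (X_0, ..., X_n).

    K and J default to the theorem's parameter choices K_n = max(1, 0.1 log_{|X|} n)
    and J_n = max(1, n^0.5), rounded down; the alphabet size is inferred from the data.
    """
    n = len(xs) - 1
    if n < 1:
        return 0.0
    a = max(2, len(set(xs)))                       # alphabet size |X| (inferred)
    Kn = K if K is not None else max(1, int(0.1 * math.log(n, a)))
    Jn = J if J is not None else max(1, int(n ** 0.5))

    def recurrence_times(k):
        """All t with X_{n-t-k+1}^{n-t} = X_{n-k+1}^n and t <= n-k+1, cf. eq. (1)."""
        block = xs[n - k + 1: n + 1]
        return [t for t in range(1, n - k + 2)
                if xs[n - t - k + 1: n - t + 1] == block]

    # kappa_n: the largest k <= K_n whose k-block recurs at least J_n times, eq. (2);
    # the recurrence times themselves then realize lambda_n, eq. (3).
    for k in range(Kn, 0, -1):
        taus = recurrence_times(k)
        if len(taus) >= Jn:
            # eq. (4): average g over the symbol following each past occurrence
            return sum(g(xs[n - t + 1]) for t in taus) / len(taus)
    return 0.0                                      # kappa_n = 0
```

For example, with $g(x) = 1_{\{x=z\}}$ the returned value is the empirical frequency of $z$ immediately after the earlier occurrences of the current $\kappa_n$-block, an estimate of $P(X_{n+1}=z \mid X_0^n)$.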

Let $\mathcal{X}^{*-}$ be the set of all one-sided sequences, that is,
$$ \mathcal{X}^{*-} = \left\{ (\dots, x_{-1}, x_0) : x_i \in \mathcal{X} \ \text{for all} \ -\infty < i \le 0 \right\}. $$
Define the function $G : \mathcal{X}^{*-} \to (-\infty,\infty)$ as
$$ G\left( x_{-\infty}^0 \right) = E\left( g(X_1) \mid X_{-\infty}^0 = x_{-\infty}^0 \right). $$
Note that as a conditional expectation this is only defined almost surely. E.g. if $g(x) = 1_{\{x=z\}}$ for a fixed $z \in \mathcal{X}$, then $G(y_{-\infty}^0) = P(X_1 = z \mid X_{-\infty}^0 = y_{-\infty}^0)$.

Define a distance on $\mathcal{X}^{*-}$ as
$$ d\left( x_{-\infty}^0, y_{-\infty}^0 \right) = \sum_{i=0}^{\infty} 2^{-i-1}\, 1_{\{x_{-i} \ne y_{-i}\}}. $$

Definition. The conditional expectation $G(X_{-\infty}^0)$ is said to be continuous if a version of it is continuous on the set $\mathcal{X}^{*-}$ with respect to the metric $d(\cdot,\cdot)$. Since this space is compact, continuity is in fact equivalent to uniform continuity.
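As a small illustration of the metric (our sketch; the reverse-indexed list representation and the truncation depth are assumptions we introduce), $d$ can be evaluated on finite approximations of one-sided pasts:

```python
def d(x, y, depth=60):
    """Metric d on one-sided pasts, cf. the definition above.

    x and y list the past in reverse order: x[i] stands for x_{-i}.
    Truncating the sum at `depth` perturbs the value by at most 2**-depth;
    if two pasts agree in coordinates 0, ..., k-1 then d(x, y) <= 2**-k.
    """
    return sum(2.0 ** -(i + 1)
               for i in range(min(depth, len(x), len(y)))
               if x[i] != y[i])
```

If two pasts agree in their $k$ most recent coordinates then $d \le 2^{-k}$, so continuity of $G$ says that a sufficiently long observed suffix determines $G$ up to a small error, which is exactly what the estimator exploits.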

The processes with continuous conditional expectation are essentially the Random Markov Processes of Kalikow [12], or the continuous g-measures studied by Mike Keane [13].

Theorem. Let $\{X_n\}$ be a stationary and ergodic time series taking values from a finite alphabet $\mathcal{X}$. Assume $K_n = \max(1, 0.1 \log_{|\mathcal{X}|} n)$ and $J_n = \max(1, n^{0.5})$. Then

(A) if the conditional expectation $G(X_{-\infty}^0)$ is continuous with respect to the metric $d(\cdot,\cdot)$, then
$$ \lim_{n\to\infty} \left| g_n - E\left( g(X_{n+1}) \mid X_0^n \right) \right| = 0 \quad \text{almost surely;} $$
(B) without any continuity assumption,
$$ \lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} \left| g_i - E\left( g(X_{i+1}) \mid X_0^i \right) \right| = 0 \quad \text{almost surely;} $$
(C) without any continuity assumption, for arbitrary $\epsilon > 0$,
$$ \lim_{n\to\infty} P\left( \left| g_n - E\left( g(X_{n+1}) \mid X_0^n \right) \right| > \epsilon \right) = 0. $$

Remarks. Note that these results are valid without the ergodic assumption. One may use the ergodic decomposition throughout the proofs, cf. Gray [7], p. 268.


We note that from the proofs of Ryabko [22] and of Györfi, Morvai and Yakowitz [11] it is clear that the continuity condition in the first part of the theorem cannot be relaxed. Even for the class of all stationary and ergodic binary time series with merely almost surely continuous conditional probability $P(X_1=1 \mid \dots, X_{-1}, X_0)$, one cannot solve Problem 2 of the Introduction. (An almost surely continuous conditional probability is one that, as a function restricted to a set $C$ of full measure, is continuous on $C$.)

We do not know whether the shifted version of our proposed scheme $g_n$ solves Problem 1, that is, when $g_n$ is evaluated on $(X_{-n},\dots,X_0)$ rather than on $(X_0,\dots,X_n)$.

If $\mathcal{X}$ is a countably infinite alphabet, then no scheme can achieve a result similar to part (A) of the theorem for all bounded $g(\cdot)$, even if one assumes that the resulting $G(\cdot)$ is continuous and the time series is in fact a first-order Markov chain. Indeed, whenever a state appears that has not occurred before, one is unable to predict; cf. Györfi, Morvai and Yakowitz [11].
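Returning to the finite-alphabet setting, here is a hypothetical numerical sanity check of part (A) (our illustration, reusing the `estimate` sketch from Section 2; the transition probabilities and the helper function are invented for the example). A binary first-order Markov chain belongs to the continuous class, so the estimate should approach the true transition probability:

```python
import random

def sample_markov(n, p01=0.3, p11=0.8, seed=1):
    """Binary first-order Markov chain: P(next=1 | current=0) = p01,
    P(next=1 | current=1) = p11."""
    rng = random.Random(seed)
    xs = [0]
    for _ in range(n):
        p = p11 if xs[-1] == 1 else p01
        xs.append(1 if rng.random() < p else 0)
    return xs

xs = sample_markov(20000)
g = lambda x: float(x == 1)             # E(g(X_{n+1}) | X_0^n) = P(X_{n+1}=1 | X_n)
truth = 0.8 if xs[-1] == 1 else 0.3     # the true conditional probability
print(abs(estimate(xs, g) - truth))     # expected to be small for large n
```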

3. Auxiliary results

For $k \ge 1$, $n \ge 0$ and $j \ge 0$ it will be useful to define auxiliary processes $\{\tilde{X}^{(k,n,j)}_i\}_{i=-\infty}^{\infty}$ as follows. Let
$$ \tilde{X}^{(k,n,j)}_i = X_{n-\tau_j^k(n)+i} \quad \text{for} \ -\infty < i < \infty. \tag{5} $$

For an arbitrary stationary time series $\{Y_n\}$ and $k \ge 1$, let $\tilde{\tau}_0^k(Y_{-\infty}^{\infty}) = 0$ and for $i \ge 1$ define
$$ \tilde{\tau}_i^k\left( Y_{-\infty}^{\infty} \right) = \min\left\{ t > \tilde{\tau}_{i-1}^k\left( Y_{-\infty}^{\infty} \right) : Y_{-k+1+t}^{\,t} = Y_{-k+1}^{\,0} \right\}. \tag{6} $$
If it is obvious on which time series $\tilde{\tau}_i^k(Y_{-\infty}^{\infty})$ is evaluated, we will write simply $\tilde{\tau}_i^k$. Let $T$ denote the left shift, that is, $(T x_{-\infty}^{\infty})_i = x_{i+1}$.

We will need the following lemmas later.

Lemma 1. Let $\{X_n\}$ be a stationary time series taking values from a finite alphabet $\mathcal{X}$. For $k \ge 1$, $n \ge 0$ and $j \ge 0$, the time series $\{\tilde{X}^{(k,n,j)}_i\}_{i=-\infty}^{\infty}$ has the same distribution as $\{X_i\}_{i=-\infty}^{\infty}$.

Proof. Note that by (1) and (6),
$$ T^{n-s}\left\{ X_{n-\tau_j^k(n)+l}^{\,n-\tau_j^k(n)+m} = x_l^m,\ \tau_j^k(n) = s \right\} = \left\{ X_l^m = x_l^m,\ \tilde{\tau}_j^k = s \right\}, $$
where $\tilde{\tau}_j^k$ is evaluated on the time series $\{X_i\}_{i=-\infty}^{\infty}$. Now by (5) and stationarity,
$$ P\left( \tilde{X}^{(k,n,j)}_l = x_l, \dots, \tilde{X}^{(k,n,j)}_m = x_m \right) = \sum_{s=0}^{\infty} P\left( X_{n-\tau_j^k(n)+l}^{\,n-\tau_j^k(n)+m} = x_l^m,\ \tau_j^k(n) = s \right) = \sum_{s=0}^{\infty} P\left( X_l^m = x_l^m,\ \tilde{\tau}_j^k = s \right) = P\left( X_l^m = x_l^m \right), $$
and the proof of Lemma 1 is complete. □

Lemma 2. Let $\{X_n\}$ be a stationary and ergodic time series taking values from a finite alphabet $\mathcal{X}$. Assume $K_n \to \infty$, $J_n \to \infty$ and $\lim_{n\to\infty} (J_n/n) = 0$. Then
$$ \lim_{n\to\infty} \kappa_n = \infty \quad \text{almost surely.} $$


Proof. We argue by contradiction. Suppose that $\kappa_{n_i} = K$ and $X_{n_i-K}^{\,n_i} = x_0^K$ for a subsequence $n_i$. Then a simple frequency count (in the data segment $X_0^{n_i}$ there are fewer than $J_{n_i}$ occurrences of the block $x_0^K$) yields that
$$ P\left( X_0^K = x_0^K \right) \le \lim_{n\to\infty} \frac{J_n}{n} = 0. $$
The set of sequences that contain a block with zero probability has zero probability, and thus Lemma 2 is proved. □

4. Pointwise consistency

Proof of Theorem (A). By Lemma 2, for large $n$,
$$ \left| g_n(x) - E\left( g(X_{n+1}) \mid X_0^n \right) \right| = \left| \frac{1}{\lambda_n} \sum_{j=1}^{\lambda_n} g\left( X_{n-\tau_j^{\kappa_n}(n)+1} \right) - E\left( g(X_{n+1}) \mid X_0^n \right) \right| $$
$$ \le \max_{J=J_n,\dots,n}\ \max_{k=1,\dots,K_n} \left| \frac{1}{J} \sum_{j=1}^{J} \left( g\left( X_{n-\tau_j^k(n)+1} \right) - G\left( X_{-\infty}^{\,n-\tau_j^k(n)} \right) \right) \right| + \left| \frac{1}{\lambda_n} \sum_{j=1}^{\lambda_n} G\left( X_{-\infty}^{\,n-\tau_j^{\kappa_n}(n)} \right) - E\left( g(X_{n+1}) \mid X_0^n \right) \right|. $$

Concerning the first term, by (1), (6) and (5),
$$ \frac{1}{J} \sum_{j=1}^{J} \left( g\left( X_{n-\tau_j^k(n)+1} \right) - G\left( X_{-\infty}^{\,n-\tau_j^k(n)} \right) \right) = \frac{1}{J} \sum_{j=0}^{J-1} \left( g\left( \tilde{X}^{(k,n,J)}_{\tilde{\tau}_j^k+1} \right) - G\left( \dots, \tilde{X}^{(k,n,J)}_{\tilde{\tau}_j^k-1}, \tilde{X}^{(k,n,J)}_{\tilde{\tau}_j^k} \right) \right), \tag{7} $$
where $\tilde{\tau}_j^k$ is evaluated on $\{\tilde{X}^{(k,n,J)}_i\}_{i=-\infty}^{\infty}$. Since by Lemma 1
$$ G\left( \dots, \tilde{X}^{(k,n,J)}_{\tilde{\tau}_j^k-1}, \tilde{X}^{(k,n,J)}_{\tilde{\tau}_j^k} \right) = E\left( g\left( \tilde{X}^{(k,n,J)}_{\tilde{\tau}_j^k+1} \right) \,\Big|\, \dots, \tilde{X}^{(k,n,J)}_{\tilde{\tau}_j^k-1}, \tilde{X}^{(k,n,J)}_{\tilde{\tau}_j^k} \right), $$
the pair $\left( \Gamma_j = g\left( \tilde{X}^{(k,n,J)}_{\tilde{\tau}_j^k+1} \right) - G\left( \dots, \tilde{X}^{(k,n,J)}_{\tilde{\tau}_j^k-1}, \tilde{X}^{(k,n,J)}_{\tilde{\tau}_j^k} \right),\ \mathcal{F}_j = \sigma\left( \tilde{X}^{(k,n,J)}_i : i \le \tilde{\tau}_j^k \right) \right)$ forms a martingale difference sequence ($E(\Gamma_j \mid \mathcal{F}_j) = 0$ and $\Gamma_j$ is measurable with respect to $\mathcal{F}_{j+1}$), for which Azuma's exponential bound (cf. Azuma [4]) yields
$$ P\left( \left| \frac{1}{J} \sum_{j=0}^{J-1} \Gamma_j \right| > \epsilon \right) \le 2 e^{-\epsilon^2 J / B} $$
for any $B$ such that $\max_{x\in\mathcal{X}} |g(x)| < B$. Now by (7),
$$ P\left( \max_{J=J_n,\dots,n}\ \max_{k=1,\dots,K_n} \left| \frac{1}{J} \sum_{j=1}^{J} \left( g\left( X_{n-\tau_j^k(n)+1} \right) - G\left( X_{-\infty}^{\,n-\tau_j^k(n)} \right) \right) \right| > \epsilon \right) \le n K_n\, 2 e^{-\epsilon^2 J_n / B}, $$
and by assumption $n K_n\, 2 e^{-\epsilon^2 J_n / B}$ is summable, so the Borel–Cantelli lemma yields almost sure convergence to zero.

Concerning the second term,
$$ \left| \frac{1}{\lambda_n} \sum_{j=1}^{\lambda_n} G\left( X_{-\infty}^{\,n-\tau_j^{\kappa_n}(n)} \right) - E\left( G\left( X_{-\infty}^{\,n} \right) \mid X_0^n \right) \right| \to 0 \quad \text{almost surely,} $$
since $\kappa_n$ tends to infinity by Lemma 2, $X_{n-\tau_j^{\kappa_n}(n)-\kappa_n+1}^{\,n-\tau_j^{\kappa_n}(n)} = X_{n-\kappa_n+1}^{\,n}$ for $0 \le j \le \lambda_n$, and the conditional expectation $G(\cdot)$ is in fact uniformly continuous on $\mathcal{X}^{*-}$ with respect to $d(\cdot,\cdot)$. The proof of Theorem (A) is complete. □
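For reference, the exponential bound invoked above is the Azuma–Hoeffding inequality; in its standard form (the constant in the exponent depends on the almost sure bound assumed on the differences, which is where the paper's $B$ enters) it reads:

```latex
% Azuma--Hoeffding inequality: if (\Gamma_j, \mathcal{F}_j) is a martingale
% difference sequence with |\Gamma_j| \le c almost surely, then for every
% \epsilon > 0,
P\left( \left| \frac{1}{J} \sum_{j=0}^{J-1} \Gamma_j \right| > \epsilon \right)
  \le 2 \exp\!\left( - \frac{\epsilon^2 J}{2 c^2} \right).
```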

5. Time average performance

If the process does not have continuous conditional expectations, then the last step in the proof of Theorem (A) is not valid. It can, however, be carried out for most time instances $n$ by using the typical behaviour of almost every realization $x_{-\infty}^{\infty}$. More specifically, for every $\delta > 0$, the probability of the set of those $x_{-\infty}^0$ for which
$$ \left| E\left( g(X_1) \mid X_{-k+1}^0 = x_{-k+1}^0 \right) - G\left( x_{-\infty}^0 \right) \right| < \delta \quad \text{for all} \ k \ge K $$
tends to one as $K$ tends to infinity. The typical behaviour we are after is the statement that for most of the times $t = n - \tau_j^{\kappa_n}(n)$ the sequence $T^t x_{-\infty}^{\infty}$ belongs to the above-mentioned set. While this need not be the case for all $n$, it is true for most $n$'s, and the next lemma makes this precise. For the analysis we will fix the value of $\kappa_n$ at $k$.

Define the set of good indices $M_n(\delta, K) \subseteq \{K-1, \dots, n-1\}$ as
$$ M_n(\delta, K) = \left\{ K-1 \le i \le n-1 : \left| E\left( g(X_{i+1}) \mid X_{i-k+1}^i \right) - G\left( X_{-\infty}^{\,i} \right) \right| < \delta \ \text{for all} \ k \ge K \right\}. $$

We will analyze the behaviour of our algorithm for $\kappa_i = k$ for each $i \le n$ by first dividing up the indices $\{1, 2, \dots, n\}$ according to the value of $X_{i-k+1}^i = y_{-k+1}^0$, and considering what happens for each of these.

Let $y_{-k+1}^0 \in \mathcal{X}^k$. Define the set of indices $I_n^k(y_{-k+1}^0) \subseteq \{k-1, \dots, n-1\}$ where one can find the pattern $y_{-k+1}^0$, that is,
$$ I_n^k\left( y_{-k+1}^0 \right) = \left\{ k-1 \le i \le n-1 : X_{i-k+1}^i = y_{-k+1}^0 \right\}. $$
Define $D^k(i)$ as
$$ D^k(i) = \begin{cases} \left\{ \tau_j^k(i) : \tau_j^k(i) \le i-k+1 \ \text{and} \ 1 \le j \le i+1 \right\} & \text{if} \ \tau_{J_i}^k(i) \le i-k+1, \\ \emptyset & \text{otherwise.} \end{cases} $$

Let $E_n^k(\delta, K)$ be defined as
$$ E_n^k(\delta, K) = \left\{ 0 \le i \le n-1 : \left| D^k(i) \cap M_n(\delta, K) \right| > \left( 1 - \delta^{0.5} \right) \left| D^k(i) \right| \right\}. $$

If the number of occurrences of $y_{-k+1}^0$ prior to $i$ was not enough for our algorithm, then $D^k(i)$ will be empty. This is rare, and can be expressed as follows. Let
$$ F_n^k = \left\{ 0 \le i \le n-1 : D^k(i) = \emptyset \right\}. $$
It is immediate that
$$ \left| F_n^k \right| \le |\mathcal{X}|^k J_n. \tag{8} $$

Lemma 3. Assume $|M_n(\delta, K)| \ge (1-\delta)n$. Then
$$ \left| \left\{ 0 \le l \le n-1 : l \notin E_n^k(\delta, K) \ \text{and} \ l \notin F_n^k \right\} \right| \le \delta^{0.5} n. $$

Proof. Fix $\delta$, $K$, $k$ and $x \in \mathcal{X}$. Temporarily fix also $y_{-k+1}^0 \in \mathcal{X}^k$. Let $z = |I_n^k(y_{-k+1}^0)|$ and let $k-1 \le i_1 \le i_2 \le \cdots \le i_z$ denote the elements of $I_n^k(y_{-k+1}^0)$. Let $i_j(y_{-k+1}^0)$ be the largest element $i_j$ of $I_n^k(y_{-k+1}^0)$ such that $D^k(i_j) \ne \emptyset$ and
$$ \left| \left\{ 0 \le l \le n-1 : l \in D^k(i_j) \ \text{and} \ l \notin M_n(\delta, K) \right\} \right| \ge \delta^{0.5} \left| D^k(i_j) \right|. $$
Define $S$ to be the set of these indices as $y_{-k+1}^0$ varies over all elements of $\mathcal{X}^k$. It is clear that if $i, j \in S$ and $i \ne j$ then $D^k(i) \cap D^k(j) = \emptyset$, since different blocks $y_{-k+1}^0$ are involved. It follows from the construction that $\{D^k(i)\}_{i \in S}$ is a disjoint cover of $\{0 \le l \le n-1 : l \notin E_n^k(\delta, K) \ \text{and} \ l \notin F_n^k\}$. It follows that
$$ \delta n \ge \left| \left\{ 0 \le l \le n-1 : l \notin M_n(\delta, K) \right\} \right| \ge \sum_{i \in S} \left| \left\{ 0 \le l \le n-1 : l \in D^k(i) \ \text{and} \ l \notin M_n(\delta, K) \right\} \right| \ge \delta^{0.5} \sum_{i \in S} \left| D^k(i) \right|, $$
where the first inequality uses the assumption $|M_n(\delta, K)| \ge (1-\delta)n$. Now
$$ \left| \left\{ 0 \le l \le n-1 : l \notin E_n^k(\delta, K) \ \text{and} \ l \notin F_n^k \right\} \right| \le \sum_{i \in S} \left| D^k(i) \right| \le \delta^{0.5} n, $$
and the proof of Lemma 3 is complete. □

Proof of Theorem (B). Consider
$$ \frac{1}{n} \sum_{i=0}^{n-1} \left| g_i(x) - E\left( g(X_{i+1}) \mid X_0^i \right) \right| \le \frac{\left| 0 - E\left( g(X_1) \mid X_0 \right) \right|}{n} + \frac{1}{n} \sum_{i=1}^{n-1} 1_{\{\kappa_i < K_i\}} + \frac{1}{n} \sum_{i=1}^{n-1} \max_{J=J_i,\dots,i} \left| \frac{1}{J} \sum_{j=1}^{J} \left( g\left( X_{i-\tau_j^{K_i}(i)+1} \right) - G\left( X_{-\infty}^{\,i-\tau_j^{K_i}(i)} \right) \right) \right| $$
$$ + \frac{1}{n} \sum_{i=1}^{n-1} \left| \frac{1}{\lambda_i} \sum_{j=1}^{\lambda_i} \left( G\left( X_{-\infty}^{\,i-\tau_j^{K_i}(i)} \right) - E\left( g(X_{i+1}) \mid X_{i-K_i+1}^{\,i} \right) \right) \right| 1_{\{\kappa_i = K_i\}} + \frac{1}{n} \sum_{i=1}^{n-1} \left| E\left( g(X_{i+1}) \mid X_{i-K_i+1}^{\,i} \right) - E\left( g(X_{i+1}) \mid X_0^i \right) \right|. $$

The first term tends to zero. The second term tends to zero since, by (8), $|F_n^{K_n}|/n \le |\mathcal{X}|^{K_n} J_n / n \to 0$.

Concerning the third term, by (7) and by Azuma's exponential bound (cf. Azuma [4]),
$$ \sum_{J=J_i}^{i} P\left( \left| \frac{1}{J} \sum_{j=1}^{J} \left( g\left( \tilde{X}^{(K_i,i,J)}_{\tilde{\tau}_j^{K_i}+1} \right) - G\left( \dots, \tilde{X}^{(K_i,i,J)}_{\tilde{\tau}_j^{K_i}-1}, \tilde{X}^{(K_i,i,J)}_{\tilde{\tau}_j^{K_i}} \right) \right) \right| > \epsilon \right) \le 2 i e^{-\epsilon^2 J_i / B} $$
(where $B$ is any real such that $2 \max_{x\in\mathcal{X}} |g(x)| < B$), and the right-hand side is summable; hence the Borel–Cantelli lemma yields almost sure convergence to zero. By the Toeplitz lemma the average also converges to zero.

Now we deal with the fourth term. Let $0 < \epsilon < 1$ be arbitrary. Choose the integer $d$ large enough that $|\mathcal{X}|^{-10(d-1)} < \epsilon$. Let $\delta = \epsilon/d^2$. Let $K$ and $N_0$ be so large that $|M_n(\delta, K)|/n > 1-\delta$ for all $n \ge N_0$. (There exist such $K$ and $N_0$ since, by the ergodic theorem and the martingale convergence theorem, $\lim_{k\to\infty} \lim_{n\to\infty} \frac{|M_n(\delta,k)|}{n} = 1$ almost surely.) Now let $N_1 \ge N_0$ be so large that $K_n - d + 2 \ge K$ and $|\mathcal{X}|^{10(K_n-d+1)} \ge N_0$ for all $n \ge N_1$. Assume $n \ge N_1$. The sum
$$ \frac{1}{n} \sum_{i=1}^{n-1} \left| \frac{1}{\lambda_i} \sum_{j=1}^{\lambda_i} \left( G\left( X_{-\infty}^{\,i-\tau_j^{K_i}(i)} \right) - E\left( g(X_{i+1}) \mid X_{i-K_i+1}^{\,i} \right) \right) \right| 1_{\{\kappa_i = K_i\}} $$
that we are trying to estimate will be divided into blocks according to the value of $K_i$. In fact only values in the range $[K_n-d+2, K_n]$ need be considered, since the sum over $i \le |\mathcal{X}|^{10(K_n-d+1)}$ can be estimated by $|\mathcal{X}|^{10(K_n-d+1)}\, 2\max_{y\in\mathcal{X}} |g(y)|$, and so by our assumption on $d$, after dividing by $n$ this will be at most $\epsilon\, 2\max_{y\in\mathcal{X}} |g(y)|$.

For $i$ in the range $[|\mathcal{X}|^{10(k-1)}, |\mathcal{X}|^{10k})$ with $K_n-d+2 \le k \le K_n$ and $\kappa_i = K_i$, if $i \in E^{k-1}_{|\mathcal{X}|^{10k}}(\delta, K)$ then we get for more than $(1-\sqrt{\delta})|D^k(i)|$ of the terms an upper bound of $\delta$, while for the rest we may use $2\max_{y\in\mathcal{X}} |g(y)|$. This gives an upper bound of
$$ \frac{\delta \left| D^k(i) \right| + \sqrt{\delta} \left| D^k(i) \right|\, 2\max_{y\in\mathcal{X}} |g(y)|}{\left| D^k(i) \right|}. $$
Using Lemma 3 we can estimate the sum over all $i$ in the interval $[|\mathcal{X}|^{10(k-1)}, |\mathcal{X}|^{10k})$ by
$$ n\left( \delta + \sqrt{\delta}\, 2\max_{y\in\mathcal{X}} |g(y)| \right) + \sqrt{\delta}\, n\, 2\max_{y\in\mathcal{X}} |g(y)|. $$
Dividing by $n$, we have the upper bound
$$ \delta + \sqrt{\delta}\, 2\max_{y\in\mathcal{X}} |g(y)| + \sqrt{\delta}\, 2\max_{y\in\mathcal{X}} |g(y)|. $$
The same argument yields the same upper bound for the $i$'s in the range $[|\mathcal{X}|^{10K_n}, n)$.

Summing over $k$ in the range $[K_n-d+2, K_n+1]$ yields the upper bound
$$ \epsilon + d\delta\, 2\max_{y\in\mathcal{X}} |g(y)| + d\sqrt{\delta}\, 2\max_{y\in\mathcal{X}} |g(y)|. $$
Recall that $\sqrt{\delta}\, d = \sqrt{\epsilon}$, and this yields the upper bound
$$ \epsilon + \sqrt{\epsilon}\, 2\max_{y\in\mathcal{X}} |g(y)| + \sqrt{\epsilon}\, 2\max_{y\in\mathcal{X}} |g(y)|. $$
Since $\epsilon$ was arbitrary, the fourth term tends to zero.

Now we deal with the last term. Since by the martingale convergence theorem $E(g(X_1) \mid X_{-i}^0) \to G(X_{-\infty}^0)$ almost surely, we have
$$ \lim_{i\to\infty} \left| E\left( g(X_1) \mid X_{-K_i+1}^0 \right) - E\left( g(X_1) \mid X_{-i}^0 \right) \right| = 0, $$
and applying Breiman's generalized ergodic theorem, cf. Maker [14] (or Algoet [2]),
$$ \lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} \left| E\left( g(X_{i+1}) \mid X_{i-K_i+1}^{\,i} \right) - E\left( g(X_{i+1}) \mid X_0^i \right) \right| = 0 \quad \text{almost surely,} $$
and the proof of Theorem (B) is complete. □
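For reference, the form of Breiman's generalized ergodic theorem used here, as we recall it from Maker [14] and Algoet [2] and stated under a uniform boundedness assumption for simplicity, is the following:

```latex
% Breiman's generalized ergodic theorem (cf. Maker [14], Algoet [2]): let T be
% the shift of a stationary ergodic process, let f_n \to f almost surely, and
% assume \sup_n |f_n| \le M < \infty. Then
\lim_{n\to\infty} \frac{1}{n} \sum_{i=0}^{n-1} f_i\bigl(T^i x\bigr) = E(f)
\qquad \text{almost surely.}
```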

6. Weak consistency

Proof of Theorem (C). In order to show that for all stationary and ergodic processes our estimate $g_n$ converges in probability, we follow the steps of the proof of Theorem (A). The probability that
$$ \left| g_n(x) - E\left( g(X_{n+1}) \mid X_0^n \right) \right| > 3\epsilon $$
can be estimated by the sum of the probabilities of several sets:
$$ P\left( \max_{J=J_n,\dots,n}\ \max_{k=1,\dots,K_n} \left| \frac{1}{J} \sum_{j=1}^{J} \left( g\left( X_{n-\tau_j^k(n)+1} \right) - G\left( X_{-\infty}^{\,n-\tau_j^k(n)} \right) \right) \right| > \epsilon \right), $$
$$ P\left( \kappa_n < K_n \right), $$
$$ P\left( \left| E\left( g(X_{n+1}) \mid X_0^n \right) - E\left( g(X_{n+1}) \mid X_{n-K_n+1}^{\,n} \right) \right| > \epsilon \right). $$
