HAL Id: hal-00459166

https://hal.inria.fr/hal-00459166

Submitted on 20 Aug 2015


Almost sure asymptotics for the random binary search tree

Matthew Roberts

To cite this version:

Matthew Roberts. Almost sure asymptotics for the random binary search tree. 21st International Meeting on Probabilistic, Combinatorial, and Asymptotic Methods in the Analysis of Algorithms (AofA'10), 2010, Vienna, Austria. pp. 565-576. ⟨hal-00459166⟩

Almost sure asymptotics for the random binary search tree

Matthew I. Roberts

Laboratoire de Probabilités et Modèles Aléatoires, Université Paris VI, Case courrier 188, 4 Place Jussieu, 75252 Paris, France.

E-mail: matthew.roberts@upmc.fr.

We consider a (random permutation model) binary search tree with $n$ nodes and give asymptotics on the $\log\log$ scale for the height $H_n$ and saturation level $h_n$ of the tree as $n \to \infty$, both almost surely and in probability. We then consider the number $F_n$ of particles at level $H_n$ at time $n$, and show that $F_n$ is unbounded almost surely.

Keywords: Binary search tree, Random permutation model, Yule tree, Quicksort, Branching random walk

1 Introduction and main results

Consider the complete rooted binary tree $T$. We construct a sequence $T_n$, $n = 1, 2, \ldots$ of subtrees of $T$ recursively as follows. $T_1$ consists only of the root. Given $T_n$, we choose a leaf $u$ uniformly at random from the set of all leaves of $T_n$ and add its two children to the tree to create $T_{n+1}$. Thus $T_{n+1}$ consists of $T_n$ and the children $u_1, u_2$ of $u$, and contains in total $2n + 1$ nodes, including $n + 1$ leaves. We call this sequence of trees $(T_n)_{n \geq 1}$ the binary search tree.

Fig. 1: An example of the beginning of a binary search tree: at each stage, we choose uniformly at random from amongst the available leaves and add the children of the chosen leaf to the tree.

This model has various equivalent descriptions: for example one may construct $T_n$ by successive insertions into $T$ of a uniform random permutation of $\{1, \ldots, n\}$. For a more detailed explanation of this and other constructions see Reed [9].

This work was supported by ANR MADCOF, grant ANR-08-BLAN-0220-01.

One interesting quantity in this model is the height $H_n$ of the tree $T_n$ — that is, the greatest generation amongst all nodes of $T_n$ (where the root is defined to have generation 0); so $H_1 = 0$, $H_2 = 1$, $H_3 = 2$, and $H_4$ is either 2 (with probability 1/3) or 3 (with probability 2/3). Another is the saturation level $h_n$, defined to be the greatest complete generation of $T_n$ — that is, the greatest generation $k$ such that all nodes of generation $k$ are present in $T_n$ (so $h_1 = 0$, $h_2 = 1$, $h_3 = 1$ and $h_4$ is 1 with probability 2/3 and 2 with probability 1/3). These two quantities, $H_n$ and $h_n$, have been studied extensively. Pittel [8] showed that there exist constants $c$ and $\gamma$ such that $H_n/\log n \to c$ and $h_n/\log n \to \gamma$ almost surely, and gave bounds on the values of $c$ and $\gamma$. Devroye [4] calculated $c$ exactly by showing that $H_n/\log n \to c$ in probability as $n \to \infty$; and Reed [9] showed that for the same $c$ and another known constant $d$, $\mathbb{E}[H_n] = c\log n - d\log\log n + O(1)$. Drmota [5] and Reed [9] also showed that $\operatorname{Var} H_n = O(1)$.
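
The growth rule above is simple enough to simulate directly. The following Python sketch is ours and purely illustrative (none of the function or variable names come from the paper): it grows $T_n$ by repeatedly replacing a uniformly chosen leaf by its two children, and reads off $H_n$ and $h_n$ as the largest and smallest leaf depths.

```python
import math
import random
from collections import Counter

def grow_bst(n, rng=random):
    """Return a Counter {depth: #leaves at that depth} for T_n (n leaves).

    Growth rule from the Introduction: starting from the root alone,
    repeatedly pick a leaf uniformly at random and replace it by its
    two children, which sit one generation deeper.
    """
    leaf_depths = [0]                        # T_1: the root is the only leaf
    for _ in range(n - 1):
        i = rng.randrange(len(leaf_depths))  # uniformly chosen leaf
        d = leaf_depths[i]
        leaf_depths[i] = d + 1               # one child replaces the parent...
        leaf_depths.append(d + 1)            # ...and the other is appended
    return Counter(leaf_depths)

def height_and_saturation(depth_counts):
    """H_n is the deepest leaf; h_n is the shallowest leaf.

    (The saturation level equals the minimal leaf depth: internal nodes
    always have both children, so an incomplete generation can only be
    caused by a leaf lying strictly above it.)
    """
    return max(depth_counts), min(depth_counts)

if __name__ == "__main__":
    random.seed(0)
    for n in (10**3, 10**4, 10**5):
        H, h = height_and_saturation(grow_bst(n))
        print(f"n={n:>6}  H_n/log n = {H / math.log(n):.3f}  "
              f"h_n/log n = {h / math.log(n):.3f}")
```

The printed ratios illustrate the first-order behaviour $H_n/\log n \to c$ and $h_n/\log n \to \gamma$ recalled above.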

Our first aim in this article is to prove the following theorem.

Theorem 1 Let $a$ be the solution to
\[ 2(a-1)e^{a} + 1 = 0, \qquad a > 0, \]
and let
\[ b := 2ae^{a} \]
(we get $a \approx 0.76804$ and $b \approx 3.31107$). Then
\[ \frac{1}{2} = \liminf_{n\to\infty} \frac{b\log n - aH_n}{\log\log n} < \limsup_{n\to\infty} \frac{b\log n - aH_n}{\log\log n} = \frac{3}{2} \]
almost surely, and
\[ \frac{b\log n - aH_n}{\log\log n} \xrightarrow{\,P\,} \frac{3}{2} \quad \text{as } n \to \infty. \]
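
The numerical values quoted in Theorem 1 are easy to reproduce. The snippet below is an illustrative sketch of ours (plain bisection, no external dependencies): it solves $2(a-1)e^{a}+1=0$ on $(0,1)$ and evaluates $b = 2ae^{a}$.

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Plain bisection; assumes f(lo) and f(hi) have opposite signs."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Defining equation for a in Theorem 1: 2(a - 1)e^a + 1 = 0, with a > 0.
a = bisect(lambda x: 2.0 * (x - 1.0) * math.exp(x) + 1.0, 0.0, 1.0)
b = 2.0 * a * math.exp(a)
print(a, b)      # approximately 0.76804 and 3.31107, as quoted above
print(b / a)     # approximately 4.311, the constant c of the Introduction
```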

Of course, $a$ and $b$ agree with the constants $c$ and $d$ mentioned above in the sense that $c = b/a$ and $d = 3/(2a)$. By the same methods, we obtain a similar theorem concerning $h_n$.

Theorem 2 Let $\alpha$ be the solution to
\[ 2(\alpha+1)e^{-\alpha} - 1 = 0, \qquad \alpha > 0, \]
and let
\[ \beta := 2\alpha e^{-\alpha} \]
(we get $\alpha \approx 1.6783$ and $\beta \approx 0.6266$). Then
\[ \frac{1}{2} = \liminf_{n\to\infty} \frac{\alpha h_n - \beta\log n}{\log\log n} < \limsup_{n\to\infty} \frac{\alpha h_n - \beta\log n}{\log\log n} = \frac{3}{2} \]
almost surely, and
\[ \frac{\alpha h_n - \beta\log n}{\log\log n} \xrightarrow{\,P\,} \frac{3}{2} \quad \text{as } n \to \infty. \]

This shows in particular that the lower bound given by Pittel [8] is the correct growth rate for the saturation level $h_n$ on the log scale.
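
The constants of Theorem 2 can be checked in exactly the same way; the sketch below (again ours and purely illustrative) also prints $\beta/\alpha$, which by Theorem 2 is the almost sure limit of $h_n/\log n$, i.e. the constant $\gamma$ of the Introduction.

```python
import math

# Defining equation for alpha in Theorem 2: 2(alpha + 1)e^{-alpha} - 1 = 0.
f = lambda x: 2.0 * (x + 1.0) * math.exp(-x) - 1.0
lo, hi = 1.0, 2.0                 # f(1) > 0 > f(2), so the root lies in between
for _ in range(60):               # plain bisection, as in the previous sketch
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
alpha = 0.5 * (lo + hi)
beta = 2.0 * alpha * math.exp(-alpha)
print(alpha, beta, beta / alpha)  # approximately 1.6783, 0.6266 and 0.3733
```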

Other aspects of the binary search tree model also give interesting results. The article by Chauvin et al. [3], for example, tracks the number of leaves at certain levels of the tree, called the profile of the tree, via convergence theorems for polynomial martingales associated with the system.

We are also interested in how many leaves are present at level $H_n$ of the tree at time $n$. We call the set of particles at this level the fringe of the tree, and call the size of the fringe $F_n$, so that $F_1 = 1$, $F_2 = 2$, $F_3 = 2$, and $F_4$ is 2 with probability 2/3 or 4 with probability 1/3. Note that the word "fringe" has been used also in a different context by, for example, Drmota et al. [6]. Trivially $F_n \in \{2, 4, 6, \ldots\}$ for all $n \geq 2$, and (given that $H_n \to \infty$ almost surely, which is a simple consequence of Theorem 1) $\liminf_{n\to\infty} F_n = 2$ almost surely. We are able to prove the following result.

Theorem 3 We have
\[ \limsup_{n\to\infty} F_n = \infty \]
almost surely.

Further work on the behaviour of $F_n$ in the limit as $n \to \infty$ is underway.

Our main tool throughout is the relationship between binary search trees and an extremely simple continuous time branching random walk, called the Yule tree. This relationship is well-known — see Aldous & Shields [1] and Chauvin et al. [3]. The hard work required for Theorem 1 is then done for us by a remarkable result of Hu & Shi [7]. We introduce the Yule tree model in Section 2 before proving Theorems 1 and 2 in Section 3. Finally we study $F_n$, and in particular prove Theorem 3, in Section 4.

2 The Yule tree

Consider a branching random walk in continuous time with branching rate 1, starting with one particle at the origin, in which if a particle with position x branches it is replaced by two children with position x − 1. That is:

• We begin with one particle at 0;
• All particles act independently;
• Each particle lives for a random amount of time, exponentially distributed with parameter 1;
• Each particle has a position $x$ which does not change throughout its lifetime;
• At its time of death, a particle with position $x$ is replaced by two offspring with position $x - 1$.

We call this process a Yule tree. If a particle $v$ is a descendant of $u$ (that is, there exists $k \in \{1, 2, \ldots\}$ and particles $u = u_1, u_2, \ldots, u_k = v$ such that $u_j$ is an offspring of $u_{j-1}$ for each $j$), then we write $u \leq v$.

Let $N(t)$ be the set of particles alive at time $t$, and for a particle $u \in N(t)$ define $X_u(t)$ to be the position of $u$ at time $t$. Let $M(t)$ denote the smallest of these positions at time $t$, and $S(t)$ the largest — that is,
\[ M(t) := \inf\{X_u(t) : u \in N(t)\} \]
and
\[ S(t) := \sup\{X_u(t) : u \in N(t)\}. \]

We note that if we look at the Yule tree model only at integer times, then we have a discrete-time branching random walk. On the other hand, we have the following simple relationship between the Yule tree process and the binary search tree process.

Lemma 4 Let $T_1 = 0$ and for $n \geq 2$ define
\[ T_n := \inf\{t > T_{n-1} : N(t) \neq N(T_{n-1})\}, \]
so that the times $T_n$ are the birth times of the branching random walk. Then we may construct the Yule tree process and the binary search tree process on the same probability space, such that
\[ -M(T_n) = H_n \quad \forall\, n \geq 1 \]
and
\[ -S(T_n) = h_n + 1 \quad \forall\, n \geq 1 \]
almost surely.

Proof: By the memoryless property of the exponential distribution, at any time $t$ the probability that a particular particle $u \in N(t)$ will be the next to branch is exactly $1/\#N(t)$. Thus, if we consider the sequence of genealogical trees produced by the Yule tree process at the times $T_j$, $j \geq 1$, we have exactly the binary search tree process: particles in $N(t)$ correspond to leaves in the binary search tree. Clearly the position of a particle in the Yule tree process is $-1$ times its height in the genealogical tree, so we may build the Yule tree process and binary search tree process on the same probability space and then $-M(T_n) = H_n$ and $-S(T_n) = h_n + 1$ for all $n \geq 1$ (almost surely). $\Box$
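
The coupling of Lemma 4 translates directly into a simulation. The sketch below is ours and purely illustrative: exactly as in the proof, between birth times it waits an exponential time whose rate is the current number of particles and then branches a uniformly chosen particle.

```python
import math
import random

def yule_at_birth_time(n, rng=random):
    """Run the Yule tree up to its n-th birth time T_n.

    On [T_m, T_{m+1}) there are m particles, each branching at rate 1,
    so T_{m+1} - T_m is exponential with rate m and the branching
    particle is uniform among the m alive (memoryless property).
    """
    positions = [0]              # one particle at the origin at time T_1 = 0
    t = 0.0
    for m in range(1, n):
        t += rng.expovariate(m)  # T_{m+1} - T_m ~ Exp(m)
        i = rng.randrange(m)     # the particle that branches
        positions.append(positions[i] - 1)   # both offspring sit one unit lower
        positions[i] -= 1
    return t, positions

if __name__ == "__main__":
    random.seed(1)
    n = 10**5
    T_n, pos = yule_at_birth_time(n)
    # -M(T_n) is the height H_n of the coupled binary search tree (Lemma 4);
    # -S(T_n) tracks its saturation level; T_n is close to log n (Lemma 5 below).
    print(f"T_n - log n = {T_n - math.log(n):+.3f}, "
          f"-M(T_n) = {-min(pos)}, -S(T_n) = {-max(pos)}")
```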

We would like to study $(H_n,\, n \geq 1)$ via knowledge of $(M(t),\, t \geq 0)$, and similarly for $h_n$ and $S(t)$, and hence it will be important to have control over the times $T_n$. It is well-known that $T_n$ is close to $\log n$. We give a simple martingale proof, as seen in Athreya & Ney [2].

Lemma 5 There exists an almost surely finite random variable $\zeta$ such that
\[ T_n - \log n \to \zeta \quad \text{almost surely as } n \to \infty; \]
and hence for any $\delta > 0$ we may choose $K \in \mathbb{N}$ such that
\[ \mathbb{P}\Big( \limsup_{n\to\infty} |T_n - \lfloor\log n\rfloor| > K \Big) < \delta \quad \text{and} \quad \limsup_{n\to\infty} \mathbb{P}\big( |T_n - \lfloor\log n\rfloor| > K \big) < \delta. \]

Proof: For each $n \geq 2$, let $V_n := (n-1)(T_n - T_{n-1})$. Since exactly $n-1$ particles are alive on the interval $(T_{n-1}, T_n)$, each branching at rate 1, the random variables $V_n$, $n \geq 2$, are independent and exponentially distributed with parameter 1. Define
\[ X_n := \sum_{j=2}^{n} \frac{V_j - 1}{j-1} = T_n - \sum_{j=1}^{n-1} j^{-1}. \]
Then $X_n$ is clearly a zero-mean martingale; and
\[ \mathbb{E}[X_n^2] = \sum_{j=2}^{n} \frac{\operatorname{Var}(V_j)}{(j-1)^2} \leq \sum_{j=1}^{\infty} j^{-2} < \infty, \]
so by the martingale convergence theorem $X_n$ converges almost surely (and in $L^2$) to some almost surely finite limit $X$. But it is well-known that
\[ \sum_{j=1}^{n-1} j^{-1} - \log n \]
converges to some finite, deterministic constant (Euler's constant). This is enough to complete the proof of the first statement in the Lemma, and the consequences are standard. $\Box$

We mentioned above that, if we look at the Yule tree only at integer times, we see a discrete-time branching random walk. Since discrete-time branching random walks are more widely studied than their continuous-time counterparts (in particular the theorem that we would like to apply is stated only in discrete time), it will be helpful to have some information about the branching distribution of the discrete model. This is a standard, well-known calculation, but we include it for completeness.

Lemma 6 We have
\[ \mathbb{E}\Bigg[ \sum_{u \in N(1)} e^{-\theta X_u(1)} \Bigg] = \exp(2e^{\theta} - 1). \]

Proof: Let
\[ E_\theta(t) = \mathbb{E}\Bigg[ \sum_{u \in N(t)} e^{-\theta X_u(t)} \Bigg], \]
and for $s, t \geq 0$ and a particle $u \in N(t)$ define $N_u(t; s)$ to be the set of descendants of particle $u$ alive at time $t + s$: that is, $N_u(t; s) := \{v \in N(t+s) : u \leq v\}$. Then by the Markov property,
\begin{align*}
E_\theta(t+s) = \mathbb{E}\Bigg[ \sum_{u \in N(t+s)} e^{-\theta X_u(t+s)} \Bigg]
&= \mathbb{E}\Bigg[ \sum_{u \in N(t)} e^{-\theta X_u(t)} \sum_{v \in N_u(t;s)} e^{-\theta (X_v(t+s) - X_v(t))} \Bigg] \\
&= \mathbb{E}\Bigg[ \sum_{u \in N(t)} e^{-\theta X_u(t)}\, \mathbb{E}\Bigg[ \sum_{v \in N_u(t;s)} e^{-\theta (X_v(t+s) - X_v(t))} \,\Big|\, \mathcal{F}_t \Bigg] \Bigg] \\
&= \mathbb{E}\Bigg[ \sum_{u \in N(t)} e^{-\theta X_u(t)} E_\theta(s) \Bigg] = E_\theta(t) E_\theta(s).
\end{align*}
We deduce that for $s, t > 0$,
\[ \frac{E_\theta(t+s) - E_\theta(t)}{s} = E_\theta(t) \left( \frac{E_\theta(s) - 1}{s} \right) \quad \text{and} \quad \frac{E_\theta(t-s) - E_\theta(t)}{-s} = E_\theta(t-s) \left( \frac{E_\theta(s) - 1}{s} \right). \]
It is easily checked that $E_\theta(t)$ is continuous in $t$, and hence if $E_\theta'(0+)$ exists then by the above we have that $E_\theta(t)$ is continuously differentiable and for all $t > 0$
\[ E_\theta'(t) = E_\theta(t) E_\theta'(0+). \]
Since $E_\theta(0) = 1$ this entails that
\[ E_\theta(t) = \exp(E_\theta'(0+)\, t). \]
Now, for small $t$,
\[ E_\theta(t) = \mathbb{P}(\text{first split after } t) + 2e^{\theta}\, \mathbb{P}(\text{first split before } t) + o(t) = 1 - t + 2te^{\theta} + o(t), \]
so that $E_\theta'(0+) = 2e^{\theta} - 1$, and hence $E_\theta(t) = \exp((2e^{\theta} - 1)t)$. Taking $t = 1$ completes the proof. $\Box$
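
Lemma 6 can also be verified by simulation. The sketch below is ours, purely as a sanity check: it runs many independent copies of the Yule tree up to time 1 and compares the empirical mean of $\sum_{u \in N(1)} e^{-\theta X_u(1)}$ with $\exp(2e^{\theta} - 1)$.

```python
import math
import random

def yule_sum_at_time_one(theta, rng=random):
    """One sample of sum_{u in N(1)} exp(-theta * X_u(1)) for the Yule tree."""
    total = 0.0
    stack = [(0.0, 0)]                       # (birth time, position) of the root
    while stack:
        birth, x = stack.pop()
        death = birth + rng.expovariate(1.0) # Exp(1) lifetime
        if death > 1.0:
            total += math.exp(-theta * x)    # still alive at time 1
        else:
            stack.append((death, x - 1))     # two offspring, one unit lower
            stack.append((death, x - 1))
    return total

if __name__ == "__main__":
    random.seed(3)
    theta, m = 0.7, 100_000
    estimate = sum(yule_sum_at_time_one(theta) for _ in range(m)) / m
    print(estimate, math.exp(2 * math.exp(theta) - 1))  # the two should be close
```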

3 Proof of Theorems 1 and 2

We would like to apply the following theorem of Hu and Shi [7]. Their Theorem 1.2 was proved for a large class of branching random walks, and we translate it to our particular simple case. Their conditions (1.1) and (1.2) are trivially satisfied; and their Remark (ii) (page 745) shows that, when recentred, our case also satisfies (1.3). Translating into our notation, we obtain the following result.

Theorem 7 (Hu & Shi [7]) Define
\[ \psi(\theta) := \mathbb{E}\Bigg[ \sum_{u \in N(1)} e^{-\theta X_u(1)} \Bigg]. \]
If $\theta^*$ satisfies
\[ \frac{\theta^* \psi'(\theta^*)}{\psi(\theta^*)} = \log \psi(\theta^*), \qquad \theta^* > 0, \]
then
\[ \frac{1}{2} = \liminf_{n\to\infty} \frac{\theta^* M(n) + n \log \psi(\theta^*)}{\log n} < \limsup_{n\to\infty} \frac{\theta^* M(n) + n \log \psi(\theta^*)}{\log n} = \frac{3}{2} \]
and
\[ \frac{\theta^* M(n) + n \log \psi(\theta^*)}{\log n} \xrightarrow{\,P\,} \frac{3}{2} \quad \text{as } n \to \infty. \]

In view of this result, our method of proof for Theorems 1 and 2 is unsurprising: we know that the times $T_n$ are near $\log n$ for large $n$, and we may use the monotonicity of $H_n$ and $h_n$ — together with the flexibility offered by the $\log\log$ scale — to ensure that nothing else can go wrong. It may be possible to extend this method of proof to cover more general trees, where the same monotonicity property does not necessarily hold, via a Borel-Cantelli argument. This would only introduce unnecessary complications in our case.
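
Explicitly, by Lemma 6 we have $\psi(\theta) = \exp(2e^{\theta} - 1)$ in our case, so $\log\psi(\theta) = 2e^{\theta} - 1$ and $\theta\psi'(\theta)/\psi(\theta) = 2\theta e^{\theta}$. The equation for $\theta^*$ in Theorem 7 therefore reads
\[ 2\theta^* e^{\theta^*} = 2e^{\theta^*} - 1, \qquad \text{that is,} \qquad 2(\theta^* - 1)e^{\theta^*} + 1 = 0, \]
which is precisely the equation defining $a$ in Theorem 1; and then $\log\psi(a) = 2e^{a} - 1 = 2ae^{a} = b$, using $2(a-1)e^{a} + 1 = 0$.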

Proof of Theorem 1: We show first the statement involving the limsup; the proofs of the other statements are almost identical.

It is immediate from Lemma 6 that $a$ in Theorem 1 corresponds to $\theta^*$ in Theorem 7, and that $b$ corresponds to $\log\psi(\theta^*)$. Fix $\delta > 0$. Choose $K \in \mathbb{N}$ such that
\[ \mathbb{P}\Big( \limsup_{n\to\infty} |T_n - \lfloor\log n\rfloor| > K \Big) < \delta \]
— this is possible by Lemma 5. For each $n \geq 1$, let $j_n = \lfloor\log n\rfloor - K$. We use the abbreviation "i.o." to mean "infinitely often" (that is, for infinitely many $n$; so $\{U_n \text{ i.o.}\}$ represents the event $\limsup_{n\to\infty} U_n$). For any $\varepsilon > 0$, using the fact that $M(t)$ is non-increasing,

\begin{align*}
&\mathbb{P}\big( aM(T_n) + b\log n > (3/2 + \varepsilon)\log\log n \ \text{ i.o.} \big) \\
&\qquad \leq \mathbb{P}\big( \{ aM(T_n) + b\log n > (3/2 + \varepsilon)\log\log n,\ |T_n - \lfloor\log n\rfloor| \leq K \} \ \text{ i.o.} \big) + \mathbb{P}\big( |T_n - \lfloor\log n\rfloor| > K \ \text{ i.o.} \big) \\
&\qquad < \mathbb{P}\big( aM(j_n) > -bj_n + (3/2 + \varepsilon)\log j_n + (bj_n - b\log n) + (3/2 + \varepsilon)(\log\log n - \log j_n) \ \text{ i.o.} \big) + \delta \\
&\qquad \leq \mathbb{P}\big( aM(j_n) > -bj_n + (3/2 + \varepsilon/2)\log j_n \ \text{ i.o.} \big) + \delta \\
&\qquad \leq \delta
\end{align*}
by Theorem 7. Taking a union over $\varepsilon > 0$ tells us that
\[ \mathbb{P}\bigg( \limsup_{n\to\infty} \frac{aM(T_n) + b\log n}{\log\log n} > \frac{3}{2} \bigg) \leq \delta; \]
but since $\delta > 0$ was arbitrary we deduce that
\[ \mathbb{P}\bigg( \limsup_{n\to\infty} \frac{aM(T_n) + b\log n}{\log\log n} > \frac{3}{2} \bigg) = 0. \]

This completes the proof of the upper bound, since $H_n = -M(T_n)$. The proof of the lower bound is similar. We let $i_n = \lfloor\log n\rfloor + K$ and use the abbreviation "ev." to mean "eventually" (that is, for all large $n$; so $\{U_n \text{ ev.}\}$ represents the event $\liminf_{n\to\infty} U_n$). For any $\varepsilon \in (0, 3/2)$,

\begin{align*}
&\mathbb{P}\big( aM(T_n) + b\log n < (3/2 - \varepsilon)\log\log n \ \text{ ev.} \big) \\
&\qquad \leq \mathbb{P}\big( \{ aM(T_n) + b\log n < (3/2 - \varepsilon)\log\log n,\ |T_n - \lfloor\log n\rfloor| \leq K \} \ \text{ ev.} \big) + \mathbb{P}\big( |T_n - \lfloor\log n\rfloor| > K \ \text{ i.o.} \big) \\
&\qquad < \mathbb{P}\big( aM(i_n) < -bi_n + (3/2 - \varepsilon)\log i_n + (bi_n - b\log n) + (3/2 - \varepsilon)(\log\log n - \log i_n) \ \text{ ev.} \big) + \delta \\
&\qquad \leq \mathbb{P}\big( aM(i_n) < -bi_n + (3/2 - \varepsilon/2)\log i_n \ \text{ ev.} \big) + \delta \\
&\qquad \leq \delta
\end{align*}
by Theorem 7. As with the upper bound, taking a union over $\varepsilon > 0$, and then letting $\delta \to 0$, tells us that
\[ \mathbb{P}\bigg( \limsup_{n\to\infty} \frac{aM(T_n) + b\log n}{\log\log n} < \frac{3}{2} \bigg) = 0, \]
and hence combining with the upper bound we obtain
\[ \limsup_{n\to\infty} \frac{b\log n - aH_n}{\log\log n} = \frac{3}{2} \]

almost surely. The proof of the statement involving the liminf is almost identical, and we omit it for the sake of brevity. The convergence in probability is also similar: one considers for example that

\begin{align*}
\limsup_{n\to\infty} \mathbb{P}\big( aM(T_n) + b\log n > (3/2+\varepsilon)\log\log n \big)
&\leq \limsup_{n\to\infty} \mathbb{P}\big( aM(T_n) + b\log n > (3/2+\varepsilon)\log\log n,\ |T_n - \lfloor\log n\rfloor| \leq K \big) \\
&\qquad + \limsup_{n\to\infty} \mathbb{P}\big( |T_n - \lfloor\log n\rfloor| > K \big) \\
&< \limsup_{n\to\infty} \mathbb{P}\big( aM(T_n) + b\log n > (3/2+\varepsilon)\log\log n,\ |T_n - \lfloor\log n\rfloor| \leq K \big) + \delta
\end{align*}
and uses the statement about convergence in probability in Theorem 7 to show that the probability in the last line above converges to zero for any $\varepsilon > 0$. Then since $\delta > 0$ was arbitrary we must have
\[ \limsup_{n\to\infty} \mathbb{P}\big( aM(T_n) + b\log n > (3/2+\varepsilon)\log\log n \big) = 0. \]
The lower bound is, again, similar. $\Box$

Proof of Theorem 2: Consider a slightly altered Yule tree model, where each particle gives birth to two children whose position is that of their parent plus 1, instead of minus 1. If we couple this model with the usual Yule tree model in the obvious way, then clearly the minimal position of a particle in the altered model is equal to $-1$ times the maximal position in the usual model. Thus if we let $\hat{M}(t)$ be the minimal position in the altered model, it suffices to show that
\[ \frac{1}{2} = \liminf_{n\to\infty} \frac{\alpha \hat{M}(T_n) - \beta\log n}{\log\log n} < \limsup_{n\to\infty} \frac{\alpha \hat{M}(T_n) - \beta\log n}{\log\log n} = \frac{3}{2} \quad\text{and}\quad \frac{\alpha \hat{M}(T_n) - \beta\log n}{\log\log n} \xrightarrow{\,P\,} \frac{3}{2} \ \text{ as } n \to \infty. \]
Lemma 6 (substituting $\hat{\theta} := -\theta$, say) tells us that for the altered model, $\alpha$ in Theorem 2 corresponds to $\theta^*$ in Theorem 7, and that $-\beta$ corresponds to $\log\psi(\theta^*)$. The rest of the proof proceeds exactly as in the proof of Theorem 1. $\Box$
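
The identification used in the proof above is checked in the same way as for Theorem 1: for the altered model, Lemma 6 with $\hat{\theta} := -\theta$ gives $\hat{\psi}(\theta) = \exp(2e^{-\theta} - 1)$, so the equation for $\theta^*$ in Theorem 7 becomes
\[ -2\theta^* e^{-\theta^*} = 2e^{-\theta^*} - 1, \qquad \text{that is,} \qquad 2(\theta^* + 1)e^{-\theta^*} - 1 = 0, \]
which is the equation defining $\alpha$ in Theorem 2; and $\log\hat{\psi}(\alpha) = 2e^{-\alpha} - 1 = -2\alpha e^{-\alpha} = -\beta$, using $2(\alpha+1)e^{-\alpha} = 1$.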

4 The size of the fringe, $F_n$

We are now interested in the size of the fringe of the tree: how many leaves lie at level $H_n$ at time $n$. Recall that we called this quantity $F_n$.

We will show that $F_n$ is unbounded almost surely, but first we need a short lemma. For this lemma we consider again the Yule tree model, and call the set of particles with position $M(t)$ the frontier of the Yule tree at time $t$ — recall that this is the set of particles with minimal position at time $t$, so as we saw earlier the frontier of the Yule tree corresponds to the fringe of the binary search tree. Define $\tilde{F}_t$ to be the number of particles at the frontier at time $t$, that is, $\tilde{F}_t := \#\{u \in N(t) : X_u(t) = M(t)\}$.

Fig. 2: The top three levels of a binary search tree run for $10^9$ steps. The thick blue line shows the size of the fringe, $F_n$, which is the number of leaves at level $H_n$; the thin red line shows the number of leaves at level $H_n - 1$; and the dashed green line shows the number of leaves at level $H_n - 2$.

Lemma 8 If $M(t) < -\lfloor \log_2(2k) \rfloor$ and $\tilde{F}_t = 2k$, then there is at least one particle that is not at the frontier at time $t$, but which is within distance $\lfloor \log_2(2k) \rfloor$ of the frontier — that is, its position is in the interval $[M(t) + 1,\, M(t) + \lfloor \log_2(2k) \rfloor]$.

Proof: Clearly at some time before $t$ there was a particle which had position $M(t) + \lfloor \log_2(2k) \rfloor$; and hence at some time there were at least 2 particles with this position, since particles (except the root) arrive in pairs. At time $t$, either these particles have at least one descendant not at the frontier, in which case we are done (as particles cannot move in the positive direction); or all their descendants are at the frontier. So, for a contradiction, suppose that all their descendants are at the frontier at time $t$. Then there must be $2 \times 2^{\lfloor \log_2(2k) \rfloor}$ particles at the frontier (since a movement of distance 1 yields 2 new particles, and hence a movement of distance $\lfloor \log_2(2k) \rfloor$ yields $2^{\lfloor \log_2(2k) \rfloor}$ new particles; and this holds for each of the two initial particles). But
\[ 2 \times 2^{\lfloor \log_2(2k) \rfloor} = 2^{\lfloor \log_2(2k) \rfloor + 1} > 2^{\log_2(2k)} = 2k, \]
so there are strictly more than $2k$ particles at the frontier. This is a contradiction — there are exactly $2k$ particles at the frontier, by assumption — and hence our claim holds. $\Box$

We now prove Theorem 3, which we recall says that $\limsup_{n\to\infty} F_n = \infty$ almost surely.

Proof of Theorem 3: Again consider the continuous time Yule tree. By the relationship between the Yule tree and the binary search tree seen in Section 2, $\tilde{F}_t$ and $F_n$ have the same paths up to a time change, and hence it suffices to show that $\limsup_{t\to\infty} \tilde{F}_t = \infty$ almost surely.

The idea is as follows: suppose we have $2k$ particles at the frontier. By Lemma 8, there is a particle close to the frontier; and this particle has probability greater than some strictly positive constant of having 2 of its descendants make it to the frontier before the $2k$ already there branch. So if we have $2k$ particles infinitely often, then we have $2k + 2$ particles infinitely often. We make this argument rigorous below.

For any $t > 0$ and $k \in \mathbb{N}$, define
\[ \tau_1^{(2k)} := \inf\{ s > 0 : M(s) < -\lfloor \log_2(2k) \rfloor \text{ and } \tilde{F}_s = 2k \} \]
and for each $j \geq 1$
\[ \sigma_j^{(2k)} := \inf\{ s > \tau_j^{(2k)} : \tilde{F}_s \neq 2k \} \]
and
\[ \tau_{j+1}^{(2k)} := \inf\{ s > \sigma_j^{(2k)} : \tilde{F}_s = 2k \}. \]
Then $\tau_j^{(2k)}$ is the $j$th time that we have $2k$ particles at the frontier and at least distance $\log_2(2k)$ from the origin. We show, by induction on $k$, that for any $k \in \mathbb{N}$
\[ \tau_j^{(2k)} < \infty \quad \text{almost surely, for all } j \in \mathbb{N}. \tag{1} \]
Trivially, since $M(t) \to -\infty$ almost surely (which is true since $H_n \to \infty$ almost surely), we have $\tau_j^{(2)} < \infty$ almost surely for all $j \in \mathbb{N}$ and so (1) holds for $k = 1$. Suppose now (1) holds for some $k \geq 1$. By Lemma 8, for any $j$, at time $\tau_j^{(2k)}$ there is at least 1 particle that is not at the frontier but is within distance $\lfloor \log_2(2k) \rfloor$ of the frontier. Let $A_j^{(k)}$ be the event that the descendants of this particle reach level $M(\tau_j^{(2k)})$ before any of the $2k$ particles already at that level branch. Then the events $A_1^{(k)}, A_2^{(k)}, A_3^{(k)}, \ldots$ are independent by the strong Markov property. Also, since all particles branch at rate 1, for each $j$ the probability of $A_j^{(k)}$ is certainly at least the probability that the sum of $\lfloor \log_2(2k) \rfloor$ independent, rate 1 exponential random variables is less than the minimum of $2k$ independent, rate 1 exponential random variables. This is some strictly positive number, $\gamma_k$ say.

Now, at time $\tau_j^{(2k)}$ — which is finite for each $j$, by our induction hypothesis — there are $2k$ particles at the frontier. One of two things can happen: either two more particles join them and we reach $2k + 2$ particles at the frontier, or one of the $2k$ branches before this happens and we have a new frontier with 2 particles. Call the first event, that two more particles reach the frontier before any of the $2k$ already there branch, $B_j^{(k)}$. Then $A_j^{(k)} \subseteq B_j^{(k)}$, since the event that some pair makes it to the frontier before the $2k$ branch contains the event that descendants of our particular particle make it to the frontier before the $2k$ branch. Thus
\begin{align*}
\mathbb{P}\Big( \limsup_{m\to\infty} B_m^{(k)} \Big) \geq \mathbb{P}\Big( \limsup_{m\to\infty} A_m^{(k)} \Big)
&= \mathbb{P}\Bigg( \bigcap_{n\geq 1} \bigcup_{m\geq n} A_m^{(k)} \Bigg)
= \lim_{n\to\infty} \mathbb{P}\Bigg( \bigcup_{m\geq n} A_m^{(k)} \Bigg)
= \lim_{n\to\infty} \lim_{N\to\infty} \mathbb{P}\Bigg( \bigcup_{m=n}^{N} A_m^{(k)} \Bigg) \\
&\geq \lim_{n\to\infty} \lim_{N\to\infty} \big( 1 - (1-\gamma_k)^{N-n+1} \big) = 1.
\end{align*}
But the event $\limsup_{m\to\infty} B_m^{(k)}$ is exactly the event that we have $2k + 2$ particles at the frontier infinitely often — and thus (using again that $M(t) \to -\infty$ almost surely) we have that $\tau_j^{(2k+2)}$ is finite almost surely for all $j$. Hence by induction we have proved that (1) holds for each $k$. Our result follows. $\Box$

Acknowledgements

Many thanks to Julien Berestycki and Brigitte Chauvin for their ideas and their help with this project.

References

[1] D. Aldous and P. Shields. A diffusion limit for a class of randomly-growing binary trees. Probab. Theory Related Fields, 79:509–542, 1988.

[2] K.B. Athreya and P.E. Ney. Branching Processes. Springer-Verlag, New York, 1972.

[3] B. Chauvin, T. Klein, J-F. Marckert, and A. Rouault. Martingales and profile of binary search trees. Electron. J. Probab., 10(12):420–435, 2005.

[4] L. Devroye. A note on the height of binary search trees. J. Assoc. Comput. Mach., 33(3):489–498, 1986.

[5] M. Drmota. An analytic approach to the height of binary search trees II. J. ACM, 50(3):333–374, 2003.

[6] M. Drmota, B. Gittenberger, A. Panholzer, H. Prodinger, and M.D. Ward. On the shape of the fringe of various types of random trees. Math. Methods Appl. Sci., 32(10):1207–1245, 2009.

[7] Y. Hu and Z. Shi. Minimal position and critical martingale convergence in branching random walks, and directed polymers on disordered trees. Ann. Probab., 37(2):742–789, 2009.

[8] B. Pittel. On growing random binary trees. J. Math. Anal. Appl., 103:461–480, 1984.

[9] B. Reed. The height of a random binary search tree. J. ACM, 50(3):306–332, 2003.
