COMBINATORIAL DESCRIPTION OF THE TRAJECTORIES

SÉBASTIEN FERENCZI, CHARLES HOLTON, AND LUCA Q. ZAMBONI

ABSTRACT. We describe an algorithm for generating the symbolic sequences which code the orbits of points under an interval exchange transformation on three intervals. The algorithm has two components. The first is an arithmetic division algorithm applied to the lengths of the intervals. This arithmetic construction was originally introduced by the authors in an earlier paper and may be viewed as a two-dimensional generalization of the regular continued fraction. The second component is a combinatorial algorithm which generates the bispecial factors of the associated symbolic subshift as a function of the arithmetic expansion. As a consequence we obtain a complete characterization of those sequences of block complexity 2n + 1 which are natural codings of orbits of three-interval exchange transformations, thereby answering an old question of Rauzy.

1. INTRODUCTION

A well-known and fruitful interaction between ergodic theory, arithmetic, and symbolic dynamics is provided by the study of irrational rotations on the torus T¹. To an irrational real number α we associate a dynamical system, the rotation Rx = x + α mod 1, an arithmetic algorithm, the usual continued fraction approximation, and a set of symbolic sequences (a part of the class of Sturmian sequences) which are codings of trajectories under R by a canonical partition; the continued fraction algorithm forms the link between the dynamical system and the symbolic sequences, and the study of the arithmetic and symbolic objects sheds light on various properties of the dynamical system. In this series of papers, we generalize this situation to the case of two real numbers 0 < α < 1 and 0 < β < 1, starting from the dynamical system called a three-interval exchange transformation and defined on the interval [0, 1[ by

• Tx = x + 1 − α if x ∈ [0, α[,

• Tx = x + 1 − 2α − β if x ∈ [α, α + β[,

• Tx = x − α − β if x ∈ [α + β, 1[.
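As a quick numerical illustration, the map above can be sketched as follows; the parameters α and β below are our own illustrative choices, not taken from the paper (any irrationals with 0 < α, 0 < β and α + β < 1 work, and these satisfy 2α < 1 and 2α + β > 1 as in Theorem 1.1 below):

```python
# Sketch of the three-interval exchange transformation T defined above.
# alpha, beta are illustrative parameters (our choice, not from the paper).
alpha = 7 ** -0.5   # ~0.37796, so 2*alpha < 1
beta = 3 ** -0.5    # ~0.57735, so 2*alpha + beta > 1

def T(x):
    """The three-interval exchange with permutation (321)."""
    if x < alpha:
        return x + 1 - alpha
    elif x < alpha + beta:
        return x + 1 - 2 * alpha - beta
    else:
        return x - alpha - beta

def symbol(x):
    """Natural coding: 1, 2 or 3 according to the interval containing x."""
    return 1 if x < alpha else (2 if x < alpha + beta else 3)

def coding(x, n):
    """First n symbols of the natural coding of the orbit of x."""
    out = []
    for _ in range(n):
        out.append(symbol(x))
        x = T(x)
    return out
```

With these parameters the admissible two-letter blocks of the coding are exactly the five blocks appearing in Theorem 1.1 below.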

In the first part [13], we have defined, independently of any dynamical system, a new algorithm of simultaneous approximation for two real numbers and studied its properties. In this second part, we use this algorithm to describe the class of symbolic sequences which can be canonically associated to three-interval exchange transformations. In the third part [14], we use this description to solve long-standing problems on the spectral properties of three-interval exchange transformations. In a forthcoming fourth part, we shall study the joinings of three-interval exchange transformations.

Interval exchange transformations were introduced by Katok and Stepin [16], following an idea they attribute to Arnold [1]; an exchange of k intervals is defined by a vector of k lengths and a permutation on k letters; the unit interval is then partitioned according to the vector of lengths, and T exchanges the intervals according to the permutation; see section 3 below for a

Date: June 27, 2002.

1991 Mathematics Subject Classification. Primary 37B10; Secondary 68R15.

Partially supported by NSF grant INT-9726708.


complete definition. Katok and Stepin used them to exhibit a class of systems with simple continuous spectrum. They were further studied by Keane [17], Veech [22] and many others; it was Rauzy [20] who first saw interval exchange transformations as a possible framework for generalizing the rotations/continued fractions/Sturmian sequences interaction, and he defined an algorithm of simultaneous approximation associated to interval exchange transformations, now called Rauzy induction. But the symbolic part was not complete, even in the simplest case of three-interval exchange transformations; indeed, several works ([8], [21]) were devoted to partial answers to the question posed by Rauzy, “describe the category of symbolic sequences which are natural codings of three-interval exchange transformations”, a natural coding meaning a sequence (x_n) taking the value 1, 2, 3 when the n-th iterate of some point x lies in the first, second, third interval.

In this paper we answer the question of Rauzy, and give a full characterization of the symbolic sequences, on three symbols, which are natural codings of three-interval exchange transformations with permutation (321); our main result is

Theorem 1.1. A minimal sequence u is the natural coding of a non-trivial three-interval exchange transformation with 2α < 1 and 2α + β > 1 if and only if it satisfies the following four conditions:

• The words of length 2 appearing in u are {12, 13, 21, 22, 31}.

• If the word w = w_1 . . . w_s appears in u, so does its retrograde w̄ = w_s . . . w_1.

• For every n there are exactly two left special words of length n, one beginning in 1 and one beginning in 2.

• If w is a bispecial word ending in 1 and w ≠ w̄, then w2 is left special if and only if w̄1 is left special.

For all combinatorial definitions see section 2 below; “nontrivial” refers to a condition called i.d.o.c. described in subsection 3.1 below. As for the remaining cases, i.e. when 2α > 1 or 2α + β < 1, we are able to transform them to the ones given in Theorem 1.1 by using substitutions, see subsection 4.2 below.
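The first three conditions of Theorem 1.1 can be probed numerically on a finite window of an actual orbit coding. The sketch below uses illustrative parameters of our own choosing (not from the paper), and a finite orbit can only certify the conditions up to small word lengths:

```python
# Finite-window check of conditions 1-3 of Theorem 1.1 on an orbit coding.
# alpha, beta are illustrative parameters satisfying 2*alpha < 1 < 2*alpha + beta.
alpha, beta = 7 ** -0.5, 3 ** -0.5

def T(x):
    if x < alpha:
        return x + 1 - alpha
    if x < alpha + beta:
        return x + 1 - 2 * alpha - beta
    return x - alpha - beta

x, w = 0.123456789, []
for _ in range(100_000):            # natural coding of one orbit
    w.append('1' if x < alpha else ('2' if x < alpha + beta else '3'))
    x = T(x)
u = ''.join(w)

def factors(u, n):
    """Distinct factors of length n of the finite word u."""
    return {u[i:i + n] for i in range(len(u) - n + 1)}

def left_special(u, n):
    """Words of length n observed with at least two distinct left extensions."""
    ext = {}
    for i in range(1, len(u) - n + 1):
        ext.setdefault(u[i:i + n], set()).add(u[i - 1])
    return {v for v, e in ext.items() if len(e) >= 2}

# Condition 1: the words of length 2 are exactly {12, 13, 21, 22, 31}.
assert factors(u, 2) == {'12', '13', '21', '22', '31'}
for n in range(1, 8):
    # Condition 2: the language is closed under retrograde (mirror) words.
    assert all(v[::-1] in factors(u, n) for v in factors(u, n))
    # Condition 3: exactly two left special words, beginning in 1 and in 2.
    assert sorted(v[0] for v in left_special(u, n)) == ['1', '2']
```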

The “only if” part of our main result uses geometric considerations, see subsection 3.1 below; the “if” part uses two new tools, the negative slope expansion, which is the algorithm of simultaneous approximation defined and studied in [13], and the hat algorithm, first defined in [19] for other types of sequences; the hat algorithm allows us to code a language satisfying the four conditions of our theorem (we call such languages symmetric 3-hat because they are generated by the hat algorithm and linked to the geometry of three intervals) by a sequence (m_k ∈ ℕ, n_k ∈ ℕ, ε_{k+1} = ±)_{k≥1}, and this sequence will be identified with the “partial quotients”, for a variant of the negative slope expansion, of the parameters (α, β) defining a three-interval exchange transformation.

And thus, as is the case with the link between Sturmian sequences and irrational rotations through the classical continued fraction approximation, this characterization also gives a full description of the trajectories of the discontinuities of three-interval exchange transformations under the natural partition, and, through a description of return words, an explicit construction of three sequences of nested Rokhlin stacks which describe the system from a measure-theoretic point of view (when the system satisfies the i.d.o.c. condition, which is equivalent to the fact that the approximation algorithm does not stop). This will be used in [14] to characterize possible eigenvalues of a three-interval exchange transformation and to obtain necessary and sufficient conditions for weak mixing, answering two questions posed by Veech in [22].


ACKNOWLEDGEMENTS

The authors were supported in part by a Cooperative Research Travel Grant jointly sponsored by the N.S.F. and C.N.R.S. The third author was also supported in part by a grant from the Texas Advanced Research Program. We are very grateful to M. Boshernitzan for many fruitful conversations.

2. COMBINATORICS OF SYMMETRIC 3-HAT LANGUAGES

2.1. Symmetric languages.

Definition 1. Let A be a finite set. By a language L over A we mean a collection of sets (L_n)_{n≥1} where each L_n consists of blocks of the form a_1a_2 · · · a_n with a_i ∈ A, such that for each v ∈ L_n there exist a, b ∈ A with av, vb ∈ L_{n+1}, and for all v ∈ L_{n+1}, if v = au = u′b with a, b ∈ A then u, u′ ∈ L_n. We write L = ∪_{n≥0} L_n ∪ {ε}, where ε denotes the empty word, the unique word of length zero. The set A is called the alphabet.

The complexity function p_L : ℕ → ℕ is defined by p_L(n) = Card(L_n).

A language L is minimal if for each v ∈ L there exists n such that v is a factor of each word w ∈ L_n.

The following basic lemma is due to Hedlund and Morse [18]:

Lemma 2.1. Let L be a minimal language over a finite alphabet A. Then the following are equivalent:

• p_L(n) ≤ n for some n ≥ 1.

• p_L is bounded.

• There exists a factor u ∈ L such that L consists precisely of all factors of the sequence u^ω = uuu · · · .

If L satisfies one of the above equivalent conditions we say that L is periodic. Otherwise L is called aperiodic.
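The dichotomy of Lemma 2.1 is easy to observe numerically. In the sketch below, the two sample words (a periodic word and the coding of an irrational rotation) are our own illustrations, not examples from the paper:

```python
from math import sqrt

def complexity(u, n):
    """Number of distinct factors of length n in the finite word u."""
    return len({u[i:i + n] for i in range(len(u) - n + 1)})

# A periodic word: its complexity is bounded (here p(n) = 3 for all n >= 1).
periodic = '123' * 400

# An aperiodic word: the coding of an irrational rotation (a Sturmian word),
# for which p(n) = n + 1, the minimal complexity allowed by Lemma 2.1.
a = (sqrt(5) - 1) / 2
sturmian = ''.join('1' if (i * a) % 1 < a else '0' for i in range(1, 1200))
```

For small n the finite samples are long enough to exhibit every factor, so the counts match the complexity of the infinite words.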

Definition 2. A word w in L is called right special (left special) if there exist distinct letters a, b ∈ A such that both wa, wb ∈ L (aw, bw ∈ L). If w ∈ L is both right special and left special, then w is called bispecial. We regard ε as bispecial.

Definition 3. If u, w ∈ L we write u ⊢ w if u is a prefix of w and, for all z ∈ L_{|w|}, if u is a prefix of z then z = w; we write u ⊨ w if w is the longest word such that u ⊢ w.

In other words, u ⊢ w if w is the only extension of u with a given length. Thus if u is left special and u ⊢ w then w is also left special. If u ⊢ w and u ⊢ z then either w ⊆ z or z ⊆ w. We have u ⊨ w if u is a prefix of only one word of length |w| (namely w) and u is a prefix of at least two words of length |w| + 1. Thus if u is right special then u ⊨ u. If u is left special and u ⊨ w then w is the shortest bispecial word beginning in u.

As an immediate consequence of Lemma 2.1 we have


Lemma 2.2. Let L be a minimal language with p_L(n) ≥ n + 1 for all n. For all u ∈ L there exists w ∈ L so that u ⊨ w. In other words, for each u there exists a shortest right special word w beginning in u.

Definition 4. Given a word w = a_1a_2 · · · a_n with a_i ∈ A, let w̄ denote the retrograde word of w, that is w̄ = a_na_{n−1} · · · a_1. A word w is called a palindrome if w = w̄.

We will call a language L symmetric if w ∈ L implies w̄ ∈ L.

There are many examples of symmetric languages in the literature, the best known being the binary languages of complexity p_L(n) = n + 1 called Sturmian languages ([18], [6]). Other examples include languages derived from the Thue-Morse sequence, Arnoux-Rauzy sequences [2], and Episturmian sequences [9].

The next lemmas describe a recursive method for constructing the bispecial factors of minimal aperiodic symmetric languages:

Lemma 2.3. Let u, v, x, y, w be words in the alphabet A. Then

(i) If w = uv with w = w̄ and u = ū, then wv is a palindrome.

(ii) If v = wx and u = w̄y with v = v̄ and u = ū, then the retrograde of ux is vy.

(iii) If u = wx and ū = vy with w = w̄ and v = v̄, then uy and ūx are palindromes.

Proof. In case (i), the retrograde of wv is v̄w̄ = v̄w = v̄uv = v̄ūv = w̄v = wv.

In case (ii), the retrograde of ux is x̄ū = x̄u = x̄w̄y = v̄y = vy.

Finally, in case (iii), the retrograde of uy is ȳū = ȳvy = ȳv̄y = uy, while the retrograde of ūx is x̄u = x̄wx = x̄w̄x = ūx.
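The three identities can be checked on concrete words; the words below are arbitrary choices satisfying the hypotheses (not taken from the paper):

```python
# Concrete sanity check of Lemma 2.3 on sample words over {1, 2, 3}.
def retro(w):
    """The retrograde (mirror image) of w."""
    return w[::-1]

def is_pal(w):
    return w == retro(w)

# (i) w = uv with w and u palindromes  =>  wv is a palindrome.
u, v = '1', '221'
w = u + v                                   # w = '1221'
assert is_pal(w) and is_pal(u) and is_pal(w + v)

# (ii) v = wx, u = retro(w)y with u, v palindromes  =>  retro(ux) = vy.
w, x, y = '12', '21', '12'
v, u = w + x, retro(w) + y                  # v = '1221', u = '2112'
assert is_pal(v) and is_pal(u) and retro(u + x) == v + y

# (iii) u = wx, retro(u) = vy with w, v palindromes
#       =>  uy and retro(u)x are palindromes.
u, w, x = '1213', '121', '3'
v, y = '3', '121'
assert u == w + x and retro(u) == v + y and is_pal(w) and is_pal(v)
assert is_pal(u + y) and is_pal(retro(u) + x)
```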

Lemma 2.4. Let L be an aperiodic minimal symmetric language. Suppose u ∈ L \ {ε}, a ∈ A and ua is left special.

(1) If for all proper prefixes v of ū the word av is not right special, then ua ⊨ uaū.

(2) Otherwise, let v (possibly the empty word) be the longest proper prefix of ū such that av is right special. Writing ū = vx with x ≠ ε, let b denote the first letter of x. Let w be the shortest word beginning in v̄a such that bw is right special. Writing w = v̄y, then ua ⊨ uy.

Proof. First suppose no such v exists. We claim that a ⊢ aū. Otherwise, since aū ∈ L, there would exist a proper prefix v of ū (v possibly empty) such that av is right special, contradicting our assumption that no such v exists. Thus ua ⊢ uaū. Since ua is left special, it follows that uaū is also left special. But uaū is a palindrome, so uaū is also right special. Hence ua ⊨ uaū.

Now suppose we are in case (2) and let v, x, b, w, and y be as above. Note that since bv̄a ∈ L the existence of w follows from Lemma 2.2. Also, since w begins in v̄a, we have that y begins in a. We claim that bv̄a ⊢ bv̄y. Otherwise there would exist a proper non-empty prefix z of y such that bv̄z is right special. Since v̄z begins in v̄a, this contradicts the minimality of the word w. Since u ends in bv̄, it follows that ua ⊢ uy and hence uy is left special. To see that uy is also right special, we claim also that avb ⊢ avx. Otherwise there would exist a proper non-empty prefix z of x such that avz is right special, contradicting the maximality of the prefix v. Since w̄ ends in av, it follows that w̄b ⊢ w̄x. But w̄b is left special (since bw is right special) and w̄x = ȳvx; thus ȳvx is also left special, and its retrograde x̄v̄y = uy is right special. Since ua ⊢ uy and uy is right special, it follows that ua ⊨ uy as required.

In most cases we will apply Lemma 2.4 to bispecial words u. Thus given a bispecial word u and a letter a ∈ A with ua left special, Lemma 2.4 gives a rule for constructing the shortest bispecial word beginning in ua. Many symmetric languages L (including Sturmian and Arnoux-Rauzy, but not including Thue-Morse) satisfy the condition that for every left special word u there exists a unique letter a ∈ A such that ua is left special. Under this added hypothesis we can say more:

Lemma 2.5. Let L be an aperiodic minimal symmetric language with the property that for each left special word u ∈ L \ {ε} there exists a unique letter a ∈ A so that ua is left special. Let u ∈ L \ {ε} be bispecial and a ∈ A so that ua is left special. Suppose we are in case (2) of Lemma 2.4; let v, x, b, w, and y be as in Lemma 2.4. Suppose further that v is non-empty.

(1) If v̄ = v then a = b, w = ū, x = y and uy is a palindrome.

(2) If v̄ ≠ v then w ≠ ū.

Proof. Suppose v̄ = v. Since u is bispecial, ū is also bispecial, in particular left special. Hence vb, being a prefix of ū, is left special. But av is right special, and since v̄ = v it follows that va is also left special. Since v is non-empty, we have that a = b. Thus ū is a word beginning in v̄a (since ū begins in vb) and bū is right special (since ua is left special). To conclude that w = ū we must show that ū is the shortest such word. But in the proof of case (2) of Lemma 2.4 we saw that avb ⊢ avx, and hence bv̄a ⊢ bv̄x = bvx = bū. Hence bv̄a ⊨ bū as required. Since ū = w we have vx = v̄y, so x = y. Finally, to see that uy is a palindrome, we observe that the retrograde of uy = ux is x̄ū = x̄vx = x̄v̄x = ux = uy.

Case (2) is immediate since w begins in v̄ while ū begins in v and v ≠ v̄.

2.2. Symmetric 3-hat languages: description of the bispecial words.

Definition 5. A language L on the alphabet {1, 2, 3} is called a symmetric 3-hat language if it satisfies the following conditions:

• L_2 = {12, 13, 21, 22, 31}.

• L is minimal and symmetric.

• For every n there are exactly two left special words of length n, one beginning in 1 and one beginning in 2.

• If w is a bispecial word ending in 1 and w ≠ w̄, then w2 is left special if and only if w̄1 is left special.

Note that, because of the description of L_2, if w is left special, it has exactly two left extensions aw and bw; hence the third condition implies p_L(n + 1) − p_L(n) = 2, hence p_L(n) = 2n + 1. We check that this third condition could be replaced by

• p_L(n) = 2n + 1,

• each bispecial word w is ordinary ([5]), meaning that exactly three of the possible words awb are allowed.


Thus if u ≠ ε is left special there exists a unique a ∈ {1, 2, 3} such that ua is left special; in other words, a symmetric 3-hat language satisfies the hypothesis of Lemma 2.5. It follows that there exist two minimal sequences O_1 and O_2 in {1, 2, 3}^ℕ with O_i beginning in i and such that each prefix of O_i is left special. The O_i are called the characteristic sequences, as in the case of Sturmian and Arnoux-Rauzy sequences, where there is only one right special word and one left special word of each length.

Knowing that a language is a symmetric 3-hat language, we are now able to build explicitly each O_i, through its sequence of nested bispecial factors: this algorithm is a variant of the hat algorithm given in [19] for constructing characteristic Sturmian or Arnoux-Rauzy sequences.

We build two auxiliary sequences with hatted letters: Ô_i is deduced from O_i by putting a hat on every letter following a bispecial factor. At these places, each 1 receives a lower hat, each 3 receives an upper hat, and a 2 receives a lower hat if it follows a 1 and an upper hat if it follows a 2; the first 1 of O_1 receives a lower hat, while the first 2 of O_2 receives both an upper and a lower hat. (In this plain-text rendering we write an upper hat as a prefix, ˆa, and a lower hat as a suffix, aˆ; the doubly hatted 2 is thus written ˆ2ˆ.)

Conversely, O_i can be recovered from Ô_i by removing all hats, and the bispecial prefixes of O_i are precisely those prefixes which were followed in Ô_i by a hatted letter. As there is no ambiguity, we shall denote in the same way a bispecial prefix of O_i and the corresponding hatted bispecial prefix of Ô_i; note that with this notation, ū_k denotes the k-th non-palindrome bispecial prefix of Ô_i, which is the retrograde of u_k in the non-hatted alphabet, but does not necessarily stay so after the hats have been put on, as may be seen in the examples below; similarly, we still call a word a palindrome even if only its unhatted version is equal to its retrograde.

Definition 6. If u is a bispecial prefix of some Ô_i, for any hatted letter a ∈ {1ˆ, ˆ2, 2ˆ, ˆ3} we denote by u(a) the suffix of u which begins in a and has no other occurrence of a; u(a) is called a cutting suffix.
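Definition 6 can be modelled concretely. In the sketch below a hatted letter is encoded as a pair (character, set of hats), a representation of our own choosing, and the convention used later by the hat algorithm when copying a cutting suffix (all hats dropped except the one on the first letter) is built in:

```python
# A hatted letter is modelled as (c, hats), hats a subset of {'up', 'low'};
# this token encoding is ours, not the paper's notation. The doubly hatted
# 2 carries both hats and counts as either single-hatted 2.
def matches(tok, a):
    """Does token tok count as the hatted letter a = (char, hat)?"""
    return tok[0] == a[0] and a[1] in tok[1]

def cutting_suffix(u, a):
    """u(a): the suffix of u beginning in a with no other occurrence of a.
    Following the copying convention of the hat algorithm, all hats are
    dropped except the hat of a on the first letter."""
    hits = [i for i, tok in enumerate(u) if matches(tok, a)]
    if not hits:
        return None                       # u(a) does not exist
    i = hits[-1]                          # last occurrence: no later copy of a
    return [(u[i][0], {a[1]})] + [(tok[0], set()) for tok in u[i + 1:]]

def unhat(u):
    return ''.join(tok[0] for tok in u)

# Example: v = the doubly hatted 2 followed by an upper-hatted 2; its
# cutting suffix at the lower-hatted 2 is the whole word, extra hats dropped.
v = [('2', {'up', 'low'}), ('2', {'up'})]
s = cutting_suffix(v, ('2', 'low'))
```

Appending unhat(s) to the unhatted word 13131 gives 1313122, the word u_1 of the k = 1 step with n_1 = 3 and m_1 = 2 (cf. Example 1 below).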

The following proposition describes completely the bispecial factors of the O_i. We give first a formal proposition, then we shall try to describe how it works concretely. To prove the proposition, we use mainly Lemma 2.5, which allows us, starting from a bispecial prefix of one O_i, to hunt in the sequences O_i for the next bispecial one.

Proposition 2.6. Let L be a symmetric 3-hat language. Then for every k ≥ 1 there exist positive integers n_k and m_k and non-empty bispecial words u_k, w^(k)_1, w^(k)_2, . . . , w^(k)_{n_k}, and v^(k)_1, v^(k)_2, . . . , v^(k)_{m_k} in the alphabet {1, 2, 3, 1ˆ, ˆ2, 2ˆ, ˆ3} so that

• u_k is a prefix of Ô_1, |u_{k−1}| < |u_k| (where for k = 0 we take u_0 = ε), and u_k ≠ ū_k.

• w^(1)_1 = 1ˆ and v^(1)_1 = ˆ2ˆ, where ˆ2ˆ counts as a 2ˆ and as a ˆ2.

• w^(k)_1, w^(k)_2, . . . , w^(k)_{n_k} and v^(k)_1, v^(k)_2, . . . , v^(k)_{m_k} are each palindromes, and
u_{k−1} ⊂ w^(k)_1 ⊂ w^(k)_2 ⊂ · · · ⊂ w^(k)_{n_k} ⊂ u_k and ū_{k−1} ⊂ v^(k)_1 ⊂ v^(k)_2 ⊂ · · · ⊂ v^(k)_{m_k} ⊂ ū_k.


• {w^(k)_i | 1 ≤ i ≤ n_k} are all the bispecial prefixes of Ô_1 of length greater than |u_{k−1}| and less than |u_k|, and {v^(k)_i | 1 ≤ i ≤ m_k} are all the bispecial prefixes of Ô_2 of length greater than |ū_{k−1}| and less than |ū_k|, with coding sequence (n_k, m_k, ε_{k+1})_{k≥1}.

• If u_{k−1}1 is left special, then w^(k)_{n_k} is the shortest word beginning in u_{k−1} such that 2w^(k)_{n_k} is right special, and v^(k)_{m_k} is the shortest word beginning in ū_{k−1} such that 1v^(k)_{m_k} is right special.

• If u_{k−1}2 is left special, then w^(k)_{n_k} is the shortest word beginning in u_{k−1} such that 3w^(k)_{n_k} is right special, and v^(k)_{m_k} is the shortest word beginning in ū_{k−1} such that 2v^(k)_{m_k} is right special.

• If u_{k−1}1 is left special, then w^(k)_1 = u_{k−1}ū_{k−1}(1ˆ). For 2 ≤ i ≤ n_k, w^(k)_i = w^(k)_{i−1}w^(k)_{i−1}(ˆ3) if w^(k)_{i−1}(ˆ3) exists, otherwise w^(k)_i = w^(k)_{i−1}ˆ3w^(k)_{i−1}; and u_k = w^(k)_{n_k}v^(k)_{m_k}(2ˆ). Similarly, v^(k)_1 = ū_{k−1}u_{k−1}(2ˆ). For 2 ≤ i ≤ m_k, v^(k)_i = v^(k)_{i−1}v^(k)_{i−1}(ˆ2), and ū_k = v^(k)_{m_k}w^(k)_{n_k}(1ˆ).

• If u_{k−1}2 is left special, then w^(k)_1 = u_{k−1}ū_{k−1}(ˆ2). For 2 ≤ i ≤ n_k, w^(k)_i = w^(k)_{i−1}w^(k)_{i−1}(2ˆ), and u_k = w^(k)_{n_k}v^(k)_{m_k}(ˆ3). Similarly, v^(k)_1 = ū_{k−1}u_{k−1}(ˆ3) if it exists, otherwise v^(k)_1 = ū_{k−1}ˆ3u_{k−1}. For 2 ≤ i ≤ m_k, v^(k)_i = v^(k)_{i−1}v^(k)_{i−1}(1ˆ), and ū_k = v^(k)_{m_k}w^(k)_{n_k}(ˆ2).

Proof. We proceed by induction on k. We begin with the bispecial letter 1, which we denote by 1ˆ. If 1ˆ3 is left special, then according to case (1) of Lemma 2.4 we have 1ˆˆ3 ⊨ 1ˆˆ31. If 1ˆˆ313 is left special, then by case (2) of Lemma 2.5 we have 1ˆˆ31ˆ3 ⊨ 1ˆˆ31ˆ31. Continuing in this way, we see that there exists n_1 ≥ 1 so that the shortest bispecial word w beginning in 1ˆ with w2 left special is of the form 1ˆ(ˆ31)^{n_1−1}. Set w^(1)_i = 1ˆ(ˆ31)^{i−1} for each 1 ≤ i ≤ n_1. Then each w^(1)_i is a palindrome,
ε = u_0 ⊂ w^(1)_1 ⊂ w^(1)_2 ⊂ · · · ⊂ w^(1)_{n_1},
and the w^(1)_i are precisely the first n_1 non-empty bispecial prefixes of Ô_1.

We next consider the bispecial factor 2, which we “doubly hat” by ˆ2ˆ. Then a similar argument to the one above implies the existence of a positive integer m_1 ≥ 1 so that the shortest bispecial word w beginning in ˆ2ˆ with w1 left special is of the form ˆ2ˆ(ˆ2)^{m_1−1}. Set v^(1)_i = ˆ2ˆ(ˆ2)^{i−1} for each 1 ≤ i ≤ m_1. Then each v^(1)_i is a palindrome,
ε = ū_0 ⊂ v^(1)_1 ⊂ v^(1)_2 ⊂ · · · ⊂ v^(1)_{m_1},
and the v^(1)_i are precisely the first m_1 non-empty bispecial prefixes of Ô_2.


In order to complete the case k = 1 we must show that if u_1 is such that w^(1)_{n_1}2ˆ ⊨ u_1, then v^(1)_{m_1}1ˆ ⊨ ū_1. In fact, since u_1 begins in 1ˆ and ū_1 begins in ˆ2ˆ, it follows that u_1 ≠ ū_1. We use case (2) of Lemma 2.4 to find u_1. Let v be the longest proper prefix of the retrograde of w^(1)_{n_1} (which is w^(1)_{n_1} itself) such that 2v is right special (a from Lemma 2.4 is equal to 2). Then in this case v = ε. Next let w be the shortest word beginning in v̄2 = 2 such that 1w is right special (b from Lemma 2.4 is equal to 1). Then w = v^(1)_{m_1}. Hence following Lemma 2.4 we deduce that w^(1)_{n_1}2ˆ ⊨ w^(1)_{n_1}v^(1)_{m_1}. In other words, u_1 = w^(1)_{n_1}v^(1)_{m_1} = 1ˆ(ˆ31)^{n_1−1}2ˆ(2)^{m_1−1}.

Similarly, the longest proper prefix v of v^(1)_{m_1} (its own retrograde) with 1v right special is the empty word, and the shortest word w beginning in 1 such that 2w is right special is w^(1)_{n_1}. Hence
v^(1)_{m_1}1ˆ ⊨ v^(1)_{m_1}w^(1)_{n_1} = ˆ2ˆ(ˆ2)^{m_1−1}1ˆ(31)^{n_1−1} = ū_1
as required. This completes the case k = 1.

Note that the hats on the letters allow us to read off the (unhatted) bispecial factors. For instance, the bispecial factors beginning in 1 of length less than or equal to |u_1| are
1, 131, 13131, . . . , 1(31)^{n_1−1}, 1(31)^{n_1−1}2(2)^{m_1−1}.
Similarly, the bispecial factors beginning in 2 of length less than or equal to |u_1| are
2, 22, 222, . . . , 2(2)^{m_1−1}, 2(2)^{m_1−1}1(31)^{n_1−1}.
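These two lists can be transcribed directly as functions of n_1 and m_1; the values n_1 = 3, m_1 = 2 below are illustrative choices:

```python
def bispecials_level1(n1, m1):
    """Unhatted bispecial factors of length at most |u_1|, as listed above."""
    start1 = ['1' + '31' * i for i in range(n1)]       # 1, 131, ..., 1(31)^(n1-1)
    start1.append('1' + '31' * (n1 - 1) + '2' + '2' * (m1 - 1))
    start2 = ['2' + '2' * i for i in range(m1)]        # 2, 22, ..., 2(2)^(m1-1)
    start2.append('2' + '2' * (m1 - 1) + '1' + '31' * (n1 - 1))
    return start1, start2

b1, b2 = bispecials_level1(3, 2)
# The longest words are u_1 and its retrograde, so one list ends in the
# mirror image of the other.
assert b1[-1] == b2[-1][::-1]
```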

We now suppose the result holds up to k ≥ 1 and show the result holds for k + 1. We have four cases to consider: case 1) u_{k−1}2 and u_k1 are left special; case 2) u_{k−1}2 and u_k2 are left special; case 3) u_{k−1}1 and u_k1 are left special; case 4) u_{k−1}1 and u_k2 are left special. Each of these cases is dealt with in much the same way, so we only consider the first case in detail.

Since u_{k−1}2 is left special, it follows that ū_{k−1}3 is left special. We claim there exist non-empty palindromes x, x′, z, z′ such that
u_k = xy, u_k = zw, ū_k = x′y′, ū_k = z′w′,
where x is the longest proper prefix of u_k with 3x right special, z is the longest proper prefix of u_k with 2z right special, x′ is the longest proper prefix of ū_k with 2x′ right special, and z′ is the longest proper prefix of ū_k with 1z′ right special. In fact, by induction hypothesis x = w^(k)_{n_k} and x′ = v^(k)_{m_k}. Since w^(1)_{n_1} is a prefix of u_k and 2w^(1)_{n_1} is right special, it follows that z is non-empty. We claim that z = w^(j)_i for some 1 ≤ j ≤ k and 1 ≤ i ≤ n_j. Suppose to the contrary that z = u_i for some i ≤ k − 1. Then 2u_i is right special, implying that ū_i2 is left special, implying that u_i1 is left special. But then by induction hypothesis w^(i+1)_{n_{i+1}}2 is left special and hence 2w^(i+1)_{n_{i+1}} is right special. But this contradicts the maximality of the length of z, since |w^(i+1)_{n_{i+1}}| > |u_i|. Hence z is a palindrome. A similar argument shows that z′ is a non-empty palindrome.

Also, in case 1) we have assumed that u_k1 and ū_k2 are left special. Since x, x′, z and z′ are non-empty palindromes, Lemma 2.5 applies, and we find
u_k1 ⊨ u_kw′ and ū_k2 ⊨ ū_kw.


Also, u_kw′ and ū_kw are each palindromes by (iii) of Lemma 2.3. Now if u_kw′3 is left special, then u_kw′3 ⊨ u_kw′yw′, since the palindrome x is the longest proper prefix of u_kw′ (its own retrograde) with 3x right special. If u_kw′yw′3 is left special, then u_kw′yw′3 ⊨ u_kw′(yw′)², since the palindrome u_kw′ is the longest proper prefix of u_kw′yw′ (its own retrograde) with 3u_kw′ right special. Thus, continuing in this way, it follows there exists n_{k+1} ≥ 1 such that the shortest bispecial word v beginning in u_kw′ with v2 left special is u_kw′(yw′)^{n_{k+1}−1}. Set w^(k+1)_i = u_kw′(yw′)^{i−1} for each 1 ≤ i ≤ n_{k+1}. Then each w^(k+1)_i is a palindrome,
u_k ⊂ w^(k+1)_1 ⊂ w^(k+1)_2 ⊂ · · · ⊂ w^(k+1)_{n_{k+1}},
and the w^(k+1)_i are precisely the bispecial prefixes of Ô_1 of length greater than |u_k| and of length less than or equal to |w^(k+1)_{n_{k+1}}|.

Similarly, there exists m_{k+1} ≥ 1 so that the shortest bispecial word v′ beginning in ū_kw with v′1 left special is of the form ū_kw(y′w)^{m_{k+1}−1}. Set v^(k+1)_i = ū_kw(y′w)^{i−1} for each 1 ≤ i ≤ m_{k+1}. Then each v^(k+1)_i is a palindrome,
ū_k ⊂ v^(k+1)_1 ⊂ v^(k+1)_2 ⊂ · · · ⊂ v^(k+1)_{m_{k+1}},
and the v^(k+1)_i are precisely the bispecial prefixes of Ô_2 of length greater than |ū_k| and of length less than or equal to |v^(k+1)_{m_{k+1}}|.

We now compute u_{k+1}, defined by w^(k+1)_{n_{k+1}}2 ⊨ u_{k+1}, using Lemma 2.4. It is readily verified that u_k is the longest proper prefix of w^(k+1)_{n_{k+1}} (its own retrograde) with 2u_k right special, and v^(k+1)_{m_{k+1}} = ū_kw(y′w)^{m_{k+1}−1} is the shortest word beginning in ū_k such that 1v^(k+1)_{m_{k+1}} is right special. Hence by Lemma 2.4 we deduce that
w^(k+1)_{n_{k+1}}2 ⊨ w^(k+1)_{n_{k+1}}w(y′w)^{m_{k+1}−1} = u_kw′(yw′)^{n_{k+1}−1}w(y′w)^{m_{k+1}−1}.

Similarly, ū_k is the longest proper prefix of v^(k+1)_{m_{k+1}} (its own retrograde) with 1ū_k right special, and w^(k+1)_{n_{k+1}} = u_kw′(yw′)^{n_{k+1}−1} is the shortest word beginning in u_k such that 2w^(k+1)_{n_{k+1}} is right special. Hence by Lemma 2.4 we deduce that
v^(k+1)_{m_{k+1}}1 ⊨ v^(k+1)_{m_{k+1}}w′(yw′)^{n_{k+1}−1} = ū_kw(y′w)^{m_{k+1}−1}w′(yw′)^{n_{k+1}−1}.

Finally, since the retrograde of u_{k+1} = u_kw′(yw′)^{n_{k+1}−1}w(y′w)^{m_{k+1}−1} and ū_kw(y′w)^{m_{k+1}−1}w′(yw′)^{n_{k+1}−1} are each bispecial words beginning in 2 of the same length, it follows that
ū_{k+1} = ū_kw(y′w)^{m_{k+1}−1}w′(yw′)^{n_{k+1}−1}
as required.


2.3. The hat algorithm.

Corollary 2.7. A symmetric 3-hat language contains an infinite number of palindrome bispecial words and an infinite number of non-palindrome bispecial words.

This is an immediate consequence of Proposition 2.6. Note that this is not true for Sturmian or Arnoux-Rauzy languages, where all bispecial words are palindromes. We can thus define, by this corollary and Proposition 2.6:

Definition 7. The coding sequence of a symmetric 3-hat language is the sequence (n_k, m_k, ε_{k+1})_{k≥1}, where

• the integer n_k ≥ 1 is the number of successive palindrome bispecial prefixes in O_1 between the (k − 1)-th and the k-th non-palindrome bispecial prefixes;

• the integer m_k ≥ 1 is the number of successive palindrome bispecial prefixes in O_2 between the (k − 1)-th and the k-th non-palindrome bispecial prefixes.

Let a, b ∈ {1, 2} be such that u_{k−1}a and u_kb are left special (for k = 1, we set a = 1). Then set ε_1 = + and
ε_{k+1} = + if a = b, ε_{k+1} = − if a ≠ b.

Now, what Proposition 2.6 says is: all the information we need to construct recursively the two sequences of hatted bispecial prefixes of Ô_i is contained in the coding sequence. Namely, to build u_k, w^(k)_i and ū_k, v^(k)_j by the hat algorithm, the basic rule is the following: if u is a bispecial prefix of Ô_i, the next bispecial prefix of Ô_i is ux, where x is a cutting suffix, but with the convention that when we copy x we delete all the hats except the one on the first letter of x.

So x = y(a) for some bispecial word y; then a can only be one of the two letters which are allowed to follow the last letter of u, one with a lower hat, aˆ(u), and one with an upper hat, ˆa(u). And, for a given u, exactly which cutting suffix x = y(a) will be chosen is uniquely determined by the coding sequence in the following way (see Example 1 below to be convinced it does really work):

• if u is not a palindrome: if u = u_{k−1}, then y = ū_{k−1}, while if u = ū_{k−1}, then y = u_{k−1}; the letter a depends on the ε-part of the coding sequence: namely, if ε_k · · · ε_1 = −, then a = aˆ(u) (both for u = u_{k−1} and for u = ū_{k−1}), and we say that u_{k−1} makes the lower hat choice; if ε_k · · · ε_1 = +, then a = ˆa(u), and we say that u_{k−1} makes the upper hat choice. In all these cases, ux is a palindrome.

• if u is a palindrome: suppose that ε_k · · · ε_1 = +; then:
if u = w^(k)_{n_k}, then y = v^(k)_{m_k}, a = aˆ(u), and ux is not a palindrome;
if u = v^(k)_{m_k}, then y = w^(k)_{n_k}, a = aˆ(u), and ux is not a palindrome;
if u = w^(k)_i, 1 ≤ i < n_k, or u = v^(k)_j, 1 ≤ j < m_k, then x = u(a), a = ˆa(u), and ux is a palindrome.

The cases where ε_k · · · ε_1 = − are deduced from the previous ones by flipping the hats.

We always start from u_0 empty, w^(1)_1 = 1ˆ, v^(1)_1 = ˆ2ˆ. A slightly different rule applies at the very beginning of the sequences; the simplest way to express it is to say that if the required cutting suffix y(a) does not exist (for example y = 1, a = ˆ3), then take instead the cutting suffix y′(a) with y′ = ˆ3u; also, ˆ2ˆ can be used either as ˆ2 or as 2ˆ, but we remove the hat we have not used. The choice for u_0 is always lower hat.

As we noticed already, the bispecial prefixes of Ô_i are the prefixes followed by a hatted letter, and when we remove their hats we get the bispecial prefixes of O_i. And, as we know all the bispecial factors of L, we also know completely each language L_n, as, by minimality, each word of L_n is a factor of some bispecial word.

Example 1. Let L be a symmetric 3-hat language whose coding sequence starts off as (3, 2, −), (1, 2, +), (2, 1, +). Then

Ô_1 = 1ˆˆ31ˆ312ˆ2ˆ213131ˆ3122131313122ˆ2131312ˆ22131313122131313122213131ˆ3122131313122ˆ213131222131313122131313122213131 . . .

Ô_2 = ˆ2ˆ1ˆ3131ˆ31221ˆ31313122ˆ213131ˆ3122131313122ˆ213131222131313122131313122213131ˆ3122131313122 . . .

To see how the algorithm works in Example 1, let us build Ô_1: the first bispecial is 1ˆ = w^(1)_1; the choice for u_0 is (always) lower hat, and 1 < n_1, hence the cutting suffix is u(a) for u = 1ˆ, where a is the upper hat letter which can follow a 1, hence a = ˆ3; unfortunately u(a) does not exist, but fortunately we have specified how to deal with that, so we take u′ = ˆ31ˆ and u′(a) = u′, and we get 1ˆˆ31 = w^(1)_2, a palindrome. Then 2 < n_1, so the cutting suffix is u(a), u = 1ˆˆ31, a = ˆ3, and 1ˆˆ31ˆ31 = w^(1)_3, a palindrome.

Now 2 = n

1

, so we have to know v

m(1)1

= w

(1)2

, the reader will check that it is ˆ2

ˆ ˆ2. The cutting suffix to add to u = 1

ˆ ˆ31 is v(a) where v = ˆ2

ˆ ˆ2 and a is the lower hat letter which can follow a 1, hence a = 2

ˆ . So v(a) = ˆ2

ˆ

ˆ2, we keep only the lower hat on the first letter, and 1 ˆ ˆ31ˆ312

ˆ 2 = u

1

, our first non-palindrome.

As

2

= −, u

1

makes the opposite choice from u

0

, that is upper hat. The cutting suffix is u(a), with ¯ u ¯ = ˆ2

ˆ ˆ21

ˆ 3131, a is the upper hat letter which can follow a 2, hence a = ˆ2, so we get 1 ˆ ˆ31ˆ312

ˆ 2ˆ213131 = w

1(2)

, a palindrome again. The reader is now able to continue by himself.

Example 2. Let L be a symmetric 3-hat language with periodic coding sequence (1, 1, −), (1, 1, −), (1, 1, −), . . . Then

Ô_1 = 1ˆ2ˆˆ21ˆ3121ˆ312212ˆ21312ˆ21221312131221ˆ3121312212213121ˆ3122131213122122131221221312131221 . . .

Ô_2 = ˆ2ˆ1ˆˆ312ˆ212ˆ213121ˆ31221ˆ312131221221312ˆ212213121312212ˆ2131221221312131221312131221221312 . . .

It can be shown (see section 4.4) that the unhatted O_2 is the fixed point of the substitution τ given by 1 → 2, 2 → 2131 and 3 → 21.
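The substitution claim can be checked mechanically. Below is a minimal sketch (Python; the helper name `tau` is ours, not the paper's): iterating τ from the letter 2, each iterate is a prefix of the next, so the iterates converge to the fixed point, and the third iterate already matches the unhatted prefix of Ô_2 displayed above.

```python
def tau(word):
    """Apply the substitution tau: 1 -> 2, 2 -> 2131, 3 -> 21."""
    images = {"1": "2", "2": "2131", "3": "21"}
    return "".join(images[c] for c in word)

# iterate tau starting from the letter 2
w = "2"
for _ in range(6):
    nxt = tau(w)
    assert nxt.startswith(w)   # each iterate extends the previous one,
    w = nxt                    # so the iterates converge to a fixed point

# the third iterate already matches the unhatted prefix of O_2 above
assert tau(tau(tau("2"))) == "213122122131213122131"
```

Since τ(2) begins with 2, the prefix property holds automatically at every step; this is what makes the fixed point well defined.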

Another consequence of Proposition 2.6 is

Corollary 2.8. If (m_k, n_k, ε_{k+1})_{k≥1} is the coding sequence of a symmetric 3-hat language, then (n_k, ε_{k+1}) ≠ (1, +) for infinitely many k, and (m_k, ε_{k+1}) ≠ (1, +) for infinitely many k.

Proof. Suppose (n_k, ε_{k+1}) = (1, +) for every k ≥ K; then one symbol is no longer used in the choices of the hat algorithm after K. Suppose for instance it is ˆ3. This means that, if u is a long enough bispecial factor ending in 1, then u3 cannot be left special, hence u2 is left special, or equivalently 2ū is right special; now the ū generate O_1. Hence 3O_1 has only a finite number of right special prefixes. As the sequence 3O_1 is minimal and aperiodic, this is impossible. More generally, if the conclusions of this corollary are not satisfied, then for some a ∈ {1, 2, 3} and for some i ∈ {1, 2} the sequence aO_i has only a finite number of right special prefixes, and this contradicts the minimality and non-periodicity of L.

This property in fact characterizes the possible coding sequences among sequences (m_k ∈ IN, n_k ∈ IN, ε_{k+1} = ±)_{k≥1}; see Corollary 4.3 below.

The following proposition is just a translation of Proposition 2.6 in a language better suited to our purposes.

Proposition 2.9. Let L be a symmetric 3-hat language with coding sequence (m_k, n_k, ε_{k+1})_{k≥1}; then the (unhatted) bispecial factors are built in the following way.

We build sequences of words by the following rules: at the beginning U_0 = 1, V_0 = 2, U'_0 is empty, V'_0 = 3; then

U_k^L = U'_{k−1} V_{k−1} U_{k−1} (V'_{k−1} U_{k−1})^{n_k − 1},
U_k^S = U_{k−1} (V'_{k−1} U_{k−1})^{n_k − 1},
V_k^L = V'_{k−1} U_{k−1} V_{k−1} (U'_{k−1} V_{k−1})^{m_k − 1},
V_k^S = V_{k−1} (U'_{k−1} V_{k−1})^{m_k − 1};

then, if ε_{k+1} = +1, we set U_k = U_k^S, U'_k = U_k^L, V_k = V_k^S, V'_k = V_k^L; if ε_{k+1} = −1, we set U_k = U_k^L, U'_k = U_k^S, V_k = V_k^L, V'_k = V_k^S.

Then, if (u_k, ū_k) is the k-th pair of non-palindrome bispecial words, we have

u_k = U_1^S V_1^S U_2^S V_2^S . . . U_k^S V_k^S,
ū_k = V_1^S U_1^S V_2^S U_2^S . . . V_k^S U_k^S.

Also, the palindrome bispecial words are the u_{k−1} U_{k−1} (V'_{k−1} U_{k−1})^l for 0 ≤ l ≤ n_k − 1 and the ū_{k−1} V_{k−1} (U'_{k−1} V_{k−1})^{l'} for 0 ≤ l' ≤ m_k − 1, setting u_0 and ū_0 to be empty.

And, for any k, either

U_k V'_k = 13u_k,  U'_k V_k = 2u_k,  V'_k U_k = 31ū_k,  V_k U'_k = 2ū_k,

or

U_k V'_k = 2u_k,  U'_k V_k = 13u_k,  V'_k U_k = 2ū_k,  V_k U'_k = 31ū_k.
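Proposition 2.9 lends itself to a direct machine check. Below is a sketch (Python; function and variable names are ours, not the paper's) which runs the recursion on an arbitrary finite prefix of a coding sequence and verifies the palindrome claims and the mirror symmetry of the pair (u_k, ū_k):

```python
def bispecials(coding):
    """Proposition 2.9: coding is a list of (m_k, n_k, eps_{k+1}) with
    eps in {+1, -1}; words are strings over the alphabet {1, 2, 3}."""
    U, Up, V, Vp = "1", "", "2", "3"   # U_0, U'_0 (empty), V_0, V'_0
    u, ub = "", ""                     # u_0 and u-bar_0 are empty
    for m, n, eps in coding:
        for l in range(n):             # palindrome bispecials w^(k)_i
            w = u + U + (Vp + U) * l
            assert w == w[::-1]
        for l in range(m):             # palindrome bispecials v^(k)_j
            v = ub + V + (Up + V) * l
            assert v == v[::-1]
        US = U + (Vp + U) * (n - 1)           # U_k^S
        UL = Up + V + U + (Vp + U) * (n - 1)  # U_k^L
        VS = V + (Up + V) * (m - 1)           # V_k^S
        VL = Vp + U + V + (Up + V) * (m - 1)  # V_k^L
        u, ub = u + US + VS, ub + VS + US     # next non-palindrome pair
        assert ub == u[::-1]                  # u-bar_k mirrors u_k
        if eps == +1:
            U, Up, V, Vp = US, UL, VS, VL
        else:
            U, Up, V, Vp = UL, US, VL, VS
    return u, ub

# the periodic coding of Example 2
u, ub = bispecials([(1, 1, -1)] * 5)
assert u.startswith("1221312") and ub.startswith("2131221")
```

On the coding of Example 2, the pair (u, ū) produced this way indeed begins like the unhatted sequences Ô_1 and Ô_2 displayed there.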

Proof. For k = 1, we check this directly. We take k > 1, and consider the two possible suffixes which may be added to u_{k−1} to get the next (palindrome) bispecial word w^(k)_1, which are the two words ū_{k−1}(1ˆ) and ū_{k−1}(ˆ2). We denote by U_{k−1} the one such that indeed w^(k)_1 = u_{k−1} U_{k−1}, and by U'_{k−1} the other one; and similarly we call V_{k−1} and V'_{k−1} the two possible suffixes which may be added to ū_{k−1} to get the next (palindrome) bispecial word v^(k)_1, V_{k−1} being the one which is used.

Namely, if u_{k−1}1 is left special, then

U_{k−1} = ū_{k−1}(1ˆ),  U'_{k−1} = ū_{k−1}(ˆ2),  V_{k−1} = u_{k−1}(2ˆ),  V'_{k−1} = u_{k−1}(ˆ3);

and if u_{k−1}2 is left special, then

U_{k−1} = ū_{k−1}(ˆ2),  U'_{k−1} = ū_{k−1}(1ˆ),  V_{k−1} = u_{k−1}(ˆ3),  V'_{k−1} = u_{k−1}(2ˆ).

Then we apply the rules in Proposition 2.6; in both cases we get w^(k)_1 = u_{k−1} U_{k−1}, v^(k)_1 = ū_{k−1} V_{k−1}; then (for the i such that they exist)

w^(k)_i = w^(k)_{i−1} V'_{k−1} U_{k−1},  v^(k)_i = v^(k)_{i−1} U'_{k−1} V_{k−1};

hence, with the above notations,

w^(k)_{n_k} = u_{k−1} U_k^S,  v^(k)_{m_k} = ū_{k−1} V_k^S;

and then, again using Proposition 2.6, we get

u_k = u_{k−1} U_k^S V_k^S,  ū_k = ū_{k−1} V_k^S U_k^S.

A new application of Proposition 2.6 now gives that the two new suffixes U_k and U'_k are U_k^S and U_k^L (but without order), and similarly for the V. Then U_k is the one of the words U_k^S, U_k^L beginning with U_{k−1} (or, equivalently, U_k is the shorter one, U_k^S) if the choice of hats (as described above) is the same for k − 1 and k, that is, either U_{k−1} = ū_{k−1}(1ˆ) and U_k = ū_k(1ˆ), or U_{k−1} = ū_{k−1}(ˆ2) and U_k = ū_k(ˆ2), and that happens if and only if ε_{k+1} = +1. And U_k is the one of the words U_k^S, U_k^L not beginning with U_{k−1} if the choice of hats has changed from k − 1 to k, which is equivalent to ε_{k+1} = −1. The same is true after changing U to V. The last equalities are checked by recursion.

2.4. Return words and Rauzy graphs.

Definition 8. For a word w in the language L, a return word of w = w_1 . . . w_l is any word w' = w'_1 . . . w'_{l'} such that there exist i < j with w = x_i . . . x_{i+l−1} = x_j . . . x_{j+l−1}, w ≠ x_{i'} . . . x_{i'+l−1} for every i < i' < j, and w' = x_{i+l} . . . x_{j+l−1}.
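Definition 8 can be implemented naively on a finite window of a trajectory (a sketch; the function name is ours). Note that a finite window can only reveal those return words whose occurrences fall inside it.

```python
def return_words(x, w):
    """Return words of w in x (Definition 8): for consecutive occurrences
    i < j of w in x, the return word is x[i+l : j+l], with l = len(w)."""
    l = len(w)
    occurrences = [i for i in range(len(x) - l + 1) if x[i:i + l] == w]
    return {x[i + l: j + l] for i, j in zip(occurrences, occurrences[1:])}

# in 12131213 the word 1 returns through 21 or through 31
assert return_words("12131213", "1") == {"21", "31"}
```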

We shall be interested in computing the return words of some remarkable words; for this, we need the following tools from the theory of Rauzy graphs (see [2] or [3]).

Definition 9. For two words w and w' of equal length l, w' is a successor (in the Rauzy graph) of w if there exists j such that x_j . . . x_{j+l−1} = w and x_{j+1} . . . x_{j+l} = w'. The Rauzy graph Γ_l is the graph whose vertices are all words of length l in the language of T, with an edge between each word w and each of its successors w', labelled with the last letter of w'.

A segment (in the Rauzy graph) is a sequence of words w^1, . . . , w^r of the same length l, such that w^1 and w^r are right special, w^i is not right special for 1 < i < r, and for every 1 ≤ i ≤ r − 1, w^{i+1} is a successor of w^i. The label of the segment w^1, . . . , w^r is the word w^2_l . . . w^r_l made by concatenating the last letters of the words w^2, . . . , w^r.
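These notions can be explored numerically on a long finite prefix of a trajectory. The sketch below (Python; helper names ours) builds the successor relation of the Rauzy graph Γ_l for the fixed point of the substitution τ of Example 2 and checks that the surplus of outgoing edges at right special vertices accounts exactly for the complexity growth p(l + 1) − p(l) = 2 of these complexity-2n+1 languages. The assertions rely on the prefix being long enough to contain all short factors, which holds here since the fixed point is linearly recurrent.

```python
def tau(word):
    """The substitution of Example 2: 1 -> 2, 2 -> 2131, 3 -> 21."""
    return "".join({"1": "2", "2": "2131", "3": "21"}[c] for c in word)

x = "2"
for _ in range(10):                 # a long prefix of the fixed point
    x = tau(x)

def factors(word, l):
    """All length-l factors occurring in the finite word."""
    return {word[i:i + l] for i in range(len(word) - l + 1)}

for l in range(1, 7):
    succ = {w: set() for w in factors(x, l)}
    for i in range(len(x) - l):     # edges of the Rauzy graph Gamma_l
        succ[x[i:i + l]].add(x[i + 1:i + l + 1])
    # complexity p(l) = 2l + 1, and the surplus out-degree carried by
    # the right special vertices equals p(l+1) - p(l) = 2
    assert len(succ) == 2 * l + 1
    assert sum(len(s) - 1 for s in succ.values()) == 2
```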

Proposition 2.10. Let L be a symmetric 3-hat language. For every k, the k-th non-palindrome bispecial word u_k has exactly three return words, A_k, B_k and C_k, where A_0 = 13, B_0 = 2, C_0 = 12, and

A_k = A_{k−1}^{n_k} C_{k−1} B_{k−1}^{m_k − 1},  B_k = B_{k−1} A_{k−1}^{n_k − 1} C_{k−1} B_{k−1}^{m_k − 1},  C_k = A_{k−1}^{n_k − 1} C_{k−1} B_{k−1}^{m_k − 1}  if ε_{k+1} = +1,

and

A_k = B_{k−1} A_{k−1}^{n_k − 1} C_{k−1} B_{k−1}^{m_k − 1},  B_k = A_{k−1}^{n_k} C_{k−1} B_{k−1}^{m_k − 1},  C_k = B_{k−1} A_{k−1}^{n_k} C_{k−1} B_{k−1}^{m_k − 1}  if ε_{k+1} = −1.
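Since the proof below identifies A_k = U_k V'_k, B_k = U'_k V_k and C_k = U_k V_k, the recursion of Proposition 2.10 can be cross-checked mechanically against the words of Proposition 2.9. A sketch (Python; names ours):

```python
def check_return_words(coding):
    """coding: list of (m_k, n_k, eps_{k+1}) with eps in {+1, -1}; runs
    both recursions and checks A_k = U_k V'_k, B_k = U'_k V_k and
    C_k = U_k V_k at every level, then returns the last (A, B, C)."""
    U, Up, V, Vp = "1", "", "2", "3"          # Proposition 2.9 data
    A, B, C = "13", "2", "12"                 # A_0, B_0, C_0
    for m, n, eps in coding:
        if eps == +1:                         # Proposition 2.10 recursion
            A, B, C = (A * n + C + B * (m - 1),
                       B + A * (n - 1) + C + B * (m - 1),
                       A * (n - 1) + C + B * (m - 1))
        else:
            A, B, C = (B + A * (n - 1) + C + B * (m - 1),
                       A * n + C + B * (m - 1),
                       B + A * n + C + B * (m - 1))
        US = U + (Vp + U) * (n - 1)
        UL = Up + V + U + (Vp + U) * (n - 1)
        VS = V + (Up + V) * (m - 1)
        VL = Vp + U + V + (Up + V) * (m - 1)
        U, Up, V, Vp = (US, UL, VS, VL) if eps == +1 else (UL, US, VL, VS)
        # the identification used in the proof below
        assert A == U + Vp and B == Up + V and C == U + V
    return A, B, C

check_return_words([(1, 1, -1)] * 4)                      # Example 2
check_return_words([(3, 2, -1), (1, 2, +1), (2, 1, +1)])  # Example 1
```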

Proof. k being fixed, let l be the length of u_k; we look at segments starting from u_k: u_k has two possible successors, as it is right special; then each of the further successors w^i will have only one possible successor unless it is right special, in which case the segment ends; hence there are exactly two segments starting with u_k, and they may end either with u_k or with ū_k.

From Proposition 2.9, we get that U_k and U'_k are suffixes of ū_k, that ū_k is a suffix of u_k U_k and of u_k U'_k, and that, whenever v is a strict prefix of U_k or U'_k, the suffix of length l of u_k v is not right special. This means that there exists a segment starting with u_k, ending with ū_k, and of label U_k, and there exists a segment starting with u_k, ending with ū_k, and of label U'_k; hence U_k and U'_k ≠ U_k are the labels of the two segments starting with u_k. Similarly, V_k and V'_k are the labels of the two segments starting with ū_k. In other words, the Rauzy graph Γ_l has the following shape, the label of each segment being written under it:

             U_k
      ───────────────→
             U'_k
      ───────────────→
 u_k                     ū_k
      ←───────────────
             V_k
      ←───────────────
             V'_k

As there is no segment starting and ending with u_k, and no segment starting and ending with ū_k, each return word of u_k is a concatenation of the label of a segment starting with u_k and ending with ū_k with the label of a segment starting with ū_k and ending with u_k, and hence is an element of the set

{U_k V_k, U_k V'_k, U'_k V_k, U'_k V'_k}.

The formulas in Proposition 2.9 show that the first three are indeed return words of u_k. But if all four were return words of u_k, then the four possible words aū_k b, a = 1, 2, b = 2, 3, would occur in x, and both ū_k 2 and ū_k 3 would be left special, which is not true. Hence A_k = U_k V'_k, B_k = U'_k V_k and C_k = U_k V_k are the three return words of u_k, and we apply Proposition 2.9 to get the claimed recursion formulas.

Note that the shape of the Rauzy graphs at these particular stages is an interesting by-product of the proof.

3. LANGUAGES OF SYMMETRIC THREE-INTERVAL EXCHANGE TRANSFORMATIONS

3.1. Preliminaries. A k-interval exchange transformation is given by a probability vector (α_1, α_2, . . . , α_k) together with a permutation π of {1, 2, . . . , k}. The unit interval [0, 1[ is partitioned into k subintervals of lengths α_1, α_2, . . . , α_k which are then rearranged according to the permutation π.

Definition 10. A symmetric three-interval exchange transformation is a three-interval exchange transformation I with probability vector (α, β, 1 − (α + β)), α, β > 0, and permutation (3, 2, 1),¹ defined by

        ⎧ x + 1 − α         if x ∈ [0, α[,
 I x =  ⎨ x + 1 − 2α − β    if x ∈ [α, α + β[,        (1)
        ⎩ x − α − β         if x ∈ [α + β, 1[.

We denote by D_1 the interval [0, α[, by D_2 the interval [α, α + β[, and by D_3 the interval [α + β, 1[.
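Formula (1) translates directly into code. A minimal sketch (Python; exact rational arithmetic via `fractions` to avoid floating-point boundary issues; the function name is ours):

```python
from fractions import Fraction

def make_iet(alpha, beta):
    """The symmetric three-interval exchange I of Definition 10."""
    def I(x):
        if x < alpha:                  # x in D_1 = [0, alpha[
            return x + 1 - alpha
        if x < alpha + beta:           # x in D_2 = [alpha, alpha+beta[
            return x + 1 - 2 * alpha - beta
        return x - alpha - beta        # x in D_3 = [alpha+beta, 1[

    return I

alpha, beta = Fraction(3, 10), Fraction(1, 4)
I = make_iet(alpha, beta)
# the three subintervals are translated and their order is reversed:
assert I(Fraction(0)) == 1 - alpha        # D_1 lands rightmost
assert I(alpha) == 1 - alpha - beta       # D_2 lands in the middle
assert I(alpha + beta) == 0               # D_3 lands leftmost
```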

Let R_i denote the image of the interval D_i. Then for each 1 ≤ i ≤ 3 there exists a real number r_i and a partial isometry φ_i : D_i → R_i defined by φ_i(x) = x + r_i which coincides with I on D_i.

Definition 11. I satisfies the infinite distinct orbit condition (or i.d.o.c. for short) of Keane [17] if the two negative trajectories {I^{−n}(α)}_{n≥0} and {I^{−n}(α + β)}_{n≥0} of the discontinuities are infinite disjoint sets.

This has an equivalent formulation in terms of α and β:

¹ All other permutations on three letters reduce the transformation to an exchange of two intervals.

Definition 12. The couple (α, β) satisfies the i.d.o.c. condition if for all nonnegative integers p and positive integers q:

• pα + qβ ≠ p − q,
• pα + qβ ≠ p − q − 1,
• pα + qβ ≠ p − q + 1.
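The conditions of Definition 12 can only be refuted, not confirmed, by a finite search, but a bounded check is still a useful sanity test. A sketch (Python; names ours), with a rational pair that necessarily fails:

```python
from fractions import Fraction

def idoc_violation(alpha, beta, bound):
    """Search p in 0..bound, q in 1..bound for a failure of one of the
    three conditions of Definition 12; return a witness or None."""
    for p in range(bound + 1):
        for q in range(1, bound + 1):
            if p * alpha + q * beta in (p - q, p - q - 1, p - q + 1):
                return (p, q)
    return None

# a rational pair violates Definition 12: 5*(3/10) + 2*(1/4) = 2 = 5 - 2 - 1
assert idoc_violation(Fraction(3, 10), Fraction(1, 4), 30) == (5, 2)
```

For irrational parameters only a symbolic argument can establish the condition; the search above can merely fail to find a counterexample up to the bound.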

Lemma 3.1. I satisfies the i.d.o.c. condition if and only if the couple (α, β) satisfies the i.d.o.c. condition.

Proof. Set

A(α, β) = (1 − α)/(1 + β)  and  B(α, β) = 1/(1 + β).

Then it has been known since [16] that I is induced by a rotation on the circle by angle A(α, β). More precisely, I is obtained from the two-interval exchange R on [0, 1[ given by

 Rx =  ⎧ x + A(α, β)        if x ∈ [0, 1 − A(α, β)[,        (2)
       ⎩ x + A(α, β) − 1    if x ∈ [1 − A(α, β), 1[,

by inducing (according to the first return map) on the subinterval [0, B(α, β)[, and then renormalizing by scaling by 1 + β.

Hence, the orbits under I of α and α + β are infinite if and only if A(α, β) is irrational, our first condition. Whenever A(α, β) is irrational, these two orbits are distinct if and only if the orbits under R of 0 and B(α, β) are distinct, which is equivalent to B(α, β) ≠ pA(α, β) − q and B(α, β) ≠ −pA(α, β) + q, our last two conditions.
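The induction statement in the proof is easy to test numerically. A sketch (Python; exact rationals; names ours): take y in [0, B[, iterate the rotation R until it returns to [0, B[, rescale by 1 + β, and compare with I applied to the rescaled point (1 + β)y.

```python
from fractions import Fraction

def iet(alpha, beta, x):
    """The symmetric three-interval exchange of Definition 10."""
    if x < alpha:
        return x + 1 - alpha
    if x < alpha + beta:
        return x + 1 - 2 * alpha - beta
    return x - alpha - beta

alpha, beta = Fraction(3, 10), Fraction(1, 4)
A = (1 - alpha) / (1 + beta)       # rotation angle A(alpha, beta)
B = 1 / (1 + beta)                 # induce on [0, B[

for x in (Fraction(1, 20), Fraction(7, 20), Fraction(9, 10)):
    y = x / (1 + beta)             # rescale [0, 1[ onto [0, B[
    y = (y + A) % 1                # apply R at least once...
    while y >= B:                  # ...until the orbit returns to [0, B[
        y = (y + A) % 1
    assert (1 + beta) * y == iet(alpha, beta, x)
```

The rational parameters are for illustration only; the induction identity itself does not require the i.d.o.c. condition at the sample points used.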

The i.d.o.c. condition for I or (α, β) is (strictly) weaker than the rational independence of α, β and 1. It implies that I is minimal (every orbit is dense).

Definition 13. For every point x in [0, 1[, we define an infinite sequence (x_n)_{n∈IN} by putting x_n = i if I^n x falls into D_i, i = 1, 2, 3. This sequence is again denoted by x; we call it the trajectory of x.

If I satisfies the i.d.o.c. condition, the minimality implies that all trajectories have the same language, which we call the language of I, and denote by L(I).
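Definition 13 gives an immediate way to compute trajectories. A sketch (Python; exact rationals; names ours; the rational pair below is for illustration only and does not satisfy the i.d.o.c. condition):

```python
from fractions import Fraction

def trajectory(alpha, beta, x, length):
    """First symbols of the trajectory of x under the exchange I."""
    symbols = []
    for _ in range(length):
        if x < alpha:                    # x in D_1
            symbols.append("1")
            x = x + 1 - alpha
        elif x < alpha + beta:           # x in D_2
            symbols.append("2")
            x = x + 1 - 2 * alpha - beta
        else:                            # x in D_3
            symbols.append("3")
            x = x - alpha - beta
    return "".join(symbols)

assert trajectory(Fraction(3, 10), Fraction(1, 4), Fraction(0), 8) == "13132231"
```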

Equivalently, L(I) is the set of all words w = w_1 w_2 · · · w_m with w_i ∈ {1, 2, 3} such that the domain of the composition φ_w = φ_{w_m} ◦ φ_{w_{m−1}} ◦ · · · ◦ φ_{w_1} is a nondegenerate interval, denoted by D_w. We denote by R_w the image φ_w(D_w), and φ_w is a translation by r_w = Σ^m_{i=1} r_{w_i}.

Lemma 3.2. Let I be a symmetric three-interval exchange transformation. Then L(I) is a symmetric language. Moreover, for each w ∈ L(I), if D_w = [a, b[ then R_w̄ = [1 − b, 1 − a[.

Proof. We proceed by induction on |w|. The result is clear if |w| = 1. Now suppose w ∈ L(I) with |w| > 1 and write w = va where |v| = |w| − 1. Set D_v = [a, b[ and D_a = [a', b'[. Then R_v = [a + r_v, b + r_v[. Since w ∈ L(I), R_v ∩ D_a is a nonempty interval. Then by the inductive hypothesis R_ā ∩ D_v̄ = [1 − b', 1 − a'[ ∩ [1 − b − r_v, 1 − a − r_v[ is also a nonempty interval. Thus w̄ ∈ L(I). (Note that the inductive hypothesis applied to v gives D_v̄ = [1 − b − r_v, 1 − a − r_v[ and R_v̄ = [1 − b, 1 − a[, so that r_v̄ = r_v.) We can assume without loss of generality that a + r_v ≤ a' < b + r_v. If b' > b + r_v, then D_w = [a' − r_v, b[. Also, by the inductive hypothesis, D_v̄ ∩ R_ā = [1 − b − r_v, 1 − a'[, so R_w̄ = [1 − b − r_v + r_v̄, 1 − a' + r_v̄[ = [1 − b, 1 − a' + r_v[, as required. If b' ≤ b + r_v, then D_w = [a' − r_v, b' − r_v[. Thus R_ā ∩ D_v̄ = R_ā = [1 − b', 1 − a'[, so R_w̄ = [1 − b' + r_v̄, 1 − a' + r_v̄[ = [1 − b' + r_v, 1 − a' + r_v[, as required.
