Normal numbers and automatic complexity

alexander.shen@lirmm.fr, www.lirmm.fr/~ashen

LIRMM CNRS & University of Montpellier

Normal numbers

.00100111010111. . .

#0(n) = number of 0s among the first n bits

simply normal: #0(n)/n → 1/2, #1(n)/n → 1/2

#00(n) = number of occurrences of 00 in the first n positions

#00(n) + #01(n) + #10(n) + #11(n) = n

normal: #00(n)/n → 1/4, and the same for all other bit strings

Borel introduced normality as an expected property of random numbers

Another approach: cut the sequence into k-bit blocks and count the number of blocks of each type (aligned occurrences)
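The two counting conventions above (sliding occurrences versus aligned k-bit blocks) are easy to experiment with. Below is a minimal sketch in Python, assuming we only want empirical block frequencies of a finite prefix; the function names are illustrative and not from the talk.

```python
from collections import Counter

def sliding_frequencies(bits: str, k: int) -> dict:
    """Frequency of each k-bit block among all (overlapping) occurrences."""
    total = len(bits) - k + 1
    counts = Counter(bits[i:i + k] for i in range(total))
    return {w: c / total for w, c in counts.items()}

def aligned_frequencies(bits: str, k: int) -> dict:
    """Frequency of each k-bit block when the prefix is cut into aligned blocks."""
    blocks = [bits[i:i + k] for i in range(0, len(bits) - k + 1, k)]
    counts = Counter(blocks)
    return {w: c / len(blocks) for w, c in counts.items()}

# For a normal sequence both kinds of frequencies tend to 2**(-k).
prefix = "00100111010111" * 1000   # toy prefix, not an actual normal sequence
print(sliding_frequencies(prefix, 2))
print(aligned_frequencies(prefix, 2))
```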

Basic questions about normal numbers

Do normal numbers exist? (Yes, almost all numbers are normal)

Examples? Are e, π, √2 normal? (Maybe yes, nobody knows)

Champernowne number: 0 1 10 11 100 101 110 111 1000 . . .

Does normality depend on the base? (Yes)

Do absolutely normal numbers (normal in all bases) exist? (Yes, almost all)

Are the two approaches (occurrences and aligned blocks) equivalent? (Yes)

Wall’s theorem: α is normal, n integer ⇒ nα, α/n are normal
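The Champernowne number mentioned above is the standard explicit example: write the binary records of 0, 1, 2, 3, . . . one after another. A small sketch of the construction (the helper name is ours), with a check that the digit frequencies approach 1/2:

```python
def champernowne_bits(n_integers: int) -> str:
    """Concatenate the binary expansions 0 1 10 11 100 101 110 111 1000 ..."""
    return "".join(format(i, "b") for i in range(n_integers))

prefix = champernowne_bits(10_000)
print(len(prefix),
      prefix.count("0") / len(prefix),   # both ratios tend to 1/2 as n_integers grows
      prefix.count("1") / len(prefix))
```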

Randomness as incompressibility

Individual random sequences: plausible as outcomes of coin tossing

(Classical) probability theory: no such notion

Kolmogorov, Levin, Chaitin, . . . : randomness = incompressibility

000. . .000 is not random: it has a short description, “a million zeros”

What is a “description”? Different answers are possible

Normality = weak randomness

Limited class of descriptions: finite memory

Compressibility

Binary relation D(p, x) on strings: “p is a description of x”

CD(x) = min{|p| : D(p, x)}

trivial: the empty string Λ is a description of everything; then CD(x) = 0

D(p, x): p = x; then CD(x) = |x|

D(p, x): x is p with each bit doubled; then CD(x) = |x|/2 for good x (with doubled bits) and +∞ for bad ones

restriction: D is a function, i.e. every description describes only one object

technical: weaker restriction: O(1)-valued functions; only O(1) objects for each description

only finite-memory (automatic) relations are allowed as D
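The “doubled bits” decompressor above is a handy toy case. A sketch, with helper names that are ours and not from the talk: a description p stands for p with every bit written twice, so CD(x) = |x|/2 for strings of that form and +∞ otherwise.

```python
import math

def doubled(p: str) -> str:
    """Decompressor: the description p describes p with each bit doubled."""
    return "".join(b + b for b in p)

def C_doubled(x: str) -> float:
    """CD(x) for this decompressor: |x|/2 if x has the doubled form, else +inf."""
    if len(x) % 2 == 0 and all(x[i] == x[i + 1] for i in range(0, len(x), 2)):
        return len(x) // 2
    return math.inf

print(C_doubled("0011"))   # 2 (the shortest description is "01")
print(C_doubled("0010"))   # inf
```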

Finite automata

Idea: D(p, x) is automatic if it can be checked while reading p and x bit by bit, with finite memory

our examples (trivial, equality, doubled bits) are all automatic

Formal definition: a graph whose edges are labeled by pairs (u, v), (u, ε), (ε, u), (ε, ε)

a path ⇒ a pair of strings (concatenate the labels along it)

D = the set of all pairs that can be read along paths

multiplication and division by an integer constant are automatic relations

the union of two automatic relations is automatic

the composition of two automatic relations is automatic

warning: no initial and final states
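To make the graph definition concrete, here is a sketch of an automatic relation given by a labeled graph, with a brute-force check of whether a pair (p, x) can be read along some path. The two-state example encodes the doubled-bits relation; the representation and names are ours. Following the warning above there are no initial or final states, so a path may start and stop anywhere (which is why the second pair below is also accepted).

```python
from functools import lru_cache

# Edges: (from_state, (u, v), to_state); u is read from p, v from x, "" plays the role of ε.
EDGES = [
    ("s", ("0", "0"), "a0"), ("a0", ("", "0"), "s"),   # description bit 0 <-> "00" in x
    ("s", ("1", "1"), "a1"), ("a1", ("", "1"), "s"),   # description bit 1 <-> "11" in x
]

def readable(p: str, x: str) -> bool:
    """Can the pair (p, x) be read along some path of the labeled graph?"""
    states = {e[0] for e in EDGES} | {e[2] for e in EDGES}

    @lru_cache(maxsize=None)
    def reach(state: str, i: int, j: int) -> bool:
        if i == len(p) and j == len(x):
            return True                      # both strings fully read
        for q, (u, v), r in EDGES:
            if q == state and p.startswith(u, i) and x.startswith(v, j):
                if reach(r, i + len(u), j + len(v)):
                    return True
        return False

    return any(reach(q, 0, 0) for q in states)   # no initial state: start anywhere

print(readable("01", "0011"))  # True: fully doubled
print(readable("01", "001"))   # True: no final states, the last bit is only half read
print(readable("01", "0010"))  # False
```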

Optimal decompressors

Theorem (Becher, Heiber)

A sequence x1x2x3 . . . is normal ⇔ lim inf CD(x1 . . . xn)/n ≥ 1 for every automatic O(1)-valued relation D(p, x)

A counterpart in algorithmic information theory: a sequence x1x2x3 . . . has effective dimension 1 ⇔ lim inf CD(x1 . . . xn)/n ≥ 1 for every computably enumerable O(1)-valued relation D(p, x)

One can use computable functions as decompressors instead of O(1)-relations; dimension 1 is weaker than Martin-Löf randomness
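The quantity in the theorem, CD(x1 . . . xn)/n, can be computed by brute force for toy relations. A sketch, assuming binary descriptions and using the doubled-bits relation as the sample D (the names are ours):

```python
import math
from itertools import product

def doubled_ok(p: str, x: str) -> bool:
    """Sample O(1)-valued relation: p describes x iff x is p with each bit doubled."""
    return x == "".join(b + b for b in p)

def C(x: str, D) -> float:
    """Brute-force CD(x) = min{|p| : D(p, x)} over binary descriptions p."""
    for m in range(len(x) + 1):
        if any(D("".join(p), x) for p in product("01", repeat=m)):
            return m
    return math.inf

x = "0011" * 3                    # a very compressible prefix
print(C(x, doubled_ok) / len(x))  # 0.5, far below the ratio 1 required for normality
```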

Non-normal sequences are compressible

String over {a1, . . . , ak} with letter frequencies p1, . . . , pk

Different frequencies ⇒ more efficient coding

Morse code: frequent letters have shorter codes

Minimal code length per letter: H = Σ pi log(1/pi) (Shannon entropy)

H = log k for equiprobable letters, otherwise smaller

H is a lower bound for decodable codes; H + 1 can be achieved

block codes are needed

block coding uses finite memory

Technical: select a subsequence that has limit frequencies; use these frequencies for block coding, use convexity of the entropy function
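Since the argument hinges on the entropy H = Σ pi log(1/pi) dropping below log k when the letter frequencies are unequal, here is a small sketch that computes the empirical entropy of a string and compares it with log k (the helper name is ours):

```python
import math
from collections import Counter

def entropy(s: str) -> float:
    """Empirical Shannon entropy (bits per letter) of the letter frequencies of s."""
    n = len(s)
    return sum((c / n) * math.log2(n / c) for c in Counter(s).values())

s = "aababaaabaabaaab"            # biased two-letter string
k = len(set(s))
print(entropy(s), math.log2(k))   # entropy < log2(k) because the frequencies differ
```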

Normal sequences are not compressible

Normal sequence x1x2 . . .; some automatic O(1)-valued relation D

Why is x1x2 . . . xN not compressible?

Split it into k-bit blocks X1X2 . . . XM

the description p can also be split into corresponding blocks, so CD(xy) ≥ CD(x) + CD(y), the main feature of automatic compression

all k-bit strings appear (almost) equally often among X1, X2, . . . , XM

most k-bit strings are incompressible

so the economy is negligible compared to the length

almost the same argument works for the non-aligned definition of normality, since the frequencies of compressible blocks are only k times bigger
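The claim that most k-bit strings are incompressible is a counting fact: there are fewer than 2^(k-c) binary descriptions shorter than k - c bits, so (if each description describes a single string, a simplifying assumption) at most a 2^(-c) fraction of the 2^k strings can be compressed by c bits or more. A back-of-the-envelope check:

```python
k, c = 20, 5
descriptions = 2 ** (k - c) - 1      # number of binary strings of length < k - c
fraction = descriptions / 2 ** k     # fraction of k-bit strings they can describe
print(fraction, 2 ** -c)             # fraction < 2**(-c)
```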

Wall’s theorem

α is normal, n integer ⇒ nα and α/n are normal

Multiplication and division by a constant are O(1)-valued automatic relations

Composition of automatic relations is automatic

So if D compresses α, then D ◦ [×N] compresses Nα. The same for division

. . . or for adding rational numbers

Details: https://arxiv.org/pdf/1701.09060.pdf
