Academic year: 2022

CHAPTER VII

AN INTRODUCTION TO SIMULATION

VII.1 Sequences of random numbers and simulation

In many fields of engineering and science, we use a computer to simulate natural phenomena rather than experiment with the real system. Examples of such computer experiments are simulation studies of physical processes like atomic collisions, simulation of queuing models in systems engineering, and sampling in applied statistics.

Alternatively, we simulate a mathematical model, which cannot be treated by analytical methods. In all cases a simulation is a computer experiment to determine probabilities empirically. In these applications, random numbers are required to make things realistic.

Random number generation also has applications in cryptography, where the requirements on randomness may be even more stringent.

Hence, we need a good source of random numbers. Since the validity of a simulation depends heavily on the quality of such a source, its choice or construction is of fundamental importance. Tests have shown that many so-called random functions supplied with programs and computers are far from random.

VII.1.a. Sequences of pseudo-random numbers

By generating random numbers, we understand producing a sequence of independent random numbers with a specified distribution. The fundamental problem is to generate random numbers with a uniform discrete distribution on {0, 1, 2, …, N} or, more conveniently, on {0, 1/N, 2/N, …, 1}, say. This is the distribution where each possible number is equally likely. For large N this distribution approximates the continuous uniform distribution U(0,1) on the unit interval. Other discrete and continuous distributions will be generated by transformations of the U(0,1) distribution.

At first, scientists who needed random numbers generated them by performing random experiments like rolling dice or dealing out cards. Later, tables of thousands of random digits were published, created with special machines for mechanically generating random numbers or taken from large data sets such as census reports.


With the introduction of computers, people began to search for efficient ways to obtain random numbers using the arithmetic operations of a computer, an approach suggested by John von Neumann in the 1940s. Since a digital computer cannot generate truly random numbers, the idea is, for a given probability distribution, to develop an algorithm such that the numbers it generates appear to be random with the specified distribution. Sequences generated in such a deterministic way are called pseudo-random numbers.

VII.1.b. Sequences of random numbers

A sequence of real numbers between zero and one generated by a computer is called a "pseudo-random" sequence if it behaves like a sequence of random numbers. This statement is satisfactory for practical purposes, but what one needs is a quantitative definition of random behavior.

Giving a mathematically satisfactory definition of randomness turns out to be a hard problem. Several authors give definitions of a random sequence requiring infinite non-periodic sequences. Following the discussion in Knuth's book "The Art of Computer Programming - Volume 2: Seminumerical Algorithms" we just mention two statements about random sequences. For details we refer to this reference.

D.H. Lehmer (1951): "A random sequence is a vague notion embodying the idea of a sequence in which each term is unpredictable to the uninitiated and whose digits pass a certain number of tests, traditional with statisticians and depending somewhat on the uses to which the sequence is to be put."

J.N. Franklin (1962): "A sequence (U0, U1, …) (note: with Ui taking values in the unit interval [0,1]) is random if it has every property that is shared by all infinite sequences of independent samples of random variables from the uniform distribution."

A complexity-theoretic approach to a definition of randomness is due to A.N. Kolmogorov (1965), P. Martin-Löf (1966), and G.J. Chaitin (1966).

In practice we need a list of mathematical properties characterizing random sequences and tests to see whether a sequence of pseudo-random numbers yields satisfactory results or not. Loosely speaking, the basic requirements on random sequences are that their elements are uniformly distributed and uncorrelated. The tests we can perform will be of theoretical and/or empirical nature.


VII.2 Pseudo-random number generators

Deterministic generators yield numbers in a fixed sequence such that the foregoing k numbers determine the next number. Since the set of numbers representable on a computer is finite, the sequence will become periodic after a certain number of iterations.

The general form of algorithms generating random numbers may be described by the following recursive procedure.

Xn= f(Xn-1,Xn-2,..., Xn-k)

with initial conditions X0, X1,...,Xk-1. Here f is supposed to be a mapping from {0,1,...,m-1}k into {0,1,...,m-1}.

For most generators k = 1, in which case the recursive relation simplifies to

Xn = f(Xn-1)

with a single initial value X0, the seed of the generator. Now f is a mapping from {0,1,...,m-1} into itself.

In most cases the goal is to simulate the continuous uniform distribution U(0,1).

Therefore the integers Xn are rescaled to

Un = Xn/m.

If m is large, the resulting granularity is negligible when simulating a continuous distribution.

A good generator should have a long period, and the resulting subsequences of pseudo-random numbers should be uniform and uncorrelated. Finally, the algorithm should be efficient.

Remark: Initializing the generator with the same seed X0 gives the same sequence of random numbers. Usually the clock time is used to initialize the generator.


VII.2.a. Von Neumann generators

The first algorithm to generate random numbers was the middle square method proposed by J. von Neumann in the 1940s. Generators using this method are also called von Neumann generators in the literature. The middle square method consists of taking the square of the previous random number and extracting the middle digits. This method gives rather poor results, since sequences generally tend to fall into a short periodic orbit.

Example: If we generate 4-digit numbers starting from 3567, we obtain 7234 as the next number, since the square of 3567 equals 12723489. Continuing in the same way, the next number is 3307. The resulting sequence enters a periodic orbit already after 46 iterations:

3567, 7234, 3307, 9362, 6470, 8609, 1148, 3179, 1060, 1236, 5276, 8361, 9063, 1379, 9016, 2882, 3059, 3574, 7734, 8147, 3736, 9576, 6997, 9580, 7764, 2796, 8176, 8469, 7239, 4031, 2489, 1951, 8064, 280, 784, 6146, 7733, 7992, 8720, 384, 1474, 1726, 9790, 8441, 2504, 2700, 2900, 4100, 8100, 6100, 2100, 4100
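As an illustration, one middle square step is easy to sketch in Python. Padding squares with leading zeros to 2·digits positions is the convention that reproduces the orbit above (this is how the values 280 and 784 arise):

```python
def middle_square(seed, digits=4):
    """One middle square step: square the number and extract the middle digits."""
    square = str(seed ** 2).zfill(2 * digits)     # pad with leading zeros
    start = (len(square) - digits) // 2
    return int(square[start:start + digits])

# Reproduce the beginning of the orbit starting from 3567
x, sequence = 3567, [3567]
for _ in range(10):
    x = middle_square(x)
    sequence.append(x)
# sequence starts 3567, 7234, 3307, 9362, 6470, ...
```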

VII.2.b. Linear congruential generators

The linear congruential generator (LCG) was proposed by D.H. Lehmer in 1948. The generator has the form

Xn = (aXn-1 + c) mod m

The linear congruential generator depends on four parameters:

parameter   name         range
m           modulus      {1, 2, ...}
a           multiplier   {0, 1, ..., m-1}
c           increment    {0, 1, ..., m-1}
X0          seed         {0, 1, ..., m-1}


The operation mod m is called reduction modulo m and is a basic operation of modular arithmetic. Any integer x may be represented as

x = floor(x/m)·m + (x mod m)

where the floor function floor(t) is the greatest integer less than or equal to t. This equation may be taken as the definition of reduction modulo m. It follows that for the LCG one has

Xn = ( a^n·X0 + c·(a^n − 1)/(a − 1) ) mod m

provided a > 1.

If c = 0 the generator is called multiplicative. For nonzero c the generator is called mixed.

We want to simulate the continuous uniform distribution U(0,1). Therefore the integers Xn are rescaled to

Un= Xn/m.

Sequences resulting from LCGs are called linear congruential sequences or Lehmer sequences. The problem is to choose the four parameters appropriately such that one obtains a pseudo-random sequence of values in {0,1,...,m-1}. The sequence does not always behave very randomly, as one can see from some simple examples, which also illustrate the principal problems of LCGs.

Examples:

m    a   c   X0    X1, X2, …
16   1   1   0     1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, …
16   7   1   0     1, 8, 9, 0, …
16   9   1   0     1, 10, 11, 4, 5, 14, 15, 8, 9, 2, 3, 12, 13, 6, 7, 0, …
17   4   0   1     4, 16, 13, 1, …
17   7   0   1     7, 15, 3, 4, 11, 9, 12, 16, 10, 2, 14, 13, 6, 8, 5, 1, …

Linear congruential generators are the best analyzed and most widely used generators of pseudo-random numbers. They allow an easy analysis of their period and of the lattice structure formed by the d-dimensional vectors (Un, Un+1, …, Un+d-1).
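The recursion Xn = (aXn-1 + c) mod m is a one-liner in Python; the following sketch reproduces rows of the examples table above:

```python
def lcg_sequence(m, a, c, seed, length):
    """Return the first `length` values X1, X2, ... of the linear
    congruential sequence Xn = (a*Xn-1 + c) mod m."""
    values, x = [], seed
    for _ in range(length):
        x = (a * x + c) % m
        values.append(x)
    return values

short = lcg_sequence(16, 7, 1, 0, 4)    # 1, 8, 9, 0 - period only 4
good = lcg_sequence(17, 7, 0, 1, 16)    # full multiplicative period 16
```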


VII.2.c. Analysis of LCGs

All sequences will enter a periodic orbit, i.e. a cycle of numbers which is repeated. For a mixed generator the maximal period is m, while for a multiplicative generator it is m − 1, since 0 is a fixed point of any multiplicative generator. A good generator should have a sufficiently large period. Therefore m has to be large, and we are interested in choices of the parameters making the period maximal.

The following results give conditions that guarantee the maximal period of a linear congruential sequence.

Theorem 1: A linear congruential generator has full period m if and only if

i. c is relatively prime to m
ii. b = a − 1 is a multiple of p for every prime p dividing m
iii. b = a − 1 is a multiple of 4 if m is a multiple of 4

Note that if m is a prime, the generator has full period only if a = 1.

Theorem 2: A multiplicative congruential generator has maximal period m − 1 if and only if

i. m is prime
ii. a is a primitive root of m, that is, a^((m−1)/p) − 1 is not a multiple of m for each prime factor p of m − 1.

When computing was expensive, usual choices for m were often powers of 2, which simplifies the implementation of modular reduction on binary machines (no division is needed). However, with such m a multiplicative generator cannot attain the maximal period.

Nowadays, common values of m are primes, in particular Mersenne primes, which are of the form 2^p − 1 for some prime p (again, the implementation of modular reduction can be simplified).

A full period sequence will appear evenly distributed. However, a useful generator should also produce subsamples which appear uniformly distributed over the range of possible values. Furthermore, the subsamples should appear independent of each other; that is, the serial correlations should be small.


Example: Consider the LCG with a = 21, c = 3, and m = 100. By Theorem 1 it has full period 100. The full cycle is given by the following sequence (starting at X0 = 6):

[6, 29, 12, 55, 58, 21, 44, 27, 70, 73, 36, 59, 42, 85, 88, 51, 74, 57, 0, 3, 66, 89, 72, 15, 18, 81, 4, 87, 30, 33, 96, 19, 2, 45, 48, 11, 34, 17, 60, 63, 26, 49, 32, 75, 78, 41, 64, 47, 90, 93, 56, 79, 62, 5, 8, 71, 94, 77, 20, 23, 86, 9, 92, 35, 38, 1, 24, 7, 50, 53, 16, 39, 22, 65, 68, 31, 54, 37, 80, 83, 46, 69, 52, 95, 98, 61, 84, 67, 10, 13, 76, 99, 82, 25, 28, 91, 14, 97, 40, 43]

Selecting subsequences of length 10, for example, the outputs appear to be uniformly distributed over the integers between 0 and 99. However, if we check for serial correlations and plot all points of the form (Xn, Xn+1), a clear pattern emerges: all points lie on 9 parallel lines with slope 1, given by 5Xn − 5Xn+1 + 15 = 100k for k = −4, −3, …, 3, 4.

We can also identify other families of lines, e.g. 16Xn + 4Xn+1 − 12 = 100k for k = 0, 1, …, 17, 18.
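The line structure can be checked directly. The sketch below generates the full cycle and verifies that every consecutive pair satisfies one of the nine slope-1 line equations:

```python
a, c, m = 21, 3, 100
x, pairs = 6, []
for _ in range(m):                       # one full period of 100 values
    x_next = (a * x + c) % m
    pairs.append((x, x_next))
    x = x_next

# Each pair (Xn, Xn+1) lies on a line 5*Xn - 5*Xn+1 + 15 = 100*k
on_lines = all((5 * xn - 5 * xn1 + 15) % 100 == 0 for xn, xn1 in pairs)
ks = {(5 * xn - 5 * xn1 + 15) // 100 for xn, xn1 in pairs}   # the values of k
```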


Unfortunately, LCGs always exhibit a lattice-type structure. G. Marsaglia pointed out that the k-tuples (Un, ..., Un+k-1), with Un = Xn/m, will always lie on a finite number of hyperplanes in the k-dimensional unit cube. Here we only state the result for the unit square.

Theorem 3: Let Λ2 be the lattice

Λ2 = { t1·e1 + t2·e2 | t1, t2 integers }

with basis

e1 = (1, a)^T/m, e2 = (0, 1)^T/m.

Then for full period generators

{ (Un-1, Un) | n positive integer } = [0,1)^2 ∩ { (U0, U1) + Λ2 }

and for maximal period multiplicative generators

{ (Un-1, Un) | n positive integer } = (0,1)^2 ∩ Λ2

One important conclusion of this result is that the choice of c merely shifts the lattice. The value of the multiplier a should be chosen such that the points of the lattice Λ2 are evenly spread over the unit square.

Lattice spacings give information about the granularity of a generator. In general, the spacings become larger in higher dimensions, meaning that correlations increase. Lattice spacings can be computed from the Fourier transform of the generator's output. This important theoretical test for the randomness of sequences generated by linear congruential generators is the so-called spectral test.


VII.2.d. Examples of LCG

1. The RANDU generator was widely used on IBM 360/370 machines.

RANDU: a = 2^16 + 3, m = 2^31, c = 0

It has very bad randomness properties. Indeed, it is easy to verify that for all n

Un+2 − 6Un+1 + 9Un is an integer,

that is, all vectors of the form (Un, Un+1, Un+2) lie on only 15 planes in the unit cube.
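This relation is easy to confirm numerically. The sketch below runs RANDU and checks that Xn+2 − 6Xn+1 + 9Xn is always a multiple of m, so that the rescaled combination Un+2 − 6Un+1 + 9Un is an integer:

```python
a, m = 2 ** 16 + 3, 2 ** 31

def randu(seed, length):
    """RANDU: Xn = (2^16 + 3) * Xn-1 mod 2^31."""
    xs, x = [], seed
    for _ in range(length):
        x = (a * x) % m
        xs.append(x)
    return xs

xs = randu(1, 1000)
# Since (a - 3)**2 = 2**32 is congruent to 0 mod 2**31, the relation holds:
flat = all((xs[i + 2] - 6 * xs[i + 1] + 9 * xs[i]) % m == 0
           for i in range(len(xs) - 2))
```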

2. The MINIMAL STANDARD LCG is used in the MATLAB software.

MINIMAL STANDARD LCG: a = 7^5, m = 2^31 − 1, c = 0

Results are quite satisfactory (but considered minimal), though its period is too short.

3. The MAPLE LCG is used in the MAPLE mathematical software.

MAPLE LCG: a = 427419669081, m = 10^12 − 11, c = 0

Results are quite satisfactory, though its period is too short.


VII.2.e. Other generators

There are several variations or generalizations of the linear congruential generator. We list some of these methods.

Multiple recursive generators

Xn = (a1Xn-1 + a2Xn-2 + … + akXn-k) mod m

Lagged Fibonacci generators

Xn = (Xn-j + Xn-k) mod m

Nonlinear congruential generators

One could also use nonlinear functions in congruential generators, as in the following example proposed by D. Knuth:

Xn = (d·Xn-1^2 + a·Xn-1 + c) mod m

Combining generators: Wichmann/Hill generator

The Wichmann/Hill generator is a combination of three LCGs yielding good randomness properties. The generator is

Xn = 171·Xn-1 mod 30269
Yn = 172·Yn-1 mod 30307
Zn = 170·Zn-1 mod 30323

and

Un = ( Xn/30269 + Yn/30307 + Zn/30323 ) mod 1,

where the reduction mod 1 means to extract the fractional part.


Feedback Shift register generators

Tausworthe introduced a generator based on a sequence of 0's and 1's generated by

Xn = (a1Xn-1 + a2Xn-2 + … + akXn-k) mod 2

where all variables take values in {0,1}. The recurrence can be implemented by shifting the vector of bits to the left, one bit at a time; the bit shifted out is combined with other bits in the vector to form the new rightmost bit.
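A bit-level sketch of such a register; the tap positions used here (Xn = Xn-1 + Xn-4 mod 2) are a hypothetical illustrative choice corresponding to the primitive polynomial x^4 + x + 1, which gives the maximal period 2^4 − 1 = 15:

```python
def lfsr_bits(taps, state, length):
    """Feedback shift register over {0,1}: the new bit is the mod-2 sum of
    the state bits selected by `taps`; state[0] is the oldest bit X_{n-k}."""
    state, bits = list(state), []
    for _ in range(length):
        new = sum(b for b, t in zip(state, taps) if t) % 2
        bits.append(new)
        state = state[1:] + [new]     # shift left, feed the new bit in on the right
    return bits

# Hypothetical example: Xn = (Xn-1 + Xn-4) mod 2, maximal period 15
stream = lfsr_bits([1, 0, 0, 1], [1, 0, 0, 0], 30)
```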

VII.3 Tests for pseudo-random number generators

The general setup of statistical testing of random number generators is the following: we define our null hypothesis H0 as "The generator's output is a sequence of independent U(0,1) random variables." A statistical test for H0 can be defined by any function T of U(0,1) random variables for which we know the distribution under H0.

Chi-square tests

For a brief illustration we consider the following simple variant of the serial test: generate n random numbers U1, …, Un, divide the unit interval into k cells of equal size, and count the number of points Xj that fall into cell j, j = 1, …, k. Under H0 the distribution of (X1, …, Xk) is the multinomial distribution with parameters n and p1 = … = pk = 1/k. We compute the Chi-square statistic

Q = (X1 − np1)^2/(np1) + ... + (Xk − npk)^2/(npk)

If n → ∞ for fixed k, Q converges in distribution to a Chi-square random variable with (k − 1) degrees of freedom.
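A sketch of this serial-test variant in Python, using Python's own `random.random` as the generator under test (an arbitrary choice for illustration):

```python
import random

def chi_square_uniform(us, k):
    """Chi-square statistic Q for draws sorted into k equal cells of [0,1)."""
    counts = [0] * k
    for u in us:
        counts[min(int(u * k), k - 1)] += 1
    expected = len(us) / k                      # n * p_j with p_j = 1/k
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(0)
q = chi_square_uniform([random.random() for _ in range(10000)], k=10)
# Under H0, Q is approximately chi-square with k - 1 = 9 degrees of freedom;
# values far beyond the 95% quantile (about 16.9) would cast doubt on uniformity.
```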

More generally, we want pairs of successive numbers (Un, Un+1) or (U2n, U2n+1) to be uniformly distributed in an independent manner. Other frequently used goodness-of-fit tests are the following:

Gaps Test: For fixed constants 0 < a < b < 1, consider the lengths of subsequences for which Ui ∉ (a, b). If the Ui are independent, the lengths should be geometrically distributed with parameter P{Ui ∈ (a, b)} = b − a. Furthermore, under H0 successive lengths are independently distributed, so one can apply a Chi-square test to the observed data.


Runs Test: Runs up/down are monotone increasing/decreasing subsequences. There are different runs tests depending on whether both runs up and down are used. For example, assuming H0, lengths of runs up are independently distributed, so that one can apply a Chi-square test to the observed and expected frequencies.

Permutation Test: For a positive integer t such that t! is small compared to the length of the data sequence (Ui), divide (Ui) into blocks of size t: (U1, …, Ut), (Ut+1, …, U2t), ….

Under the hypothesis H0, the t! possible orderings inside a block of length t should be uniformly distributed. A Chi-square test of the data then yields a test for independence.

Kolmogorov-Smirnov test

While the Chi-square test applies to data which fall into a finite number of categories, the Kolmogorov-Smirnov test applies to random quantities taking infinitely many values.

This situation occurs when we want to test the cumulative distribution function of random numbers simulating a continuous distribution. For n independent observations of a random variable U, giving the values U1, …, Un, we define the empirical distribution function Fn(x) as the relative frequency of values less than or equal to x:

Fn(x) = card{ k : Uk ≤ x } / n

The Kolmogorov-Smirnov test is based on the difference between the (theoretical) distribution function F(x) and Fn(x). More precisely, one considers the following statistics:

Kn+ = sqrt(n) · max over −∞ < x < ∞ of ( Fn(x) − F(x) )
Kn− = sqrt(n) · max over −∞ < x < ∞ of ( F(x) − Fn(x) )

Here Kn+ measures the maximal difference when Fn is greater than F, and Kn− measures the maximal difference when Fn is less than F.
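Since Fn jumps only at the sample points, both maxima are attained there, which gives a simple sketch:

```python
def ks_statistics(us, cdf):
    """Compute Kn+ and Kn- of a sample against a theoretical cdf.
    Fn jumps at the order statistics, so the maxima are attained there."""
    n, xs = len(us), sorted(us)
    k_plus = max((k + 1) / n - cdf(x) for k, x in enumerate(xs))
    k_minus = max(cdf(x) - k / n for k, x in enumerate(xs))
    return n ** 0.5 * k_plus, n ** 0.5 * k_minus

# Test five points against the U(0,1) cdf F(x) = x
kp, km = ks_statistics([0.1, 0.3, 0.5, 0.7, 0.9], lambda x: x)
```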


VII.4 Simulation of random variables: general techniques

Generating random numbers from a non-uniform distribution is usually done by applying a transformation to U(0,1) random numbers. The techniques described in this chapter apply to almost any distribution, and for some distributions there may be various choices of algorithms for generating random numbers. Accuracy and speed of the algorithm will then be important criteria for its implementation.

Inverse transform method

Consider a continuous random variable X with cumulative distribution function FX(x). It is easy to show that the random variable

U = FX(X)

has a U(0,1) distribution. If FX is strictly increasing, we can rewrite this equation in the equivalent form

X = FX^-1(U).

Use of this straightforward transformation is called the inverse transform method. It is a good method whenever the inverse of FX is easy to compute.

Example: Let X be an exponential random variable with parameter λ. Then FX(x) = 1 − exp(−λx) if x > 0 and FX(x) = 0 otherwise. Hence

FX^-1(u) = −(1/λ) · ln(1 − u).
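In Python, the exponential example reads (a direct transcription of the formula above):

```python
import math, random

def exp_inverse_transform(lam, u):
    """Inverse transform for EXP(lambda): x = -(1/lambda) * ln(1 - u)."""
    return -math.log(1.0 - u) / lam

random.seed(1)
xs = [exp_inverse_transform(2.0, random.random()) for _ in range(1000)]
# The median of EXP(lambda) is ln(2)/lambda, here about 0.347
```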

The inverse transform method also applies to discrete distributions, but of course we cannot take the inverse of the cumulative distribution function. However, we may take the generalized inverse defined by

FX^-1(u) = min{ x | u ≤ FX(x) }

Example: Let X be a Bernoulli random variable with parameter p. Then its cumulative distribution function is given by

FX(x) = 0 if x < 0
FX(x) = 1 − p if 0 ≤ x < 1
FX(x) = 1 if 1 ≤ x.


Hence

FX^-1(u) = 0 if u ≤ 1 − p
FX^-1(u) = 1 if 1 − p < u.

Replacing u by 1 – u we see that a Bernoulli random variable with parameter p can be generated from a U(0,1) random variable by the following algorithm:

If U < p , then deliver 1, otherwise deliver 0.

Acceptance-Rejection Method

We explain the acceptance-rejection method for continuous distributions. Suppose we want to simulate a random variable X with density f(x), but that this is too difficult to do by the inverse transform method. Suppose we have a method to simulate a random variable with density g(y) such that

f(y) ≤ c·g(y)

for all y and a positive constant c. The acceptance-rejection method is described by the following procedure:

1. Generate Y from the distribution with density g(y)
2. Generate U from U(0,1)
3. If U·c·g(Y) ≤ f(Y), then deliver X = Y; otherwise return to step 1

The following theorem shows that the random variable X has indeed density f(x).

Theorem: The random variable X generated by the acceptance-rejection method has density f(x).

The distribution function of X is given by

( )

( )

, ( ) ( ) ( )

( )

( ) ( )

( ) P Y x U f y

f y cg y P X x P Y x U

cg y P U f y cg y

≤ ≤

 

≤ =  ≤ ≤ = ≤

Since U and Y are independent the joint density of (U, Y) is equal to g(y)⋅1[0,1](u).

Consequently


P( U ≤ f(Y)/(c·g(Y)) ) = ∫−∞^∞ g(y) ( ∫0^{f(y)/(c·g(y))} du ) dy = ∫−∞^∞ (f(y)/c) dy = 1/c

and

P( Y ≤ x, U ≤ f(Y)/(c·g(Y)) ) = ∫−∞^x g(y) ( ∫0^{f(y)/(c·g(y))} du ) dy = ∫−∞^x (f(y)/c) dy

which implies

P( X ≤ x ) = ∫−∞^x f(y) dy.

Example: Let Z be a standard normal random variable. Suppose we want to simulate the absolute value X = |Z|, describing the deviation from its mean value. Its probability density is given by

f(x) = sqrt(2/π) · exp(−x^2/2) for x > 0.

We take Y an exponential random variable with parameter 1, hence g(y) = exp(−y). It is easy to show that

f(y)/g(y) = sqrt(2/π) · exp(y − y^2/2) = sqrt(2e/π) · exp(−(y − 1)^2/2) ≤ sqrt(2e/π),

hence c = sqrt(2e/π).

Obviously, one wants to choose c⋅g(y) as close as possible to f(y) to keep the proportion of rejected points small. On the other hand, g(y) is chosen to be a very simple density avoiding long computations for Y.
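The half-normal example translates directly: the EXP(1) proposal is drawn by the inverse transform method, and acceptance happens with probability f(Y)/(c·g(Y)) = exp(−(Y − 1)^2/2). A sketch:

```python
import math, random

C = math.sqrt(2.0 * math.e / math.pi)          # c = sqrt(2e/pi) from above

def f(x):
    """Density of |Z|, Z standard normal."""
    return math.sqrt(2.0 / math.pi) * math.exp(-x * x / 2.0)

def g(y):
    """EXP(1) proposal density."""
    return math.exp(-y)

def half_normal():
    while True:
        y = -math.log(1.0 - random.random())   # step 1: Y ~ EXP(1)
        u = random.random()                    # step 2: U ~ U(0,1)
        if u * C * g(y) <= f(y):               # step 3: accept or retry
            return y

random.seed(2)
sample = [half_normal() for _ in range(2000)]
# E|Z| = sqrt(2/pi), about 0.798
```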


VII.5 Simulation of random variables: Examples

Uniform U(a,b)

Generator: Inverse transform method

Set X = a + (b − a)·U

Exponential EXP(λ)

Generator: Inverse transform method

Set X = −(1/λ)·ln(1 − U)

Normal N(µ,σ)

An approximate (and too slow) method is to set

X = µ + σ·( U1 + U2 + … + U12 − 6 )

where Uk , k = 1,…,12 are independent U(0,1) random variables. A simple and good method is the Box-Muller method arising from a polar transformation: If U1 and U2 are independent U(0,1) random variables, then

X1 = sqrt(−2·ln(U1))·cos(2π·U2)
X2 = sqrt(−2·ln(U1))·sin(2π·U2)

are independent N(0,1) random variables. An efficient version of this transformation is given by the following acceptance-rejection algorithm.


Generator: Polar method of the Box-Muller transform (acceptance-rejection)

1. Set V1 = 2U1 − 1, V2 = 2U2 − 1, W = V1^2 + V2^2
2. If W < 1, set Y = sqrt(−2·ln(W)/W) and deliver X1 = µ + σ·V1·Y, X2 = µ + σ·V2·Y
3. Otherwise go back to step 1.
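The algorithm above can be sketched as:

```python
import math, random

def normal_pair(mu, sigma):
    """Polar (acceptance-rejection) version of the Box-Muller transform:
    returns two independent N(mu, sigma) values."""
    while True:
        v1 = 2.0 * random.random() - 1.0
        v2 = 2.0 * random.random() - 1.0
        w = v1 * v1 + v2 * v2
        if 0.0 < w < 1.0:                 # accept points inside the unit disc
            y = math.sqrt(-2.0 * math.log(w) / w)
            return mu + sigma * v1 * y, mu + sigma * v2 * y

random.seed(3)
values = [v for _ in range(2000) for v in normal_pair(0.0, 1.0)]
```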

Bernoulli B(1,p) and Binomial B(N,p)

As seen before a Bernoulli random variable with parameter p , B(1,p), can be generated from a U(0,1) random variable by the following algorithm:

If U < p , then deliver 1, otherwise deliver 0.

For generating a binomial random variable with parameters N and p we use the fact that it is the sum of N independent B(1,p) random variables:

1. Set X = 0

2. For(i = 1 to N)

{Set X = X +B(1,p)}

3. Deliver X

Geometric GEOM(1,p) and negative Binomial GEOM(N,p)

A geometric random variable gives the time until the first success in a sequence of Bernoulli trials. A negative binomial random variable can be written as a sum of independent geometric random variables.

Generator GEOM(1,p):

1. Set a = 1/ln(1-p)

2. Deliver X = 1 + floor[a⋅ln(U)]
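A sketch of the GEOM(1,p) generator; note that a = 1/ln(1 − p) is negative, so a·ln(U) ≥ 0:

```python
import math, random

def geom1(p, u):
    """GEOM(1,p) by the inverse transform: X = 1 + floor(ln(u)/ln(1-p)),
    for u in (0, 1]."""
    return 1 + math.floor(math.log(u) / math.log(1.0 - p))

random.seed(4)
# Use 1 - random.random() so u lies in (0, 1], avoiding ln(0)
sample = [geom1(0.5, 1.0 - random.random()) for _ in range(5000)]
# E[X] = 1/p = 2 for p = 0.5
```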


Generator GEOM(N,p):

1. Set X = 0

2. For(i = 1 to N)

{Set X = X +GEOM(1,p)}

3. Deliver X

Poisson POISSON(λ)

The method we use is based on the following fact: if events occur randomly in time at a certain rate r and X is the number of events that occur in a time period t, then X is a Poisson random variable with parameter λ = r·t. Therefore we set

X = min{ n ≥ 1 : U1·U2·…·Un < e^−λ } − 1

or equivalently

X = max{ n ≥ 0 : U1·U2·…·Un ≥ e^−λ }, where the empty product (n = 0) is taken to be 1.

The following algorithm describes the corresponding generator:

1. Set a = exp(−λ), r = 1, X = −1
2. While r > a: set r = r·U (a fresh U each time) and X = X + 1
3. Deliver X
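The algorithm translates directly:

```python
import math, random

def poisson(lam):
    """Multiply fresh uniforms until the product drops below exp(-lam);
    the counter x then holds a POISSON(lam) value."""
    a = math.exp(-lam)
    r, x = 1.0, -1
    while r > a:
        r *= random.random()
        x += 1
    return x

random.seed(5)
sample = [poisson(3.0) for _ in range(5000)]
# E[X] = Var[X] = lam = 3
```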


VII.6 Monte-Carlo methods

In Monte Carlo simulations we use experiments with random numbers to evaluate mathematical expressions. Computing integrals is the most common application of such methods. Suppose we want to evaluate the definite integral

θ = ∫0^1 f(x) dx.

The probabilistic interpretation of this integral is as follows: Let X be a U(0,1) random variable. Then

θ = E[ f(X) ]

and the problem of evaluating the integral becomes a familiar statistical problem of estimating the mean E[f (X)]. An unbiased estimator of θ is given by

θ̂ = (1/n) · Σ k=1,…,n f(Xk).

Its variance is

Var[θ̂] = (1/n^2) · Σ k=1,…,n ∫0^1 ( f(x) − θ )^2 dx = (1/n) · ∫0^1 ( f(x) − θ )^2 dx.

The unknown variance can be estimated by the empirical variance. An objective is to reduce the variance of the estimators. This can be achieved by introducing random vectors with negative correlations. Such techniques are called variance reduction.

Example (Antithetic variables):

In view of the property

∫0^1 f(x) dx = ∫0^1 f(1 − x) dx

we can take the unbiased estimator

θ̂ = (1/n) · Σ k=1,…,n ( f(Xk) + f(1 − Xk) ) / 2

Its variance is

Var[θ̂] = (1/n) · ∫0^1 ( (f(x) + f(1 − x))/2 − θ )^2 dx
        = (1/n) · ∫0^1 ( f(x) − θ )^2 dx − (1/n) · ∫0^1 ( (f(x) − f(1 − x))/2 )^2 dx,

which is smaller than the variance of the simple estimator.
