
The square root of the variance, $\sigma$, is called the standard deviation of the random variable $X$. It is easily shown, in terms of the definitions, that

$$\sigma^2 = E(X - \mu)^2 = E(X^2) - \mu^2;$$

that is, the variance can be written as the second moment about the origin minus the square of the mean.

It has already been shown that if $Z = g(X) = CX$, then $E(CX) = CE(X) = C\mu$, where $C$ is any constant and $\mu$ is $E(X)$. The variance of the random variable $Z = g(X) = CX$ is also easily obtained. By definition, if $X$ is a continuous random variable, the variance of $Z$ is given by

$$E\{[Z - E(Z)]^2\} = E\{[CX - CE(X)]^2\} = \int_{-\infty}^{\infty} (Cy - C\mu)^2 f_X(y)\,dy = C^2 \int_{-\infty}^{\infty} (y - \mu)^2 f_X(y)\,dy = C^2 \sigma^2.$$

Thus, the variance of a constant times a random variable is just the square of the constant times the variance of the random variable. This is also true for discrete random variables.

Finally, the variance of a constant is easily seen to be zero.

It has already been shown that if the demand for a product takes on the values 0, 1, 2, . . . , 99, each with probability 1/100, then $E(X) = 49.5$. Similarly,

$$\sigma^2 = \sum_{k=0}^{99} (k - \mu)^2 P_X(k) = \sum_{k=0}^{99} k^2 P_X(k) - \mu^2 = \sum_{k=0}^{99} \frac{k^2}{100} - (49.5)^2 = 833.25.$$
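Both results are easy to check numerically. The following is a minimal Python sketch (not part of the original text) that recomputes the mean and variance of the demand example via $E(X^2) - \mu^2$ and also confirms the constant-multiple rule $C^2\sigma^2$ derived above; the helper name `mean_var` is ours.

```python
# Sketch (not from the text): verify the mean and variance of the
# uniform demand example, and the constant-multiple rule for variance.
def mean_var(pmf):
    """Mean and variance of a discrete distribution given as {value: prob}."""
    mu = sum(k * p for k, p in pmf.items())
    var = sum(k**2 * p for k, p in pmf.items()) - mu**2   # E(X^2) - mu^2
    return mu, var

demand = {k: 1 / 100 for k in range(100)}    # P_X(k) = 1/100, k = 0, ..., 99
mu, var = mean_var(demand)
print(mu, var)                                # 49.5 833.25

C = 3                                         # variance of CX is C^2 * var
scaled = {C * k: p for k, p in demand.items()}
print(mean_var(scaled)[1], C**2 * var)        # both 7499.25
```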

Table 24.1 gives the means and variances of the random variables that are often useful in operations research. Note that for some random variables a single moment, the mean, provides a complete characterization of the distribution, e.g., the Poisson random variable.

For some random variables the mean and variance provide a complete characterization of the distribution, e.g., the normal. In fact, if all the moments of a probability distribution are known, this is usually equivalent to specifying the entire distribution.
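As a quick sanity check on the entries of Table 24.1, the following Python sketch (ours, not from the text) recomputes the binomial mean and variance by direct summation of the pmf and compares them with the table's formulas $np$ and $np(1-p)$; the values $n = 20$, $p = 0.3$ are arbitrary.

```python
# Sketch (not from the text): spot-check the Binomial row of Table 24.1
# by summing the pmf directly, using only the standard library.
from math import comb

def binomial_moments(n, p):
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    mean = sum(k * q for k, q in enumerate(pmf))
    var = sum(k**2 * q for k, q in enumerate(pmf)) - mean**2
    return mean, var

n, p = 20, 0.3
print(binomial_moments(n, p))        # ~ (6.0, 4.2)
print(n * p, n * p * (1 - p))        # table's formulas: np, np(1-p)
```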

It was seen that the mean and variance may be sufficient to completely characterize a distribution, e.g., the normal. However, what can be said, in general, about a random variable whose mean $\mu$ and variance $\sigma^2$ are known, but nothing else about the form of the distribution is specified? This can be expressed in terms of Chebyshev's inequality, which states that for any positive number $C$,

$$P\{\mu - C\sigma < X < \mu + C\sigma\} \ge 1 - \frac{1}{C^2},$$

where $X$ is any random variable having mean $\mu$ and variance $\sigma^2$. For example, if $C = 3$, it follows that $P\{\mu - 3\sigma < X < \mu + 3\sigma\} \ge 1 - 1/9 = 0.8889$. However, if $X$ is known to have a normal distribution, then $P\{\mu - 3\sigma < X < \mu + 3\sigma\} = 0.9973$. Note that the Chebyshev inequality gives only a lower bound on the probability (usually a very conservative one), so there is no contradiction here.
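The comparison is easy to reproduce. Below is a small Python sketch (not from the text) that prints the Chebyshev lower bound $1 - 1/C^2$ alongside the exact normal probability, which equals $\operatorname{erf}(C/\sqrt{2})$.

```python
# Sketch (not from the text): compare the Chebyshev lower bound with the
# exact probability when X is normal, using only the standard library.
import math

for C in (2, 3, 4):
    chebyshev = 1 - 1 / C**2                 # bound holds for ANY distribution
    normal = math.erf(C / math.sqrt(2))      # P{mu - C*sigma < X < mu + C*sigma}
    print(f"C={C}: Chebyshev >= {chebyshev:.4f}, normal = {normal:.4f}")
# C=3 gives 0.8889 versus 0.9973, matching the text.
```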

■ 24.9 BIVARIATE PROBABILITY DISTRIBUTION

Thus far the discussion has been concerned with the probability distribution of a single random variable, e.g., the demand for a product during the first month or the demand for a product during the second month. In an experiment that measures the demand during the first 2 months, it may well be important to look at the probability distribution of the vector random variable $(X_1, X_2)$, the demand during the first month and the demand during the second month. Let $E^{b_1, b_2}_{X_1, X_2}$ denote the event that the random variable $X_1$ takes on values less than or equal to $b_1$ and $X_2$ takes on values less than or equal to $b_2$. Then $P\{E^{b_1, b_2}_{X_1, X_2}\}$ denotes the probability of this event.

In the above example of the demand for a product during the first 2 months, suppose that the sample space consists of the set of all possible points $\omega$, where $\omega$ represents a pair of nonnegative integer values $(x_1, x_2)$. Assume that $x_1$ and $x_2$ are bounded by 99. Thus, there are $(100)^2$ points in $\Omega$. Suppose further that each point has associated with it a probability equal to $1/(100)^2$, except for the points $(0, 0)$ and $(99, 99)$. The probability associated with the event $\{0, 0\}$ will be $1.5/(100)^2$, that is, $P\{0, 0\} = 1.5/(100)^2$, and the probability associated with the event $\{99, 99\}$ will be $0.5/(100)^2$; that is, $P\{99, 99\} = 0.5/(100)^2$. Thus, if there is interest in the "bivariate" random variable $(X_1, X_2)$, the demand during the first and second months, respectively, then the event

$$\{X_1 \le 1, X_2 \le 3\}$$

consists of the eight sample points $(x_1, x_2)$ with $x_1 = 0$ or $1$ and $x_2 = 0, 1, 2,$ or $3$. Seven of these points have probability $1/(100)^2$ each, and the point $(0, 0)$ has probability $1.5/(100)^2$, so that $P\{X_1 \le 1, X_2 \le 3\} = 8.5/(100)^2$.

A similar calculation can be made for any value of $b_1$ and $b_2$.

For any given bivariate random variable $(X_1, X_2)$, $P\{X_1 \le b_1, X_2 \le b_2\}$ is denoted by $F_{X_1X_2}(b_1, b_2)$ and is called the joint cumulative distribution function (CDF) of the bivariate random variable $(X_1, X_2)$; it is defined for all real values of $b_1$ and $b_2$. Where there is no ambiguity the joint CDF may be denoted by $F(b_1, b_2)$. Thus, attached to every bivariate random variable is a joint CDF. This is not an arbitrary function but is induced by the probabilities associated with events defined over the sample space, namely, $\{\omega; X_1(\omega) \le b_1, X_2(\omega) \le b_2\}$.

The joint CDF of a bivariate random variable is a numerically valued function, defined for all $b_1, b_2$ such that $-\infty < b_1, b_2 < \infty$, having the following properties:

1. $F_{X_1X_2}(b_1, \infty) = P\{X_1 \le b_1, X_2 < \infty\} = P\{X_1 \le b_1\} = F_{X_1}(b_1)$, where $F_{X_1}(b_1)$ is just the CDF of the univariate random variable $X_1$.

2. $F_{X_1X_2}(\infty, b_2) = P\{X_1 < \infty, X_2 \le b_2\} = P\{X_2 \le b_2\} = F_{X_2}(b_2)$, where $F_{X_2}(b_2)$ is just the CDF of the univariate random variable $X_2$.

TABLE 24.1 Table of common distributions

| Distribution of random variable $X$ | Form | Parameters | Expected value | Variance | Range of random variable |
|---|---|---|---|---|---|
| Binomial | $P_X(k) = \dfrac{n!}{k!(n-k)!}\, p^k (1-p)^{n-k}$ | $n, p$ | $np$ | $np(1-p)$ | $0, 1, 2, \ldots, n$ |
| Poisson | $P_X(k) = \dfrac{\lambda^k e^{-\lambda}}{k!}$ | $\lambda$ | $\lambda$ | $\lambda$ | $0, 1, 2, \ldots$ |
| Geometric | $P_X(k) = p(1-p)^{k-1}$ | $p$ | $\dfrac{1}{p}$ | $\dfrac{1-p}{p^2}$ | $1, 2, \ldots$ |
| Exponential | $f_X(y) = \dfrac{1}{\beta}\, e^{-y/\beta}$ | $\beta$ | $\beta$ | $\beta^2$ | $(0, \infty)$ |
| Gamma | $f_X(y) = \dfrac{1}{\Gamma(\alpha)\beta^{\alpha}}\, y^{\alpha-1} e^{-y/\beta}$ | $\alpha, \beta$ | $\alpha\beta$ | $\alpha\beta^2$ | $(0, \infty)$ |
| Beta | $f_X(y) = \dfrac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, y^{\alpha-1}(1-y)^{\beta-1}$ | $\alpha, \beta$ | $\dfrac{\alpha}{\alpha+\beta}$ | $\dfrac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$ | $(0, 1)$ |
| Normal | $f_X(y) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(y-\mu)^2/2\sigma^2}$ | $\mu, \sigma^2$ | $\mu$ | $\sigma^2$ | $(-\infty, \infty)$ |
| Student's $t$ | $f_X(y) = \dfrac{\Gamma([\nu+1]/2)}{\sqrt{\nu\pi}\,\Gamma(\nu/2)}\, (1 + y^2/\nu)^{-(\nu+1)/2}$ | $\nu$ | $0$ (for $\nu > 1$) | $\nu/(\nu-2)$ (for $\nu > 2$) | $(-\infty, \infty)$ |
| Chi-square | $f_X(y) = \dfrac{1}{2^{\nu/2}\,\Gamma(\nu/2)}\, y^{(\nu-2)/2} e^{-y/2}$ | $\nu$ | $\nu$ | $2\nu$ | $(0, \infty)$ |
| F | $f_X(y) = \dfrac{\Gamma([\nu_1+\nu_2]/2)\, \nu_1^{\nu_1/2} \nu_2^{\nu_2/2}}{\Gamma(\nu_1/2)\,\Gamma(\nu_2/2)} \dfrac{y^{(\nu_1/2)-1}}{(\nu_2 + \nu_1 y)^{(\nu_1+\nu_2)/2}}$ | $\nu_1, \nu_2$ | $\dfrac{\nu_2}{\nu_2 - 2}$ (for $\nu_2 > 2$) | $\dfrac{2\nu_2^2(\nu_1 + \nu_2 - 2)}{\nu_1(\nu_2 - 2)^2(\nu_2 - 4)}$ (for $\nu_2 > 4$) | $(0, \infty)$ |

3. $F_{X_1X_2}(b_1, -\infty) = P\{X_1 \le b_1, X_2 \le -\infty\} = 0$, and $F_{X_1X_2}(-\infty, b_2) = P\{X_1 \le -\infty, X_2 \le b_2\} = 0$.

4. $F_{X_1X_2}(b_1 + \delta_1, b_2 + \delta_2) - F_{X_1X_2}(b_1 + \delta_1, b_2) - F_{X_1X_2}(b_1, b_2 + \delta_2) + F_{X_1X_2}(b_1, b_2) \ge 0$, for every $\delta_1, \delta_2 \ge 0$ and all $b_1, b_2$.

Using the definition of the event $E^{b_1, b_2}_{X_1, X_2}$, events of the form $\{a_1 < X_1 \le b_1, a_2 < X_2 \le b_2\}$ can be described as the set of outcomes $\omega$ in the sample space such that the bivariate random variable $(X_1, X_2)$ takes on values such that $X_1$ is greater than $a_1$ but does not exceed $b_1$ and $X_2$ is greater than $a_2$ but does not exceed $b_2$. $P\{a_1 < X_1 \le b_1, a_2 < X_2 \le b_2\}$ can easily be seen to be

$$F_{X_1X_2}(b_1, b_2) - F_{X_1X_2}(b_1, a_2) - F_{X_1X_2}(a_1, b_2) + F_{X_1X_2}(a_1, a_2).$$
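For a concrete check, the sketch below (ours, not from the text) applies this rectangle identity to an assumed joint CDF $F(b_1, b_2) = (1 - e^{-b_1})(1 - e^{-b_2})$ for $b_1, b_2 \ge 0$ (independent unit exponentials) and compares it with the probability computed directly.

```python
# Sketch (not from the text): check the rectangle identity on an assumed
# joint CDF F(b1, b2) = (1 - exp(-b1)) * (1 - exp(-b2)), b1, b2 >= 0.
import math

def F(b1, b2):
    return (1 - math.exp(-b1)) * (1 - math.exp(-b2))

a1, b1, a2, b2 = 0.5, 2.0, 1.0, 3.0
rect = F(b1, b2) - F(b1, a2) - F(a1, b2) + F(a1, a2)
direct = (math.exp(-a1) - math.exp(-b1)) * (math.exp(-a2) - math.exp(-b2))
print(rect, direct)    # equal: ~0.1499
```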

It was noted that single random variables are generally characterized as discrete or continuous random variables. A bivariate random variable can be characterized in a similar manner. A bivariate random variable $(X_1, X_2)$ is called a discrete bivariate random variable if both $X_1$ and $X_2$ are discrete random variables. Similarly, a bivariate random variable $(X_1, X_2)$ is called a continuous bivariate random variable if both $X_1$ and $X_2$ are continuous random variables. Of course, bivariate random variables that are neither discrete nor continuous can exist, but these will not be important in this book.

The joint CDF for a discrete bivariate random variable is given by

$$F_{X_1X_2}(b_1, b_2) = P\{X_1(\omega) \le b_1, X_2(\omega) \le b_2\} = \sum_{\text{all } k \le b_1} \; \sum_{\text{all } l \le b_2} P\{X_1(\omega) = k, X_2(\omega) = l\} = \sum_{\text{all } k \le b_1} \; \sum_{\text{all } l \le b_2} P_{X_1X_2}(k, l),$$

where $\{X_1(\omega) = k, X_2(\omega) = l\}$ is the set of outcomes $\omega$ in the sample space such that the random variable $X_1$ takes on the value $k$ and the random variable $X_2$ takes on the value $l$; and $P\{X_1(\omega) = k, X_2(\omega) = l\} = P_{X_1X_2}(k, l)$ denotes the probability of this event. The $P_{X_1X_2}(k, l)$ are called the joint probability distribution of the discrete bivariate random variable $(X_1, X_2)$. Thus, in the example considered at the beginning of this section,

$P_{X_1X_2}(k, l) = 1/(100)^2$ for all $k, l$ that are integers between 0 and 99, except for $P_{X_1X_2}(0, 0) = 1.5/(100)^2$ and $P_{X_1X_2}(99, 99) = 0.5/(100)^2$.
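This joint distribution is small enough to enumerate. The Python sketch below (not from the text) builds the pmf, checks that it sums to 1, and recovers $P\{X_1 \le 1, X_2 \le 3\} = 8.5/(100)^2$ from the joint CDF definition; `joint_cdf` is our helper name.

```python
# Sketch (not from the text): build the joint pmf of the two-month demand
# example and recover P{X1 <= 1, X2 <= 3} from the joint CDF definition.
pmf = {(k, l): 1 / 100**2 for k in range(100) for l in range(100)}
pmf[(0, 0)] = 1.5 / 100**2
pmf[(99, 99)] = 0.5 / 100**2
assert abs(sum(pmf.values()) - 1.0) < 1e-12   # probabilities sum to 1

def joint_cdf(b1, b2):
    """F_{X1 X2}(b1, b2) = sum of pmf over k <= b1, l <= b2."""
    return sum(p for (k, l), p in pmf.items() if k <= b1 and l <= b2)

print(joint_cdf(1, 3))            # 8.5/(100)^2 = 0.00085
```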

For a continuous bivariate random variable, the joint CDF $F_{X_1X_2}(b_1, b_2)$ can usually be written as

$$F_{X_1X_2}(b_1, b_2) = P\{X_1(\omega) \le b_1, X_2(\omega) \le b_2\} = \int_{-\infty}^{b_1} \int_{-\infty}^{b_2} f_{X_1X_2}(s, t)\, ds\, dt,$$

where $f_{X_1X_2}(s, t)$ is known as the joint density function of the bivariate random variable $(X_1, X_2)$. A knowledge of the joint density function enables one to calculate all sorts of probabilities; for example,

$$P\{a_1 \le X_1 \le b_1, a_2 \le X_2 \le b_2\} = \int_{a_1}^{b_1} \int_{a_2}^{b_2} f_{X_1X_2}(s, t)\, ds\, dt.$$

Finally, if the density function is known, it is said that the probability distribution of the random variable is determined.
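To make the double integral concrete, the sketch below (ours, not from the text) assumes the same joint density $f(s, t) = e^{-s-t}$ on $(0, \infty)^2$ used earlier and evaluates a rectangle probability by a midpoint Riemann sum, comparing it with the closed form $(e^{-a_1} - e^{-b_1})(e^{-a_2} - e^{-b_2})$.

```python
# Sketch (not from the text): evaluate the rectangle probability for an
# assumed joint density f(s, t) = exp(-s - t), s, t > 0, by a midpoint
# Riemann sum, and compare with the closed form.
import math

def rect_prob(a1, b1, a2, b2, n=400):
    ds, dt = (b1 - a1) / n, (b2 - a2) / n
    total = 0.0
    for i in range(n):
        s = a1 + (i + 0.5) * ds
        for j in range(n):
            t = a2 + (j + 0.5) * dt
            total += math.exp(-s - t) * ds * dt
    return total

a1, b1, a2, b2 = 0.5, 2.0, 1.0, 3.0
exact = (math.exp(-a1) - math.exp(-b1)) * (math.exp(-a2) - math.exp(-b2))
print(rect_prob(a1, b1, a2, b2), exact)   # both ~ 0.1499
```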

The joint density function can be viewed as a surface in three dimensions, where the volume under this surface over a region in the $s, t$ plane corresponds to a probability. Naturally, the density function can be obtained from the CDF by using the relation

$$f_{X_1X_2}(s, t) = \frac{\partial^2 F_{X_1X_2}(s, t)}{\partial s\, \partial t}.$$
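For the assumed exponential example above, this relation can be verified symbolically; the following sketch uses sympy (our choice of tool, not the text's) to differentiate the factored CDF and recover the density.

```python
# Sketch (not from the text): verify f = d^2 F / (ds dt) for the assumed
# joint CDF F(s, t) = (1 - exp(-s)) * (1 - exp(-t)), s, t > 0.
import sympy as sp

s, t = sp.symbols("s t", positive=True)
F = (1 - sp.exp(-s)) * (1 - sp.exp(-t))    # joint CDF on s, t > 0
f = sp.diff(F, s, t)                        # second mixed partial derivative
print(sp.simplify(f))                       # exp(-s - t), the assumed density
```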
