An analytic model for request-based job shops

Equation 10 -- Variances and Covariances of Jobs at All Stations

(Note: The derivation of Equation 10 assumes that the above matrix equation for S_t calculates the covariance between pairs of stations correctly. Otherwise, we would not be able to recurse on the equation for S_t to find the steady-state variances and covariances. Section 5.10 proves that the equation for S_t calculates the covariance terms correctly.)

But now, recall that we want to find the variances and covariances in terms of instructions at each station. We note that P_{j,t} is a random sum of instructions, P_{j,t} = Σ_{i=1}^{N_{j,t}} X_i. Then we apply the formula for the variance of a random sum (Lemma 2) to find:

(93)  Var[P_j] = E[N_j] Var[X_j] + (E[X_j])² S_{jj}.

Further, we found in Section 5.3 that Cov[N_{k,t-1}, N_{l,t-1}] = Cov[P_{k,t-1}, P_{l,t-1}] / (E[X_k] E[X_l]). Then we can write:

(94)  Cov[P_{k,t-1}, P_{l,t-1}] = E[X_k] E[X_l] S_{kl}.

These are formulas for the steady-state variances and covariances of all of the station workloads, as desired. ∎

5.7 Mixed Network Mechanics

In a mixed network, some stations operate according to the instruction-control rule, and other stations operate according to the job-control rule.

In general, we can apply the recursion equations we found in the previous sections. We use the instruction-control recursion equation at stations that operate under the instruction-control rule, and the job-control recursion equation at stations that operate under the job-control rule. Then, we write all of the recursion equations simultaneously in matrix form (whether they are instruction-rule or job-rule equations), and find a recursive relationship that tells us the expectation and variance of demand at all stations.

However, to do this we must make some adjustments to the recursion equations. In a mixed model, we can have job-rule stations feed into instruction-rule stations, and vice versa. But the analytic model equations for job-rule stations maintain all statistics in terms of jobs, while the equations for instruction-rule stations maintain all statistics in terms of instructions. We will need to convert statistics in terms of jobs (E[N_{k,t-1}], Var[N_{k,t-1}]) to statistics in terms of instructions (E[P_{k,t-1}], Var[P_{k,t-1}]), and vice versa. Fortunately, this is simple to do. We make the following substitutions:

* When a model equation calls for E[N_{k,t-1}]: we use E[N_{k,t-1}] directly if station k uses the job-control rule, since the model equations track this value under the job-control rule. If station k uses the instruction-control rule, the model equations track E[P_{k,t-1}] instead, and we use the formula for E[N_{k,t-1}] in terms of E[P_{k,t-1}], which is:

E[N_{k,t-1}] = E[P_{k,t-1}] / E[X_k]


* When a model equation calls for Var[N_{k,t-1}]: we use Var[N_{k,t-1}] directly if station k uses the job-control rule. Otherwise, we use either the lower-bound or the upper-bound formula for Var[N_{k,t-1}] in terms of Var[P_{k,t-1}]. The formulas are:

Lower bound:  Var[N_{k,t-1}] ≥ Var[P_{k,t-1}] / (E[X_k])² − E[P_{k,t-1}] Var[X_k] / (L_k (E[X_k])³)

Upper bound:  Var[N_{k,t-1}] ≤ (1 − 1/L_k) E[P_{k,t-1}] / E[X_k] + Var[P_{k,t-1}] / (E[X_k])² − E[P_{k,t-1}] Var[X_k] / (L_k (E[X_k])³)

(These equations were derived in Section 5.3.)

* When a model equation calls for Cov[N_{k,t-1}, N_{l,t-1}], we make the following substitutions:
  - If stations k and l both use job control, we use Cov[N_{k,t-1}, N_{l,t-1}] directly.
  - If station k uses job control, and station l uses instruction control, we make the substitution Cov[N_{k,t-1}, N_{l,t-1}] = Cov[N_{k,t-1}, P_{l,t-1}] / E[X_l].
  - If stations k and l both use instruction control, we make the substitution Cov[N_{k,t-1}, N_{l,t-1}] = Cov[P_{k,t-1}, P_{l,t-1}] / (E[X_k] E[X_l]).

(These equations were derived in Section 5.3.)
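To make these conversions concrete, here is a minimal Python sketch. All numeric inputs are hypothetical placeholders, and the two helper functions simply implement the substitution formulas and bounds quoted above; this is an illustration, not part of the model derivation.

```python
# A minimal sketch of the job/instruction conversions above. Hypothetical inputs:
# E_X, Var_X = mean and variance of instructions per job (X_k); L = lead time L_k;
# E_P, Var_P = mean and variance of instructions produced per period.

def jobs_from_instructions(E_P, Var_P, E_X, Var_X, L):
    """Convert instruction statistics to job statistics at one station.

    Returns E[N] and the (lower, upper) bounds on Var[N] from Section 5.3.
    """
    E_N = E_P / E_X
    lower = Var_P / E_X**2 - E_P * Var_X / (L * E_X**3)
    upper = (1 - 1 / L) * E_P / E_X + Var_P / E_X**2 - E_P * Var_X / (L * E_X**3)
    return E_N, lower, upper

def instructions_from_jobs(E_N, Var_N, E_X, Var_X):
    """Convert job statistics to instruction statistics (random-sum formulas)."""
    return E_N * E_X, E_N * Var_X + Var_N * E_X**2

# Example: about 4 jobs/period of ~50 instructions each, lead time 4.
print(jobs_from_instructions(E_P=200.0, Var_P=900.0, E_X=50.0, Var_X=400.0, L=4))
print(instructions_from_jobs(E_N=4.0, Var_N=1.5, E_X=50.0, Var_X=400.0))
```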

We then recalculate the recursion equations for each station, making these substitutions. Section 5.8 gives the results of these calculations for the expectation equations, and Section 5.9 gives the results of these calculations for the variance equations.

5.8 Mixed Network Expectations

5.8.1 Instruction-Rule Stations

Making the substitutions given in the previous section, we can derive the following recursion equation for instruction-rule stations:

(95)
Equation 11 -- Recursion Equation for the Expectation of an Instruction-Rule Station

E[P_{j,t}] = (1 − 1/L_j) E[P_{j,t-1}] + (1/L_j) Σ_{k∈K} Φ_{jk} ρ_{k,t-1} + (1/L_j) μ_j

where

ρ_{k,t-1} = E[N_{k,t-1}], if k uses job control;
ρ_{k,t-1} = E[P_{k,t-1}], if k uses instruction control;
Φ_{jk} = E[J_{jk}] E[X_j], if k uses job control;
Φ_{jk} = E[J_{jk}] E[X_j] / E[X_k], if k uses instruction control; and
μ_j = E[R_j] E[J_{jR}] E[X_j].


5.8.2 Job-Rule Stations

Making the substitutions given in the previous section, we eventually derive the following recursion equation for job-rule stations:

(96)
Equation 12 -- Recursion Equation for the Expectation of a Job-Rule Station

E[N_{j,t}] = (1 − 1/L_j) E[N_{j,t-1}] + (1/L_j) Σ_{k∈K} Φ_{jk} ρ_{k,t-1} + (1/L_j) μ_j

where

ρ_{k,t-1} = E[N_{k,t-1}], if k uses job control;
ρ_{k,t-1} = E[P_{k,t-1}], if k uses instruction control;
Φ_{jk} = E[J_{jk}], if k uses job control;
Φ_{jk} = E[J_{jk}] / E[X_k], if k uses instruction control; and
μ_j = E[R_j] E[J_{jR}].

5.8.3 The Matrix Equation for Expectations

Define the following vectors and matrices:

* I is the identity matrix.
* D is a diagonal matrix with the reciprocals of the lead times, 1/L_j, on the diagonal.
* Φ is a matrix whose (j,k) entry is given by the formulas in the recursion equations above.
* μ is a vector whose jth entry is given by the formulas in the recursion equations above.
* p_t is a vector whose jth entry is E[N_{j,t}] if station j uses the job-control rule, and E[P_{j,t}] if station j uses the instruction-control rule.

Then we can rewrite all of the recursion equations (whether instruction-rule or job-rule) simultaneously in matrix form:

(97)  p_t = (I − D) p_{t-1} + DΦ p_{t-1} + Dμ
          = (I − D + DΦ) p_{t-1} + Dμ        (call B = I − D + DΦ)
          = B p_{t-1} + Dμ.

The proof of this matrix equation is term-by-term multiplication. Infinitely iterating the above equation, and applying linear algebra theory, we find:

(98)  p = (I − B)⁻¹ Dμ.

Equation 13 -- Results Vector Used to Calculate Expected Demands

But then, we have a results vector p whose jth entry is E[N_j] if station j uses the job-control rule, and E[P_j] if station j uses the instruction-control rule. In the former case, we can calculate E[P_j] by using the relationship E[P_j] = E[N_j] E[X_j]. This gives us a method to calculate the expectation of demand at all stations in the mixed-rule model, as desired.
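As an illustration of Equation 13, the following numpy sketch computes the steady-state expectation vector for a hypothetical three-station mixed network. The values of Phi, lead_times, and mu are invented placeholders, not data from the text; only the linear algebra is taken from the derivation above.

```python
import numpy as np

# A minimal sketch of Equation 13 for a hypothetical 3-station mixed network.
Phi = np.array([[0.0, 0.5, 0.2],
                [0.3, 0.0, 0.4],
                [0.1, 0.2, 0.0]])   # Phi[j, k]: work at j triggered per unit of work at k
lead_times = np.array([2.0, 4.0, 3.0])
mu = np.array([10.0, 0.0, 5.0])     # expected work arriving from outside the network

D = np.diag(1.0 / lead_times)       # D holds the reciprocals 1/L_j
B = np.eye(3) - D + D @ Phi         # B = I - D + D*Phi, as in Equation (97)

# Steady state of p_t = B p_{t-1} + D mu, valid when the spectral radius of B is < 1.
assert max(abs(np.linalg.eigvals(B))) < 1, "recursion does not converge"
p = np.linalg.solve(np.eye(3) - B, D @ mu)   # p = (I - B)^{-1} D mu, Equation (98)
print(p)   # entry j is E[N_j] (job-rule station) or E[P_j] (instruction-rule station)
```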


5.9 Mixed Network Variances

5.9.1 Instruction-Rule Stations

Making the substitutions discussed in Section 5.7, we can derive the following recursion equation for an estimate of the variance of an instruction-rule station:

(99)
Equation 14 -- Recursion Equation for the Variance of an Instruction-Rule Station

Var[P_{j,t}] = (1 − 1/L_j)² Var[P_{j,t-1}]
    + (1/L_j)² [ Σ_{k∈K} ( Φ_{jk}² (S_{kk,t-1} + u_k) + σ²_{jk,t} ) + σ²_{jR} + Σ_{k∈K} Σ_{l∈K, l≠k} Φ_{jk} Φ_{jl} S_{kl,t-1} ]
    + 2 (1 − 1/L_j)(1/L_j) [ Σ_{k∈K} Φ_{jk} S_{jk,t-1} + Φ_{jj} u_j ]

In this equation:

* Φ_{jk} = E[J_{jk}] E[X_j] / E[X_k] if station k uses instruction control.
* Φ_{jk} = E[J_{jk}] E[X_j] if station k uses job control.
* σ²_{jk,t} = (E[P_k] / E[X_k]) (E[J_{jk}] Var[X_j] + Var[J_{jk}] (E[X_j])²).
* σ²_{jR} = E[R_j] (E[J_{jR}] Var[X_j] + Var[J_{jR}] (E[X_j])²) + Var[R_j] (E[J_{jR}] E[X_j])².
* S_{kk,t-1} = Var[P_{k,t-1}] if station k uses instruction control.
* S_{kk,t-1} = Var[N_{k,t-1}] if station k uses job control.
* S_{kl,t-1} = Cov(N_{k,t-1}, N_{l,t-1}) if stations k and l both use job control.
* S_{kl,t-1} = Cov(P_{k,t-1}, N_{l,t-1}) if station k uses instruction control and station l uses job control.
* S_{kl,t-1} = Cov(P_{k,t-1}, P_{l,t-1}) if stations k and l both use instruction control.
* u_k = − E[P_k] Var[X_k] / (L_k E[X_k]) if station k uses instruction control with the lower-bound approximation.
* u_k = (1 − 1/L_k) E[P_k] E[X_k] − E[P_k] Var[X_k] / (L_k E[X_k]) if station k uses instruction control with the upper-bound approximation.
* u_k = 0 if station k uses job control.

5.9.2 Job-Rule Stations

Making the substitutions discussed in Section 5.7, we can derive the following recursion equation for an estimate of the variance of the number of jobs produced at a job-rule station:

(100)
Equation 15 -- Recursion Equation for the Variance of a Job-Rule Station

Var[N_{j,t}] = (1 − 1/L_j)² Var[N_{j,t-1}]
    + (1/L_j)² [ Σ_{k∈K} ( Φ_{jk}² (S_{kk,t-1} + u_k) + σ²_{jk,t} ) + σ²_{jR} + Σ_{k∈K} Σ_{l∈K, l≠k} Φ_{jk} Φ_{jl} S_{kl,t-1} ]
    + 2 (1 − 1/L_j)(1/L_j) [ Σ_{k∈K} Φ_{jk} S_{jk,t-1} + Φ_{jj} u_j ]

In this equation:

* Φ_{jk} = E[J_{jk}] / E[X_k] if station k uses instruction control.
* Φ_{jk} = E[J_{jk}] if station k uses job control.
* σ²_{jk,t} = E[P_k] Var[J_{jk}] / E[X_k].
* σ²_{jR} = E[R_j] Var[J_{jR}] + Var[R_j] (E[J_{jR}])².
* S_{kk,t-1} = Var[P_{k,t-1}] if station k uses instruction control.
* S_{kk,t-1} = Var[N_{k,t-1}] if station k uses job control.
* S_{kl,t-1} = Cov(N_{k,t-1}, N_{l,t-1}) if stations k and l both use job control.
* S_{kl,t-1} = Cov(P_{k,t-1}, N_{l,t-1}) if station k uses instruction control and station l uses job control.
* S_{kl,t-1} = Cov(P_{k,t-1}, P_{l,t-1}) if stations k and l both use instruction control.
* u_k = − E[P_k] Var[X_k] / (L_k E[X_k]) if station k uses instruction control with the lower-bound approximation.
* u_k = (1 − 1/L_k) E[P_k] E[X_k] − E[P_k] Var[X_k] / (L_k E[X_k]) if station k uses instruction control with the upper-bound approximation.
* u_k = 0 if station k uses job control.

5.9.3 The Matrix Equation for Variances

Define the following vectors and matrices:

* S_t is a square matrix whose S_{jj,t} and S_{jk,t} entries are given by the recursion equations above.
* I is the identity matrix.
* D is a diagonal matrix with the reciprocals of the lead times, 1/L_j, on the diagonal.
* Φ is a matrix whose (j,k) entries are given by the formulas in the recursion equations above.
* U is a diagonal matrix with the u_k's defined above on the diagonal.
* Σ is a diagonal matrix with diagonal elements Σ_{jj} = σ²_{jR} + Σ_{k∈K} σ²_{jk,t}.
* B is a square matrix that equals I − D + DΦ.

Then we can rewrite all of the recursion equations in matrix form simultaneously in the following form:

(101)  S_t = B S_{t-1} B′ + (DΦ) U (DΦ)′ + D Σ D.

This equation may be checked by term-by-term multiplication. Infinitely iterating this recursion, we find:

(102)  S = Σ_{s=0}^{∞} B^s ( (DΦ) U (DΦ)′ + D Σ D ) (B′)^s.

Equation 16 -- Results Matrix Used to Estimate Network Covariances

But then, S is a results matrix whose entries are the following estimates:

* S_{jj} = Var[P_j] if station j uses instruction control.
* S_{jj} = Var[N_j] if station j uses job control. To find Var[P_j] in this case, we use the relation Var[P_j] = E[N_j] Var[X_j] + Var[N_j] (E[X_j])².
* S_{jk} = Cov(P_j, P_k) if stations j and k both use instruction control.
* S_{jk} = Cov(N_j, P_k) if station j uses job control and station k uses instruction control. To find Cov(P_j, P_k) in this case, we use the relation Cov(P_j, P_k) = E[X_j] Cov(N_j, P_k).
* S_{jk} = Cov(N_j, N_k) if stations j and k both use job control. To find Cov(P_j, P_k) in this case, we use the relation Cov(P_j, P_k) = E[X_j] E[X_k] Cov(N_j, N_k).

These are estimates of the steady-state variances and covariances of demand for all of the stations, as desired. These estimates do assume that the above matrix equation for S_t calculates the covariance between pairs of stations correctly. Otherwise, we would not be able to recurse on the equation for S_t to find the steady-state variances and covariances. The following section proves that the equation for S_t calculates the covariance terms correctly.
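As a sketch of how Equations (101)-(102) might be evaluated in practice, the following numpy fragment iterates the covariance recursion to its fixed point (a discrete Lyapunov equation). All numeric inputs are hypothetical placeholders; Phi, D, and B match the expectation sketch in Section 5.8.3.

```python
import numpy as np

def steady_state_covariance(B, C, tol=1e-12, max_iter=10_000):
    """Fixed point of S = B S B' + C, i.e. the infinite sum in Equation (102)."""
    S = np.zeros_like(C)
    for _ in range(max_iter):
        S_next = B @ S @ B.T + C
        if np.max(np.abs(S_next - S)) < tol:
            return S_next
        S = S_next
    raise RuntimeError("covariance recursion did not converge")

Phi = np.array([[0.0, 0.5, 0.2], [0.3, 0.0, 0.4], [0.1, 0.2, 0.0]])
D = np.diag(1.0 / np.array([2.0, 4.0, 3.0]))
B = np.eye(3) - D + D @ Phi
U = np.diag([0.0, 2.0, 1.0])        # u_k correction terms (0 at job-rule stations)
Sigma = np.diag([4.0, 3.0, 5.0])    # sigma^2 arrival-variance terms per station

C = (D @ Phi) @ U @ (D @ Phi).T + D @ Sigma @ D   # constant term of Equation (101)
S = steady_state_covariance(B, C)
print(S)   # S[j, j] estimates the demand variance at station j; S[j, k] the covariances
```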

5.10 Mixed Network Covariances

This section derives the covariances between pairs of stations, and shows that the formula for the station variances and covariances derived in the previous section correctly calculates the covariances.

5.10.1 Initial Development

Define the following variable: W_{i,t} is a measure of the work produced by station i at time t:

* W_{i,t} = P_{i,t} if station i uses the instruction-control rule, and
* W_{i,t} = N_{i,t} if station i uses the job-control rule.

We will derive a formula for Cov[W_{i,t}, W_{j,t}] in terms of the workstation expectations, variances, and covariances at time t−1.

From Sections 5.1 and 5.4, we know that:

(103)  W_{j,t} = (1 − 1/L_j) W_{j,t-1} + (1/L_j) A_{j,t},

whether W_{j,t} is measured in jobs or instructions. Here, A_{j,t} measures the arrivals to station j at the start of period t, and is measured in instructions if station j uses the instruction-control rule, and in jobs if station j uses the job-control rule.

Then, we have that:

(104)  W_{i,t} + W_{j,t} = (1 − 1/L_i) W_{i,t-1} + (1/L_i) A_{i,t} + (1 − 1/L_j) W_{j,t-1} + (1/L_j) A_{j,t}.

Taking the variance of (W_{i,t} + W_{j,t}), we find:

(105)  Var[W_{i,t} + W_{j,t}] = Var[W_{i,t}] + Var[W_{j,t}] + 2 Cov[W_{i,t}, W_{j,t}]

    = (1 − 1/L_i)² Var[W_{i,t-1}] + (1/L_i)² Var[A_{i,t}] + 2 (1 − 1/L_i)(1/L_i) Cov[W_{i,t-1}, A_{i,t}]
      (we recognize this as Var[W_{i,t}])

    + (1 − 1/L_j)² Var[W_{j,t-1}] + (1/L_j)² Var[A_{j,t}] + 2 (1 − 1/L_j)(1/L_j) Cov[W_{j,t-1}, A_{j,t}]
      (we recognize this as Var[W_{j,t}])

    + 2 (1 − 1/L_i)(1 − 1/L_j) Cov[W_{i,t-1}, W_{j,t-1}] + 2 (1 − 1/L_i)(1/L_j) Cov[W_{i,t-1}, A_{j,t}]
    + 2 (1/L_i)(1 − 1/L_j) Cov[A_{i,t}, W_{j,t-1}] + 2 (1/L_i)(1/L_j) Cov[A_{i,t}, A_{j,t}].

Equating terms, and solving for Cov[W_{i,t}, W_{j,t}], we find:

(106)  Cov[W_{i,t}, W_{j,t}] = (1 − 1/L_i)(1 − 1/L_j) Cov[W_{i,t-1}, W_{j,t-1}] + (1 − 1/L_i)(1/L_j) Cov[W_{i,t-1}, A_{j,t}]
         + (1/L_i)(1 − 1/L_j) Cov[A_{i,t}, W_{j,t-1}] + (1/L_i)(1/L_j) Cov[A_{i,t}, A_{j,t}].

To calculate the terms in this expression, we will need to calculate Cov[A_{i,t}, A_{j,t}] and Cov[W_{i,t-1}, A_{j,t}].

5.10.2 The Covariance of Arrivals

A_{j,t} has the following form:

(107)  A_{j,t} = Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{jkl,t} + Σ_{l=1}^{R_{j,t}} Y_{jRl,t},

where the first sum is the arrivals from other stations and the second is the arrivals from outside the system. Here, the N's are the incoming requests from the other stations, R_{j,t} is the incoming requests from outside the network, and the Y's are independent, identically distributed random variables representing the work per request from each source. (The Y's are the numbers of jobs per request if the station uses the job-control rule. They are summations of instructions if the station uses the instruction-control rule.)

By assumption, there is no dependence between any arrivals from outside the network and any other arrivals. Then, to find Cov[A_{i,t}, A_{j,t}], it is sufficient to find:

(108)  Cov[A_{i,t}, A_{j,t}] = Cov[ Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{ikl,t}, Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{jkl,t} ].

To do so, we will extend Lemma 5 to calculate Var(A_{i,t} + A_{j,t}), and use the resulting formula to calculate Cov[A_{i,t}, A_{j,t}].

First, define T to be:

(109)  T = Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{ikl} + Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{jkl}.

The "Law of Total Variance" (see Lemma 2), tells us that

(110) Var[T] = Var[E(T N)]+ E[Var(7]N)],

where Nis some event. Here, let Nbe the event {Nk t = nk ,Vk} . Now, using the linearity of expectation,

(111) E(TIN) = NkE[Yik ]+ NkE[ik I.

keK k~K

Then, noting that E[Yik] and E[Yjk] are constants, and applying the bilinearity principle of covariance, we find that:

Var[E(TN)] = E[Yik ]E[Yil]Cov[Nk,N]+Y E[ Yjk]E[] [Y]Cov[ k ,Nl]

(112) kEKEK lkEKIEK

+2J E[Yik ]E[Yj]Cov[Nk,N,]. kEK IEK

Next, recalling that the variance of a fixed sum of independent random variables is the sum of their variances, we find:

Nk Nk

Var(TI N) = Var[Yk ] + Z E EVar[ Yk]

(113) keK 1=1 keK 1=1

= NkVar[Yik ] + NkVar[Yik ],

keK keK

and, taking the expectation over N, we find:

(114) E[Var(TI N)] = E[Nk ]Var[Yik ]+ [Nk ]Var[Yjk ],

kEK kEK

We therefore have:

(115)  Var[T] = Var[E(T|N)] + E[Var(T|N)]
     = Σ_{k∈K} Σ_{l∈K} E[Y_{ik}] E[Y_{il}] Cov[N_k, N_l] + Σ_{k∈K} Σ_{l∈K} E[Y_{jk}] E[Y_{jl}] Cov[N_k, N_l]
       + 2 Σ_{k∈K} Σ_{l∈K} E[Y_{ik}] E[Y_{jl}] Cov[N_k, N_l] + Σ_{k∈K} E[N_k] Var[Y_{ik}] + Σ_{k∈K} E[N_k] Var[Y_{jk}].

Now, let us group together the terms of Var[T] as follows:

(116)  Var[T] = ( Σ_{k∈K} Σ_{l∈K} E[Y_{ik}] E[Y_{il}] Cov[N_k, N_l] + Σ_{k∈K} E[N_k] Var[Y_{ik}] )
       + ( Σ_{k∈K} Σ_{l∈K} E[Y_{jk}] E[Y_{jl}] Cov[N_k, N_l] + Σ_{k∈K} E[N_k] Var[Y_{jk}] )
       + 2 Σ_{k∈K} Σ_{l∈K} E[Y_{ik}] E[Y_{jl}] Cov[N_k, N_l],

which we recognize to be:

(117)  Var[T] = Var[ Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{ikl,t} ] + Var[ Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{jkl,t} ] + 2 Σ_{k∈K} Σ_{l∈K} E[Y_{ik}] E[Y_{jl}] Cov[N_k, N_l].

Now, by the definition of covariance, we know:

(118)  Var[T] = Var[ Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{ikl,t} ] + Var[ Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{jkl,t} ]
       + 2 Cov[ Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{ikl,t}, Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{jkl,t} ].

So, equating terms, we solve for Cov[A_{i,t}, A_{j,t}]:

(119)  Cov[A_{i,t}, A_{j,t}] = Cov[ Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{ikl,t}, Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{jkl,t} ] = Σ_{k∈K} Σ_{l∈K} E[Y_{ik}] E[Y_{jl}] Cov[N_{k,t-1}, N_{l,t-1}].

Now, this expression for Cov[A_{i,t}, A_{j,t}] is in terms of E[Y_{ik}] and Cov[N_{k,t-1}, N_{l,t-1}]. In Section 5.8, we found that we replace the E[Y_{ik}] terms with:

* E[J_{ik}] E[X_i] if station i uses the instruction-control rule.
* E[J_{ik}] if station i uses the job-control rule.

Next, in Section 5.9, we found that, for k ≠ l, we can replace the Cov[N_{k,t-1}, N_{l,t-1}] term with:

* Cov[N_{k,t-1}, N_{l,t-1}] if stations k and l both use the job-control rule;
* Cov[P_{k,t-1}, N_{l,t-1}] / E[X_k] if station k uses the instruction-control rule and station l uses the job-control rule; and
* Cov[P_{k,t-1}, P_{l,t-1}] / (E[X_k] E[X_l]) if stations k and l both use the instruction-control rule.

Finally, for k = l, the Cov[N_{k,t-1}, N_{l,t-1}] term becomes Var[N_{k,t-1}], and we replace this term with:

* Var[N_{k,t-1}] if station k uses the job-control rule; and
* (Var[P_{k,t-1}] + u_k) / (E[X_k])² if station k uses one of the instruction-control rules. (Recall that u_k is one of the two variance correction terms discussed in Section 5.3.)

Making these substitutions, we find:

(120)  Cov[A_{i,t}, A_{j,t}] = Σ_{k∈K} Σ_{l∈K} Φ_{ik} Φ_{jl} S_{kl,t-1} + Σ_{k∈K} Φ_{ik} Φ_{jk} u_{kk},

where:

* Φ_{ik} = E[J_{ik}] if stations i and k both use the job-control rule.
* Φ_{ik} = E[J_{ik}] E[X_i] if station i uses the instruction-control rule and station k uses the job-control rule.
* Φ_{ik} = E[J_{ik}] / E[X_k] if station i uses the job-control rule and station k uses the instruction-control rule.
* Φ_{ik} = E[J_{ik}] E[X_i] / E[X_k] if stations i and k both use the instruction-control rule.
* S_{kl,t-1} = Cov[N_{k,t-1}, N_{l,t-1}] if stations k and l both use the job-control rule (recall that Cov[N_{k,t-1}, N_{l,t-1}] terms become Var[N_{k,t-1}] terms when k = l);
* S_{kl,t-1} = Cov[P_{k,t-1}, N_{l,t-1}] if station k uses the instruction-control rule and station l uses the job-control rule;
* S_{kl,t-1} = Cov[P_{k,t-1}, P_{l,t-1}] if stations k and l both use the instruction-control rule;
* u_{kk} = 0 if station k uses the job-control rule; and
* u_{kk} = u_k if station k uses one of the instruction-control rules, where u_k is the appropriate variance correction term, as discussed in Section 5.3.

5.10.3 The Covariance of Arrivals and Workload

Now, we need to calculate the covariance of work arrivals with workload in the previous period, Cov[A_{i,t}, W_{j,t-1}]. To do so, let us consider Cov[A_{i,t}, A_{jj,t}], which is the covariance of the arrivals at station i with the arrivals station j sends to itself. We can write this as:

(121)  Cov[A_{i,t}, A_{jj,t}] = Cov[ Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{ikl,t}, Σ_{l=1}^{N_{j,t-1}} Y_{jjl,t} ].

In the previous subsection, we found that the covariance of all the arrivals at station i with all of the arrivals to station j is:

(122)  Cov[A_{i,t}, A_{j,t}] = Cov[ Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{ikl,t}, Σ_{k∈K} Σ_{l=1}^{N_{k,t-1}} Y_{jkl,t} ] = Σ_{k∈K} Σ_{l∈K} Φ_{ik} Φ_{jl} S_{kl,t-1} + Σ_{k∈K} Φ_{ik} Φ_{jk} u_{kk}.

But then, by the bilinearity principle of covariance, the covariance of all the arrivals at station i with the single arrival of the work station j sends to itself is:

(123)  Cov[A_{i,t}, A_{jj,t}] = Σ_{k∈K} Φ_{ik} Φ_{jj} S_{kj,t-1} + Φ_{ij} Φ_{jj} u_{jj}.

Now, S_{kj,t-1} is a linear function of Cov[W_{k,t-1}, W_{j,t-1}]. Then, the bilinearity principle implies that to change Cov[A_{i,t}, A_{jj,t}] to Cov[A_{i,t}, W_{j,t-1}], we simply drop the Φ_{jj} terms from the equation for Cov[A_{i,t}, A_{jj,t}]. This gives us:

(124)  Cov[A_{i,t}, W_{j,t-1}] = Σ_{k∈K} Φ_{ik} S_{kj,t-1} + Φ_{ij} u_{jj}.

5.10.4 A Formula for the Covariance Terms

Recall that a formula for Cov[W_{i,t}, W_{j,t}] is:

(125)  Cov[W_{i,t}, W_{j,t}] = (1 − 1/L_i)(1 − 1/L_j) Cov[W_{i,t-1}, W_{j,t-1}] + (1 − 1/L_i)(1/L_j) Cov[W_{i,t-1}, A_{j,t}]
         + (1/L_i)(1 − 1/L_j) Cov[A_{i,t}, W_{j,t-1}] + (1/L_i)(1/L_j) Cov[A_{i,t}, A_{j,t}].

Using the formulas of the previous two subsections for Cov[W_{i,t-1}, A_{j,t}] and Cov[A_{i,t}, A_{j,t}], we find:

(126)  Cov[W_{i,t}, W_{j,t}] = (1 − 1/L_i)(1 − 1/L_j) Cov[W_{i,t-1}, W_{j,t-1}]
         + (1 − 1/L_i)(1/L_j) ( Σ_{k∈K} Φ_{jk} S_{ki,t-1} + Φ_{ji} u_{ii} )
         + (1/L_i)(1 − 1/L_j) ( Σ_{k∈K} Φ_{ik} S_{kj,t-1} + Φ_{ij} u_{jj} )
         + (1/L_i)(1/L_j) ( Σ_{k∈K} Σ_{l∈K} Φ_{ik} Φ_{jl} S_{kl,t-1} + Σ_{k∈K} Φ_{ik} Φ_{jk} u_{kk} ).

Finally, we can write all of the non-variance Cov[W_{i,t}, W_{j,t}] equations in matrix form as follows:

(127)  S_t = B S_{t-1} B′ + (DΦ) U (DΦ)′ + D Σ D,

where all the matrix variables are the same as they were for the matrix equation (101) in Section 5.9.3. By term-by-term multiplication, we can check that the (i,j) entry of S_t is the recursion equation above.

This completes the derivation of the Analytic Model. ∎

6. Appendix: Lemmas Used in Model Derivations

6.1 Expectation of a Random Sum of Random Variables

Lemma 1. Let N be a random variable with finite expectation, and let X_i be a set of independent, identically distributed random variables, independent of N, that have a common mean E[X]. Define Q = Σ_{i=1}^{N} X_i. Then E[Q] = E[N] E[X].

Proof. (Taken from Rice's Mathematical Statistics and Data Analysis.¹) We first prove the following result: E[Y] = E[E(Y|X)]. (This result is sometimes called "the law of total expectation.") To prove this result, we will show that:

(128)  E(Y) = Σ_x E(Y|X = x) p_X(x),

where E(Y|X = x) = Σ_y y p_{Y|X}(y|x).

The proposed formula for E(Y) is a double summation, and we can interchange the order of this summation. Doing so yields:

(129)  Σ_x E(Y|X = x) p_X(x) = Σ_y Σ_x y p_{Y|X}(y|x) p_X(x).

Now, by the definition of conditional probability, we have:

(130)  p_Y(y) = Σ_x p_{Y|X}(y|x) p_X(x).

Substituting, we find that:

(131)  Σ_y Σ_x y p_{Y|X}(y|x) p_X(x) = Σ_y y p_Y(y) = E(Y),

which is the desired result.

We now consider E[Q]. Using the result, E[Q] = E[E(Q|N)]. Using the linearity of expectation, E(Q|N = n) = n E[X], and so E(Q|N) = N E[X]. Then we have:

(132)  E[Q] = E[E(Q|N)] = E[N E[X]] = E[N] E[X],

which is the desired result. ∎

6.2 Variance of a Random Sum of Random Variables

Lemma 2. Let N be a random variable with finite expectation and finite variance. Let X_i be a set of independent, identically distributed random variables, independent of N, that have a common mean E[X] and a common variance Var[X]. Define Q = Σ_{i=1}^{N} X_i. Then Var[Q] = E[N] Var[X] + (E[X])² Var[N].

Proof. (Taken from Rice's Mathematical Statistics and Data Analysis.²) We first prove the following result: Var[Y] = Var[E(Y|X)] + E[Var(Y|X)]. (This result can be thought of as the "law of total variance.") By the definition of variance, we have:

(133)  Var(Y|X = x) = E(Y²|X = x) − [E(Y|X = x)]².

Then the expectation of Var(Y|X) is:

(134)  E[Var(Y|X)] = E[E(Y²|X)] − E{[E(Y|X)]²}.

Similarly, the variance of a conditional expectation is:

(135)  Var[E(Y|X)] = E{[E(Y|X)]²} − {E[E(Y|X)]}².

Next, we can use the law of total expectation to rewrite Var(Y) as:

(136)  Var(Y) = E(Y²) − [E(Y)]² = E[E(Y²|X)] − {E[E(Y|X)]}².

Substituting, we find that:

(137)  Var(Y) = E[E(Y²|X)] − {E[E(Y|X)]}²
     = E[E(Y²|X)] − E{[E(Y|X)]²} + E{[E(Y|X)]²} − {E[E(Y|X)]}²
     = E[Var(Y|X)] + Var[E(Y|X)],

which is the desired result.

Now consider Var[Q]. Using the result, we have that:

(138)  Var[Q] = Var[E(Q|N)] + E[Var(Q|N)].

Because E(Q|N) = N E(X), we have that:

(139)  Var[E(Q|N)] = [E(X)]² Var(N).

Further, the fact that the X_i's are independent allows us to write:

(140)  Var(Q|N) = Var( Σ_{i=1}^{N} X_i | N ) = N Var[X],

and, taking expectations, we find that:

(141)  E[Var(Q|N)] = E(N) Var(X).

Substituting into the expression for Var[Q], we find:

(142)  Var[Q] = E[N] Var[X] + (E[X])² Var[N],

which is the desired result. ∎
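A quick Monte Carlo check of Lemmas 1 and 2 can be written as follows. The choice of distributions here is purely illustrative.

```python
import numpy as np

# Monte Carlo check of Lemmas 1 and 2. Illustrative distributions:
# N ~ Poisson(6) jobs, X_i ~ Gamma(shape=2, scale=3) instructions per job.
rng = np.random.default_rng(0)
trials = 100_000
N = rng.poisson(6, size=trials)
X = rng.gamma(shape=2.0, scale=3.0, size=(trials, N.max()))
Q = (X * (np.arange(N.max()) < N[:, None])).sum(axis=1)  # Q = sum of the first N X's

E_N = Var_N = 6.0            # Poisson: mean = variance
E_X, Var_X = 6.0, 18.0       # Gamma(2, 3): mean 6, variance 18

print(Q.mean(), E_N * E_X)                        # Lemma 1: both ~36
print(Q.var(), E_N * Var_X + E_X**2 * Var_N)      # Lemma 2: both ~324
```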

6.3 Uniform Distribution of Arrivals in a Poisson Process

Lemma 3. Let X_i be a set of independent, identically distributed random variables drawn from an exponential distribution with parameter λ. Consider a queue containing N jobs, where N is some positive integer, and where each job has length X_i. Then the breakpoints of the first N − 1 jobs in the queue will be uniformly distributed.

Proof. (Taken from Gallager's Discrete Stochastic Processes.³) First, note that the queue has a total length of Q = Σ_{i=1}^{N} X_i. Next, define S_i to be the location in the queue of the breakpoint between the ith and (i+1)th jobs. We know S_N = Q, since the end of the last job marks the end of the queue.

We will calculate the joint distribution of S_1, S_2, ..., S_{N-1}, which is f(S | N − 1) = f(S_1 = s_1, ..., S_{N-1} = s_{N-1} | N − 1 breakpoints in Q). Now, for a small δ, f(S | N − 1) δ^{N-1} approximately equals the probability of no breakpoints in the intervals (0, s_1], (s_1 + δ, s_2], ..., (s_{N-1} + δ, Q], and precisely one breakpoint in each of the intervals (s_i, s_i + δ], i = 1 to N − 1, conditional on the event that exactly N − 1 breakpoints occurred.

We first consider the unconditional probability f(s_1, ..., s_{N-1}) δ^{N-1}. Since the X_i's are exponential with parameter λ, the probability of no arrivals in one of the (s_i + δ, s_{i+1}] intervals equals exp[−λ(s_{i+1} − s_i − δ)]. Similarly, the probability of one of the arrivals falling in one of the (s_i, s_i + δ] intervals is λδ exp[−λδ]. Then the unconditional probability is simply the product of all the exp[−λ(s_{i+1} − s_i − δ)] and λδ exp[−λδ] terms. Now, there is one exponential term for each subinterval of (0, Q], so multiplying them together yields exp[−λQ]. Further, there are N − 1 (λδ) terms, so we have:

(143)  f(s_1, ..., s_{N-1}) δ^{N-1} = (λδ)^{N-1} exp[−λQ].

Now, using conditional probability, we know that:

(144)  f(s_1, ..., s_{N-1} | N − 1 breakpoints in Q) δ^{N-1} = f(s_1, ..., s_{N-1}) δ^{N-1} / P(N − 1 breakpoints in Q).

Since the X_i's are exponentially distributed, P(N − 1 breakpoints in Q) is given by a Poisson distribution. (Effectively, P(N − 1 breakpoints) is the probability of N − 1 arrivals of a Poisson process with parameter λ in an interval of length Q.) Then we have:

(145)  f(s_1, ..., s_{N-1} | N − 1 breakpoints in Q) δ^{N-1} = (λδ)^{N-1} exp[−λQ] / ( (λQ)^{N-1} exp[−λQ] / (N − 1)! ) = (λδ)^{N-1} (N − 1)! / (λQ)^{N-1}.

Dividing by δ^{N-1} and taking the limit as δ → 0, we find:

(146)  f(s_1, ..., s_{N-1} | N − 1 breakpoints in Q) = (N − 1)! / Q^{N-1},    0 < s_1 < ... < s_{N-1} < Q.

Then f(S | N − 1) has a uniform distribution, as desired. ∎
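A small simulation can illustrate the lemma. This sketch conditions on the breakpoint count by simple rejection sampling; all parameters are illustrative.

```python
import numpy as np

# Monte Carlo illustration of Lemma 3: condition a rate-1 Poisson process on
# having exactly N - 1 = 4 breakpoints in (0, Q]; the accepted breakpoints
# should then be uniformly distributed on (0, Q].
rng = np.random.default_rng(3)
lam, Q, target = 1.0, 5.0, 4
accepted = []
for _ in range(50_000):
    arrivals = np.cumsum(rng.exponential(1 / lam, size=25))
    inside = arrivals[arrivals <= Q]
    if len(inside) == target:             # condition on exactly 4 breakpoints
        accepted.extend(inside / Q)       # rescale to (0, 1]
points = np.array(accepted)
print(points.mean(), points.var())        # Uniform(0, 1): mean 0.5, variance 1/12
```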

6.4 An Upper Bound on the Variance of the Number of Jobs Processed Per Period, for "Nice Distributions" of the Number of Instructions Per Job

Lemma 4. Suppose that X_k, a nonnegative random variable for the distribution of the number of instructions per job at station k, satisfies the following condition: for all values of a constant t_0 > 0, E[X_k − t_0 | X_k > t_0] ≤ E[X_k]. Then the following result holds:

Var[N_k] ≤ E[q_k] (L_k − 1) / L_k² + Var[q_k] / L_k²,

where N_k is the number of jobs contained in the first 1/L_k fraction of instructions in the queue at station k, and q_k is the total number of jobs in the queue.

To interpret the condition, suppose we know that station k has been working on the current job for at least t_0 time units. Then the condition says that the expectation of the time remaining until the job is finished is less than the unconditional expected time to process a job at station k.

Distributions which satisfy this property can be thought of as "nice" distributions. Most common distributions satisfy this property, including the deterministic, uniform, triangular, normal, and beta distributions. The exponential distribution satisfies this property with equality: by definition, the fact that we have been waiting t_0 for the completion of a job tells us nothing about when the job will be done.

An example distribution in which the property is not satisfied is the following: suppose that the time to complete a job is either two seconds or five hours. If we have waited for more than two seconds, we expect to wait a long time before the current job is completed.
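The condition is easy to probe numerically. The sketch below compares a uniform distribution (which satisfies it) with the two-point "two seconds or five hours" distribution just described (which does not); the specific parameters are illustrative.

```python
import numpy as np

# Numeric illustration of the "nice distribution" condition
# E[X - t0 | X > t0] <= E[X], using two illustrative distributions.
rng = np.random.default_rng(2)
uniform = rng.uniform(0, 10, size=1_000_000)      # satisfies the condition
two_point = rng.choice([2.0, 18_000.0], p=[0.9, 0.1], size=1_000_000)  # 2 s or 5 h

for X, t0 in [(uniform, 5.0), (two_point, 10.0)]:
    tail = X[X > t0]
    print((tail - t0).mean(), X.mean())   # conditional excess vs. unconditional E[X]
# Uniform: excess ~2.5 < E[X] ~5, so the upper bound of Lemma 4 applies.
# Two-point: excess ~17,990 >> E[X] ~1,802, so the bound does not apply.
```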

Proof. We consider two different nonnegative distributions: X_k, which satisfies the condition of the lemma, and X_k′, which is an exponential distribution. Both distributions have the same mean, E[X_k]. The variance of X_k is not known; the variance of X_k′ is (E[X_k])². However, an established result from queueing theory is that:

(147)  Var[X_k] / E[X_k] ≤ Var[X_k′] / E[X_k′],

since X_k satisfies the property that E[X_k − t_0 | X_k > t_0] ≤ E[X_k]. Since X_k and X_k′ have the same mean, we have that Var[X_k] ≤ Var[X_k′].

Now, define N_k to be the number of jobs whose endpoints are in the first Q_k / L_k instructions of the work queue, given that the distribution of the number of instructions per job is X_k. Similarly, define N_k′ to be the number of jobs whose endpoints are in the first Q_k / L_k instructions of the work queue, given that the distribution of the number of instructions per job is X_k′. Mathematically, we define N_k to be:

(148)  N_k = max{ n : Σ_{i=1}^{n} X_{ik} ≤ Q_k / L_k },

and the definition of N_k′ is similar. We want to show that Var[N_k] ≤ Var[N_k′]. We will use the Law of Total Variance to prove this result.

Recall this law states that Var[Y] = Var[E(Y|X)] + E[Var(Y|X)]. Here, let Y = N_k or N_k′ as appropriate, and let X be the event that Q_k and q_k equal certain fixed values.

Var[E(Y|X)]: Given q_k, E[N_k] = E[N_k′] = q_k / L_k. Then,

(149)  Var[E(N_k|X)] = Var[E(N_k′|X)] = Var(q_k) / L_k².

E[Var(Y|X)]: Using the definition of variance for discrete distributions, we have that:

(150)  Var(N_k|X) = Σ_{n_k} (n_k − q_k/L_k)² P(N_k = n_k | X).

The expression for Var(N_k′|X) is similar.

Recall that the probability that N_k equals a particular value n_k is the probability that the sum of the instructions of the first n_k jobs in the queue is less than or equal to Q_k / L_k, and that the sum of the instructions of the first n_k + 1 jobs is greater than Q_k / L_k. Then we can rewrite the above equation as:

(151)  Var(N_k|X) = Σ_{n_k=0}^{q_k} (n_k − q_k/L_k)² P( Σ_{i=1}^{n_k} X_{ik} ≤ Q_k/L_k < Σ_{i=1}^{n_k+1} X_{ik} ).

If we take the expectation of this expression, we find that:

(152)  E[Var(N_k|X)] = Σ_{n_k=0}^{E[q_k]} (n_k − E[q_k]/L_k)² P( Σ_{j=1}^{n_k} X_{jk} ≤ E[Q_k]/L_k < Σ_{j=1}^{n_k+1} X_{jk} ).

The expression for E[Var(N_k′|X)] is similar. Then, the fact that Var[X_{jk}] ≤ Var[X′_{jk}] implies that:

(153)  E[Var(N_k|X)] = Σ_{n_k=0}^{E[q_k]} (n_k − E[q_k]/L_k)² P( Σ_{j=1}^{n_k} X_{jk} ≤ E[Q_k]/L_k < Σ_{j=1}^{n_k+1} X_{jk} )
     ≤ Σ_{n_k′=0}^{E[q_k]} (n_k′ − E[q_k]/L_k)² P( Σ_{j=1}^{n_k′} X′_{jk} ≤ E[Q_k]/L_k < Σ_{j=1}^{n_k′+1} X′_{jk} )
     = E[Var(N_k′|X)].

The inequality follows from the following argument: since Var[X_{jk}] ≤ Var[X′_{jk}], the overall probability that Σ_{j=1}^{n_k} X_{jk} takes on values comparatively far from E[Q_k]/L_k is less than the probability that Σ_{j=1}^{n_k′} X′_{jk} takes on values comparatively far from E[Q_k]/L_k.

Var(Y) = Var[E(Y|X)] + E[Var(Y|X)]: We have shown that Var[E(N_k|X)] = Var[E(N_k′|X)], and that E[Var(N_k|X)] ≤ E[Var(N_k′|X)]. Adding these two terms together, we find that:

(154)  Var[N_k] = Var[E(N_k|X)] + E[Var(N_k|X)] ≤ Var[E(N_k′|X)] + E[Var(N_k′|X)] = Var[N_k′].

Now, we found in Section 5.3 that:

(155)  Var[N_k′] = E[q_k] (L_k − 1) / L_k² + Var[q_k] / L_k².

(Recall that this result follows from the fact that X_k′ is an exponential distribution.) The result of the lemma immediately follows. ∎

6.5 Covariance of a Sum of Random Sums of Random Variables

Lemma 5. Let N_k, k ∈ K, be a set of random variables, each with a finite expectation and a finite variance. Let X_{ik} be a set of independent, identically distributed random variables, independent of every N_k, that have a common mean E[X_k] and a common variance Var[X_k]. Define T = Σ_{k∈K} Σ_{i=1}^{N_k} X_{ik}. Then:

Var[T] = Σ_{k∈K} ( E[N_k] Var[X_k] + (E[X_k])² Var[N_k] ) + Σ_{k∈K} Σ_{l∈K, l≠k} E[X_k] E[X_l] Cov[N_k, N_l].

Proof. We assume the following result: Var[Y] = Var[E(Y|X)] + E[Var(Y|X)]. (This result was proved in the development of Lemma 2.) Let N be the event {N_k = n_k, ∀k}. Using this result, we have that:

(156)  Var[T] = Var[E(T|N)] + E[Var(T|N)].

Now, using the linearity of expectation,

(157)  E(T|N) = Σ_{k∈K} N_k E[X_k].

But then, noting that E[X_k] is a constant, and applying the bilinearity principle of covariance, we find that:

(158)  Var[E(T|N)] = Var[ Σ_{k∈K} N_k E[X_k] ] = Σ_{k∈K} (E[X_k])² Var[N_k] + Σ_{k∈K} Σ_{l∈K, l≠k} E[X_k] E[X_l] Cov[N_k, N_l].

Next, recalling that the variance of a fixed sum of independent random variables is the sum of their variances, we have:

(159)  Var(T|N) = Var[ Σ_{k∈K} Σ_{i=1}^{N_k} X_{ik} | N ] = Σ_{k∈K} Σ_{i=1}^{N_k} Var[X_k] = Σ_{k∈K} N_k Var[X_k],

and, taking the expectation over N, we find:

(160)  E[Var(T|N)] = Σ_{k∈K} E[N_k] Var[X_k].

We therefore have:

(161)  Var[T] = Var[E(T|N)] + E[Var(T|N)] = Σ_{k∈K} ( E[N_k] Var[X_k] + (E[X_k])² Var[N_k] ) + Σ_{k∈K} Σ_{l∈K, l≠k} E[X_k] E[X_l] Cov[N_k, N_l],

which is the desired result. ∎
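The lemma can also be checked by simulation. The sketch below uses two "stations" whose counts are made dependent through a shared Poisson component; all distributions are illustrative.

```python
import numpy as np

# Monte Carlo check of Lemma 5 for two stations.
rng = np.random.default_rng(1)
trials = 100_000
base = rng.poisson(4, size=trials)
N1 = base + rng.poisson(2, size=trials)   # E=6, Var=6
N2 = base + rng.poisson(3, size=trials)   # E=7, Var=7; Cov(N1, N2) = Var(base) = 4

def random_sum(N, mean, std):
    """Sum of the first N[i] i.i.d. Normal(mean, std) draws, per trial."""
    X = rng.normal(mean, std, size=(trials, N.max()))
    return (X * (np.arange(N.max()) < N[:, None])).sum(axis=1)

T = random_sum(N1, mean=5.0, std=2.0) + random_sum(N2, mean=7.0, std=3.0)

# Lemma 5: sum_k (E[N_k]Var[X_k] + E[X_k]^2 Var[N_k])
#          + sum_{k != l} E[X_k]E[X_l]Cov[N_k, N_l]
predicted = (6 * 4 + 5**2 * 6) + (7 * 9 + 7**2 * 7) + 2 * 5 * 7 * 4
print(T.var(), predicted)     # both ~860
```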


References

Bertsimas, D., and D. Gamarnik. "Asymptotically Optimal Algorithms for Job Shop Scheduling and Packet Routing." MIT working paper, 1998.

Conway, Adrian E., and Nicolas D. Georganas. Queueing Networks -- Exact Computational Algorithms: A Unified Theory Based on Decomposition and Aggregation. Cambridge: MIT Press, 1989.

Conway, R.W., W.L. Maxwell, and W.W. Miller. Theory of Scheduling. Reading: Addison-Wesley, 1967.

Gallager, Robert G. Discrete Stochastic Processes. Boston: Kluwer Academic Publishers, 1996.

Graves, Stephen C. "A Tactical Planning Model for a Job Shop." Operations Research, Vol. 34, No. 4, 522-533.

Hall, L. Approximation Algorithms for NP-Hard Problems, chapter 1. (D. Hochbaum, Ed.) PWS Publishing, 1997.

Jackson, J.R. "Networks of Waiting Lines." Operations Research, Vol. 5, 518-521. 1957.

Jackson, J.R. "Jobshop-Like Queuing Systems." Management Science, Vol. 10, 131-142. 1963.

Karger, D., C. Stein, and J. Wein. "Scheduling Algorithms." MIT Working Paper. 1997.

Rice, John. Mathematical Statistics and Data Analysis, Second Edition. Belmont: Duxbury Press, 1995.

Notes

¹ Rice, John. Mathematical Statistics and Data Analysis, Second Edition. (Belmont: Duxbury Press, 1995.) pp. 137-138.

² Rice, pp. 138-139.

³ Gallager, Robert G. Discrete Stochastic Processes.
