
Comput. & Indus. Engng Vol. 9, No. 4, pp. 387-393, 1985 0360-8352/85 $3.00 + .00

Printed in the U.S.A. © 1985 Pergamon Press Ltd.

COMPUTING TRANSFER LINES PERFORMANCE MEASURES USING DYNAMIC PROGRAMMING

PIERRE L'ECUYER

Université Laval, Ste-Foy, Québec, Canada G1K 7P4

(Received for publication 12 March 1985)

Abstract. The dynamic behavior of a three-stage production transfer line with finite buffers and unreliable machines is modeled as a Markov chain. Performance measures are computed using value iteration dynamic programming with over-relaxation steps and finite element approximation. This approach permits considering larger buffers than is possible when solving directly for the steady-state probabilities.

1. INTRODUCTION

A transfer line is a sequence of machines (or stages) connected by transfer mechanisms (see Fig. 1). Items enter from one end of the line, visit each machine, and emerge at the other end. In the model discussed here, the machines are liable to failure at random times, and remain inoperative for random durations while they are under repair. To decrease the effects of machine failures on the rest of the line, buffer storages are placed between the unreliable machines. Larger buffers improve the line throughput, but their costs in terms of larger inventories, floor space and additional equipment must be taken into account. Applications of transfer line models arise in a wide range of areas, including manufacturing industry, computer science, mining, etc. (see [4, 6, 8]).

Transfer lines have been studied by many authors during the last 30 years. See [6] for a list of references. For given storage allocations, these authors wish to compute various performance measures, such as the line throughput or the mean total inventory. Some also seek to distribute a fixed total buffer size to the k - 1 buffers so as to maximize the line throughput.

For two-stage lines, the throughput can be computed analytically[3, 6]. Gershwin and Schick[6] proposed a Markov chain model for a 2- or 3-stage line, and a method for solving the large linear system for the steady-state probabilities. The method is easy to implement for the two-machine case, but for the three-machine case, computer time and memory requirements increase respectively as the cube and square of the total buffer capacity. Also, even with 35 decimal digits extended precision computation, numerical instabilities prevent any reliable answers when the buffer capacities are larger than about 15. Other analytic techniques for more than two unreliable machines and finite buffers have been proposed only for models with more restricted assumptions than those retained in [6].

Lines with more than three stages have been studied by approximation[3, 5, 9], or by simulation[8,13].

In this paper, we use the same Markov chain model as in [6]. However, instead of computing the steady-state probabilities, we compute the throughput and mean total inventory via value iteration dynamic programming. We also experiment with different methods to approximate the value function, and over-relaxation steps to accelerate the convergence. This approach permits us to cope with larger buffers than in [6].

2. THE MODEL

Many variants of the transfer line model appear in the literature. Here, we adopt the same variant as in [6].

The system comprises k machines, serially ordered and separated by k - 1 finite capacity buffers (see Fig. 1). An inexhaustible supply of items is available to


[Figure: a k-stage transfer line. Unprocessed units enter machine 1; buffers 1, ..., k - 1 separate machines 1, ..., k; final products leave machine k.]

Fig. 1. Diagram of a k-stage transfer line.

machine 1, and items are brought into an unlimited storage area after being processed by machine k. All machines are synchronized, and have equal and constant service times. That common one-time-unit fixed cycle includes transportation time to or from the buffers.

Items move from machine 1 to buffer 1, ..., to buffer i - 1, to machine i, to buffer i, ..., to machine k. Each machine in the line may fail at a random time and for a random duration. Actually, a 'failure' may correspond to a stoppage for maintenance, adjustments, repairs, etc. Machines are assumed to have geometrically distributed numbers of processing cycles between failures and numbers of cycles to repair. If machine i is processing an item during a cycle, there is a probability p_i that it fails during that cycle. If machine i is down (in failure state) at the beginning of a cycle, there is a probability r_i that it is repaired during that cycle.

For i = 1, ..., k - 1, let N_i be the capacity of buffer i. When machine i is down, the level in buffer i - 1 may rise. If the failure persists, that buffer may fill up and force machine i - 1 to stop its processing. In such a case, machine i - 1 will be called blocked. Similarly, the level in buffer i may fall, as machine i + 1 drains its contents. That buffer may empty and force machine i + 1 to idleness. Such a forced-down machine will be called starved. If a failure persists long enough, the blocking and starvation effects could propagate up and down the line. Notice that machine 1 can never be starved and machine k can never be blocked. The role of the buffers is to provide a partial decoupling of the machines and lessen the effects of a failure.

It is assumed that machines can fail only while processing items. A starved or blocked machine cannot fail. When a machine fails, the item it was working on is not damaged or destroyed, but is returned to the previous storage location. Processing on that item is resumed when the machine is repaired.

We assume that a cycle begins with the transitions in the machine conditions (breakdown or repair) and ends with storage level modifications that depend on the new machine states. Notice that with this convention, a machine that is repaired during a cycle is assumed to be operational for that cycle and can process an item (i.e. finish the processing cycle that was aborted previously). Actually, processing can take place on machine i only if, at the beginning of the cycle, there is at least one item in buffer i - 1 and at least one free location in buffer i. If this is the case and machine i does not fail, one item is removed from buffer i - 1 and added to buffer i at the end of the cycle.

During a cycle, we say that the transfer line is up if machine k successfully processes an item. Otherwise, it is down. The throughput or efficiency E of the line is the steady-state proportion of up cycles, that is, the mean number of items coming out of the line per cycle in the long run. The mean in-process inventory I is the expected total number of units in the buffers at any given cycle, in steady state. We wish to compute E and I, for given values of k, N_1, ..., N_{k-1}, p_1, ..., p_k, r_1, ..., r_k.

For i = 1, ..., k, the condition of machine i is given by

    a_i = 0 if machine i is down,
          1 if machine i is operational.                 (1)

Notice that an operational machine can be either processing, blocked or starved. For i = 1, ..., k - 1, let n_i be the number of items in buffer i. The state of the system


is s = (n_1, ..., n_{k-1}, a_1, ..., a_k), where a_i = 0 or 1 and 0 ≤ n_i ≤ N_i for each i. The cardinality of the state space S is

    M = 2^k * prod_{i=1}^{k-1} (N_i + 1).                (2)
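As a quick illustration of Eq. (2), the state space can be enumerated directly. This is our own Python sketch, not part of the paper's FORTRAN implementation; the function names are ours:

```python
from itertools import product

def state_space_size(capacities):
    """Size of the state space, Eq. (2): M = 2^k * prod(N_i + 1),
    where the k - 1 entries of `capacities` are the buffer sizes N_i."""
    k = len(capacities) + 1          # k - 1 buffers imply k machines
    m = 2 ** k
    for n in capacities:
        m *= n + 1
    return m

def enumerate_states(capacities):
    """All states s = (n_1, ..., n_{k-1}, a_1, ..., a_k)."""
    k = len(capacities) + 1
    levels = [range(n + 1) for n in capacities]
    machines = [(0, 1)] * k
    return list(product(*levels, *machines))

# Example 1 of the paper has k = 3 and N_1 = N_2 = 10, hence
# M = 2^3 * 11 * 11 = 968 states.
assert state_space_size([10, 10]) == 968
assert len(enumerate_states([10, 10])) == 968
```

The modest size of M for small buffers, and its growth as the product of the capacities, is what makes the direct linear-system approach of [6] feasible only for small N_i.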

3. A VALUE ITERATION APPROACH

The model can be seen as a discrete-time Markov chain with state-dependent 'returns', where E and I are average values per cycle.

We define the one-cycle value functions g: S -> [0, 1] and f: S -> {0, 1, 2, ...} as

    g(s) = 0        if n_{k-1} = 0,
           r_k      if n_{k-1} > 0 and a_k = 0,          (3)
           1 - p_k  if n_{k-1} > 0 and a_k = 1;

    f(s) = n_1 + n_2 + ... + n_{k-1}.                    (4)

The system being in state s at the beginning of a cycle, g(s) is the probability that an item emerges from the line at the end of that cycle, and f(s) is the in-process inventory during that cycle.
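The one-cycle value functions (3) and (4) translate directly into code. The following Python sketch is our own illustration, with k and the probability vectors p, r passed explicitly:

```python
def g(s, k, p, r):
    """Eq. (3): probability that an item emerges from the line at the
    end of the cycle, given state s = (n_1, ..., n_{k-1}, a_1, ..., a_k).
    p and r hold the failure and repair probabilities p_i, r_i."""
    n, a = s[:k - 1], s[k - 1:]
    if n[k - 2] == 0:          # buffer k-1 empty: machine k is starved
        return 0.0
    if a[k - 1] == 0:          # machine k down: it must be repaired first
        return r[k - 1]
    return 1.0 - p[k - 1]      # machine k up: an item emerges unless it fails

def f(s, k):
    """Eq. (4): in-process inventory during the cycle."""
    return sum(s[:k - 1])

# k = 3, buffers at (5, 5), all machines operational, p_3 = 0.01:
s = (5, 5, 1, 1, 1)
assert f(s, 3) == 10
assert abs(g(s, 3, [0.01] * 3, [0.09] * 3) - 0.99) < 1e-12
```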

For s and x in S, let p(s, x) be the transition probability from state s to state x in one cycle. The computation of these probabilities is cumbersome, but straightforward. They are analyzed more fully in [6]. Notice that the underlying Markov chain has a closed irreducible set of states, plus some transient states. For instance, s = (0, 0, ..., 0) is transient, since buffer i - 1 cannot be empty if machine i is failed.
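Although the exact transition probabilities p(s, x) are worked out in [6], the one-cycle dynamics described in Section 2 can be sketched as a simulation step, which makes the blocking/starvation and repair conventions concrete. This is our own illustrative Python, not the paper's method; an exact computation of p(s, x) would enumerate all joint repair/failure outcomes instead of sampling them, and we assume (as the paper leaves implicit) that a machine repaired during a cycle does not fail again in that same cycle:

```python
import random

def one_cycle(n, a, p, r, N, rng=random.random):
    """Simulate one cycle of the line under the conventions of Section 2:
    repairs and failures are resolved first, then the buffer levels are
    updated.  n: buffer levels (length k-1), a: machine conditions
    (length k), N: buffer capacities.  Returns (n, a, produced)."""
    k = len(a)
    # Processing is possible only if, at the beginning of the cycle, the
    # input buffer is non-empty and the output buffer has a free slot
    # (machine 1 is never starved, machine k is never blocked).
    can_process = [
        (i == 0 or n[i - 1] > 0) and (i == k - 1 or n[i] < N[i])
        for i in range(k)
    ]
    a = list(a)
    for i in range(k):
        if a[i] == 0:                    # down: repaired w.p. r_i; a repaired
            if rng() < r[i]:             # machine is operational this cycle
                a[i] = 1
        elif can_process[i]:             # failures occur only while processing
            if rng() < p[i]:
                a[i] = 0
    n = list(n)
    produced = 0
    for i in range(k):
        if a[i] == 1 and can_process[i]: # move one item from buffer i-1 to i
            if i > 0:
                n[i - 1] -= 1
            if i < k - 1:
                n[i] += 1
            else:
                produced = 1             # machine k finished an item
    return tuple(n), tuple(a), produced

# With rng always returning 1.0, no failures or repairs occur: here
# machine 1 is blocked (buffer 1 full) and machine 3 is starved (buffer 2
# empty), so only machine 2 moves an item.
assert one_cycle((10, 0), (1, 1, 1), [0.01] * 3, [0.09] * 3, [10, 10],
                 rng=lambda: 1.0) == ((9, 1), (1, 1, 1), 0)
```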

In what follows, we discuss only the computation of E. The same type of reasoning is also valid for I.

Theoretically, E can be computed by the following algorithm, due to White[12]:

begin
    pick a recurrent state y in S;
    set h(s) := 0 for all s in S;
    repeat
        H(s) := g(s) + Σ_{x in S} p(s, x) h(x), all s in S;    (5)
        d+ := max_{s in S} (H(s) - h(s));                      (6)
        d- := min_{s in S} (H(s) - h(s));                      (7)
        h(s) := H(s) - H(y), all s in S                        (8)
    until d+ - d- is small enough
end.

Theorem 1. After each iteration of the repeat loop, we have d- ≤ E ≤ d+. Furthermore, the successive values of d- and d+ converge to E. Hence, if 'small enough' means smaller than ε, for ε > 0, then the algorithm converges in a finite number of iterations.

Proof: It follows from Propositions 6 and 7 in Chapter 8 of [2].
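On a small chain, the algorithm and the bounds of Theorem 1 are easy to check numerically. The following Python sketch is our own; the two-state chain and its returns are illustrative numbers, not from the paper:

```python
import numpy as np

def average_return(P, g, y, tol=1e-10, max_iter=10_000):
    """White's successive-approximation algorithm, Eqs. (5)-(8):
    H = g + P h, d+ = max(H - h), d- = min(H - h), h := H - H[y].
    Returns (estimate of E, d-, d+); Theorem 1 gives d- <= E <= d+."""
    h = np.zeros(len(g))
    for _ in range(max_iter):
        H = g + P @ h                     # Eq. (5)
        d_plus = float(np.max(H - h))     # Eq. (6)
        d_minus = float(np.min(H - h))    # Eq. (7)
        h = H - H[y]                      # Eq. (8)
        if d_plus - d_minus < tol:        # 'small enough'
            return 0.5 * (d_plus + d_minus), d_minus, d_plus
    raise RuntimeError("no convergence within max_iter iterations")

# Toy two-state chain (illustrative numbers, not from the paper).  Its
# stationary distribution is (0.6, 0.4), so the average return is
# E = 0.6 * 1.0 + 0.4 * 0.0 = 0.6.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
g = np.array([1.0, 0.0])
E, d_minus, d_plus = average_return(P, g, y=0)
assert d_minus <= E <= d_plus and abs(E - 0.6) < 1e-8
```

For the transfer line, the vector h would be indexed by the M states of Eq. (2) and P would be the one-cycle transition matrix p(s, x).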

Unfortunately, when S is very large, this algorithm could be very time consuming, or even impossible to implement on a computer in its present form. One may look for approximations. For instance, instead of computing H(s) for every state, compute it only for a small subset of S, and interpolate for the remaining states. This technique has been studied in [7] for the case of discounted d.p.

Another problem with the algorithm is that its convergence is very slow, especially when the buffer sizes are large. For a given s, the sequence of values of H(s) behaves somewhat like a geometric sequence, but with a ratio of almost one. Various modifications can be tried to accelerate the convergence. One of them is to perform over-relaxation steps. Such a step consists of replacing equation (8) by

    h(s) := h(s) + τ[H(s) - H(y) - h(s)], all s in S       (9)

where τ ≥ 1 is the relaxation factor. The step can be performed every, say, λ iterations. One can also vary the values of λ and τ during the execution.

The idea of over-relaxation was considered in [10] and [11] for the case of discounted dynamic programming. Equation (9) was used at every iteration (λ = 1), and τ was restricted to be in the interval (0, 2). Obviously, without restrictions on τ and λ, the algorithm does not necessarily converge. But for a fixed λ, if one uses a decreasing sequence of values of τ converging to a value smaller than one, the convergence is certainly guaranteed.
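The over-relaxation variant is obtained by swapping Eq. (8) for Eq. (9) on every λ-th iteration. A Python sketch, again our own and on a toy two-state chain with stationary distribution (0.6, 0.4), hence E = 0.6 (illustrative numbers, not from the paper):

```python
import numpy as np

def average_return_relaxed(P, g, y, tau=10.0, lam=10,
                           tol=1e-10, max_iter=10_000):
    """Value iteration in which, every lam-th iteration, the update (8)
    is replaced by the over-relaxation step (9):
        h := h + tau * (H - H[y] - h).
    With tau = 1 the two updates coincide."""
    h = np.zeros(len(g))
    for n in range(1, max_iter + 1):
        H = g + P @ h
        d_plus, d_minus = float(np.max(H - h)), float(np.min(H - h))
        if n % lam == 0:
            h = h + tau * (H - H[y] - h)   # Eq. (9)
        else:
            h = H - H[y]                   # Eq. (8)
        if d_plus - d_minus < tol:
            return 0.5 * (d_plus + d_minus)
    raise RuntimeError("no convergence within max_iter iterations")

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
g = np.array([1.0, 0.0])
assert abs(average_return_relaxed(P, g, y=0) - 0.6) < 1e-8
```

On this toy chain the occasional large-τ step over-shoots but is damped by the intervening ordinary steps, which is the behaviour the paper exploits with λ = 10 and τ between 10 and 20.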

More interestingly, one could implement the algorithm through an interactive procedure, where the user can observe the successive values of d+ and d-, halt the execution after any iteration to contemplate a graphical display of H or h, adjust the values of λ and τ accordingly or change the method of approximation, and even return to a previous iteration (i.e. a previous value of h). This is the approach we took, and it is illustrated in the next section for three-machine examples. The programs were implemented in FORTRAN on a VAX-11/780 minicomputer.

4. NUMERICAL ILLUSTRATIONS

Consider a three-stage transfer line (k = 3). The state of the system is denoted s = (n_1, n_2, a_1, a_2, a_3). In equation (5), one may compute H(s) for all 8 possible values of a = (a_1, a_2, a_3), but only on a grid of values of (n_1, n_2). More precisely, choose two positive integers α and β, and integers u_i and v_j such that

    0 = u_1 < u_2 < ... < u_α = N_1,
    0 = v_1 < v_2 < ... < v_β = N_2.                     (10)

Let

    S* = {(u_i, v_j, a) : 1 ≤ i ≤ α, 1 ≤ j ≤ β, a in {0, 1}^3}.

At each iteration, compute H(s) on S*, and approximate for the other states, using an approximation surface for each value of a. Four approximation methods were implemented in the programs.

Method 1 is piece-wise bilinear interpolation (BILIN). For each a, the points of S* determine a partition of S into (α - 1)(β - 1) subrectangles, and we compute an interpolating function which is bilinear on each subrectangle, i.e. has the form

    F(n_1, n_2, a) = c_{ija} + c(1)_{ija} n_1 + c(2)_{ija} n_2 + c(1, 2)_{ija} n_1 n_2,
        u_i ≤ n_1 ≤ u_{i+1} and v_j ≤ n_2 ≤ v_{j+1}.     (11)

Method 2 is piece-wise biquadratic interpolation (BIQUAD). It takes the subrectangles by blocks of four, and the interpolating function is quadratic in n_1 and n_2 on each block. α and β must be odd, and the interpolating function has the form

    F(n_1, n_2, a) = c + c(1)n_1 + c(1, 1)n_1^2 + c(1, 2)n_1 n_2 + c(2)n_2 + c(2, 2)n_2^2
        + c(1, 1, 2)n_1^2 n_2 + c(1, 2, 2)n_1 n_2^2 + c(1, 1, 2, 2)n_1^2 n_2^2    (12)

on each of the (α - 1)(β - 1)/4 pieces.
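For a fixed machine state a, the BILIN approximation of Method 1 amounts to standard bilinear interpolation on the subrectangles defined by the knots (10). A Python sketch (our own; the grid values are illustrative, chosen as a surface of the form (11) that bilinear interpolation reproduces exactly):

```python
import numpy as np

def bilinear(u, v, Hgrid, n1, n2):
    """Piece-wise bilinear interpolation (Method 1, BILIN) for one fixed
    machine state a.  u, v are the grid knots of Eq. (10) and
    Hgrid[i, j] holds the value computed at (u[i], v[j])."""
    i = int(np.searchsorted(u, n1, side="right")) - 1
    j = int(np.searchsorted(v, n2, side="right")) - 1
    i = min(i, len(u) - 2)               # clamp onto the last subrectangle
    j = min(j, len(v) - 2)
    t = (n1 - u[i]) / (u[i + 1] - u[i])  # local coordinates in [0, 1]
    s = (n2 - v[j]) / (v[j + 1] - v[j])
    return ((1 - t) * (1 - s) * Hgrid[i, j]
            + t * (1 - s) * Hgrid[i + 1, j]
            + (1 - t) * s * Hgrid[i, j + 1]
            + t * s * Hgrid[i + 1, j + 1])

# Grid of Example 1: u = v = (0, 2, 4, 7, 10).  The bilinear surface
# H(n1, n2) = n1 + n2 is reproduced exactly.
u = np.array([0.0, 2.0, 4.0, 7.0, 10.0])
Hgrid = np.add.outer(u, u)
assert abs(bilinear(u, u, Hgrid, 5.0, 8.0) - 13.0) < 1e-12
```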

Method 3 is piece-wise bicubic interpolation (BICUB), as proposed by Akima[1].

It asks for 16 coefficients on each of the (α - 1)(β - 1) pieces.

The last method uses piece-wise constant approximation (PCONST), which is equivalent to approximation by state aggregation, as described in [2].

Notice that the size of the grid, or the method of approximation, may vary from iteration to iteration. It should be reasonable to start with a coarse grid and the simplest method, refining the mesh gradually and switching to more sophisticated approximation methods as the iterative process goes on. All these methods converge in the following sense: when α attains N_1 + 1 and β attains N_2 + 1, we have S* = S and the error of approximation becomes 0. One will note that d- and d+ provide bounds for the limiting value of H(y), using the current approximation scheme. To obtain real bounds on E, one must take S* = S for one iteration at the end.

The algorithm now operates as follows:

Algorithm

begin
    read N_1, N_2, p_i and r_i for i = 1, 2, 3, and y;
    set h(s) := 0 for all s in S;
    repeat
        read α, β, u_1, ..., u_α, v_1, ..., v_β, τ, λ, an approximation method, and a positive integer NITER;
        for n := 1 to NITER do begin
            compute (5-7) for S* instead of S;
            if λ divides n then compute (9) for S*
            else compute (8) for S*;
            print H(y), d- and d+
        end
    until no more data to read
end.

Example 1. Let N_1 = N_2 = 10, p_i = 0.01 and r_i = 0.09 for i = 1, 2, 3, and y = (5, 5, 1, 1, 1). Taking α = β = 5, (u_1, ..., u_5) = (v_1, ..., v_5) = (0, 2, 4, 7, 10), and performing 400 iterations with τ = 1, we obtain the results of Table 1. For PCONST 1, we took the average of the four corners as the approximation value on each subrectangle, while for PCONST 2, we took the lower left corner. We see that PCONST yields very bad results, while for the other three methods, d+ - d- converges to 0 and the results are very consistent. If we take the results of either BILIN, BIQUAD or BICUB (the resulting h) and perform another 400 iterations with the whole state space (α = β = 11), we obtain H(y) = 0.79977 = d- = d+, which is the true value of E. The programs were run with simple, double, and extended (35 decimal digits) precision arithmetic, and this same value was obtained in all cases. For the same problem, Gershwin and Schick[6] obtained E = 0.8000 (which is exact to 3 decimal places) using extended precision arithmetic.

Notice that the approximation obtained with BIQUAD or BICUB is quite good. BILIN yields an underestimation of E, and it is due to the fact that h is concave. Experiments with over-relaxation were also done with this example, and typically good values for λ and τ were λ = 10 and τ between 10 and 20. With such values, it

Table 1. Results for example 1 with a 5 by 5 approximation grid

    Approx. method      d-        H(y)      d+
    BILIN              .7851     .7852     .7852
    BIQUAD             .7945     .7946     .7946
    BICUB              .7951     .7952     .7952
    PCONST 1           -.046     .883      1.185
    PCONST 2           .000      .001      5.025


took about 2 to 3 times fewer iterations to get the same value for d+ - d-, compared to what it took using τ = 1.

Table 2 provides a typical illustration of what goes on. Note that the iteration numbers for which the results are shown have been carefully chosen not to coincide with over-relaxation steps. Each of these steps induces a perturbation of the function h(·), which sometimes deteriorates the tightness of the bounds d- and d+ at the corresponding iteration. A typical pattern appears in Table 3.

Example 2. Let N_1 = 40, N_2 = 60, p_1 = 0.03, p_2 = 0.05, p_3 = 0.08, r_1 = r_2 = r_3 = 0.20, and y = (20, 30, 1, 1, 1). For this case, due to numerical instabilities, it would be practically impossible to solve the linear system for the steady-state probabilities. For instance, this example cannot be solved using Gershwin and Schick's method. However, E can be computed using d.p. and successive grid refinements.

Table 2. Intermediate results for example 1 with a 5 × 5 grid. The values shown are those of d-, H(y) and d+, in that order. Note that the convergence is not monotonous.

                 BILIN, τ = 1                 BILIN, τ = 10, λ = 10
    iteration   d-      H(y)    d+           d-      H(y)    d+
       49     .75619  .78422  .81676       .77656  .78646  .79787
       99     .77482  .78413  .79748       .78394  .78511  .78700
      149     .78115  .78465  .78998       .78499  .78516  .78545
      199     .78361  .78494  .78705       .78516  .78519  .78523
      249     .78457  .78509  .78591       .78519  .78519  .78520
      299     .78495  .78515  .78547       .78519  .78519  .78519
      349     .78510  .78517  .78530       .78519  .78519  .78519
      399     .78515  .78518  .78523       .78519  .78519  .78519

                 BICUB, τ = 1                 BICUB, τ = 10, λ = 10
    iteration   d-      H(y)    d+           d-      H(y)    d+
       19     .46282  .82~72  .88482       .56274  .79389  .90891
       39     .74129  .79484  .83477       .77771  .79672  .82180
       59     .76652  .79280  .82624       .78648  .79493  .80796
       79     .77538  .79302  .81732       .79084  .79468  .80137
       99     .78132  .79342  .81081       .79298  .79481  .79816
      119     .78545  .79381  .80619       .79406  .79496  .79661
      139     .78834  .79414  .80292       .79460  .79505  .79587
      159     .79036  .79440  .80061       .79488  .79511  .79551
      179     .79178  .79461  .79899       .79502  .79514  .79534
      199     .79278  .79476  .79785       .79510  .79516  .79525
      299     .79475  .79509  .79562       .79517  .79517  .79517

Table 3. Detailed intermediate results for example 1 with a 5 × 5 grid, using BICUB with τ = 10, λ = 10, and starting with h(·) = 0

    iteration    d-       H(y)     d+        iteration    d-      H(y)     d+
        1      0.00000   .99000  0.99000        132      .79543  .79504  .79596
        2      0.00000   .98100  0.98100        133      .79454  .79504  .79595
        3      0.00178   .97215  0.97290        134      .79455  .79504  .79594
        4      0.00789   .96275  0.97185        135      .79456  .79505  .79592
        5      0.01975   .95232  0.97747        136      .79457  .79505  .79591
        6      0.02346   .94077  0.98826        137      .79458  .79505  .79590
        7      0.06090   .92857  1.00065        138      .79459  .79505  .79588
        8      0.08880   .91642  0.96336        139      .79460  .79505  .79587
        9      0.12029   .90447  0.95531        140              .79507
       10                .78633                 141      .79471  .79507  .79573
       11     -0.61653   .77949  1.52678        142      .79471  .79508  .79572
       12     -0.15458   .77725  1.04796        143      .79472  .79508  .79572
       13      0.09811   .77794  1.00814        144      .79473  .79508  .79571
       14      0.28001   .78084  1.02287        145      .79474  .79508  .79570
       15      0.32000   .78381  0.99537        146      .79474  .79508  .79569
       16      0.41525   .78674  0.95528        147      .79475  .79508  .79568
       17      0.47106   .78940  0.93672        148      .79476  .79508  .79567
       18      0.50684   .79172  0.91794        149      .79476  .79509  .79566
       19      0.56274   .79389  0.90891        150              .79510
       20                .81234                 151      .79484  .79510  .79556
       21      0.53745   .80956  1.14075        152      .79484  .79510  .79556
       22      0.63682   .80775  1.11100        153      .79485  .79510  .79555
       23      0.64938   .80647  1.08539        154      .79486  .79510  .79554
       24      0.68398   .80532  1.04246        155      .79466  .79511  .79554
       25      0.72688   .80433  1.04133        156      .79486  .79511  .79553
       26      0.73479   .80347  0.99842        157      .79487  .79511  .79552
       27      0.74647   .80270  0.93582        158      .79488  .79511  .79551
       28      0.75029   .80201  0.87887        159      .79488  .79511  .79551
       29      0.75299   .80139  0.83857        160              .79512
       30                .79583                 161      .79493  .79512  .79544
       31      0.69577   .79610  0.84770        162      .79494  .79512  .79544
       32      0.73739   .79641  0.84594

Using BICUB interpolation with λ = τ = 10, performing 400 iterations with a 7 × 9 grid, (u_1, ..., u_7) = (0, 3, 6, 13, 20, 30, 40) and (v_1, ..., v_9) = (0, 3, 6, 13, 20, 30, 40, 50, 60), we obtain H(y) = 0.7134, d- = 0.7123 and d+ = 0.7138. Refining the grid to 11 × 15 and performing 100 new iterations yields H(y) = 0.7135, and adding another 100 iterations with a 23 × 33 grid yields again H(y) = 0.7135, with d- = 0.7134 and d+ = 0.7144. The value of H(y) has stabilized and a further grid refinement is obviously unnecessary.

5. CONCLUSION

We have seen that value iteration dynamic programming with interpolation can provide precise performance measures for three-stage transfer lines with buffers of virtually any size, while state aggregation fails. Over-relaxation steps can also accelerate the convergence. The numerical illustrations given here are representative of a larger amount of numerical testing that was performed. In principle, this approach can be extended to lines with more than 3 stages, and be a good alternative to simulation.

For lines with many stages, however, interpolation would be more difficult than for the three-stage case, and the amount of computing would soon become overwhelming. Simulation and decomposition probably remain the sole efficient methods to approximate the performance measures for these cases.

For three-stage lines, the time spent by Gershwin and Schick's method is O((N_1 + N_2)^3), while for our method, it is O(αβ) for each iteration. As the number of iterations in our case depends on the precision we need, it is difficult to compare the two methods. For small values of N_1 and N_2, for which Gershwin and Schick's method can be used, the latter is certainly faster. They needed about 1 min of CPU time on their machine to solve example 1, while we spent about 20 min on our VAX for the same problem. But for large values of N_1 and N_2, our method not only becomes competitive with respect to CPU time, but still provides reliable results, while the method of Gershwin and Schick fails, due to numerical instabilities.

Acknowledgements--This research has been supported by NSERC-Canada Grant No. A5463 and FCAC-Québec Grant No. EQ2831. The author wishes to thank Jacques Malenfant and Marc Veilleux, who contributed to the numerical implementation.

REFERENCES

1. H. Akima, A method of bivariate interpolation and smooth surface fitting based on local procedures. Commun. ACM 17(1), 18-20 and 26-31 (1974).
2. D. P. Bertsekas, Dynamic Programming and Stochastic Control. Academic Press (1976).
3. J. A. Buzacott, Automatic transfer lines with buffer stocks. Int. J. Prod. Res. 5, 183-200 (1967).
4. J. A. Buzacott & L. E. Hanifin, Models of automatic transfer lines with inventory banks--a review and comparison. AIIE Trans. 10, 197-207 (1978).
5. S. B. Gershwin, An efficient decomposition method for the approximate evaluation of production lines with finite storage space. Presentation TB8.2 at ORSA/TIMS 84 in San Francisco (May 1984).
6. S. B. Gershwin & I. C. Schick, Modeling and analysis of three-stage transfer lines with unreliable machines and finite buffers. Ops. Res. 31(2), 354-380 (1983).
7. A. Haurie & P. L'Ecuyer, Approximation and bounds in discrete event dynamic programming. Les cahiers du GERAD (G-83-25), École des H.E.C., Montréal (1983).
8. Y. C. Ho, M. A. Eyler & T. T. Chien, A new approach to determine parameter sensitivities of transfer lines. Man. Sci. 29(6), 700- (1983).
9. J. Masso & M. L. Smith, Interstage storages for three stage lines subject to stochastic failures. AIIE Trans. 6(4), 354-358 (1974).
10. E. L. Porteus & J. C. Totten, Accelerated computation of the expected discounted return in a Markov chain. Ops. Res. 26(2), 350-358 (1978).
11. D. Reetz, Solution of a Markovian decision problem by successive over-relaxation. Z. Oper. Res. 17, 29-32 (1973).
12. D. J. White, Dynamic programming, Markov chains, and the method of successive approximations. J. Math. Anal. Appl. 6, 373-376 (1963).
13. H. Yamashima & K. Okamura, Analysis of in-process buffers for multi-stage transfer line systems. Int. J. Prod. Res. 21(2), 183-195 (1983).
