ANNALES DE L'I.H.P., SECTION B

WOLFGANG STADJE
Bin-packing problems for a renewal process
Annales de l'I.H.P., section B, tome 26, no. 1 (1990), p. 207-217
<http://www.numdam.org/item?id=AIHPB_1990__26_1_207_0>
© Gauthier-Villars, 1990, all rights reserved.
Bin-packing problems for a renewal process

Wolfgang STADJE
Universität Osnabrück, Fachbereich Mathematik/Informatik,
Albrechtstr. 28, D-4500 Osnabrück

Vol. 26, n° 1, 1990, p. 207-217. Probabilités et Statistiques
ABSTRACT. - A sequence X_1, X_2, ... of i.i.d. (0, 1]-valued random variables is distributed into blocks successively as follows. A block starting with X_k ends with X_{k+j} iff X_k + ... + X_{k+j} ≤ 1 < X_k + ... + X_{k+j+1}. Analyzing this procedure, which is known as the Next-Fit algorithm for bin packing, we ask, e.g., for the asymptotic behavior of the number of X_i's placed in the n-th block and of the sum of the variables X_n, X_{n-1}, ... which are placed in the same block as X_n at the time X_n enters (both as n → ∞).
Key words : Bin packing, renewal process, Markov chain with continuous state space.
RÉSUMÉ. - A sequence X_1, X_2, ... of independent, identically distributed random variables with values in (0, 1] is distributed successively into blocks. The block beginning with X_k ends with X_{k+j} if and only if X_k + ... + X_{k+j} ≤ 1 < X_k + ... + X_{k+j+1}. For large values of n we study the asymptotic behavior of the number of random variables placed in the n-th block and of the sum of the variables X_n, X_{n-1}, ... which are placed in the same block as X_n before the next variable arrives. The application to the bin-packing problem is described.
Key words: Renewal process, bin packing, Markov chain with a real state space.
Classification A.M.S. : 60 K 05, 60 J 20.
Annales de l'Institut Henri Poincaré - Probabilités et Statistiques - 0246-0203
1. INTRODUCTION
Let X_1, X_2, ... ∈ (0, 1] be independent random variables with the common distribution function F. Let X_0 ∈ [0, 1) be independent of all X_n, n ≥ 1. We consider the partitioning of the corresponding renewal process S_n = X_0 + X_1 + ... + X_n into blocks of total length at most 1. Thus if S_n ≤ 1 < S_{n+1} and S_{n+k} - S_n ≤ 1 < S_{n+k+1} - S_n, the first block has length S_n, the second one has length X_{n+1} + ... + X_{n+k}, and so on. The following questions will be treated:
(a) What is the asymptotic behavior of the number of X_i's placed in the n-th block, as n → ∞?
(b) How large is the probability that a given initial value x of X_0 will eventually lead to an increase of the number of blocks (compared with X_0 = 0)?
(c) How can one describe the limiting behavior of the portion of the current block which is occupied just after the placement of X_n?
A more intuitive description of these problems can be given in terms of a storage procedure. A sequence of storage demands of random sizes X_1, X_2, ... is to be satisfied by stores which are all of size 1. X_0 units of the first store are already occupied. The demands may not be broken into pieces. Hence, if X_0 + X_1 + ... + X_n ≤ 1 and X_0 + X_1 + ... + X_{n+1} > 1, the demands X_1, ..., X_n are successively placed in the first store, X_{n+1} becomes the first amount to be placed in the second store, and so on. The language of "stores" and "demands" will be used throughout.
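The procedure just described can be sketched in a few lines of Python (the function below is our illustration; it is not part of the paper):

```python
def next_fit(demands, capacity=1.0, initial_fill=0.0):
    """Pack demands into unit stores by the Next-Fit rule.

    initial_fill plays the role of X_0: units already occupying the
    first store.  Returns the list of store contents.
    """
    stores = [[]]
    level = initial_fill
    for x in demands:
        if level + x <= capacity:   # demand fits the current store
            stores[-1].append(x)
            level += x
        else:                       # otherwise it opens a new store
            stores.append([x])
            level = x
    return stores

print(next_fit([0.5, 0.6, 0.3]))   # [[0.5], [0.6, 0.3]]
```

Here X_1 = 0.5 fits into the first store, X_2 = 0.6 overflows it and opens a second store, and X_3 = 0.3 joins the second store.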
This storage procedure is closely related to the so-called bin-packing problem, in which for a given set A of real numbers in (0, 1] and an infinite set of bins, each with capacity 1, one wants to "store" the members of A in a minimal number of bins [see, e.g., Johnson (1974), Johnson et al. (1974), and Coffman (1978)]. There is an abundance of applications of the bin-packing problem to computer science and industry, e.g. stock cutting or the assignment of tracks in disks. These and many further examples are discussed in Johnson et al. (1974) and Coffman (1976). In the probabilistic analysis of bin packing the model explained above has been introduced by Coffman et al. (1980) and further studied by Ong et al. (1986) under the name "Next-Fit algorithm".
In Coffman et al. (1980) the expected performance of this rule is bounded by the (unknown) expected optimal number of bins; Ong et al. (1986) show that the ratio of the expected number of bins to the number of stored demands converges to a constant. In these papers one can also find some explicit results in the cases of uniform or truncated exponential demand distributions.
Our analysis of the storage process is based on the investigation of two Markov chains involved. First consider the store in which the n-th demand
X_n is placed. Let U_n be the amount stored there immediately before the (n + 1)-th demand arrives. Taking into account the recursion

    U_{n+1} = (U_n + X_{n+1}) 1{U_n + X_{n+1} ≤ 1} + X_{n+1} 1{U_n + X_{n+1} > 1}    (1.1)

(1_A denotes the indicator function of the event A) it is seen that (U_n)_{n≥0} is a homogeneous Markov chain. Further, let τ_n be the number of demands placed in the n-th store, and define V_1 = U_0, V_{n+1} = X_{τ_1 + ... + τ_n + 1}, n ≥ 1. V_n is the size of the first demand placed in the n-th store. Since obviously V_{n+1} depends on the past only through V_n, the sequence (V_n)_{n≥1} also forms a homogeneous Markov chain.
In section 2 we give a detailed analysis of (U_n)_{n≥0}. It turns out that it is an aperiodic Harris chain whose behavior is independent of the initial value in a very strong sense. The limiting distribution function M(x) of U_n is characterized by an integral equation. Asymptotically, M is the distribution function of the load an arriving demand will find in the current store. Coffman et al. (1980) study the ergodic behavior of the related Markov chain (L_n), where L_n is the sum of all demands placed in the n-th store.
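The recursion for U_n is easy to simulate. The following sketch (our illustration; uniform demands are assumed for concreteness, the paper treats general F) iterates the chain and estimates its limiting distribution function empirically:

```python
import random

def simulate_U(n_steps, u0=0.0, seed=1):
    """Iterate the recursion U_{n+1} = U_n + X_{n+1} if the sum is at most 1
    and U_{n+1} = X_{n+1} otherwise, for uniform demands on (0, 1)."""
    rng = random.Random(seed)
    u, path = u0, []
    for _ in range(n_steps):
        x = rng.random()                     # demand X_{n+1}
        u = u + x if u + x <= 1.0 else x     # recursion (1.1)
        path.append(u)
    return path

path = simulate_U(100_000)
# crude empirical estimate of the limiting distribution function M at 0.5
m_half = sum(v <= 0.5 for v in path) / len(path)
```

The empirical distribution of the visited values approximates M; it is visibly skewed toward high fill levels.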
In section 3 we determine the probability p(u) that a single demand of size u is responsible for the need of an additional store. p(u) can be considered as a measure of the load caused by a demand u which is more informative than its mere size. In fact, if u < u', but p(u) and p(u') are close to each other [even p(u) = p(u') is possible], the sequences of demands starting with u or u', respectively, need the same number of stores with high probability. Therefore the function p(u) in a sense provides a more reasonable basis for assessing the costs due to the storage of a demand.
Finally, section 4 deals with the calculation of the mean number of demands placed together in a store. E(τ_n) obviously is a measure for the "real" capacity of the n-th store with respect to the underlying demand process. An asymptotic formula for E(τ_n) is derived using martingale arguments. In the case of the uniform distribution some results on the sequence (τ_n)_{n≥1} can be found in Coffman et al. (1980).
We note that it may happen that with probability 1 every store is charged with the same number of demands, although the X_i are not constant. The situation is cleared up in the following lemma. [x] denotes the integer part of x ≥ 0. Let W be the set of points of increase of the distribution function F and α = inf W, β = sup W.
LEMMA. - Assume X_0 = 0. τ_1 is almost surely constant iff one of the following two conditions holds:
(i) [α^{-1}] = [β^{-1}].
(ii) α^{-1} = [α^{-1}] = [β^{-1}] + 1 and P(X_1 = α) = 0.
Proof. - We can restrict our attention to the case 0 < α < β. Clearly, [α^{-1}] is an upper bound and [β^{-1}] is a lower bound for the number of demands placed in a store, and we always have P(τ_1 = [β^{-1}]) > 0. If P(X_1 = α) > 0, it follows that P(τ_1 = [α^{-1}]) > 0. Thus in this case condition (i) is necessary and sufficient. Now let P(X_1 = α) = 0. If α^{-1} is not an integer, we again have P(τ_1 = [α^{-1}]) > 0. But if α^{-1} is an integer, τ_1 ≤ α^{-1} - 1 almost surely and P(τ_1 = α^{-1} - 1) > 0. Thus (ii) is necessary and sufficient for τ_1 = const. almost surely.
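Condition (i) can be checked numerically (our example, not the paper's): for demands supported on [0.3, 0.33] one has [α^{-1}] = [β^{-1}] = 3, so every completed store should receive exactly three demands:

```python
import random

rng = random.Random(7)
demands = [rng.uniform(0.30, 0.33) for _ in range(10_000)]

stores, level = [[]], 0.0
for x in demands:
    if level + x <= 1.0:
        stores[-1].append(x)
        level += x
    else:
        stores.append([x])
        level = x

# every completed store holds exactly [1/0.3] = 3 demands: two demands
# occupy at most 0.66, so a third always fits, while four would exceed 1
counts = {len(s) for s in stores[:-1]}
print(counts)   # {3}
```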
In what follows we shall always assume that τ_1 is not almost surely constant, i.e., that neither (i) nor (ii) of the lemma holds.

2. THE MARKOV CHAIN (U_n)_{n≥0}

We retain the notation of section 1 and start by studying the homogeneous Markov chain (U_n)_{n≥0} defined by (1.1) and the initial condition U_0 = u. P^{U_n} denotes the distribution of U_n. For background on Markov chains with continuous state space see Asmussen (1987, chapter VI.3), Laslett et al. (1987), and Tweedie (1975).
THEOREM 1. - (U_n)_{n≥0} is an aperiodic Harris chain. P^{U_n} converges in total variation to the probability measure whose distribution function M is uniquely determined by the equation

    M(x) = ∫_0^1 [1{u ≤ x} F(x - u) + max(F(x) - F(1 - u), 0)] dM(u), 0 ≤ x ≤ 1.    (2.1)
Proof. - (1) (U_n)_{n≥0} is a Harris chain. In order to show this, we must find a probability measure λ on [0, 1], an ε > 0 and a recurrent "regeneration set" R ⊂ [0, 1] such that

    P(U_1 ∈ B | U_0 = u) ≥ ε λ(B)    (2.2)

for all u ∈ R and all Borel subsets B of (0, 1]. First we assume that α > 0. Then define ε = P(X_1 > α), λ(B) = P(X_1 ∈ B | X_1 > α) and R = (1 - α, 1]. It is easily checked that, for u ∈ R, every demand overflows the current store, so that P(U_1 ∈ B | U_0 = u) ≥ P(X_1 ∈ B, X_1 > α) = ε λ(B). Further we have to show that R is recurrent, i.e. P(τ(R) < ∞ | U_0 = u) = 1 for all u ∈ [0, 1], where τ(R) is the first entrance time for R. This follows immediately from the easily verified fact that there exists a γ > 0 with the following property: for every u ∈ [0, 1] there is a j ∈ {1, ..., [α^{-1}]} such that P(U_j ∈ R | U_0 = u) ≥ γ.
Now let α = 0. In this case every interval in [0, 1] is recurrent. Choose an a > 0 such that P(X_1 > a) > 0 and define R', ε' and λ' as R, ε and λ, but with α replaced by a. Then again P(U_1 ∈ B | U_0 = u) ≥ ε' λ'(B) for all u ∈ R' and all Borel subsets B of [0, 1]. (2.2) is proved.
(2) (U_n)_{n≥0} is P^{X_1}-irreducible (according to the definition in Tweedie (1975)), i.e., for every u ∈ [0, 1] and every Borel set B ⊂ [0, 1] satisfying P^{X_1}(B) > 0 there exists a positive integer n such that P(U_n ∈ B | U_0 = u) > 0. The probability in question is not smaller than P(U_{n-1} + X_n > 1, X_n ∈ B | U_0 = u), the probability that the n-th demand opens a new store and lies in B. If α = inf W > 0, this latter probability is positive for n = [α^{-1}(1 - u)] + 1. If α = 0, it is positive for all sufficiently large n.
(3)
(U_n)_{n≥0} is aperiodic. This will follow from a much stronger property which we shall prove next. Let U_n(u) and U_n(u') be the Markov chains generated by the same sequence X_1, X_2, ... via the recurrence relation (1.1), but with different initial values u and u'. Then

    P(U_n(u) = U_n(u') for all sufficiently large n) = 1.    (2.3)

Note that U_k(u) = U_k(u') for some k implies U_n(u) = U_n(u') for all n ≥ k. (2.3) follows if we can show that there exist an η > 0 and a k ∈ N such that

    P(U_k(u) = U_k(u')) ≥ η for all u, u' ∈ [0, 1].    (2.4)

Since U_n(u) ≠ U_n(u') for all n implies U_{jk}(u) ≠ U_{jk}(u') for all j, (2.4) entails [for the moment using the abbreviations U_j' = U_{jk}(u), U_j'' = U_{jk}(u')]

    P(U_n(u) ≠ U_n(u') for all n) ≤ P(U_1' ≠ U_1'', ..., U_j' ≠ U_j'') ≤ (1 - η)^j

for all j ∈ N, and hence P(U_n(u) ≠ U_n(u') for all n) = 0, proving (2.3).
It remains to show how η and k can be found such that (2.4) holds. For this purpose we choose γ, δ ∈ [α, β] with the following properties:
(i) 0 < γ < δ.
(ii) [γ^{-1}] ≠ [δ^{-1}].
(iii) γ and δ are points of increase of F.
We call a demand "small" if its size falls into the interval (γ - ε, γ + ε), and "large" if its size falls into (δ - ε, δ + ε). Here ε > 0 is taken small enough to ensure that i small and j large demands can be placed in the same store iff iγ + jδ ≤ 1 (in the case when γ and δ are atoms of P^{X_1}, only demands of size exactly equal to γ or δ will be called small or large, respectively). This choice of ε makes it possible to treat small and large demands as if they were constantly equal to γ and δ, respectively. By (iii), small and large demands occur with positive probability.
We consider two storage processes in parallel, the first one with initial value u and the second one with initial value u'. Let u < u'. We start by a sequence of small demands of length m, where we choose m large enough to guarantee that in both storage processes at least the second store is maximally filled. Thus in both processes there is a store containing the maximal number N = [γ^{-1}] of small demands. Looking at the last stores of both processes there are two possibilities: either they contain the same number of small demands, and then U_m(u) = U_m(u'); or the last store of the second process takes a lead of, say, l small demands. In this second case we have l ∈ {1, ..., N - 1}. Now we shall show how to diminish l to l - 1 by continuing with small and large demands.
First note that if a store contains the maximal number N of small demands, r = [(1 - Nγ)/(δ - γ)] is the maximal number of small demands in it which can be replaced by large ones. Now continue both processes until in the second process the current store and an additional one are filled. Two subcases must be distinguished:
Subcase 1: r > 0. Replace 2r small demands by large ones. For this purpose choose the r small demands which are placed as the first ones in the last store of the second process, and the r small demands which are the last ones of the preceding store. By the construction, this replacement does not change the number of demands in the stores of the second process, but it does in the first process. The lead of the second process is reduced from l to l - 1.
Subcase 2: r = 0. Replace the small demand which is the last one in the last maximally filled store of the first process by a large demand. This large demand then has to be placed as the first demand in the next store of the first process, so that again the lead of the second process is reduced. These considerations complete the proof of (2.4).
Now let d ∈ N be the period of the Markov chain. Then there is a partition [0, 1] = E_1 ∪ ... ∪ E_d ∪ N of the state space into a transient set N and non-empty sets E_1, ..., E_d for which P(U_1 ∈ E_{i+1} | U_0 = u) = 1 for u ∈ E_i, i = 1, 2, ..., d (where we identify E_{d+1} with E_1 and so on). Assume d ≥ 2. Choose u ∈ E_1, u' ∈ E_2. By (2.3),

    P(U_n(u) = U_n(u') for some n) = 1.    (2.5)

On the other hand, by the choice of u and u', U_n(u) and U_n(u') lie in different sets E_i for every n, so that P(U_n(u) = U_n(u') for some n) = 0. But this contradicts (2.5). It follows that d = 1. (U_n)_{n≥0} is aperiodic.
(4) (U_n)_{n≥0} is ergodic. We only have to verify Doeblin's condition [Tweedie (1975), p. 386]: there exist a probability measure θ on [0, 1], a positive integer n and a δ > 0 such that, whenever θ(B) ≤ δ, we have P(U_n ∈ B | U_0 = u) ≤ 1 - δ for all u ∈ [0, 1]. This follows from an estimate of the same type as (2.2), as claimed.
(5) It now follows from the ergodic theorem for Markov chains that there is a uniquely determined invariant probability measure which is also the limit in total variation of P^{U_n}. Let M be its distribution function. The invariance is characterized by the property that if M is the distribution function of U_0, then M is also the distribution function of U_1; by (1.1) this is precisely equation (2.1). The Theorem is completely proved.
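The coupling property behind the aperiodicity argument can be observed in simulation (our sketch, with uniform demands): two chains driven by the same demand sequence merge at the first step at which both current stores overflow simultaneously, and remain equal from then on:

```python
import random

def coupling_time(u, v, seed=3, max_steps=100_000):
    """Run U_n(u) and U_n(v) with the SAME demand sequence until they meet."""
    rng = random.Random(seed)
    for n in range(1, max_steps + 1):
        x = rng.random()
        u = u + x if u + x <= 1.0 else x
        v = v + x if v + x <= 1.0 else x
        if u == v:      # both stores overflowed at once: equal forever after
            return n
    return None

t = coupling_time(0.1, 0.9)
```

Once the two trajectories meet, (1.1) keeps them identical, which is exactly the observation used to derive (2.3).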
3. THE PROBABILITY THAT A SINGLE DEMAND INCREASES THE NUMBER OF STORES

We compare the cases U_0 = 0 and U_0 = u ∈ [0, 1). If U_0 = u, does this necessitate the use of additional stores? Let p_m(u) be the probability that in the case U_0 = u more stores are needed than in the case U_0 = 0, if m demands must be satisfied. Obviously p_m(u) is non-decreasing with respect to u and with respect to m, and p_m(0) = 0, p_m(1) = 1, p_1(u) = 1 - F(1 - u). Let p(u) = lim_{m→∞} p_m(u).
LEMMA. - If U_0 = u, at most one more store is needed than in the case U_0 = 0.
Proof. - For the moment do not use the first store at all, but start storing X_1, ..., X_m in the second store. Suppose that one ends up with the (n + 1)-th store. Now try to transpose the first X_1 units from the second to the first store. If this is not possible (i.e., u + X_1 > 1), one needs n + 1 stores if U_0 = u. Otherwise proceed by trying to shift the next X_2 units from the second to the first store, and so on. Clearly this procedure leads to the placement of the m demands generated if U_0 = u, which, consequently, needs at most n + 1 stores. But given U_0 = 0, exactly n stores are needed.
THEOREM 2. - p(u) satisfies the integral equation

    p(u) = 1 - F(1 - u) + ∫_0^{1-u} [p(u + x) - p(x)] dF(x).    (3.1)

Proof. - Conditioning on X_1 we obtain

    p_m(u) = 1 - F(1 - u) + ∫_0^{1-u} [p_{m-1}(u + x) - p_{m-1}(x)] dF(x).    (3.2)

For if X_1 > 1 - u, the storage process really starts with the second store, so that one store (the first one) is needed additionally. But if X_1 = x ∈ (0, 1 - u], X_1 is placed in the first store, and the probability that one extra store is needed (relative to X_2, ..., X_m) is equal to the probability that one extra store is necessary under the condition that u + x units of the first store are occupied minus the corresponding probability given that x units of the first store are occupied. This proves (3.2). Since p(u) is the monotone limit of p_m(u) as m → ∞, (3.1) follows from (3.2) by Lebesgue's dominated convergence theorem.
The recursion (3.2) also provides a convenient method for the approximate determination of p(u), since an exact solution of (3.1) usually does not seem possible.
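As a sanity check of this approximation scheme (our code; uniform F assumed, so dF(x) = dx), a few iterations of the recursion already reproduce the closed form p(u) = u(2 - u) quoted in the example of section 4:

```python
# Approximate p(u) for uniform F by iterating
#   p_m(u) = u + integral_0^{1-u} [p_{m-1}(u + x) - p_{m-1}(x)] dx,
# starting from p_1(u) = 1 - F(1 - u) = u.
N = 200
h = 1.0 / N
grid = [i * h for i in range(N + 1)]

p = list(grid)                      # p_1 on the grid
for _ in range(3):
    new = []
    for i, u in enumerate(grid):
        # left Riemann sum for the integral over [0, 1 - u]
        s = sum(p[i + j] - p[j] for j in range(N - i)) * h
        new.append(u + s)
    p = new

# compare with the closed form p(u) = u(2 - u)
err = max(abs(p[i] - u * (2.0 - u)) for i, u in enumerate(grid))
```

For general F one would replace the Riemann sum by a quadrature against dF.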
4. THE MEAN NUMBER OF DEMANDS IN A STORE

τ_n has been defined to be the number of demands placed in the n-th store. In this section we shall derive the limit of the expected value E(τ_n).
THEOREM 3:

    lim_{n→∞} E(τ_n) = ( ∫_0^1 p(x) dF(x) )^{-1}.

Example. - Let us consider the uniform case F(x) = x, 0 ≤ x ≤ 1. Using Theorem 2 it is easily checked that p(u) = u(2 - u). A straightforward calculation then yields lim_{n→∞} E(τ_n) = 3/2. This limit has already been derived by Coffman et al. (1980) by a different method.
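The value 3/2 is easy to confirm by Monte Carlo (our sketch): by ergodicity, the long-run average number of uniform demands per store should be close to 3/2.

```python
import random

rng = random.Random(11)
n_demands = 200_000
stores, level = 1, 0.0
for _ in range(n_demands):
    x = rng.random()                 # uniform demand
    if level + x <= 1.0:
        level += x                   # demand joins the current store
    else:
        stores += 1                  # demand opens a new store
        level = x

avg = n_demands / stores             # long-run average of tau_n; theory: 3/2
```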
Proof of Theorem 3. - Let T_n be the number of the store in which the n-th demand is placed. Let ρ = ∫_0^1 p(x) dF(x) and denote by A_n the σ-field generated by X_0, ..., X_n, T_0, ..., T_n. We shall show that T_n + p(U_n) - ρn is a martingale relative to (A_n)_{n≥0}. Indeed, we have

    E(T_{n+1} + p(U_{n+1}) | A_n) = T_n + 1 - F(1 - U_n) + ∫_0^{1-U_n} p(U_n + x) dF(x) + ∫_{1-U_n}^1 p(x) dF(x) = T_n + p(U_n) + ρ,

where we have used Theorem 2 for the last equation. Next we must consider V_n, the size of the first demand placed in the n-th store. The Markov chain (V_n)_{n≥1} is P^{X_1}-irreducible (this is seen similarly as for (U_n)_{n≥0}) and aperiodic: since N_n → ∞, where N_n is the number of the first demand placed in the n-th store, we even have lim_{n→∞} P(V_n(v) = V_n(v')) = 1 for all v, v' ∈ [0, 1]. Again Doeblin's condition is satisfied: with r = P(X_1 + ... + X_n ≤ 1) < 1 for sufficiently large n and θ(B) = P(X_1 ∈ B), setting δ = (1 - r)/(n + 1) we obtain the required estimate.
Thus P^{V_n} converges in total variation to a probability measure π.
Let X'_1, X'_2, ... be independent random variables with distribution function F which are also independent of X_1, X_2, .... Then τ_n has the same distribution as inf{k ≥ 1 : V_n + X'_1 + ... + X'_k > 1}. It follows easily that τ_n converges in distribution to τ = inf{k ≥ 1 : V + X'_1 + ... + X'_k > 1}, where P^V = π and V is independent of the X'_i. Since τ_n is stochastically smaller than inf{k ≥ 1 : X'_1 + ... + X'_k > 1}, we further have P(τ_n > j) ≤ C q^j for some constants C > 0 and q ∈ (0, 1) [see Shiryayev (1984), p. 601]. This implies that (τ_n)_{n≥1} is uniformly integrable. Consequently lim_{n→∞} E(τ_n) = E(τ).
It remains to determine E(τ). Let σ = inf{k ≥ 1 : X_0 + X_1 + ... + X_k > 1} and assume that X_0 has the distribution π. Then E(τ) = E(σ). We have

    E(T_σ + p(U_σ) - ρσ) = E(T_0 + p(U_0)),    (4.3)

because the optional stopping theorem can be applied to (T_n + p(U_n) - ρn)_{n≥0} and σ; this follows from a well-known result of martingale theory [Shiryayev (1984), p. 459], since E(σ) < ∞. (4.3) is tantamount to

    E(T_σ) + E(p(U_σ)) - ρ E(σ) = E(T_0) + E(p(U_0)).    (4.4)

As T_0 = 1, T_σ = 2 and U_σ has the same distribution as U_0 [note that U_σ = V_2 and U_0 = V_1], (4.4) yields E(σ) = 1/ρ. The Theorem is proved.
REFERENCES
[1] S. ASMUSSEN, Applied Probability and Queues, J. Wiley, New York, 1987.
[2] E. G. COFFMAN Ed., Computer and Job-Shop Scheduling Theory, Wiley, New York, 1976.
[3] E. G. COFFMAN, J. Y. LEUNG and D. W. TING, Bin Packing: Maximizing the Number
of Pieces Packed, Acta Inform., Vol. 9, 1978, pp. 263-271.
[4] E. G. COFFMAN, K. SO, M. HOFRI and A. C. YAO, A Stochastic Model of Bin-Packing, Inform. Control, Vol. 44, 1980, pp. 105-115.
[5] W. FELLER, An Introduction to Probability Theory and Its Applications, Vol. II. 2nd edition, Wiley, New York, etc., 1971.
[6] D. S. JOHNSON, Fast Algorithms for Bin-Packing, J. Comput. Syst. Sci., Vol. 8, 1974,
pp. 272-314.
[7] D. S. JOHNSON, A. DEMERS, J. D. ULLMAN, M. R. GAREY and R. L. GRAHAM, Worst-Case Performance Bounds for Simple One-Dimensional Packing Algorithms, SIAM J. Comput., Vol. 3, 1974, pp. 299-326.
[8] G. M. LASLETT, D. B. POLLARD and R. L. TWEEDIE, Techniques for Establishing Ergodic and Recurrence Properties of Continuous-Valued Markov Chains, Naval Res. Logist. Quart., Vol. 25, 1987, pp. 455-472.
[9] H. L. ONG, Probabilistic Analysis of Bin Packing Heuristics, Oper. Res., Vol. 32, 1986,
pp. 983-998.
[10] A. N. SHIRYAYEV, Probability, Springer, New York, etc., 1984.
[11] R. L. TWEEDIE, Sufficient Conditions for Ergodicity and Recurrence of Markov Chains on a General State Space, Stoch. Proc. Appl., Vol. 3, 1975, pp. 385-403.
(Manuscript received March 10, 1989.)