
THE CM PROPERTY AND COMPARISON OF RISK PROCESSES



ANA MARIA RĂDUCAN and GHEORGHIȚĂ ZBĂGANU

We generalize a theorem from [3] concerning comparisons of queues and apply it to both queues and risk processes.

AMS 2000 Subject Classification: 60G50, 60J15, 60K10, 60K25.

Key words: risk process, queue, service time, interarrival time, waiting time.

1. QUEUING SYSTEMS AND RISK PROCESSES

The simplest G/G/1 queue is defined by two independent sequences of i.i.d. random variables (X_n)_{n≥1} and (Y_n)_{n≥1}: X_n is the service time of the nth customer and Y_n the interarrival time. Among the characteristics of a queue we shall focus on W_n, the waiting time of the nth customer until entering service. It is well known that the sequence (W_n)_{n≥1} is given by the recurrence

(1.1) W_1 = 0, W_{n+1} = (W_n + X_n − Y_n)^+

(see, for instance, [2]), or, equivalently, with the notation ξ_i = X_i − Y_i, i ≥ 1,

(1.2) W_{n+1} = (W_n + ξ_n)^+, ∀n ≥ 1.
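As a quick numerical illustration (an editorial sketch, not part of the paper), the recurrence (1.2) is straightforward to simulate; the exponential rates a, b below are illustrative choices, not values from the text.

```python
import random

def simulate_waiting_times(n_customers, n_paths, a=2.0, b=1.0, seed=0):
    """Monte Carlo sketch of G/G/1 waiting times via the Lindley
    recursion W_1 = 0, W_{n+1} = (W_n + X_n - Y_n)^+ of (1.1)-(1.2).
    Here X_n ~ E_a (service) and Y_n ~ E_b (interarrival) are assumed
    exponential, so the traffic intensity is rho = EX/EY = b/a."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        w = 0.0                          # W_1 = 0
        for _ in range(n_customers - 1):
            x = rng.expovariate(a)       # service time X_n
            y = rng.expovariate(b)       # interarrival time Y_n
            w = max(w + x - y, 0.0)      # recursion (1.2)
        finals.append(w)
    return finals

ws = simulate_waiting_times(50, 2000)
print(sum(1 for w in ws if w > 1.0) / len(ws))   # estimate of P(W_50 > 1)
```

With b/a < 1 the sequence W_n has a weak limit, so the estimated tail stabilizes as n_customers grows.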

Remark 1.1. Let us denote by G_n the distribution of W_n and by F the distribution of ξ_n. It was proved in [3] that the distributions G_n follow the recurrence G_0 = δ_0, G_1 = F^{(+)}, G_{n+1} = (G_n ∗ F)^{(+)}, ∀n ≥ 1, where F^{(+)} = F ∘ f^{−1}, with f(x) = x^+. Explicitly, F^{(+)} = qδ_0 + pF|_{[0,∞)}, where q = F((−∞, 0)), p = 1 − q and F|_{[0,∞)}(A) = F(A ∩ [0,∞)) / F([0,∞)).

The classical risk process has the form

(1.3) V(t) = S_{N(t)} − ct,

where S_0 = 0, S_n = X_1 + X_2 + ··· + X_n for n ≥ 1, and N(t) = max{k | T_k ≤ t}, with T_0 = 0 and T_n = σ_1 + σ_2 + ··· + σ_n, n ≥ 1.

MATH. REPORTS 10(60), 3 (2008), 277–287


Here Xn is the value of the nth claim, Tn is the arrival moment of the nth claim, and σn is the interarrival time. The constant c is the intensity of the cash flow coming to the insurer.

Remark 1.2. If we are interested only in the ruin moments, they may occur only at the moments T_n. But V(T_n) = S_n − c(σ_1 + σ_2 + ··· + σ_n) = ξ_1 + ξ_2 + ··· + ξ_n, where ξ_i = X_i − Y_i and Y_i = cσ_i. The distribution of ξ_i is F_X ∗ F_{−Y}. The maximum aggregate loss after n transactions is the process (L_n)_{n≥0} defined by L_n = max(0, ξ_1, ξ_1 + ξ_2, ..., ξ_1 + ξ_2 + ··· + ξ_n). It is well known (see, for instance, [8]) that

(1.4) L_0 = 0, L_{n+1} =_d (L_n + ξ_n)^+, ∀n ≥ 0,

where =_d denotes equality in distribution.

Therefore, the pair of distribution functions ⟨F_X, F_Y⟩ can be interpreted both from the point of view of a queue and from that of a risk process.

Among the many possibilities of comparing queues and risk processes, we focus on two of them.

Definition 1. Consider the queues ⟨F_X, F_Y⟩ and ⟨F_{X'}, F_{Y'}⟩. We say that

– ⟨F_X, F_Y⟩ is better than ⟨F_{X'}, F_{Y'}⟩ (and denote this relation by ⟨F_X, F_Y⟩ ≺≺ ⟨F_{X'}, F_{Y'}⟩, or ⟨X, Y⟩ ≺≺ ⟨X', Y'⟩) iff

(1.5) P(W_n > x) ≤ P(W'_n > x) ⇔ G_n(x) ≥ G'_n(x), ∀n ≥ 1, ∀x ≥ 0,

where (W'_n) is the sequence constructed by the recurrence (1.2) from the sequences (X'_n)_{n≥1} of i.i.d. random variables with distribution F_{X'} and (Y'_n)_{n≥1} with distribution F_{Y'};

– ⟨F_X, F_Y⟩ is asymptotically better than ⟨F_{X'}, F_{Y'}⟩ iff

(1.6) P(W > x) ≤ P(W' > x) ⇔ G(x) ≥ G'(x),

where W and W' are the limits of (W_n)_n and (W'_n)_n as n → ∞. It is also known (see, for instance, [2]) that W_n has a weak limit if and only if EX_n < EY_n (which here is equivalent to Eξ_1 < 0).

Intuitively, a queue is better than another one if the waiting time of the nth customer is smaller in the first queue than in the second one, for each n ≥ 1. If the waiting time in the first queue tends to be smaller as n → ∞, we say that the first queue is asymptotically better than the second one.

From the point of view of an insurer, the safest risk process is the one with the smallest ruin probability after n transactions, for each n ≥ 1. If one risk process has a smaller infinite-horizon ruin probability than another one, we say that the first process is asymptotically safer than the second one. The reason


is that

(1.7) ⟨F_X, F_Y⟩ ≺≺ ⟨F_{X'}, F_{Y'}⟩ ⇔ P(L_n > u) ≤ P(L'_n > u) ⇔ ψ_n(u) ≤ ψ'_n(u), ∀n ≥ 1, ∀u ≥ 0,

where ψ_n(u) = P(L_n > u) is the ruin probability after n transactions with initial capital u, and

(1.8) ⟨F_X, F_Y⟩ ≺ ⟨F_{X'}, F_{Y'}⟩ ⇔ P(L > u) ≤ P(L' > u) ⇔ ψ(u) ≤ ψ'(u), ∀u ≥ 0,

where ψ(u) is the infinite-horizon ruin probability.

An alternative way to describe these concepts is as follows. It is known that the random variable X is said to be stochastically dominated by the random variable Y (see, for instance, [5], [6]) iff P(X > x) ≤ P(Y > x), ∀x ≥ 0. Denote this by X ≺st Y, or F_X ≺st F_Y. Let now (ξ_n)_{n≥1} and (ξ'_n)_{n≥1} be two sequences of i.i.d. random variables such that

(1.9) ξ_n ∼ F, L_0 = 0, L_{n+1} = (L_n + ξ_n)^+ and ξ'_n ∼ F', L'_0 = 0, L'_{n+1} = (L'_n + ξ'_n)^+.

Notation. We write F ≺++ F' (or ξ ≺++ ξ') iff L_n ≺st L'_n, ∀n ≥ 0, and F ≺+ F' (or ξ ≺+ ξ') iff L ≺st L'.

Coming back to our previous definitions, we see that

(1.10) ⟨F_X, F_Y⟩ ≺≺ ⟨F_{X'}, F_{Y'}⟩ ⇔ ξ ≺++ ξ' ⇔ G_n ≺st G'_n, ∀n ≥ 1,

and

(1.11) ⟨F_X, F_Y⟩ ≺ ⟨F_{X'}, F_{Y'}⟩ ⇔ ξ ≺+ ξ' ⇔ G ≺st G'.

In [3] the following result was proved.

Theorem 1.1. Let E_a denote the exponential distribution with parameter a. If X ∼ E_a, Y ∼ E_b, X' ∼ E_{a'}, Y' ∼ E_{b'}, then

(1.12) ⟨F_X, F_{−Y}⟩ ≺≺ ⟨F_{X'}, F_{−Y'}⟩ ⇔ a ≥ a' and ρ ≤ ρ',

where ρ = EX/EY and ρ' = EX'/EY'.

Remark 1.3. In terms of queues, the ratio EX/EY is called the traffic intensity. In ruin theory the same ratio is written 1/(1+θ), where θ is called the loading factor. The meaning of Theorem 1.1 is that, in the world of exponential distributions, the following assertion holds: if we compare two queues and both the service times and the traffic intensity are smaller in the first one, then the first one is better. Or, in terms of risk theory: if the claims are smaller in the first risk process and its loading factor is greater, then the first risk process is safer.

Our purpose is to generalize this fact to other distributions.

The proof of Theorem 1.1 relies on a remarkable property of the exponential distribution, namely

E_a ∗ E_{−b} = pE_a + qE_{−b},

with p = EX/(EX + EY) and q = EY/(EX + EY). Here we denote by E_{−b} the distribution function with density e_{−b}(x) = b e^{bx} 1_{(−∞,0)}(x), x ∈ R. The exact meaning of this notation is as follows: if E_b is the distribution function of the random variable X, then E_{−b} is the distribution function of the random variable −X.
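This identity is easy to check numerically. The sketch below (an editorial addition; the rates a, b and the test points are illustrative assumptions) compares the empirical tail of X − Y with the tail of the mixture pE_a + qE_{−b}.

```python
import math
import random

rng = random.Random(1)
a, b = 3.0, 2.0                  # illustrative rates: X ~ E_a, Y ~ E_b
p = b / (a + b)                  # mixture weight p = EX/(EX + EY) = b/(a + b)

n = 200_000
samples = [rng.expovariate(a) - rng.expovariate(b) for _ in range(n)]

for t in (0.2, 0.5, 1.0):
    empirical = sum(1 for z in samples if z > t) / n
    mixture = p * math.exp(-a * t)   # tail of p*E_a + q*E_{-b} at t > 0
    print(f"t={t}: empirical {empirical:.4f} vs mixture {mixture:.4f}")
```

The two tails agree up to Monte Carlo error, as the convolution identity predicts.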

Definition 2. We say that the distribution functions F and G are conjugated iff

(1.13) F ∗ G = pF + (1−p)G for some p ∈ (0,1).

Remark 1.4. This property was first studied by D. Dugué in 1939, in terms of characteristic functions. He found two pairs (ϕ_1, ϕ_2) such that ϕ_1(t)ϕ_2(t) = (ϕ_1(t) + ϕ_2(t))/2. In later papers (see, for example, [4] or [7]) one can find other examples, though with incomplete or incorrect proofs.

2. GENERALIZATION

We prove the following result.

Theorem 2.1. Let X, X' > 0 (a.s.) and Y, Y' ≥ 0 be such that X ∼ F_1, X' ∼ F'_1, −Y ∼ F_2, −Y' ∼ F'_2. Assume that the pairs (F_1, F_2) and (F'_1, F'_2) are conjugated and that ρ, ρ' < 1, where ρ = EX/EY and ρ' = EX'/EY'. Then

(i) ρ ≤ ρ' and F_1 ≺st F'_1 ⇒ F_1∗F_2 ≺++ F'_1∗F'_2 ⇒ ρ ≤ ρ' and F̄_1 ≤ (p'/p)F̄'_1;

(ii) F_1∗F_2 ≺+ F'_1∗F'_2 if and only if

Σ_{n=0}^{∞} (1−ρ)ρ^n Γ_n ≺st Σ_{n=0}^{∞} (1−ρ')ρ'^n Γ'_n.

Here Γ_n = F_1^{∗n} and Γ'_n = (F'_1)^{∗n}.

The proof will be divided into several steps. Let us fix some notation: F = F_1∗F_2, F' = F'_1∗F'_2, G_0 = G'_0 = δ_0, G_{n+1} = (G_n∗F)^{(+)}, G'_{n+1} = (G'_n∗F')^{(+)}; Γ is the row vector (Γ_0, Γ_1, Γ_2, ...), Γ' is the row vector (Γ'_0, Γ'_1, Γ'_2, ...), and e is the column vector (1, 0, 0, ...)'.


Step 1. Lemma 2.2. If F = F_1∗F_2 = pF_1 + qF_2, then G_n = ΓQ^n e, where

(2.1) Q = Q(p) =

q  q²  q³  ...  qⁿ⁺¹  ...
p  pq  pq² ...  pqⁿ   ...
0  p   pq  ...  pqⁿ⁻¹ ...
0  0   p   ...  pqⁿ⁻² ...
....................

that is, with rows and columns indexed from 0, Q_{0,n} = q^{n+1}, Q_{i,n} = pq^{n+1−i} for 1 ≤ i ≤ n+1, and Q_{i,n} = 0 for i > n+1.

Proof. For n = 1 we have ΓQe = qΓ_0 + pΓ_1 = qδ_0 + pF_1 = (qF_2 + pF_1)^{(+)} = F^{(+)} = G_1, hence (2.1) is true for n = 1. Next, it is clear that G_n has the form

(2.2) G_n = α_{n,0}Γ_0 + α_{n,1}Γ_1 + ··· + α_{n,n}Γ_n, ∀n ≥ 0,

where the α_{n,j} are some real numbers. To find them, notice that

(2.3) G_n ∗ F = (Σ_{k=0}^{n} α_{n,k}Γ_k) ∗ (qF_2 + pF_1) = q Σ_{k=0}^{n} α_{n,k}(Γ_k ∗ F_2) + p Σ_{k=0}^{n} α_{n,k}Γ_{k+1}.

On the other hand, it is easy to check that

(2.4) Γ_k ∗ F_2 = pΓ_k + pqΓ_{k−1} + pq²Γ_{k−2} + ··· + pq^{k−1}Γ_1 + q^k F_2.

Taking this into account, we see that

G_{n+1} = Γ_0(α_{n,0}q + α_{n,1}q² + ··· + α_{n,n−1}q^n + α_{n,n}q^{n+1}) +
+ Γ_1(α_{n,0}p + α_{n,1}pq + ··· + α_{n,n−1}pq^{n−1} + α_{n,n}pq^n) +
+ Γ_2(α_{n,1}p + α_{n,2}pq + ··· + α_{n,n−1}pq^{n−2} + α_{n,n}pq^{n−1}) +
···
+ Γ_n(α_{n,n−1}p + α_{n,n}pq) + Γ_{n+1}(α_{n,n}p),


which can be written as G_{n+1} = (Γ_0, Γ_1, Γ_2, ...) · Q · (α_{n,0}, α_{n,1}, ..., α_{n,n}, 0, 0, ...)'. In other words, the coefficient vectors satisfy α^{(n+1)} = Qα^{(n)} with α^{(0)} = e, hence G_n = ΓQ^n e. □
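As a quick check (an editorial sketch, not part of the original proof): in the lattice case of Step 3 below one has F_1 = δ_1, so Γ_k = δ_k and Lemma 2.2 says that the distribution of the chain after n steps is exactly the coefficient vector Q^n e. The script below verifies this against a direct computation of the Lindley recursion; the truncation size, p and the number of steps are arbitrary illustrative choices.

```python
def Q_matrix(p, size):
    """Truncated Q(p) of (2.1), rows/columns indexed from 0:
    Q[0][n] = q^{n+1}, Q[i][n] = p*q^{n+1-i} for 1 <= i <= n+1, else 0."""
    q = 1 - p
    Q = [[0.0] * size for _ in range(size)]
    for n in range(size):
        Q[0][n] = q ** (n + 1)
        for i in range(1, min(n + 2, size)):
            Q[i][n] = p * q ** (n + 1 - i)
    return Q

def matvec(Q, v):
    return [sum(row[k] * v[k] for k in range(len(v))) for row in Q]

def lindley_step(dist, p):
    """One step of Z_{n+1} = (Z_n + eta)^+ with P(eta = 1) = p and
    P(eta = 1 - j) = p*q^j for j >= 1 (the chain of Step 3)."""
    q = 1 - p
    size = len(dist)
    new = [0.0] * size
    for s, mass in enumerate(dist):
        for j in range(s + 1):               # lands at s + 1 - j >= 1
            if s + 1 - j < size:
                new[s + 1 - j] += mass * p * q ** j
        new[0] += mass * q ** (s + 1)        # all j >= s + 1 land at 0
    return new

p, size, steps = 0.3, 12, 8
Q = Q_matrix(p, size)
v = [1.0] + [0.0] * (size - 1)               # e = (1, 0, 0, ...)'
dist = v[:]
for _ in range(steps):
    v = matvec(Q, v)                         # coefficient vector Q^n e
    dist = lindley_step(dist, p)             # law of Z_n computed directly
print(max(abs(a - b) for a, b in zip(v, dist)))   # should be ~0
```

The two computations agree to floating-point accuracy, which is a useful sanity check on the shape of Q(p).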

Step 2. Let P = (p_{i,n}) be a column-stochastic matrix whose columns P_n are stochastically increasing, which means that

Σ_{i≥k} p_{i,n} ≤ Σ_{i≥k} p_{i,n+1}, ∀k ≥ 1, ∀n ≥ 0 (or P_n ≺st P_{n+1}, ∀n ≥ 0).

We call such a matrix monotonous. We say that P is dominated by Q, and write P ≺st Q, if P_n ≺st Q_n, ∀n ≥ 0.

Remark 2.1. If P, P', Q, Q' are monotonous matrices such that P ≺st Q and P' ≺st Q', then PP' ≺st QQ'. Indeed, the nth column of PP' is U_n = (Σ_{k≥1} p_{1,k}p'_{k,n}, Σ_{k≥1} p_{2,k}p'_{k,n}, ...)' = (u_{1,n}, u_{2,n}, ...)', the nth column of QQ' is V_n = (Σ_{k≥1} q_{1,k}q'_{k,n}, Σ_{k≥1} q_{2,k}q'_{k,n}, ...)' = (v_{1,n}, v_{2,n}, ...)', and it is easy to prove that Σ_{j≥0} u_{l+j,n} ≤ Σ_{j≥0} v_{l+j,n}, ∀l ≥ 1, ∀n ≥ 1. Indeed, let s_{l,k} = p_{l,k} + p_{l+1,k} + ··· and t_{l,k} = q_{l,k} + q_{l+1,k} + ···. Then S_1 := Σ_{j≥0} u_{l+j,n} = Σ_{k≥1} p'_{k,n}s_{l,k} and S_2 := Σ_{j≥0} v_{l+j,n} = Σ_{k≥1} q'_{k,n}t_{l,k}. From the hypothesis, s_{l,k} ≤ t_{l,k} for every k; moreover, k ↦ t_{l,k} is nondecreasing (Q is monotonous) and P'_n ≺st Q'_n, so that Σ_{k≥1} p'_{k,n}t_{l,k} ≤ Σ_{k≥1} q'_{k,n}t_{l,k}. It follows that S_1 ≤ Σ_{k≥1} p'_{k,n}t_{l,k} ≤ S_2, hence PP' ≺st QQ'.

We intend to prove that G_n ≺st G'_n, i.e., ΓQ^n(p)e ≺st Γ'Q^n(p')e. First, we notice that Q(p) ≺st Q(p'). Indeed, a comparison of the nth columns of these matrices shows that the above domination condition is equivalent to

(2.5) q^{j+1} ≥ q'^{j+1}, ∀j = 1, ..., n.

But this is obvious from the hypothesis ρ ≤ ρ' (which implies p ≤ p', hence q ≥ q'). Then, according to Remark 2.1, Q^n(p) ≺st Q^n(p'), ∀n ≥ 1. It is also true that Γ_n ≺st Γ'_n for every n, since F_1 ≺st F'_1. Therefore G_n ≺st G'_n, ∀n ≥ 1, which is equivalent by definition to F ≺++ F'. Thus, the first implication in (i) is proved.

To prove the second implication, remark that F ≺++ F' ⇒ G_1 ≺st G'_1. Moreover, G_1 = (F_1∗F_2)^{(+)} = (pF_1 + qF_2)^{(+)} = pF_1 + qδ_0 and G'_1 = p'F'_1 + q'δ_0.


To find p and q, notice that E(X−Y) is the expectation of F_1∗F_2, while the expectation of pF_1 + qF_2 equals pEX − qEY; hence EX − EY = pEX − qEY and, since p + q = 1,

(2.6) p = EX/(EX + EY) = ρ/(ρ + 1), q = EY/(EX + EY) = 1/(ρ + 1).

But G_1 ≺st G'_1 means that G_1(x) ≥ G'_1(x) for any x > 0, i.e., pF̄_1(x) ≤ p'F̄'_1(x), ∀x > 0, where F̄_1 = 1 − F_1 denotes the survival function. Letting x → 0 and taking into account that F̄_1(0+) = F̄'_1(0+) = 1 (recall that X, X' > 0 a.s.), we see that p ≤ p', hence ρ ≤ ρ'. Moreover, F̄_1(x) ≤ (p'/p)F̄'_1(x).

Step 3. Let us consider a Markov chain Z such that Z_{n+1} = (Z_n + η_n)^+, where the η_n are i.i.d. random variables with

P(η_1 = 1) = p, P(η_1 = 1−j) = pq^j, j ≥ 1.

We can write ξ_n := 1 − η_n ∼ Negbin(1, p), n ≥ 1. In our case ruin is not certain if Eη_n < 0, which is equivalent to q > p.

The transition matrix of Z is

q   p    0    0   ...
q²  pq   p    0   ...
q³  pq²  pq   p   ...
q⁴  pq³  pq²  pq  ...
....................

and its transpose is precisely the matrix Q = Q(p) of Lemma 2.2. It is known that if Z_n → Z as n → ∞, then the distribution of Z is a distribution Π = (π_0, π_1, ...) which satisfies the equation ΠQ' = Π, or, explicitly, π_n = pπ_{n−1} + pqπ_n + pq²π_{n+1} + ···, ∀n ≥ 1, together with π_0 = Σ_{i≥0} q^{i+1}π_i. It follows that qπ_{n+1} = π_n − pπ_{n−1} for any n ≥ 1, hence π_n = (1−ρ)ρ^n, ∀n ≥ 0, with ρ = p/q. Otherwise written, Π = Negbin(1, 1−ρ). In this way, we proved

Lemma 2.3. We have

(2.7) G = Σ_{n=0}^{∞} (1−ρ)ρ^n Γ_n.

Now, the proof of (ii) from Theorem 2.1 is obvious.
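The geometric stationary law Π = Negbin(1, 1−ρ) of Step 3 can also be observed empirically. The sketch below (an editorial addition; p, the run length and the burn-in are illustrative choices) simulates the chain Z and compares the occupation frequencies with π_n = (1−ρ)ρ^n.

```python
import random

def eta_sample(rng, p):
    """eta = 1 - K with K geometric on {0,1,2,...}: P(K = k) = p*q^k."""
    k = 0
    while rng.random() >= p:
        k += 1
    return 1 - k

rng = random.Random(7)
p = 0.3
rho = p / (1 - p)                       # rho = p/q < 1 since q > p

z, counts = 0, {}
steps, burn = 400_000, 10_000
for t in range(steps):
    z = max(z + eta_sample(rng, p), 0)  # Z_{n+1} = (Z_n + eta_n)^+
    if t >= burn:
        counts[z] = counts.get(z, 0) + 1

total = steps - burn
for n in range(4):                       # compare with pi_n = (1 - rho)*rho^n
    print(n, counts.get(n, 0) / total, (1 - rho) * rho ** n)
```

The empirical frequencies settle near the geometric weights, as Lemma 2.3 predicts for this lattice case.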

Here are some particular cases of (2.7).

Lemma 2.4. Let us denote ᾱ = 1 − α for any α ∈ (0,1). Then

a) If X ∼ exp(a), then G = ρ̄δ_0 + ρ exp(aρ̄) = ρ̄δ_0 + ρ exp(a − b).

b) If X ∼ Negbin(1, α), then G = ρ̄δ_0 + ρ Negbin(1, αρ̄/(1 − αρ)).

c) If X ∼ Geometric(α), then G = ρ̄δ_0 + ρ Geometric(αρ̄).

d) If X ∼ B(1, α), then G = Negbin(1, ρ̄/(1 − ρᾱ)), where B(1, α) = ᾱδ_0 + αδ_1.


Proof. Since Γ_n = F_X^{∗n}, if m_F is the moment generating function corresponding to the distribution function F then, according to Lemma 2.3, m_G(t) = (1−ρ) Σ_{n=0}^{∞} ρ^n (m_{F_X}(t))^n, hence m_G(t) = (1−ρ)/(1 − ρ m_{F_X}(t)).

a) m_G(t) = (1−ρ)/(1 − ρ·a/(a−t)) = (1−ρ)(a−t)/(a − t − ρa) = ρ̄ + ρ·aρ̄/(aρ̄ − t) ⇒ G = ρ̄δ_0 + ρ exp(aρ̄).

b) m_G(t) = (1−ρ)/(1 − ρ·α/(1−ᾱe^t)) = (1−ρ)(1−ᾱe^t)/(1 − ρα − ᾱe^t) = ρ̄ + ρ·β/(1−β̄e^t) ⇒ G = ρ̄δ_0 + ρ Negbin(1, β) with β = αρ̄/(1−αρ).

c) m_G(t) = (1−ρ)/(1 − ρ·αe^t/(1−ᾱe^t)) = (1−ρ)(1−ᾱe^t)/(1 − (ᾱ+ρα)e^t) = ρ̄ + ρ·βe^t/(1−β̄e^t) with β = αρ̄, hence G = ρ̄δ_0 + ρ Geometric(αρ̄).

d) m_G(t) = (1−ρ)/(1 − ρ(ᾱ + αe^t)) = (1−ρ)/((1−ρᾱ) − ραe^t) = [ρ̄/(1−ρᾱ)] / (1 − (ρα/(1−ρᾱ))e^t) ⇒ G = Negbin(1, ρ̄/(1−ρᾱ)). □
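Case a) can be double-checked mechanically by comparing the two sides of the mgf identity at a few points t < a − b (an editorial sketch; the rates are illustrative, and we use ρ = b/a so that aρ̄ = a − b):

```python
a, b = 3.0, 1.0
rho = b / a                                   # traffic intensity EX/EY

for t in (0.1, 0.5, 1.0):                     # need t < a*(1 - rho) = a - b
    lhs = (1 - rho) / (1 - rho * a / (a - t))          # m_G from Lemma 2.3
    rhs = (1 - rho) + rho * (a - b) / (a - b - t)      # mgf of (1-rho)*delta_0 + rho*exp(a-b)
    assert abs(lhs - rhs) < 1e-12
print("case a) mgf identity verified")
```

The same kind of spot check works for cases b), c), d) with the discrete mgfs.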

Remark 2.2. Formulae b), c), d) from Lemma 2.4 can be slightly generalized as follows. Let h > 0 be arbitrary. Denote by

– Geometric(α, h) the distribution of hX if X ∼ Geometric(α);

– Negbin(1, α, h) the distribution of hX if X ∼ Negbin(1, α);

– B(k, α, h) the distribution of hX if X ∼ B(k, α).

Then b), c), d) from Lemma 2.4 become

b') If X ∼ Negbin(1, α, h), then G = ρ̄δ_0 + ρ Negbin(1, αρ̄/(1−αρ), h).

c') If X ∼ Geometric(α, h), then G = ρ̄δ_0 + ρ Geometric(αρ̄, h).

d') If X ∼ B(1, α, h), then G = Negbin(1, ρ̄/(1−ρᾱ), h).

Remark 2.3. We have seen that, for X, X' > 0, ρ ≤ ρ' and X ≺st X' imply G_1 ≺st G'_1. But is it still true that G_1 ≺st G'_1 implies ρ ≤ ρ' and X ≺st X'? The answer is no. In Examples 1 and 2 below the implication does hold, but in Example 3 just the opposite, X' ≺st X, holds. In Example 4 there is no stochastic domination at all.

Example 1. X ∼ Geometric(α, h), Y ∼ Negbin(1, β, h). It is known (see [7]) that (F_X, F_{−Y}) are conjugated, with p = β/(α + β − αβ). We have G_1 = pF_X + qδ_0 and G'_1 = p'F_{X'} + q'δ_0, hence

G_1 ≺st G'_1 ⇔ pᾱ^n ≤ p'ᾱ'^n, ∀n ≥ 0 ⇔ α ≥ α' and p ≤ p'.

We notice that p ≤ p' ⇔ ρ ≤ ρ' (indeed, p = ρ/(ρ+1) by (2.6)). In this case, F_X ≺st F_{X'} ⇔ F̄_X(x) ≤ F̄_{X'}(x) ⇔ α ≥ α'. Now it is clear that G_1 ≺st G'_1 ⇒ ρ ≤ ρ' and X ≺st X'.


Example 2. X ∼ Negbin(1, α, h), Y ∼ Geometric(β, h). In this case (F_X, F_{−Y}) are conjugated with p = ᾱβ/(α + β − αβ); the computations are similar to those from Example 1.

Example 3. Let X take the values 0 and h with probabilities ᾱ and α, and let −Y take the values 0, −h, −2h, −3h, ... with probabilities γ, γ̄β̄, γ̄β̄β, γ̄β̄β², ... . It was proved in [7] that (F_X, F_{−Y}) are conjugated if and only if γ = αβ̄. In this case, F = F_{X−Y} = pF_X + qF_{−Y} with p = γ = αβ̄.

The reader can check that G_1 = (1 − α²β̄)δ_0 + α²β̄δ_h = B(1, αp, h) and ρ = EX/EY = αβ̄/γ̄ = αβ̄/(1 − αβ̄) = p/(1−p). If X' and Y' are of the same type as X and Y, but with parameters α', β', γ', h' instead, we have

G_1 ≺st G'_1 ⇔ h ≤ h' and α²β̄ ≤ α'²β̄' ⇔ h ≤ h' and αp ≤ α'p'.

On the other hand, X ≺st X' ⇔ h ≤ h' and α ≤ α'. As ρ ≤ ρ' ⇔ p ≤ p', this fact confirms once more the implication "F_X ≺st F_{X'}, ρ ≤ ρ' ⇒ G_1 ≺st G'_1".

To find a counterexample, let us take α = 1/2, α' = 1/3, β = 3/4, β' = 1/4, h = h'. In this case ρ = 1/7, ρ' = 1/3, p = 1/8, p' = 1/4. Notice that X' ≺st X, but it is not true that X ≺st X'. Moreover, Γ_n = B(n, 1/2, h) and Γ'_n = B(n, 1/3, h). The reader can see that G_1 = (15/16)δ_0 + (1/16)δ_h = B(1, 1/16, h) and G'_1 = (11/12)δ_0 + (1/12)δ_h = B(1, 1/12, h). Of course, G_1 ≺st G'_1. More than that: not only G_1 ≺st G'_1, but also G ≺st G'. Indeed, according to Lemma 2.4 d), G = (1−ρ) Σ_{n≥0} ρ^n B(n, α, h) = Negbin(1, ρ̄/(1−ρᾱ), h) and G' = Negbin(1, ρ̄'/(1−ρ'ᾱ'), h). For h = h' = 1 we get, in our particular case, G = Negbin(1, 12/13) and G' = Negbin(1, 12/14). Obviously, G ≺st G'.

Remark 2.4. We believe that, at least for distributions of this type, G_1 ≺st G'_1 and G ≺st G' do imply G_n ≺st G'_n, ∀n ≥ 1. If that were true, we would have an example in which X − Y ≺++ X' − Y' but it would not be true that X ≺st X'!

So, we know that G_1 = B(1, αp), G'_1 = B(1, α'p'), G = Negbin(1, (1−ρ)/(1−ρᾱ)) = Negbin(1, (1−2p)/(1−2p+αp)) and G' = Negbin(1, (1−2p')/(1−2p'+α'p')), and G_1 ≺st G'_1 ⇔ αp ≤ α'p', G ≺st G' ⇔ αp/(1−2p) ≤ α'p'/(1−2p'). We conjecture that

(2.8) αp ≤ α'p' and αp/(1−2p) ≤ α'p'/(1−2p')

implies

(2.9) G_n ≺st G'_n, ∀n ≥ 1.


We were unable to disprove it on the computer: we tried tens of randomly chosen pairs (α, p), (α', p') satisfying (2.8), for n from 1 to 100. However, we are able to prove that (2.9) holds at least for n = 2.

Let us thus prove that G_1 ≺st G'_1 and G ≺st G' imply G_2 ≺st G'_2. This is equivalent to (2.8) ⇒ (2.10), where

(2.10) αp(1 − p² + 2p − αp) ≤ α'p'(1 − p'² + 2p' − α'p').

Let us consider the function f : (0, 1/2) × (0, 1) → D, f(p, α) = (αp, αp/(1−2p)) (our conditions are 0 < p, p' < 1/2 and 0 < α, α' < 1). The system x = αp, y = αp/(1−2p) has the unique solution

p = (1 − x/y)/2 = (y − x)/(2y), α = x/p = 2xy/(y − x).

If we denote by h(x, y) the quantity αp(1 − p² + 2p − αp) we are interested in, then h(x, y) = x(1 + 2p − p² − x) with p = (y − x)/(2y), where x, y satisfy

(2.11) 0 < x < y, y − x − 2xy > 0.

A more convenient expression of h is

(2.12) 4h(x, y) = 7x − 4x² − 2x²/y − x³/y².

We prove that h is increasing by verifying that its partial derivatives are positive. These are

4 ∂h/∂x = 7 − 8x − 4x/y − 3x²/y² = [3(y² − x²) + 4y(y − x − 2xy)]/y²,

4 ∂h/∂y = 2x²/y² + 2x³/y³.

Now, applying (2.11), it is clear that ∂h/∂x ≥ 0 and ∂h/∂y ≥ 0. Therefore G_1 ≺st G'_1 and G ≺st G' ⇒ G_2 ≺st G'_2.

Example 4. X ∼ Geometric(1−e⁻², 2), Y ∼ Negbin(1, e⁻⁴, 2), X' ∼ Geometric(1−e⁻³, 3), Y' ∼ Negbin(1, e⁻⁴, 3). In this case

ρ = EX/EY = [e²/(e²−1)] / (e⁴−1) = e²/[(e²−1)(e⁴−1)], p = ρ/(ρ+1) = e²/[e² + (e²−1)(e⁴−1)],

ρ' = EX'/EY' = [e³/(e³−1)] / (e⁴−1) = e³/[(e³−1)(e⁴−1)], p' = ρ'/(ρ'+1) = e³/[e³ + (e³−1)(e⁴−1)].


Moreover, for x > 0 we have Ḡ_1(x) = pF̄_X(x) = pe^{−2[x/2]} and Ḡ'_1(x) = p'F̄_{X'}(x) = p'e^{−3[x/3]}, where [·] denotes the integer part. Consequently, G_1 ≺st G'_1 is equivalent to pe^{−2[x/2]} ≤ p'e^{−3[x/3]}, ∀x > 0 ⇔ sup_x e^{3[x/3]−2[x/2]} ≤ p'/p ⇔ e ≤ p'/p. As the last inequality holds, we see that G_1 ≺st G'_1.

As F̄_X ≤ F̄_{X'} ⇔ 2[x/2] ≥ 3[x/3], and this is not always true (one can see a graphic representation), there is no stochastic domination between X and X'! In this case, G_1 ≺st G'_1 ⇒ ρ ≤ ρ', but X' ⊀st X and X ⊀st X'.

Acknowledgements. This research was supported by the Hungarian-Romanian Intergovernmental Cooperation for Science and Technology under Grant RO-40/2005.

REFERENCES

[1] S. Asmussen, Ruin Probabilities. World Scientific, Singapore, 2000.

[2] A.A. Borovkov, Stochastic Processes in Queueing Theory. Springer, 1976.

[3] L. Lakatos and G. Zbăganu, Comparisons of G/G/1 queues. Proc. Romanian Acad. Ser. A Math. Phys. Tech. Sci. Inf. Sci. 8 (2007), 85–94.

[4] H.J. Rossberg, Characterization of the exponential and the Pareto distributions by means of some properties of the distributions. Math. Operationsforsch. Statist. 3 (1972), 3, 207–216.

[5] M. Shaked and J. Shanthikumar, Stochastic Orderings and Their Applications. Academic Press, New York, 1993.

[6] R. Szekli, Stochastic Ordering and Dependence in Applied Probability. Springer, Berlin, 1995.

[7] Anna Wolinska-Welcz, On a solution of the Dugué problem. Probab. Math. Statist. 7 (1986), 2, 169–185.

[8] G. Zbăganu, Mathematical Methods in Risk Theory and Actuaries. Univ. Press, Bucharest, 2004. (Romanian)

Received 21 May 2007

"Gheorghe Mihoc - Caius Iacob" Institute of Mathematical Statistics and Applied Mathematics, Casa Academiei Române, Calea 13 Septembrie nr. 13, 050711 Bucureşti 5, Romania
anaraducan@yahoo.ca

and

University of Bucharest, Faculty of Mathematics and Computer Science, Str. Academiei 14, 010014 Bucharest, Romania
zbagang@fmi.unibuc.com
