ON THE ASYMPTOTIC RUIN PROBABILITY WITH SOME SPECIFIC STOCHASTIC
DISCOUNT FACTORS
RALUCA VERNIC
We consider an insurance context in which the discounted sum of losses can be described as a randomly weighted sum of a sequence of independent random variables. We introduce a specific dependence structure for the discount factors based on their marginal distributions, and apply the results obtained by Goovaerts et al. (2005) on approximating ruin probabilities. The particular case of the Farlie-Gumbel-Morgenstern distribution is emphasized.
AMS 2000 Subject Classification: 62P05, 62E20.
Key words: asymptotics, tail probability, finite and infinite time ruin probabilities, Farlie-Gumbel-Morgenstern distributions.
1. INTRODUCTION
Recently, in a general probabilistic setting, Goovaerts et al. [5] investigated the tail probabilities of the randomly weighted sums
(1) $S_n = \sum_{k=1}^{n} \theta_k X_k$, $n \ge 1$,
and their maxima. Here $\{X_n, n \ge 1\}$ is a sequence of independent and identically distributed (i.i.d.) real-valued random variables with generic random variable $X$ and common distribution function $F$, and $\{\theta_n, n \ge 1\}$ is another sequence of positive random variables, independent of the sequence $\{X_n, n \ge 1\}$. We will present an actuarial model that reduces to (1).
As in the recent work of Nyrhinen ([10], [11]) and Tang and Tsitsiashvili ([12], [13]), we consider the discrete time risk model
(2) $S_0 = 0$, $S_n = \sum_{i=1}^{n} X_i \prod_{j=1}^{i} Y_j$, $n = 1, 2, \ldots$,
where $\{X_n, n \ge 1\}$ is as in model (1), and $\{Y_n, n \ge 1\}$ is another sequence of nonnegative random variables distributed on $[0, \infty)$, the two sequences being mutually independent. In this model, the random variable $X_n$ is the total loss during period $n$ and the random variable $Y_n$ is the discount factor from time $n$ to time $n-1$, $n = 1, 2, \ldots$. Thus, the sum $S_n$ represents the aggregated discounted losses by time $n$ of an insurer in a stochastic economic environment.

REV. ROUMAINE MATH. PURES APPL., 52 (2007), 4, 479-495
In the terminology of Norberg [9], $\{X_n, n \ge 1\}$ are called insurance risks and $\{Y_n, n \ge 1\}$ financial risks.
We notice that (2) reduces to model (1) with $\theta_k = \prod_{i=1}^{k} Y_i$, a product of positive random variables. For model (2), the finite and infinite time ruin probabilities of an insurer whose initial wealth is $x \ge 0$ are defined as
$\psi(x; n) = \Pr\left(\max_{0 \le k \le n} S_k > x\right)$ and $\psi(x) = \Pr\left(\max_{0 \le k < \infty} S_k > x\right)$,
respectively.
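The finite time ruin probability $\psi(x; n)$ can be estimated by direct simulation of model (2). The following is a minimal Monte Carlo sketch, assuming, purely for illustration, Pareto distributed losses $X_i$ and i.i.d. Uniform(0,1) discount factors $Y_j$; the function name `ruin_prob_mc` and all distributional choices are ours, not the paper's.

```python
import numpy as np

def ruin_prob_mc(x, n, n_sims=100_000, alpha=2.0, seed=0):
    """Monte Carlo estimate of psi(x; n) = Pr(max_{0<=k<=n} S_k > x)
    for model (2).  Illustrative assumptions: Pareto(alpha) losses X_i
    with tail x^{-alpha} on [1, infinity), i.i.d. Uniform(0, 1) discount
    factors Y_j, the two sequences mutually independent."""
    rng = np.random.default_rng(seed)
    X = rng.pareto(alpha, size=(n_sims, n)) + 1.0   # losses, support >= 1
    Y = rng.uniform(0.0, 1.0, size=(n_sims, n))     # discount factors
    theta = np.cumprod(Y, axis=1)                   # theta_k = Y_1 ... Y_k
    S = np.cumsum(theta * X, axis=1)                # S_k, k = 1, ..., n
    return np.mean(S.max(axis=1) > x)
```

Since $S_0 = 0$ and $x \ge 0$, only $S_1, \ldots, S_n$ need to be checked for an exceedance.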
Goovaerts et al. [5] derived asymptotic expressions for the tail probabilities of (1) and its maxima and, based on them, considered two particular distributions for $\{\theta_k, k \ge 1\}$ for which the finite time ruin probability can be asymptotically evaluated. After recalling the main results in Section 2, in Section 3 we present other such distributions with given marginals and a specific dependence structure. The particular case of the Farlie-Gumbel-Morgenstern (FGM) distribution is investigated, and for this distribution we also obtain an explicit asymptotic form of the infinite time ruin probability.
In what follows we will assume that summing over an empty domain gives 0.
For two positive infinitesimals $a(x)$ and $b(x)$, we write $a(x) \sim b(x)$ if $\lim_{x \to \infty} a(x)/b(x) = 1$.
2. A RECENT RESULT
As in Goovaerts et al. [5], we shall restrict ourselves to the case of regularly varying tailed loss distributions. We say that the right tail of $F$ is regularly varying with regularity index $\alpha > 0$, denoted by $F \in \mathcal{R}_{-\alpha}$, if $\overline{F}(x) = 1 - F(x) > 0$ for all $x$ and
$\overline{F}(xy) \sim y^{-\alpha} \overline{F}(x)$ for all $y > 0$.
This class contains the famous Pareto distributions, widely used in insurance to model losses. For more details on this class, see Bingham et al. [1].
Goovaerts et al. [5] proved the result below.
Proposition 2.1. Consider model (1) with $F \in \mathcal{R}_{-\alpha}$ for some $\alpha > 0$. We have
$\Pr\left(\max_{1 \le m \le n} S_m > x\right) \sim \overline{F}(x) \sum_{k=1}^{n} \mathrm{E}\theta_k^{\alpha}$
if there exists some $\delta > 0$ such that $\mathrm{E}\theta_k^{\alpha+\delta} < \infty$ for each $1 \le k \le n$.
We also have
$\Pr\left(\max_{1 \le n < \infty} S_n > x\right) \sim \overline{F}(x) \sum_{k=1}^{\infty} \mathrm{E}\theta_k^{\alpha}$
if one of the following assumptions holds:
(1) $0 < \alpha < 1$, $\sum_{k=1}^{\infty} \mathrm{E}\theta_k^{\alpha+\delta} < \infty$ and $\sum_{k=1}^{\infty} \mathrm{E}\theta_k^{\alpha-\delta} < \infty$ for some $\delta > 0$;
(2) $1 \le \alpha < \infty$, $\sum_{k=1}^{\infty} \left(\mathrm{E}\theta_k^{\alpha+\delta}\right)^{\frac{1}{\alpha+\delta}} < \infty$ and $\sum_{k=1}^{\infty} \left(\mathrm{E}\theta_k^{\alpha-\delta}\right)^{\frac{1}{\alpha+\delta}} < \infty$ for some $\delta > 0$.
From this proposition, the result below is immediate.
Corollary 2.1. In particular, under the assumptions of Proposition 2.1, we have
(3) $\psi(x; n) \sim \overline{F}(x) \sum_{k=1}^{n} \mathrm{E}\theta_k^{\alpha}$
and
(4) $\psi(x) \sim \overline{F}(x) \sum_{k=1}^{\infty} \mathrm{E}\theta_k^{\alpha}$.
3. SPECIFIC CASES
In order to evaluate the asymptotic ruin probability given in Corollary 2.1, we need to calculate the expectations $\mathrm{E}\theta_k^{\alpha}$ for $k = 1, 2, \ldots$. In this section, we give some special cases for which such calculation can be performed. We start by presenting some multivariate distribution families with given marginals.
3.1. A family of multivariate density functions with preassigned marginals
3.1.1. The general family
A variety of joint distribution functions with given marginal distributions have been developed over the years. In what follows we will consider a general family given in Kotz et al. [6], Section 44.13. Let $f_{X_1}, \ldots, f_{X_n}$ be univariate density functions corresponding to the random variables $X_1, \ldots, X_n$, respectively, and let $\phi_i$, $i = 1, \ldots, n$, be a set of bounded non-constant functions such that
(5) $\int_{\mathbb{R}} \phi_i(x) f_{X_i}(x)\,\mathrm{d}x = 0$ for all $i = 1, \ldots, n$.
Then the function
(6) $f(x_1, \ldots, x_n) = \prod_{i=1}^{n} f_{X_i}(x_i) \left[1 + \frac{1}{\alpha_n} R_{\phi_1, \ldots, \phi_n, \Omega_n}(x_1, \ldots, x_n)\right]$
is a multivariate density function, where
$R_{\phi_1, \ldots, \phi_n, \Omega_n}(x_1, \ldots, x_n) = \sum_{1 \le i_1 < i_2 \le n} \omega_{i_1 i_2} \phi_{i_1}(x_{i_1}) \phi_{i_2}(x_{i_2}) + \sum_{1 \le i_1 < i_2 < i_3 \le n} \omega_{i_1 i_2 i_3} \phi_{i_1}(x_{i_1}) \phi_{i_2}(x_{i_2}) \phi_{i_3}(x_{i_3}) + \cdots + \omega_{12 \ldots n} \prod_{i=1}^{n} \phi_i(x_i)$,
and $\Omega_n = \{\omega_{i_1 i_2}, \omega_{i_1 i_2 i_3}, \ldots, \omega_{12 \ldots n}\}$. The set $\Omega_n$ of real numbers and the constant $\alpha_n$ are chosen such that
$|R_{\phi_1, \ldots, \phi_n, \Omega_n}(x_1, \ldots, x_n)| \le \alpha_n$
for all $x_i \in \mathbb{R}$, $i = 1, \ldots, n$. Then $f_{X_1}, \ldots, f_{X_n}$ are the marginal densities of (6).
If $|\phi_i(x)| \le C_i$ for all $x \in \mathbb{R}$, $i = 1, \ldots, n$, then $\Omega_n$ can be chosen such that
(7) $|\omega_{i_1 i_2}| \le \frac{1}{C_{i_1} C_{i_2}}, \ \ldots, \ |\omega_{12 \ldots n}| \le \frac{1}{C_1 \cdots C_n}$,
and $\alpha_n$ can be chosen as the number of nonzero $\omega$'s in the set $\Omega_n$, with $1 \le \alpha_n \le 2^n - n - 1$. If all $\omega$'s are taken to be 0, the density (6) reduces to the independent case.
Restricting to the case $|\phi_i(x)| \le 1$, $i = 1, \ldots, n$, it can be shown that, for any subset of $(X_1, \ldots, X_n)$, say $(X_{i_1}, \ldots, X_{i_l})$ where $1 \le i_1 < \cdots < i_l \le n$, the corresponding joint density function is
$f(x_{i_1}, \ldots, x_{i_l}) = \prod_{j=1}^{l} f_{X_{i_j}}(x_{i_j}) \left[1 + \frac{1}{\alpha_n} R_{\phi_{i_1}, \ldots, \phi_{i_l}, \Omega_l}(x_{i_1}, \ldots, x_{i_l})\right]$,
where $R_{\phi_1, \Omega_1} = 0$ and $\Omega_l$ is the subset of $\Omega_n$ such that the subscripts of the $\omega$'s involve only combinations of the integers $i_1, \ldots, i_l$.
A particular case of (6) often encountered in the literature is the Farlie-Gumbel-Morgenstern distribution, obtained when $\phi_i = 1 - 2F_{X_i}$, where $F_{X_i}$ is the distribution function of $X_i$, $i = 1, \ldots, n$. This case will be discussed below in detail.
3.1.2. The Farlie-Gumbel-Morgenstern family
As in Kotz et al. [6], the general $n$-dimensional FGM distribution has the form
(8) $F_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = \prod_{i=1}^{n} F_{X_i}(x_i) \left[1 + \sum_{1 \le i < j \le n} a_{ij} [1 - F_{X_i}(x_i)][1 - F_{X_j}(x_j)] + \cdots + a_{12 \ldots n} \prod_{i=1}^{n} [1 - F_{X_i}(x_i)]\right]$,
where $F_{X_i}$, $i = 1, \ldots, n$, are the marginal distributions. The coefficients $a_{i_1 \ldots i_l}$ are real numbers subject to constraints ensuring that $F_{X_1, \ldots, X_n}$ in (8) is non-decreasing in each of $x_1, \ldots, x_n$. These constraints are
(9) $1 + \sum_{1 \le i < j \le n} \varepsilon_i \varepsilon_j a_{ij} + \cdots + \varepsilon_1 \cdots \varepsilon_n a_{12 \ldots n} \ge 0$
for all $\varepsilon_i = -M_i$ or $1 - m_i$, where $M_i$ and $m_i$ are the supremum and the infimum of the set $\{F_{X_i}(x) : -\infty < x < \infty\} \setminus \{0, 1\}$. If $F_{X_i}$ is absolutely continuous, we have $M_i = 1$ and $m_i = 0$, hence $\varepsilon_i = \pm 1$. This will be the case in the next section.
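For absolutely continuous marginals, checking the admissibility conditions (9) is a finite verification over all sign patterns $\varepsilon_i = \pm 1$. A small sketch (the helper name and dictionary-based coefficient encoding are our own):

```python
from itertools import combinations, product

def fgm_admissible(n, coeffs):
    """Check constraints (9) for absolutely continuous marginals, where
    each eps_i ranges over {-1, +1}.  `coeffs` maps index tuples to
    coefficients, e.g. (1, 2) -> a_12, (1, 2, 3) -> a_123; tuples not
    present are treated as zero."""
    for eps in product((-1, 1), repeat=n):
        total = 1.0
        for l in range(2, n + 1):
            for idx in combinations(range(1, n + 1), l):
                sign = 1
                for i in idx:
                    sign *= eps[i - 1]
                total += sign * coeffs.get(idx, 0.0)
        if total < 0:
            return False
    return True
```

For $n = 2$ this recovers the classical bivariate FGM condition $|a_{12}| \le 1$.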
In order to explain the meaning of the coefficients $a_{ij}$, denote
$c_i = \int_{\mathbb{R}} F_{X_i}(x) [1 - F_{X_i}(x)]\,\mathrm{d}x$, $i = 1, 2, \ldots$.
Then from Cambanis [2] we have
$\mathrm{E}[(X_{j_1} - \mu_{j_1}) \cdots (X_{j_k} - \mu_{j_k})] = (-1)^k c_{j_1} \cdots c_{j_k}\, a_{j_1 \ldots j_k}$,
where $\mu_i = \mathrm{E}X_i$. In particular, $\mathrm{cov}(X_i, X_j) = c_i c_j a_{ij}$. This shows that, within the FGM family, if $X_i$ and $X_j$ are uncorrelated, then they are independent.
As a remark, the joint distribution (8) can also be obtained using a Farlie-Gumbel-Morgenstern multivariate copula function, namely,
$C(u_1, \ldots, u_n) = \prod_{i=1}^{n} u_i \left[1 + \sum_{1 \le i < j \le n} a_{ij} (1 - u_i)(1 - u_j) + \cdots + a_{12 \ldots n} \prod_{i=1}^{n} (1 - u_i)\right]$.
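Sampling from this copula is straightforward in the bivariate case by conditional inversion, since $\partial C/\partial u = v[1 + a(1-2u)(1-v)]$ is a quadratic in $v$ that can be inverted in closed form. A minimal sketch, restricted to the bivariate case and with function names of our own:

```python
import numpy as np

def sample_fgm(a, size, seed=0):
    """Draw `size` pairs from the bivariate FGM copula
    C(u, v) = u v [1 + a (1 - u)(1 - v)], |a| <= 1.
    Given U = u, the conditional d.f. is C_{V|U}(v) = v [1 + A (1 - v)]
    with A = a (1 - 2u); setting it equal to an independent uniform w
    and solving the quadratic for v gives the sample."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    w = rng.uniform(size=size)              # uniform conditional level
    A = a * (1.0 - 2.0 * u)
    small = np.abs(A) < 1e-12               # A -> 0: conditional is uniform
    A_safe = np.where(small, 1.0, A)        # avoid division by zero below
    v = ((1.0 + A_safe)
         - np.sqrt((1.0 + A_safe) ** 2 - 4.0 * A_safe * w)) / (2.0 * A_safe)
    return u, np.where(small, w, v)
```

For uniform marginals, the Pearson correlation of an FGM pair equals $a/3$, which gives a quick sanity check on the sampler.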
From a mathematical point of view, a copula $C$ is a distribution function on the hypercube $[0,1]^n$ with uniform marginals on $[0,1]$. A copula provides the natural link between the $n$-dimensional distribution $F$ and its marginals $(F_1, \ldots, F_n)$ as $F(x_1, \ldots, x_n) = C(F_1(x_1), \ldots, F_n(x_n))$. For more details on copulas see Nelsen [8]. In the actuarial literature, there is a growing interest in the use of copulas to model risk dependency; see e.g. Frees and Valdez [4], Klugman and Parsa [7], and Embrechts et al. [3].
If the marginal densities $f_{X_i}$, $i = 1, \ldots, n$, exist, then the joint density of the FGM distribution given in (8) is
(10) $f_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = \prod_{i=1}^{n} f_{X_i}(x_i) \left[1 + \sum_{1 \le i < j \le n} a_{ij} [1 - 2F_{X_i}(x_i)][1 - 2F_{X_j}(x_j)] + \cdots + a_{12 \ldots n} \prod_{i=1}^{n} [1 - 2F_{X_i}(x_i)]\right]$.
As mentioned at the end of the last section, this corresponds to (6) for $\phi_i = 1 - 2F_{X_i}$ and $a_{i_1 \ldots i_l} = \omega_{i_1 \ldots i_l}/\alpha_n$. Hence the $k$-dimensional marginal distributions of (8) are of the same type and with the same coefficients. In particular,
(11) $f_{X_1, \ldots, X_k}(x_1, \ldots, x_k) = \prod_{i=1}^{k} f_{X_i}(x_i) \left[1 + \sum_{1 \le i < j \le k} a_{ij} [1 - 2F_{X_i}(x_i)][1 - 2F_{X_j}(x_j)] + \cdots + a_{12 \ldots k} \prod_{i=1}^{k} [1 - 2F_{X_i}(x_i)]\right]$.
3.2. The asymptotic ruin probability
The calculation involved in this section will need the following result.
Lemma 3.1. If $N$ is a positive integer, then
(i) $\sum_{k=1}^{N} k m^{k-1} = \frac{(N+1)m^N}{m-1} + \frac{1 - m^{N+1}}{(m-1)^2} = \frac{(Nm - N - 1)m^N + 1}{(1-m)^2}$;
(ii) $\sum_{k=1}^{\infty} k m^{k-1} = \frac{1}{(1-m)^2}$, where $0 < m < 1$;
(iii) $\sum_{1 \le i_1 < \cdots < i_l \le N} h(i_2 - i_1, \ldots, i_l - i_{l-1}) = \sum_{\substack{1 \le i_1, \ldots, i_{l-1} \le N \\ i_1 + \cdots + i_{l-1} \le N}} [N - (i_1 + \cdots + i_{l-1})]\, h(i_1, \ldots, i_{l-1})$,
where $l \le N$ and $h : \mathbb{R}_+^{l-1} \to \mathbb{R}$.
Proof. We will not give a detailed proof of the first two formulas because they follow from elementary algebra. As a hint, (i) can be obtained by differentiating $\sum_{k=0}^{N} m^k$ with respect to $m$, while (ii) results from (i) by letting $N \to \infty$.
In order to show (iii), we rewrite its left-hand side as
(12) $\sum_{1 \le i_1 < \cdots < i_l \le N} h(i_2 - i_1, \ldots, i_l - i_{l-1}) = \sum_{i_l = l}^{N} \sum_{1 \le i_1 < \cdots < i_{l-1} < i_l} h(i_2 - i_1, \ldots, i_l - i_{l-1})$.
Let us fix the values $1 \le j_1, \ldots, j_{l-1} \le N$. We want to count how many times $h(j_1, \ldots, j_{l-1})$ appears in the above sum. We are looking for all $i_1, \ldots, i_l$ such that $1 \le i_1 < \cdots < i_l \le N$ and $h(i_2 - i_1, \ldots, i_l - i_{l-1}) = h(j_1, \ldots, j_{l-1})$, i.e., $j_k = i_{k+1} - i_k$, $k = 1, \ldots, l-1$. This gives
$i_2 = j_1 + i_1$, $i_3 = j_2 + i_2 = j_2 + j_1 + i_1$, \ldots, $i_l = j_{l-1} + i_{l-1} = j_{l-1} + \cdots + j_1 + i_1$.
This means that $h(j_1, \ldots, j_{l-1})$ appears in (12) for $j_{l-1} + \cdots + j_1 + 1 \le i_l \le N$, i.e., $N - (j_{l-1} + \cdots + j_1)$ times. This completes the proof of (iii).
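Both identities of Lemma 3.1 can also be checked numerically by brute force. The sketch below (helper names are our own) compares the two sides of (i) in exact rational arithmetic and enumerates both sides of (iii) directly:

```python
from fractions import Fraction
from itertools import combinations, product

def lemma_i_sides(N, m):
    """Left and right sides of Lemma 3.1(i); exact if m is a Fraction."""
    lhs = sum(k * m ** (k - 1) for k in range(1, N + 1))
    rhs = ((N * m - N - 1) * m ** N + 1) / (1 - m) ** 2
    return lhs, rhs

def lemma_iii_sides(N, l, h):
    """Left and right sides of Lemma 3.1(iii) for a given h taking a
    tuple of l-1 positive integers."""
    lhs = sum(h(tuple(i[j + 1] - i[j] for j in range(l - 1)))
              for i in combinations(range(1, N + 1), l))
    rhs = sum((N - sum(i)) * h(i)
              for i in product(range(1, N + 1), repeat=l - 1)
              if sum(i) <= N)
    return lhs, rhs
```

Using `Fraction` for $m$ makes the comparison in (i) exact, with no floating point tolerance needed.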
3.2.1. The general case
We will now assume that the random vectorY = (Y1, . . . , Yn) introduced in (2) has ann-variate distribution with density (6). In order to find an explicit form for ψ(x;n) from Corollary 2.1, we should evaluate n
k=1Eθαk, k ≤ n, whereθk=k
i=1Yi.Starting with Eθαk, we have the following result.
Proposition 3.1. If the random vector $Y$ has density (6), then
(13) $\mathrm{E}\theta_k^{\alpha} = \prod_{i=1}^{k} \mathrm{E}Y_i^{\alpha} + \frac{1}{\alpha_n} \sum_{1 \le i < j \le k} \omega_{ij} \Big(\prod_{\substack{l=1 \\ l \ne i,j}}^{k} \mathrm{E}Y_l^{\alpha}\Big)\, \mathrm{E}[Y_i^{\alpha} \phi_i(Y_i)]\, \mathrm{E}[Y_j^{\alpha} \phi_j(Y_j)] + \cdots + \frac{1}{\alpha_n}\, \omega_{12 \ldots k} \prod_{i=1}^{k} \mathrm{E}[Y_i^{\alpha} \phi_i(Y_i)]$.
Proof. The formula follows from
$\mathrm{E}\theta_k^{\alpha} = \mathrm{E}\prod_{i=1}^{k} Y_i^{\alpha} \overset{(6)}{=} \int \cdots \int_{\mathbb{R}^k} \prod_{i=1}^{k} x_i^{\alpha} f_{Y_i}(x_i) \left[1 + \frac{1}{\alpha_n} R_{\phi_1, \ldots, \phi_k, \Omega_k}(x_1, \ldots, x_k)\right] \mathrm{d}x_1 \ldots \mathrm{d}x_k$.
Some extra assumptions will simplify this complex formula. A usual assumption is that $\{Y_k, k \ge 1\}$ is a sequence of identically distributed random variables, with common density $f_Y$ and distribution function $F_Y$. We will also consider that $\phi_1 = \cdots = \phi_n = \phi$. Then, denoting $m_\alpha = \mathrm{E}Y^{\alpha}$ and $s_\alpha = \mathrm{E}[Y^{\alpha} \phi(Y)]$, Proposition 3.1 reduces to

Corollary 3.1. Under the above assumptions,
(14) $\mathrm{E}\theta_k^{\alpha} = m_\alpha^k + \frac{1}{\alpha_n} \left[m_\alpha^{k-2} s_\alpha^2 \sum_{1 \le i < j \le k} \omega_{ij} + m_\alpha^{k-3} s_\alpha^3 \sum_{1 \le i < j < l \le k} \omega_{ijl} + \cdots + \omega_{12 \ldots k}\, s_\alpha^k\right]$.

For this particular case, by taking $\alpha = 1$ and $k = 2$ in (14), we can also evaluate the covariance of any $(Y_i, Y_j)$, $i < j$, as
(15) $\mathrm{cov}(Y_i, Y_j) = \mathrm{E}(Y_i Y_j) - \mathrm{E}Y_i \mathrm{E}Y_j = m_1^2 + \frac{\omega_{ij}}{\alpha_n} s_1^2 - m_1^2 = \frac{\omega_{ij}}{\alpha_n} s_1^2$.

We will now choose some special forms for the coefficients $\omega$. There are many interesting choices for $\omega$, and we will try to give a general idea of some of them.
1. We assume that there exist some functions $h_l : \mathbb{R}_+^{l-1} \to \mathbb{R}$ such that $\omega_{i_1 \ldots i_l} = h_l(i_2 - i_1, \ldots, i_l - i_{l-1})$, $l = 2, \ldots, n$. Then, from Lemma 3.1,
$\sum_{1 \le i_1 < \cdots < i_l \le k} \omega_{i_1 \ldots i_l} = \sum_{\substack{1 \le i_1, \ldots, i_{l-1} \le k \\ i_1 + \cdots + i_{l-1} \le k}} [k - (i_1 + \cdots + i_{l-1})]\, h_l(i_1, \ldots, i_{l-1})$,
and, denoting $i_{l+} = i_1 + \cdots + i_{l-1}$, from (14) we have
$\mathrm{E}\theta_k^{\alpha} = m_\alpha^k + \frac{1}{\alpha_n} \sum_{l=2}^{k} m_\alpha^{k-l} s_\alpha^l \sum_{\substack{1 \le i_1, \ldots, i_{l-1} \le k \\ i_{l+} \le k}} (k - i_{l+})\, h_l(i_1, \ldots, i_{l-1})$.
Hence, applying again Lemma 3.1,
(16) $\sum_{k=1}^{n} \mathrm{E}\theta_k^{\alpha} = \sum_{k=1}^{n} m_\alpha^k + \frac{1}{\alpha_n} \sum_{l=2}^{n} s_\alpha^l \sum_{k=l}^{n} m_\alpha^{k-l} \sum_{\substack{1 \le i_1, \ldots, i_{l-1} \le k \\ i_{l+} \le k}} (k - i_{l+})\, h_l(i_1, \ldots, i_{l-1})$
$= \sum_{k=1}^{n} m_\alpha^k + \frac{1}{\alpha_n} \sum_{l=2}^{n} s_\alpha^l \sum_{\substack{1 \le i_1, \ldots, i_{l-1} \le n \\ i_{l+} \le n}} h_l(i_1, \ldots, i_{l-1}) \sum_{k=i_{l+}+1}^{n} (k - i_{l+})\, m_\alpha^{k-l}$
$= \sum_{k=1}^{n} m_\alpha^k + \frac{1}{\alpha_n} \sum_{l=2}^{n} s_\alpha^l \sum_{\substack{1 \le i_1, \ldots, i_{l-1} \le n \\ i_{l+} \le n}} h_l(i_1, \ldots, i_{l-1}) \sum_{j=1}^{n-i_{l+}} j\, m_\alpha^{j + i_{l+} - l}$
$= \sum_{k=1}^{n} m_\alpha^k + \frac{m_\alpha}{\alpha_n} \sum_{l=2}^{n} \left(\frac{s_\alpha}{m_\alpha}\right)^l \sum_{\substack{1 \le i_1, \ldots, i_{l-1} \le n \\ i_{l+} \le n}} h_l(i_1, \ldots, i_{l-1})\, m_\alpha^{i_{l+}} \sum_{j=1}^{n-i_{l+}} j\, m_\alpha^{j-1}$
$\overset{\text{(i)}}{=} \frac{m_\alpha(1 - m_\alpha^n)}{1 - m_\alpha} + \frac{m_\alpha}{\alpha_n (1 - m_\alpha)^2} \sum_{l=2}^{n} \left(\frac{s_\alpha}{m_\alpha}\right)^l \sum_{\substack{1 \le i_1, \ldots, i_{l-1} \le n \\ i_{l+} \le n}} h_l(i_1, \ldots, i_{l-1})\, m_\alpha^{i_{l+}} \left[\big((n - i_{l+})m_\alpha - n + i_{l+} - 1\big) m_\alpha^{n - i_{l+}} + 1\right]$
$= \frac{m_\alpha(1 - m_\alpha^n)}{1 - m_\alpha} + \frac{m_\alpha}{\alpha_n (1 - m_\alpha)^2} \sum_{l=2}^{n} \left(\frac{s_\alpha}{m_\alpha}\right)^l \left[\sum_{\substack{1 \le i_1, \ldots, i_{l-1} \le n \\ i_{l+} \le n}} h_l(i_1, \ldots, i_{l-1})\, m_\alpha^{i_{l+}} + m_\alpha^n \sum_{\substack{1 \le i_1, \ldots, i_{l-1} \le n \\ i_{l+} \le n}} h_l(i_1, \ldots, i_{l-1}) \big((n - i_{l+})m_\alpha - n + i_{l+} - 1\big)\right]$.
2. We will now assume that there exist some nonincreasing functions $g_l : \mathbb{R}_+ \to \mathbb{R}$ such that $\omega_{i_1 \ldots i_l} = g_l(i_l - i_1)$, $l = 2, \ldots, n$. We notice that we are in fact in case 1 with $h_l(j_1, \ldots, j_{l-1}) = g_l(j_1 + \cdots + j_{l-1})$, since $\omega_{i_1 \ldots i_l} = h_l(i_2 - i_1, \ldots, i_l - i_{l-1}) = g_l((i_2 - i_1) + (i_3 - i_2) + \cdots + (i_l - i_{l-1})) = g_l(i_l - i_1)$. Hence (16) reduces to
(17) $\sum_{k=1}^{n} \mathrm{E}\theta_k^{\alpha} = \frac{m_\alpha(1 - m_\alpha^n)}{1 - m_\alpha} + \frac{m_\alpha}{\alpha_n (1 - m_\alpha)^2} \sum_{l=2}^{n} \left(\frac{s_\alpha}{m_\alpha}\right)^l \left[\sum_{i_{l+} = l-1}^{n} g_l(i_{l+})\, m_\alpha^{i_{l+}} + m_\alpha^n \sum_{i_{l+} = l-1}^{n} g_l(i_{l+}) \big((n - i_{l+})m_\alpha - n + i_{l+} - 1\big)\right]$
$= \frac{m_\alpha(1 - m_\alpha^n)}{1 - m_\alpha} + \frac{m_\alpha}{\alpha_n (1 - m_\alpha)^2} \sum_{l=2}^{n} \left(\frac{s_\alpha}{m_\alpha}\right)^l \left[S_{g_l; m_\alpha} + m_\alpha^n (1 - m_\alpha) S_{g_l; \bullet} + m_\alpha^n (n m_\alpha - n - 1) S_{g_l}\right]$,
where
(18) $S_{g_l} = \sum_{i=l-1}^{n} g_l(i), \quad S_{g_l; \bullet} = \sum_{i=l-1}^{n} i\, g_l(i), \quad S_{g_l; m_\alpha} = \sum_{i=l-1}^{n} m_\alpha^i\, g_l(i)$.
The above sums can all stop at $n-1$, since for $i = n$ the quantity within the square brackets of (17) is zero. Also, in this case, from (15), for $i < j$ we have
$\mathrm{cov}(Y_i, Y_j) = \frac{g_2(j - i)}{\alpha_n}\, s_1^2$.
Since $g_2$ is nonincreasing, $\mathrm{cov}(Y_i, Y_j) \ge \mathrm{cov}(Y_{i'}, Y_{j'})$ whenever $|j - i| < |j' - i'|$, i.e., the correlation between the random variables does not increase with the time lag. This is natural, since two random discount factors further apart in time should be less correlated.
Let us now look at condition (7), which must be fulfilled by the $g$'s. Assuming that there exists a positive $C$ such that $|\phi(x)| \le C$ for all $x \in \mathbb{R}$, since the $g$'s are nonincreasing and we only need the values $g_l(l-1), g_l(l), \ldots$, it is sufficient that
(19) $|g_l(l-1)| \le \frac{1}{C^l}$
for any $l = 2, \ldots, n$. We will now consider two particular cases of such $g$'s.
3. Let $\omega_{i_1 \ldots i_l} = \omega$ for any $1 \le i_1 < \cdots < i_l \le n$. We notice that we are in the above case for $g_l(x) = \omega$, $l = 2, \ldots, n$, $x \in \mathbb{R}$. We could replace these $g$'s into (18), but a direct calculation seems simpler in this case. We have
$\sum_{1 \le i_1 < \cdots < i_l \le k} \omega_{i_1 \ldots i_l} = \omega \binom{k}{l}$,
and, from (14),
$\mathrm{E}\theta_k^{\alpha} = m_\alpha^k + \frac{\omega}{\alpha_n} \sum_{l=2}^{k} \binom{k}{l} m_\alpha^{k-l} s_\alpha^l = m_\alpha^k + \frac{\omega}{\alpha_n} \left[(m_\alpha + s_\alpha)^k - m_\alpha^k - k m_\alpha^{k-1} s_\alpha\right] = \left(1 - \frac{\omega}{\alpha_n}\right) m_\alpha^k + \frac{\omega}{\alpha_n} \left[(m_\alpha + s_\alpha)^k - k m_\alpha^{k-1} s_\alpha\right]$.
Hence, using (i) in Lemma 3.1,
$\sum_{k=1}^{n} \mathrm{E}\theta_k^{\alpha} = \left(1 - \frac{\omega}{\alpha_n}\right) \sum_{k=1}^{n} m_\alpha^k + \frac{\omega}{\alpha_n} \sum_{k=1}^{n} \left[(m_\alpha + s_\alpha)^k - k m_\alpha^{k-1} s_\alpha\right]$
$= \left(1 - \frac{\omega}{\alpha_n}\right) \frac{m_\alpha(1 - m_\alpha^n)}{1 - m_\alpha} + \frac{\omega}{\alpha_n} \left[\frac{(m_\alpha + s_\alpha)\big(1 - (m_\alpha + s_\alpha)^n\big)}{1 - (m_\alpha + s_\alpha)} - s_\alpha \left(\frac{(n+1) m_\alpha^n}{m_\alpha - 1} + \frac{1 - m_\alpha^{n+1}}{(m_\alpha - 1)^2}\right)\right]$.
Also, we see from (19) that it is sufficient that $|\omega| \le \min\{C^{-2}, C^{-n}\}$.
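The closed form for constant $\omega$ can be cross-checked against a direct evaluation of (14). The sketch below compares term-by-term summation of $\binom{k}{l} m_\alpha^{k-l} s_\alpha^l$ with the closed expression; the function names and parameter values in the check are our own:

```python
from math import comb

def sum_e_theta_direct(n, m, s, omega, alpha_n):
    """Sum_{k=1}^n E theta_k^alpha via (14) with constant omega:
    E theta_k^alpha = m^k + (omega/alpha_n) sum_{l=2}^k C(k,l) m^{k-l} s^l."""
    r = omega / alpha_n
    total = 0.0
    for k in range(1, n + 1):
        total += m ** k + r * sum(comb(k, l) * m ** (k - l) * s ** l
                                  for l in range(2, k + 1))
    return total

def sum_e_theta_closed(n, m, s, omega, alpha_n):
    """Closed form obtained above via the binomial theorem and
    Lemma 3.1(i)."""
    r = omega / alpha_n
    t = m + s
    geo_m = m * (1.0 - m ** n) / (1.0 - m)           # sum of m^k
    geo_t = t * (1.0 - t ** n) / (1.0 - t)           # sum of (m+s)^k
    km = ((n + 1) * m ** n / (m - 1.0)
          + (1.0 - m ** (n + 1)) / (m - 1.0) ** 2)   # sum of k m^{k-1}
    return (1.0 - r) * geo_m + r * (geo_t - s * km)
```

For $\omega = 0$ both reduce to the plain geometric sum $m_\alpha(1 - m_\alpha^n)/(1 - m_\alpha)$, i.e., the independent case.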
4. Let now $\omega_{i_1 \ldots i_l} = \omega^l$ for any $1 \le i_1 < \cdots < i_l \le n$. We are again in the second case, for $g_l(x) = \omega^l$, but we will use direct calculation, as
$\sum_{1 \le i_1 < \cdots < i_l \le k} \omega_{i_1 \ldots i_l} = \omega^l \binom{k}{l}$,
and, from (14),
$\mathrm{E}\theta_k^{\alpha} = m_\alpha^k + \frac{1}{\alpha_n} \sum_{l=2}^{k} \binom{k}{l} m_\alpha^{k-l} (\omega s_\alpha)^l = \left(1 - \frac{1}{\alpha_n}\right) m_\alpha^k + \frac{1}{\alpha_n} \left[(m_\alpha + \omega s_\alpha)^k - k m_\alpha^{k-1} \omega s_\alpha\right]$.
Then, just as before,
$\sum_{k=1}^{n} \mathrm{E}\theta_k^{\alpha} = \left(1 - \frac{1}{\alpha_n}\right) \frac{m_\alpha(1 - m_\alpha^n)}{1 - m_\alpha} + \frac{1}{\alpha_n} \left[\frac{(m_\alpha + \omega s_\alpha)\big(1 - (m_\alpha + \omega s_\alpha)^n\big)}{1 - (m_\alpha + \omega s_\alpha)} - \omega s_\alpha \left(\frac{(n+1) m_\alpha^n}{m_\alpha - 1} + \frac{1 - m_\alpha^{n+1}}{(m_\alpha - 1)^2}\right)\right]$.
From (19), a sufficient condition for $\omega$ is that $|\omega| \le C^{-1}$.
5. We will now assume that $\omega_{i_1 \ldots i_l} = 0$ for any $l > 2$, and $\omega_{ij} = h(j - i)$ for $1 \le i < j \le n$. We are in the second case with $g_l \equiv 0$ for $l > 2$ and $g_2 = h$. Then (17) reduces to
$\sum_{k=1}^{n} \mathrm{E}\theta_k^{\alpha} = \frac{m_\alpha(1 - m_\alpha^n)}{1 - m_\alpha} + \frac{s_\alpha^2}{\alpha_n m_\alpha (1 - m_\alpha)^2} \left[S_{h; m_\alpha} + m_\alpha^n (1 - m_\alpha) S_{h; \bullet} + m_\alpha^n (n m_\alpha - n - 1) S_h\right]$,
where, from (18),
$S_h = \sum_{i=1}^{n} h(i), \quad S_{h; \bullet} = \sum_{i=1}^{n} i\, h(i), \quad S_{h; m_\alpha} = \sum_{i=1}^{n} m_\alpha^i\, h(i)$.
Also, $h$ must fulfill the condition $|h(l)| \le C^{-2}$ for $l = 1, \ldots, n$. Taking $h$ nonincreasing, it is thus sufficient that $|h(1)| \le C^{-2}$.
3.2.2. The Farlie-Gumbel-Morgenstern case
We will now assume that the distribution of $Y$ is of the form (8) with density (10). As before, in order to apply Corollary 2.1, we need to evaluate $\mathrm{E}\theta_k^{\alpha}$, where $\theta_k = \prod_{i=1}^{k} Y_i$. Since in fact this is a particular case of the general one, in the following we will only consider some situations in which we can