Variational problem in the non-negative orthant of R^3: reflective faces and boundary influence cones
Ahmed El Kharroubi · Abdelhak Yaacoubi · Abdelghani Ben Tahar · Kawtar Bichard
Received: 25 June 2009 / Published online: 23 February 2012
© Springer Science+Business Media, LLC 2012
Abstract In this paper we consider the variational problem in the non-negative orthant of R^3. The solution of this problem gives the large deviation rate function for the stationary distribution of an SRBM (Semimartingale Reflecting Brownian Motion).
Avram, Dai and Hasenbein (Queueing Syst. 37:259–289, 2001) provided an explicit solution of this problem in the non-negative quadrant. Building on this work, we characterize the reflective faces of the non-negative orthant of R^d, we construct boundary influence cones and we provide an explicit solution of several constrained variational problems in R^3. Moreover, we give conditions under which certain spiraling paths to a point on an axis have a cost which is strictly less than the cost of every direct path and every path with two pieces.
Keywords Reflected Brownian motion · Positive recurrence · Skorokhod problems · Variational problems · Queueing networks · Large deviations

Mathematics Subject Classification (2000) 60F10 · 60J60 · 60J65 · 60K25
1 Introduction
This paper is about the solution of the variational problem in the non-negative orthant of R^d (d ≥ 3). We continue previous work by Avram, Dai and Hasenbein [1] on the same problem in the non-negative quadrant. The variational problem arises from the study of the diffusion process Z = (Z_t)_{t≥0}, called a semimartingale reflecting Brownian motion (SRBM), with data (θ, Δ, R, S). Here θ is the drift vector, Δ is a non-singular covariance matrix, R is a d × d regulation matrix and S is the non-negative orthant of R^d. In the interior of S, the process Z behaves as a Brownian motion with drift vector θ and covariance matrix Δ, and it is confined to S by a "reflection" mechanism at the boundary, where the directions of "reflection" are given by the columns of the regulation matrix R. There is a large body of literature on the study of SRBMs. Concerning the existence, uniqueness and recurrence of this process one can consult references [4–6, 8, 11, 12, 16, 17].

A. El Kharroubi (✉) · A. Yaacoubi · A. Ben Tahar · K. Bichard
Faculté des Sciences, Aïn Chock, Université Hassan II, B.P. 5366 Maârif, Casablanca, Maroc
e-mail: a.elkharroubi@fsac.ac.ma

A. El Kharroubi (✉)
e-mail: a_elkharroubi@yahoo.fr
The large deviation principle (LDP) for a stationary distribution of an SRBM Z is formulated as follows (see [1, 7, 13]):

Let P_π be a probability measure under which Z is stationary. For every measurable set A ⊂ S:

    lim sup_{u→∞} (1/u) log P_π(Z_0/u ∈ A) ≤ − inf_{v ∈ Ā} I(v),   (1)

and

    lim inf_{u→∞} (1/u) log P_π(Z_0/u ∈ A) ≥ − inf_{v ∈ A°} I(v),   (2)

where Ā and A° are, respectively, the closure and the interior of the set A.
For v ∈ S, I(v) is the solution of the following variational problem (3):

Let H^d be the space of all absolutely continuous functions x(·): R_+ → R^d with x(0) = 0. Define

    I(v) = inf_{x ∈ H^d} inf_{z ∈ ψ(x): τ_v(z) < +∞} ∫_0^{τ_v(z)} L(ẋ(t)) dt,   (3)

where

    L(β) = (1/2)(β − θ)^T Δ^{−1} (β − θ),   (4)

    τ_v(z) = inf{t ≥ 0 : z(t) = v},   (5)

with inf ∅ = ∞, and where ψ(x) is the set of images of x under the Skorokhod Map (SM) that is associated with R (Definition 2 below).
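The local rate function (4) and the cost of a straight-line path can be computed directly. The sketch below is illustrative only: the drift θ and the (here diagonal) inverse covariance Δ^{−1} are made-up data, not values from the paper. For a linear path x(t) = tβ the integrand L(ẋ(t)) is constant, so the cost over [0, T] is simply T·L(β).

```python
# Sketch: evaluating the local rate function L of formula (4) and the cost of
# a straight path.  theta and Delta_inv are illustrative assumptions.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def quad(M, u):
    """Quadratic form u^T M u."""
    return sum(u[i] * w_i for i, w_i in enumerate(mat_vec(M, u)))

def L(beta, theta, Delta_inv):
    """L(beta) = 1/2 (beta - theta)^T Delta^{-1} (beta - theta), formula (4)."""
    d = [b - t for b, t in zip(beta, theta)]
    return 0.5 * quad(Delta_inv, d)

theta = [-1.0, -2.0, -1.0]           # assumed drift vector
Delta_inv = [[1.0, 0.0, 0.0],        # identity covariance for simplicity
             [0.0, 1.0, 0.0],
             [0.0, 0.0, 1.0]]

# Cost of the straight path x(t) = t*v that hits v at time T = 1 is T * L(v):
v = [1.0, 1.0, 1.0]
print(L(v, theta, Delta_inv))   # 8.5 for this data
```

Note that L(θ) = 0: drifting along θ is free, which is why escape costs only accrue when the path deviates from the drift.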
Dupuis and Ramanan gave in [7] (Theorem 2.5) conditions under which the invariant distribution of the (θ, Δ, R, S)-SRBM satisfies the LDP.
For v ∈ S, if a given triple of paths (x, y, z) is such that x ∈ H^d, (y, z) is an R-regulation of x (Definition 2 below), τ_v(z) < +∞ and

    I(v) = ∫_0^{τ_v(z)} L(ẋ(t)) dt,

then (x, y, z) is called an optimal triple for the variational problem (3). The function x is called an optimal path and the function z an optimal reflected path (see Definition 2.4 in [1]).
In [13] it is shown that, under certain conditions on the Skorokhod map, one can find an optimal path by looking at piecewise linear paths. Avram, Dai and Hasenbein gave in [1] a complete analysis of the solution of the variational problem (3) in the non-negative quadrant. A further elaboration of [1] is given in the article of Harrison and Hasenbein [10].
Dupuis and Ramanan used a time-reversed representation of I(v) to identify the minimizing large deviation trajectories for a class of d-dimensional (d ≥ 2) SRBMs (see [7]).
In this paper we mostly use the same notation and terminology as Avram, Dai and Hasenbein [1]. First we characterize, under suitable conditions on the data (θ, Δ, R), the reflective faces of the non-negative orthant of R^d. A reflective face of the orthant is a face along which the optimal escape path for (13) uses a strictly positive amount of regulation (Definition 9). To obtain this characterization we analyze the variational problem (13) when the path z is confined to a given face.
Our approach is based on the existing link between the linear Skorokhod problem LSP(θ, R) (Definition 3) and the linear complementarity problem associated with the vector θ and the matrix R. The latter problem is stated as follows: find v and w in R^d such that

    w = θ + Rv,  w ≥ 0,  v ≥ 0,  v_i w_i = 0 for all i ∈ {1, ..., d}.   (6)
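Since R will be assumed to be a P-matrix, problem (6) has a unique solution, and for small d it can be found by enumerating the complementary index sets (the same index sets K that label the complementary cones Γ_K): for each K, set v_i = 0 off K, force w_i = 0 on K, and test feasibility. A pure-Python sketch with illustrative data (the identity matrix R and the drift below are assumptions, not taken from the paper):

```python
# Sketch: solving the linear complementarity problem (6) by enumeration.
from itertools import combinations

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]           # ZeroDivisionError if singular
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def solve_lcp(theta, R, tol=1e-9):
    """Find v, w >= 0 with w = theta + R v and v_i w_i = 0.
    On K: w_K = 0 forces R_KK v_K = -theta_K; off K: v_i = 0.
    The solution is unique when R is a P-matrix."""
    d = len(theta)
    for k in range(d + 1):
        for K in combinations(range(d), k):
            v = [0.0] * d
            if K:
                A = [[R[i][j] for j in K] for i in K]
                b = [-theta[i] for i in K]
                try:
                    vK = gauss_solve(A, b)
                except ZeroDivisionError:
                    continue
                for idx, i in enumerate(K):
                    v[i] = vK[idx]
            w = [theta[i] + sum(R[i][j] * v[j] for j in range(d))
                 for i in range(d)]
            # w_K ~ 0 and v off K = 0 by construction, so sign checks suffice:
            if all(vi >= -tol for vi in v) and all(wi >= -tol for wi in w):
                return v, w
    return None

# Hypothetical data: R the identity (a P-matrix), theta = (-1, 2).
v, w = solve_lcp([-1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]])
print(v, w)   # [1.0, 0.0] [0.0, 2.0]
```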
This approach enables us to replace the constrained variational problem (13) by an equivalent finite-dimensional minimization problem (Theorem 2). Moreover, we give in Proposition 1 an explicit solution of this constrained variational problem and we identify an optimal path. In [1] it is shown that for a target point v ∈ S the optimal path to v is influenced by a reflective boundary of the quadrant if v is contained in a cone related to this boundary. In this paper we characterize reflective faces and we identify each of the boundary influence sets of the non-negative orthant of R^3.
Our main motivation in this paper is to give intermediate results which facilitate access to the explicit solution of the variational problem (3). We consider and solve successively the following constrained variational problems:
1. In the d-dimensional case,
   • Interior escape path (8)
   • Single segment boundary escape path (13)
2. In the three-dimensional case,
   • Two single segment escape paths (50) and (51)
   • Gradual escape path (56)
   • Paths that spiral on the boundary of S (Sect. 5)
The structure of the paper is as follows: In Sect. 2 we introduce some notation and preliminary results. In Sect. 3 we consider and solve several constrained variational problems. Section 4 is devoted to the variational problem constrained to all gradual triples. In Sect. 5 we give conditions under which certain spiraling paths to a point on an axis have a cost which is strictly less than the cost of every direct path and every path with two pieces. Finally we provide some illustrative examples in Sect. 6. In particular, the connection with the work of Dupuis and Ramanan is highlighted in Example 1.
2 Notations and preliminaries
Let d be a positive integer and let I be the set {1, ..., d}. All vectors should be envisioned as column vectors, and the i-th component of a vector u ∈ R^d is denoted u_i. Inequalities involving vectors are understood componentwise. Let R = (R_ij) be a d × d matrix; R_i denotes the i-th column of R. We use R^T to denote the transpose of the matrix R and I_d to represent the d × d identity matrix. The non-negative orthant of R^d is S = {u ∈ R^d | u ≥ 0}. If K is a subset of I, |K| is the cardinality of K, K̄ = I \ K, and we denote by

• u_K the vector with components (u_i, i ∈ K), and u^T the transpose of u.
• R_K the submatrix obtained by deleting from R the rows and columns with indices in K̄.
• F_K = {u ∈ S | u_K = 0 and u_K̄ > 0}, the face of S corresponding to the partition {K, K̄} of I.
• Γ_K the convex cone generated by the vectors (e_i)_{i∈K̄} and (−R_j)_{j∈K}, where (e_i)_{i∈I} is the canonical Euclidean basis of R^d. In the literature the cone Γ_K is called a complementary cone [14].
When K = ∅ we have Γ_∅ = S, and when K = I we set Γ = Γ_I. Note that

    Γ = { −∑_{i∈I} λ_i R_i | λ_i ≥ 0 for all i ∈ I }.

• Γ° denotes the interior of the cone Γ.
Let Δ be a d × d symmetric, strictly positive definite matrix. For vectors v ∈ R^d and w ∈ R^d we define the inner product

    ⟨v, w⟩ = v^T Δ^{−1} w,

with the associated norm ‖v‖ = √⟨v, v⟩, and we denote v^⊥ = {w ∈ R^d | ⟨v, w⟩ = 0}.
C(R_+, R^d) and C(R_+, S) denote, respectively, the spaces of continuous functions from R_+ into R^d and into S. H_S^d denotes the space of all absolutely continuous functions x(·): R_+ → R^d with x(0) ∈ S.

The following classes of matrices are of interest in the theory of SRBMs and the Skorokhod problem.
Definition 1 Let R be a d × d matrix.

Completely-S matrix: R is said to be an S-matrix if there exists a non-negative vector u ∈ R^d such that Ru > 0, and R is said to be completely-S if each of its principal submatrices is an S-matrix.

P-matrix: R is said to be a P-matrix if all of its principal minors are positive.

Admissible matrix: R is said to be admissible if there is a positive diagonal matrix D such that DR + R^T D is positive definite.
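For small d the P-matrix property is easy to test numerically: check every principal minor. (It is a standard fact that every P-matrix is completely-S.) A pure-Python sketch; the example matrix R below is an illustrative assumption, not data from the paper:

```python
# Sketch: testing the P-matrix property of Definition 1 by principal minors.
from itertools import combinations

def det(A):
    """Determinant by cofactor expansion (fine for small matrices)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def is_P_matrix(R):
    """True iff every principal minor of R is strictly positive."""
    d = len(R)
    for k in range(1, d + 1):
        for K in combinations(range(d), k):
            sub = [[R[i][j] for j in K] for i in K]
            if det(sub) <= 0:
                return False
    return True

# Illustrative regulation matrix (diagonally dominant, hence a P-matrix):
R = [[1.0, -0.5, 0.0],
     [-0.5, 1.0, -0.5],
     [0.0, -0.5, 1.0]]
print(is_P_matrix(R))   # True: the principal minors are 1, 0.75, 1, 0.75, 0.5
```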
Definition 2 (Skorokhod problem) Let x ∈ C(R_+, R^d) with x(0) ∈ S. A pair (y, z) ∈ C(R_+, S) × C(R_+, S) is said to solve the Skorokhod problem for x, or simply to solve SP(R, x), if they jointly satisfy:

1. z(t) = x(t) + Ry(t) ∈ S for all t ≥ 0.   (7)
2. For all j = 1, ..., d, the j-th component y_j of y is non-decreasing with y_j(0) = 0 and increases only on the set {t ≥ 0 | z_j(t) = 0}.
Throughout this paper a pair (y, z) solving SP(R, x) is referred to as an R-regulation of x, and z as an R-regulated path of x. Let ψ(x) be the set of all R-regulated paths of x. We refer to the (in general multi-valued) mapping ψ as the Skorokhod Map (SM). Bernard and El Kharroubi [3] proved that there exists an R-regulation for every x ∈ C(R_+, R^d) with x(0) ∈ S if and only if the matrix R is completely-S as defined in Definition 1.
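In dimension one (with R = (1)) the Skorokhod map is single-valued and explicit: y(t) = sup_{0≤s≤t} max(0, −x(s)) and z = x + y. The discretized sketch below (assuming x(0) ≥ 0, so the running supremum starts at 0) illustrates conditions 1–2 of Definition 2 on sampled data; in higher dimensions the map is in general neither explicit nor single-valued, so this is only the classical scalar special case.

```python
# Sketch: the explicit one-dimensional Skorokhod map on sampled data.

def skorokhod_1d(x_samples):
    """Given samples x(t_0), x(t_1), ... with x(0) >= 0, return (y, z):
    y is the running supremum of max(0, -x), z = x + y stays non-negative."""
    y, z, running = [], [], 0.0
    for x in x_samples:
        running = max(running, -x)   # sup of (-x) so far, floored at 0
        y.append(running)
        z.append(x + running)
    return y, z

# x(t) = 1 - 2t sampled on [0, 1]: the free path exits S at t = 1/2.
xs = [1 - 2 * (k / 10) for k in range(11)]
y, z = skorokhod_1d(xs)
print(z[-1], y[-1])   # 0.0 1.0: the regulated path is held at the boundary
```

Notice that y increases only while z sits at 0, which is exactly condition 2 of Definition 2.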
Definition 3 (Linear Skorokhod problem) When the function x is linear, x = (x_0 + tu, t ≥ 0) with x_0 ∈ S and u ∈ R^d, the problem SP(R, x) is said to be a linear Skorokhod problem LSP(u, R).
Now we recall the definition of an SRBM (see [1, 16]).
Definition 4 (SRBM) A semimartingale reflecting Brownian motion associated with the data (θ, Δ, R, S) (abbreviated as (θ, Δ, R, S)-SRBM) is an {F_t}-adapted, d-dimensional process Z together with a family of probability measures {P_x, x ∈ S} defined on some filtered space (Ω, F, {F_t}) such that, for each x ∈ S, the statements 1–4 hold.

1. P_x-a.s., Z has continuous paths and Z(t) ∈ S for all t ≥ 0.
2. Z = X + RY, P_x-a.s.
3. Under P_x,
   (a) X is a d-dimensional Brownian motion with drift vector θ and covariance matrix Δ, and X(0) has a point distribution at x;
   (b) {X(t) − X(0) − θt, F_t, t ≥ 0} is a martingale.
4. Y is an {F_t}-adapted, d-dimensional process such that P_x-a.s., for each j = 1, ..., d,
   (a) Y_j(0) = 0;
   (b) Y_j is continuous and non-decreasing;
   (c) Y_j can increase only on the set {t ≥ 0 | Z_j(t) = 0}.
It is well known that a process Z exists if and only if the matrix R is completely-S as defined in Definition 1 (Taylor and Williams [16]). Dupuis and Williams [6] proved that an SRBM with data (θ, Δ, R, S) such that R is a completely-S matrix is positive recurrent and has a unique stationary distribution if the corresponding linear Skorokhod problem LSP(θ, R) is stable (Definition 5 below).
Definition 5 Let θ ∈ R^d. The linear Skorokhod problem LSP(θ, R) is said to be stable if for every x_0 ∈ S and for every R-regulation (y, z) of x = (x_0 + tθ, t ≥ 0) we have

    lim_{t→+∞} z(t) = 0.
There is a large body of literature on the study of the recurrence of an SRBM and the stability of LSP(θ, R) [4, 6, 8, 9, 12].
In this paper we restrict our study to the case in which R is a P-matrix. Recall that for this important class of matrices we have the following result: the complementary cones {Γ_K ; K ⊂ I} partition R^d [14, 15].
Throughout this paper we suppose that Condition 1 is satisfied.

Condition 1 The matrix R is a P-matrix and the vector θ belongs to the interior of the cone Γ (i.e. θ ∈ Γ°).
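Since R is invertible, the requirement θ ∈ Γ° amounts to θ = −Rμ for some μ > 0, i.e. to checking the sign of μ = −R^{−1}θ. A 2 × 2 sketch via Cramer's rule; the matrix R and the drift vectors below are made-up data for illustration:

```python
# Sketch: testing Condition 1's requirement theta in the interior of Gamma,
# i.e. theta = -R mu with mu > 0 componentwise (2x2 case, Cramer's rule).

def in_gamma_interior(theta, R):
    """Solve R mu = -theta and check mu > 0 (d = 2 only)."""
    a, b = R[0]
    c, d = R[1]
    det = a * d - b * c
    if det == 0:
        return False
    t1, t2 = -theta[0], -theta[1]
    mu1 = (t1 * d - b * t2) / det
    mu2 = (a * t2 - t1 * c) / det
    return mu1 > 0 and mu2 > 0

R = [[1.0, -0.2], [-0.3, 1.0]]               # a P-matrix (det = 0.94 > 0)
print(in_gamma_interior([-1.0, -1.0], R))    # True: mu ~ (1.28, 1.38) > 0
print(in_gamma_interior([1.0, -1.0], R))     # False: this drift is outside Gamma
```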
3 Local variational problem
In this section we provide the analysis of the variational problems (8) and (13). We consider several constraints on the paths x and z and on the target point v in the variational problem (3).
3.1 Interior escape paths
Consider two points v and w such that v ∈ R^d \ {0}, w ∈ S and v ≠ w. Define

    I_0(w, v) = inf_{x ∈ H_S^d : x(0) = w, τ_v(x) < ∞} ∫_0^{τ_v(x)} L(ẋ(t)) dt,   (8)
where L(·) is defined by formula (4) and τ_v(x) = inf{t ≥ 0 | x(t) = v}. An optimal path of (8) is a trajectory x ∈ H_S^d such that x(0) = w, τ_v(x) < ∞ and

    ∫_0^{τ_v(x)} L(ẋ(t)) dt = I_0(w, v).

In the sequel we set I_0(v) = I_0(0, v).
In the following theorem we give the explicit solution of (8) and a corresponding optimal path. Define the vector b_0(w, v) and the trajectory x_0 by

    b_0(w, v) = (‖θ‖ / ‖v − w‖)(v − w),   (9)

and, for all t ∈ [0, +∞),

    x_0(t) = w + t b_0(w, v).   (10)

We have τ_v(x_0) = ‖v − w‖ / ‖θ‖.

Theorem 1 Let v and w be such that v ∈ R^d \ {0}, w ∈ S and v ≠ w. Then we have

    I_0(w, v) = ‖θ‖ ‖v − w‖ − ⟨θ, v − w⟩,   (11)

and the function x_0 is an optimal path of (8).
First we prove the following lemma.
Lemma 1 Let v and w be such that v ∈ R^d \ {0}, w ∈ S and v ≠ w. Then we have the result:

    I_0(w, v) = inf_{α > 0} (1/(2α)) ‖α(v − w) − θ‖².   (12)
Proof Fix v and w such that v ∈ R^d \ {0}, w ∈ S and v ≠ w. Let x ∈ H_S^d be such that x(0) = w and τ_v(x) = inf{t ≥ 0 | x(t) = v} < +∞. Since the function L is convex, applying Jensen's inequality yields

    ∫_0^{τ_v(x)} L(ẋ(t)) dt ≥ τ_v(x) L((x(τ_v(x)) − x(0)) / τ_v(x)) = (1/(2α)) ‖α(v − w) − θ‖²,

with α = 1/τ_v(x). We conclude that

    I_0(w, v) ≥ inf_{α > 0} (1/(2α)) ‖α(v − w) − θ‖².
Conversely, let α > 0 and let x(t) = w + αt(v − w) for all t ≥ 0. Then x ∈ H_S^d and x(T) = v with T = 1/α, and

    ∫_0^T L(ẋ(t)) dt = (1/(2α)) ‖α(v − w) − θ‖²,

thus

    I_0(w, v) ≤ inf_{α > 0} (1/(2α)) ‖α(v − w) − θ‖².
Proof of Theorem 1 By Lemma 1, it suffices to solve the following problem:

    inf_{α > 0} (1/(2α)) ‖α(v − w) − θ‖².

This problem has a unique minimum, attained at α* = ‖θ‖ / ‖v − w‖. Thus

    I_0(w, v) = (1/(2α*)) ‖α*(v − w) − θ‖² = ‖θ‖ ‖v − w‖ − ⟨θ, v − w⟩.

Since

    ∫_0^{τ_v(x_0)} L(ẋ_0(t)) dt = I_0(w, v),

we conclude that x_0 is an optimal path of (8).
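Theorem 1 can be checked numerically: the closed form (11) should coincide with a brute-force minimization of the one-dimensional problem (12). The data below are illustrative assumptions (Δ is taken to be the identity, so ⟨·,·⟩ and ‖·‖ are Euclidean); this is a sanity check, not part of the proof.

```python
# Sketch: numerical check of Theorem 1 against Lemma 1's reduced problem.
import math

def norm(u):
    return math.sqrt(sum(ui * ui for ui in u))

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

theta = [-1.0, -1.0, -2.0]       # assumed drift
w = [0.0, 0.0, 0.0]
v = [1.0, 2.0, 1.0]              # assumed target
d = [vi - wi for vi, wi in zip(v, w)]

# Closed form (11): I_0(w, v) = ||theta|| ||v - w|| - <theta, v - w>.
closed = norm(theta) * norm(d) - inner(theta, d)

# Lemma 1's one-dimensional problem (12), minimized on a fine grid over alpha:
def phi(alpha):
    r = [alpha * di - ti for di, ti in zip(d, theta)]
    return inner(r, r) / (2 * alpha)

grid = min(phi(k / 1000) for k in range(1, 20000))
print(abs(grid - closed) < 1e-3)   # True: the grid minimum matches (11)
```

For this data the optimal speed parameter is α* = ‖θ‖/‖v − w‖ = 1, so the grid (which contains α = 1) reproduces the closed-form value exactly.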
3.2 Single segment boundary escapes
In this section we provide the analysis of the constrained variational problem (13) defined below. We assume that v belongs to a given face F_K of the orthant S and that all R-regulated paths z travel along F_K to reach the point v.

Let J and K be two subsets of I such that K ⊂ J and 0 < |K| < |J| ≤ d. Consider two points v and w such that v ∈ F_K and w ∈ F_J. Note that if J = I then w = 0. Define
    I_K(w, v) = inf_{x ∈ H_S^d : x(0) = w} inf_{z ∈ ψ(x) : τ_v(z) < ∞, z(t) ∈ F_K ∀t ∈ (0, τ_v(z)]} ∫_0^{τ_v(z)} L(ẋ(t)) dt.   (13)

In the sequel we set I_K(v) = I_K(0, v).
Definition 6 An optimal triple for the variational problem (13) is a triple of paths (x, y, z) such that x ∈ H_S^d, x(0) = w, (y, z) is an R-regulation of x, τ_v(z) < +∞, z(t) ∈ F_K for all t ∈ (0, τ_v(z)], and

    I_K(w, v) = ∫_0^{τ_v(z)} L(ẋ(t)) dt.

The function x is called an optimal path and the function z an optimal reflected path of (13).
3.2.1 Finite-dimensional minimization problem
The following theorem reduces the problem (13) to the finite-dimensional minimization problem (14). For a non-empty subset K of I we denote by R_{IK} the d × |K| matrix obtained from R by deleting the columns with indices in K̄. Define

    J_K(w, v) = inf_{α > 0, λ ∈ R_+^{|K|}} (1/(2α)) ‖α(v − w) − R_{IK} λ − θ‖²,   (14)

where λ ∈ R_+^{|K|} means that λ ∈ R^{|K|} and λ_i ≥ 0 for all i ∈ K.
Theorem 2 Let J and K be two subsets of I such that K ⊂ J and 0 < |K| < |J| ≤ d. Fix two points v and w such that v ∈ F_K and w ∈ F_J. Then

    I_K(w, v) = J_K(w, v).   (15)
Proof Let x ∈ H_S^d be such that x(0) = w and let z ∈ ψ(x) be such that τ_v(z) < ∞ and z(t) ∈ F_K for all t ∈ (0, τ_v(z)]. By Jensen's inequality we have

    ∫_0^{τ_v(z)} L(ẋ(t)) dt ≥ (τ_v(z)/2) ‖(x(τ_v(z)) − w)/τ_v(z) − θ‖².

Using (7) we obtain

    z(τ_v(z)) = v = x(τ_v(z)) + R y(τ_v(z))

and

    (x(τ_v(z)) − w)/τ_v(z) = (v − w)/τ_v(z) − R y(τ_v(z))/τ_v(z).

Since z(t) ∈ F_K for all t ∈ (0, τ_v(z)], we have y_K̄(τ_v(z)) = 0. Thus

    (x(τ_v(z)) − w)/τ_v(z) = α(v − w) − R_{IK} λ

and

    ∫_0^{τ_v(z)} L(ẋ(t)) dt ≥ (1/(2α)) ‖α(v − w) − R_{IK} λ − θ‖²,

where α = 1/τ_v(z) and λ = y_K(τ_v(z))/τ_v(z). We conclude that I_K(w, v) ≥ J_K(w, v).
Conversely, let α > 0 and λ ∈ R_+^{|K|}, and set b = α(v − w) − R_{IK} λ. Then, by the linear complementarity problem (6), the pair of trajectories (y, z) defined for all t ≥ 0 by

    z(t) = w + tα(v − w)

and

    y_i(t) = λ_i t for i ∈ K,  y_i(t) = 0 for i ∈ K̄,

is an R-regulation of x(t) = w + tb on [0, +∞). Furthermore z(t) ∈ F_K for all t ∈ (0, τ_v(z)] with τ_v(z) = 1/α. Thus

    ∫_0^{τ_v(z)} L(ẋ(t)) dt = (1/(2α)) ‖α(v − w) − R_{IK} λ − θ‖²,

and therefore, for all α > 0 and all λ ∈ R_+^{|K|},

    I_K(w, v) ≤ (1/(2α)) ‖α(v − w) − R_{IK} λ − θ‖².

We conclude that I_K(w, v) ≤ J_K(w, v).
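The reduction of Theorem 2 makes the boundary problem computable for concrete data. A sketch for d = 3 and K = {3}: the target v lies on the face F_K = {u ∈ S : u_3 = 0, u_1, u_2 > 0}, so a single multiplier λ attached to the third column of R appears and (14) is a minimization over just two scalars, for which a coarse grid suffices. All numerical data below are illustrative assumptions, with Δ the identity.

```python
# Sketch: grid minimization of the finite-dimensional problem (14) for
# d = 3, K = {3}: minimize over alpha > 0 and lambda >= 0 the quantity
#   (1/(2 alpha)) * || alpha (v - w) - lambda * R_3 - theta ||^2.

theta = [-1.0, -1.0, -2.0]
R3 = [0.0, 0.0, 1.0]      # assumed third column of the regulation matrix R
v = [1.0, 2.0, 0.0]       # target on the face F_K = {u in S : u3 = 0}
w = [0.0, 0.0, 0.0]

def phi(alpha, lam):
    r = [alpha * (vi - wi) - lam * ri - ti
         for vi, wi, ri, ti in zip(v, w, R3, theta)]
    return sum(x * x for x in r) / (2 * alpha)

best = min(phi(a / 100, l / 100)
           for a in range(1, 500) for l in range(0, 500))
print(round(best, 2))   # 6.16
```

For this data the optimum can be found by hand: λ = 2 cancels the third residual component, leaving inf_{α>0} (5α/2 + 3 + 1/α) = 3 + 2√(5/2) ≈ 6.162, which the grid value matches to the displayed precision.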
3.2.2 Explicit solution of the finite-dimensional minimization problem
Let R* be the set {x ∈ R | x ≠ 0} and R*_+ be the set {x ∈ R | x > 0}. Let J and K be two subsets of I such that K ⊂ J and 0 < |K| < |J| ≤ d. Fix two points v and w such that v ∈ F_K and w ∈ F_J. Denote by Φ_K the function from U = R*_+ × R_+^{|K|} into R defined by

    Φ_K(α, λ) = (1/(2α)) ‖α(v − w) − R_{IK} λ − θ‖².

From formula (14) and Theorem 2 we have

    I_K(w, v) = inf_{(α,λ) ∈ U} Φ_K(α, λ).   (16)

Our goal in this section is to solve the following minimization problem:

    inf_{(α,λ) ∈ U} Φ_K(α, λ).   (17)
We prove in Theorem 3 below that the problem (17) has a unique solution, and we give in Theorem 4 an explicit expression for this solution. First we need to prove the following two lemmas.

Lemma 2 The function Φ_K as defined above is twice continuously differentiable and strictly convex on R*_+ × R_+^{|K|}.
Proof The first-order and second-order partial derivatives of Φ_K are

    ∂Φ_K(α, λ)/∂α = ‖v − w‖²/2 − ‖R_{IK} λ + θ‖²/(2α²),   (18)

    ∂Φ_K(α, λ)/∂λ_j = (1/α) ⟨R_j, R_{IK} λ − α(v − w) + θ⟩,  j ∈ K,   (19)

    ∂²Φ_K(α, λ)/∂α² = (1/α³) ‖R_{IK} λ + θ‖²,

    ∂²Φ_K(α, λ)/∂α∂λ_j = (1/α²) ⟨R_j, −R_{IK} λ − θ⟩,  j ∈ K,

    ∂²Φ_K(α, λ)/∂λ_i∂λ_j = (1/α) ⟨R_i, R_j⟩,  i ∈ K, j ∈ K.

All the second-order partial derivatives of Φ_K are continuous on R*_+ × R_+^{|K|}. Now we prove that the Hessian matrix ∇²Φ_K(α, λ) of the function Φ_K is positive definite on R*_+ × R_+^{|K|}. For all x = (x_K, x_{|K|+1}) ∈ R^{|K|+1} we have

    x^T ∇²Φ_K(α, λ) x = ‖ ∑_{i∈K} (x_i/√α) R_i − (1/α^{3/2}) (∑_{i∈K} λ_i R_i + θ) x_{|K|+1} ‖².

Since the matrix R is invertible and the vector θ ∈ Γ°, the vectors {θ, R_i, i ∈ K} are linearly independent; thus

    x^T ∇²Φ_K(α, λ) x = 0 ⟺ x = 0,

and the Hessian matrix ∇²Φ_K(α, λ) is positive definite for all (α, λ) ∈ R*_+ × R_+^{|K|}.

In the following lemma we show that the function Φ_K is coercive. The proof is given in the Appendix.
Lemma 3 The function Φ_K as defined above is coercive on U = R*_+ × R_+^{|K|}. More precisely we have