
6.6 Higher-Order Markov Regime-Switching

6.6.2 A Risk Measurement Framework Based on a HMRS

The main motivation for considering a HMRS model for risk measurement is that many real-world economic and financial time series possess memories, and these memories may have significant economic consequences. The empirical phenomenon of (long-term) memory is known as the Joseph effect; see, for example, [78, 153]. It is known that Markov chains serve as reasonable approximations to (continuous-state) time series models. In the same vein, the higher-order Markov chain, which is also called the weak Markov chain (see, for instance, [201, 202, 205]), provides a feasible and convenient way to approximate time series models with memories.

In this subsection we present a HMRS model which was introduced in [189]. We only highlight the main results here; interested readers may refer to [189] for details. The central tenet of the HMRS model is that the expected rate of return and the volatility of a risky portfolio are modulated by a discrete-time, finite-state, higher-order Markov chain. Note that instead of modeling the returns of individual risky assets as in the last subsection, we consider the model at the portfolio level and describe the dynamics of the portfolio's return. The rationale of the HMRS model is to incorporate a regime-switching effect with long-term memory in modeling the dynamics of the portfolio's returns. This is different from some existing time series models with long-term memories, where the effect of long-term memory is incorporated in the innovation terms, or the error terms. In what follows, we describe the mathematical setup of the HMRS.

Let $\mathcal{T}$ be the time parameter set $\{1, 2, \ldots\}$ of the economy. Unlike the previous subsection, we consider an infinite-horizon, discrete-time situation for the sake of generality. To describe uncertainty, we consider a complete probability space $(\Omega, \mathcal{F}, P)$, where $P$ is a real-world probability measure. Note that for the purpose of risk measurement, a real-world probability measure should be used.

Let $\mathbf{V} := \{V_t, t \in \mathcal{T}\}$ be a discrete-time, higher-order Markov chain defined on $(\Omega, \mathcal{F}, P)$ with state space

$$\mathcal{V} := \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_M\}.$$

Here we suppose that the Markov chain is hidden and that the states of the chain represent different states of a hidden economy. We may interpret $\mathbf{v}_1$ as the "best" economic state, $\mathbf{v}_2$ as the second "best" economic state, and so on, with $\mathbf{v}_M$ the "worst" economic state.

We consider here the situation where $\mathbf{V}$ is an $l$th-order Markov chain. For each $l = 1, 2, \ldots$ and each $t \geq l \geq 1$, let

$$\mathbf{i}(t, l) := (i_t, i_{t-1}, \ldots, i_{t-l+1}),$$

where $i_t, i_{t-1}, \ldots, i_{t-l+1} \in \{1, 2, \ldots, M\}$.

We note that $\mathbf{i}(t, l)$ represents the indices of the states of the Markov chain from time $t-l+1$ to $t$ inclusively. In other words, given that

$$\mathbf{i}(t, l) := (i_t, i_{t-1}, \ldots, i_{t-l+1}),$$

we have

$$V_t = \mathbf{v}_{i_t}, \quad V_{t-1} = \mathbf{v}_{i_{t-1}}, \quad \ldots, \quad V_{t-l+1} = \mathbf{v}_{i_{t-l+1}}.$$

To specify the probability laws of the $l$th-order Markov chain, we define a set of state transition probabilities by putting:

$$P(i_{t+1} \mid \mathbf{i}(t, l)) := P[V_{t+1} = \mathbf{v}_{i_{t+1}} \mid V_t = \mathbf{v}_{i_t}, \ldots, V_{t-l+1} = \mathbf{v}_{i_{t-l+1}}], \quad i_{t+1} = 1, 2, \ldots, M. \qquad (6.40)$$

The order $l$ represents the degree of the long-term memory of the states of the economy.

To completely determine the probability laws of the chain $\mathbf{V}$, we must also specify its initial distributions as follows:

$$P(i_{t+1} \mid \mathbf{i}(t)) := \pi_{i_{t+1} \mid \mathbf{i}(t)}, \quad \text{for } 0 \leq t < l, \ i_{t+1} = 1, 2, \ldots, M, \qquad (6.41)$$

where $\pi_{i_{t+1} \mid \mathbf{i}(t)}$ is the probability that $V_{t+1} = \mathbf{v}_{i_{t+1}}$ given that $V_t = \mathbf{v}_{i_t}, V_{t-1} = \mathbf{v}_{i_{t-1}}, \ldots, V_1 = \mathbf{v}_{i_1}$, and $\mathbf{i}(t) = (i_1, i_2, \ldots, i_t)$.
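To make the probability laws in (6.40) and (6.41) concrete, the following sketch simulates a second-order ($l = 2$) chain with $M = 2$ states, labelled 0 and 1. All numerical values (the tables `P2`, `PI1`, `P1`) are hypothetical and chosen purely for illustration.

```python
import random

# P2[(i_t, i_{t-1})][i_{t+1}] plays the role of the transition law (6.40).
P2 = {
    (0, 0): [0.9, 0.1],
    (0, 1): [0.7, 0.3],
    (1, 0): [0.4, 0.6],
    (1, 1): [0.2, 0.8],
}
PI1 = [0.5, 0.5]                        # initial law of V_1 (Eq. (6.41), t = 0)
P1 = {0: [0.8, 0.2], 1: [0.3, 0.7]}     # law of V_2 given V_1 (Eq. (6.41), t = 1)

def simulate_chain(T, seed=42):
    """Simulate V_1, ..., V_T of the hypothetical second-order chain."""
    rng = random.Random(seed)
    path = [rng.choices([0, 1], weights=PI1)[0]]
    path.append(rng.choices([0, 1], weights=P1[path[0]])[0])
    while len(path) < T:
        w = P2[(path[-1], path[-2])]    # memory of the last two states
        path.append(rng.choices([0, 1], weights=w)[0])
    return path

path = simulate_chain(500)
```

Note how the sampling weight at each step depends on the last *two* states, which is exactly the extra memory that distinguishes the second-order chain from a simple Markov chain.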

We now specify the HMRS model modulated by the $l$th-order hidden Markov chain $\mathbf{V}$. Let $\{Y_t, t \in \mathcal{T}\}$ be a sequence of logarithmic returns of a risky portfolio, where $Y_t$ denotes the logarithmic return of the portfolio in the $t$th period. To simplify our notation, we write $\mathbf{V}_{t,l}$ for $(V_t, V_{t-1}, \ldots, V_{t-l+1})$, for each $t \geq l \geq 1$, $l = 1, 2, \ldots$.

Let $\mu_t$ and $\sigma_t$ be the expected rate of return and the volatility of the portfolio in the $t$th period, respectively. We suppose that both the expected rate of return and the volatility are modulated by the $l$th-order hidden Markov chain $\mathbf{V}$ as follows:

$$\mu_t := \mu(\mathbf{V}_{t,l}), \quad \sigma_t := \sigma(\mathbf{V}_{t,l}).$$

In other words, both the expected rate of return and the volatility of the portfolio in the $t$th period depend on the current and past values of the chain $\mathbf{V}$ up to lag $l$.

172 6 Higher-Order Markov Chains

Let $\{\epsilon_t, t \in \mathcal{T}\}$ be a sequence of independent and identically distributed (i.i.d.) random variables defined on $(\Omega, \mathcal{F}, P)$ with common distribution $N(0, 1)$, a standard normal distribution with zero mean and unit variance. We assume that $\boldsymbol{\epsilon}$ and $\mathbf{V}$ are stochastically independent under $P$. Then we suppose that the evolution of the logarithmic returns of the portfolio over time is governed by the following HMRS model:

$$Y_t = \mu(\mathbf{V}_{t,l}) + \sigma(\mathbf{V}_{t,l})\epsilon_t. \qquad (6.42)$$

Note that the structure of the HMRS model resembles that of the continuous-state observation process in [87] and that the HMRS is a generalization of the simple Markov regime-switching model discussed in the last subsection. In particular, when $l = 1$ and the chain $\mathbf{V}$ has two states, the above HMRS model reduces to the Markov regime-switching model in the last subsection.

To simplify our discussion, we focus on the situation where $l = 2$ (i.e., a second-order hidden Markov chain). The method presented below can be extended to a general order $l$; however, the notation in the general case is tedious. When $l = 2$, the dynamics of the logarithmic returns of the portfolio become:

$$Y_t = \mu(V_t, V_{t-1}) + \sigma(V_t, V_{t-1})\epsilon_t, \quad t \in \mathcal{T}. \qquad (6.43)$$

Instead of handling the second-order hidden Markov chain directly, we consider a two-dimensional first-order hidden Markov chain $\mathbf{X}$ which embeds the second-order chain. By doing so, we can adopt the filtering method for first-order hidden Markov chains to derive filters for the second-order hidden Markov chain.
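As a quick illustration of (6.43), the sketch below simulates logarithmic returns under a hypothetical two-regime, second-order chain. The transition table `P2` and the $\mu$, $\sigma$ tables are invented for illustration only; they are not taken from [189].

```python
import random

# Hypothetical regime-dependent parameters: mu(V_t, V_{t-1}), sigma(V_t, V_{t-1}).
MU = {(0, 0): 0.08, (0, 1): 0.05, (1, 0): 0.01, (1, 1): -0.04}
SIGMA = {(0, 0): 0.10, (0, 1): 0.15, (1, 0): 0.20, (1, 1): 0.30}
P2 = {(0, 0): [0.9, 0.1], (0, 1): [0.7, 0.3],
      (1, 0): [0.4, 0.6], (1, 1): [0.2, 0.8]}

def simulate_returns(T, seed=7):
    """Draw Y_1, ..., Y_T from Y_t = mu(V_t, V_{t-1}) + sigma(V_t, V_{t-1}) * eps_t."""
    rng = random.Random(seed)
    v_prev, v_curr = 0, 0               # start the chain in regime (0, 0)
    returns = []
    for _ in range(T):
        eps = rng.gauss(0.0, 1.0)       # i.i.d. standard normal innovation
        returns.append(MU[(v_curr, v_prev)] + SIGMA[(v_curr, v_prev)] * eps)
        v_next = rng.choices([0, 1], weights=P2[(v_curr, v_prev)])[0]
        v_prev, v_curr = v_curr, v_next
    return returns

ys = simulate_returns(1000)
```

The long-memory effect enters through the drift and volatility themselves (both look two periods back), not through the innovation terms, in line with the remark at the start of this subsection.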

Consider now the following two-dimensional hidden Markov chain $\mathbf{X}$ defined on $(\Omega, \mathcal{F}, P)$, which embeds $\mathbf{V}$:

$$\mathbf{X}_t := (V_t, V_{t-1}). \qquad (6.44)$$

Let $\mathcal{X}$ be an $(M \times M)$-matrix with $(i, j)$-element

$$\mathbf{x}_{ij} := (\mathbf{v}_i, \mathbf{v}_j), \quad i, j = 1, 2, \ldots, M,$$

so that $\mathcal{X}$ is the state space of the two-dimensional first-order hidden Markov chain $\mathbf{X}$.

Define $\tilde{\mathbf{X}} := \mathrm{vec}(\mathcal{X})$, where $\mathrm{vec}(\cdot)$ is the column-by-column vectorization function. Then $\tilde{\mathbf{X}}$ is an $M^2$-dimensional column vector. In particular, the $((j-1)M + i)$th element $\tilde{x}_{(j-1)M+i}$ of $\tilde{\mathbf{X}}$ is given by $\mathbf{x}_{ij} := (\mathbf{v}_i, \mathbf{v}_j)$. Consequently, we can define a one-dimensional first-order hidden Markov chain $\tilde{\mathbf{X}}$, induced by the two-dimensional first-order hidden Markov chain $\mathbf{X}$, such that

$$\tilde{X}_t = \tilde{x}_{(j-1)M+i}$$

whenever $\mathbf{X}_t = \mathbf{x}_{ij}$.
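The bookkeeping between the pair index $(i, j)$ and the flat index $(j-1)M + i$ can be checked mechanically. A small sketch with a hypothetical $M = 3$, using 1-based indices as in the text:

```python
M = 3

def pair_to_flat(i, j, M):
    """Map the 1-based pair (i, j) to the 1-based flat index (j - 1) * M + i."""
    return (j - 1) * M + i

def flat_to_pair(k, M):
    """Inverse map: recover (i, j) from the flat index k."""
    i = (k - 1) % M + 1
    j = (k - 1) // M + 1
    return i, j

# Enumerating column by column (j outer, i inner) reproduces vec(X):
pairs = [(i, j) for j in range(1, M + 1) for i in range(1, M + 1)]
flats = [pair_to_flat(i, j, M) for i, j in pairs]
```

Column-by-column enumeration sends the pairs to the consecutive flat indices $1, 2, \ldots, M^2$, which is exactly what the $\mathrm{vec}(\cdot)$ construction promises.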

Following the treatment in [83], without loss of generality, we identify the state space of the chain $\tilde{\mathbf{X}}$ with a set of standard unit vectors in $\mathbb{R}^{M^2}$:

$$\mathcal{E} := \{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_{M^2}\},$$

with the $i$th component of $\mathbf{e}_j$ being the Kronecker delta $\delta_{ij}$, for each $i, j = 1, 2, \ldots, M^2$.

The use of $\mathcal{E}$ as the state space of the chain $\tilde{\mathbf{X}}$ facilitates the mathematics, and this state space is called the canonical state space of the chain $\tilde{\mathbf{X}}$. Again, to specify the probability laws of the chain $\tilde{\mathbf{X}}$, we define an $(M^2 \times M^2)$-matrix $\mathbf{A}$ representing the time-independent (homogeneous) transition probability matrix of the first-order Markov chain $\tilde{\mathbf{X}}$. The $(j, k)$-element $a_{jk}$ of $\mathbf{A}$, $j, k = 1, 2, \ldots, M^2$, is given by

$$a_{jk} := P(\tilde{X}_t = \mathbf{e}_j \mid \tilde{X}_{t-1} = \mathbf{e}_k). \qquad (6.45)$$

Let $\mathbb{F}^{\tilde{X}} := \{\mathcal{F}^{\tilde{X}}_t \mid t \in \mathcal{T}\}$ be the right-continuous, $P$-completed, natural filtration generated by the chain $\tilde{\mathbf{X}}$, where $\mathcal{F}^{\tilde{X}}_t$ is the minimal $\sigma$-field generated by the information about $\tilde{\mathbf{X}}$ up to and including time $t$ and all $P$-null sets in $\mathcal{F}$. Then, with the canonical state space $\mathcal{E}$ of the chain $\tilde{\mathbf{X}}$, the following semimartingale dynamics for the chain $\tilde{\mathbf{X}}$ are obtained in [84]:

$$\tilde{X}_t = \mathbf{A}\tilde{X}_{t-1} + \mathbf{L}_t. \qquad (6.46)$$

Here $\mathbf{L}$ is an $\mathbb{R}^{M^2}$-valued $(\mathbb{F}^{\tilde{X}}, P)$-martingale difference process.
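Concretely, for $l = 2$ the matrix $\mathbf{A}$ of (6.45) is sparse: a transition from the embedded state $(V_t, V_{t-1}) = (\mathbf{v}_{i'}, \mathbf{v}_{j'})$ can only lead to states of the form $(V_{t+1}, V_t) = (\mathbf{v}_i, \mathbf{v}_{i'})$, since the old current state must become the new lag. The sketch below builds $\mathbf{A}$ from hypothetical second-order transition probabilities (0-based indices for convenience) and checks that each column sums to one.

```python
import numpy as np

# Hypothetical second-order probabilities, 0-based:
# p2[(i, j)][k] = P(V_{t+1} = v_k | V_t = v_i, V_{t-1} = v_j).
M = 2
p2 = {(0, 0): [0.9, 0.1], (0, 1): [0.7, 0.3],
      (1, 0): [0.4, 0.6], (1, 1): [0.2, 0.8]}

A = np.zeros((M * M, M * M))
for i_prev, j_prev in p2:                 # from embedded state (V_t, V_{t-1})
    k = j_prev * M + i_prev               # 0-based flat column index
    for i_next in range(M):               # to embedded state (V_{t+1}, V_t)
        j = i_prev * M + i_next           # feasible: new lag equals old current
        A[j, k] = p2[(i_prev, j_prev)][i_next]

# Each column of A is a probability distribution over the next embedded state.
col_sums = A.sum(axis=0)
```

Most entries of $\mathbf{A}$ are structurally zero, which is worth exploiting when $M$ is large: only $M^3$ of the $M^4$ entries can be nonzero.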

We now specify the structure of information in our model. For each $t \in \mathcal{T}$, let $\mathcal{F}^Y_t$ and $\mathcal{F}^V_t$ be the $\sigma$-fields generated by the return process $\mathbf{Y}$ and the hidden Markov chain $\mathbf{V}$ up to and including time $t$, respectively. Note that $\mathcal{F}^Y_t$ represents the observable information at time $t$.

For each $i, j = 1, 2, \ldots, M$, let

$$\phi_{ij}(x) := \frac{1}{\sqrt{2\pi\sigma_{ij}^2}} \exp\left(-\frac{x^2}{2\sigma_{ij}^2}\right).$$

This is the probability density function of a normal distribution $N(0, \sigma_{ij}^2)$ with mean zero and variance $\sigma_{ij}^2$, where $\sigma_{ij} := \sigma(\mathbf{v}_i, \mathbf{v}_j)$.

Then it has been shown in [189] that the predictive distribution $F_{Y_{t+1}}(y \mid \mathcal{F}^Y_t)$ of $Y_{t+1}$ given $\mathcal{F}^Y_t$ under $P$ is given by:

$$F_{Y_{t+1}}(y \mid \mathcal{F}^Y_t) = \sum_{i=1}^{M} \sum_{j=1}^{M} P(\tilde{X}_t = \tilde{x}_{(j-1)M+i} \mid \mathcal{F}^Y_t) \int_{-\infty}^{y - \mu_{ij}} \phi_{ij}(x)\,dx,$$

where $\mu_{ij} := \mu(\mathbf{v}_i, \mathbf{v}_j)$, and so

$$f_{Y_{t+1}}(y \mid \mathcal{F}^Y_t) = \sum_{i=1}^{M} \sum_{j=1}^{M} P(\tilde{X}_t = \tilde{x}_{(j-1)M+i} \mid \mathcal{F}^Y_t)\, \phi_{ij}(y - \mu_{ij}).$$
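In other words, the predictive density is a finite Gaussian mixture: each component $N(\mu_{ij}, \sigma_{ij}^2)$ is weighted by the filtered probability of the embedded state $(\mathbf{v}_i, \mathbf{v}_j)$. A numerical sketch with hypothetical weights and parameters:

```python
import math

# Hypothetical regime parameters and filtered state probabilities, keyed by (i, j).
MU = {(0, 0): 0.08, (0, 1): 0.05, (1, 0): 0.01, (1, 1): -0.04}
SIGMA = {(0, 0): 0.10, (0, 1): 0.15, (1, 0): 0.20, (1, 1): 0.30}
WEIGHTS = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}

def normal_pdf(x, sigma):
    """Density of N(0, sigma^2) at x, i.e. phi_ij above."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def predictive_density(y):
    """f_{Y_{t+1}}(y | F_t^Y): the filtered-probability-weighted sum over regimes."""
    return sum(w * normal_pdf(y - MU[ij], SIGMA[ij]) for ij, w in WEIGHTS.items())

# Sanity check on a coarse grid: the mixture density integrates to (about) one.
grid = [k * 0.01 for k in range(-300, 301)]
mass = sum(predictive_density(y) * 0.01 for y in grid)
```

Because the predictive law is an explicit mixture, tail-risk quantities such as Value-at-Risk can be computed from it by one-dimensional root finding on the mixture distribution function.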


For each $t \in \mathcal{T}$, let $\tilde{X}^Y_t := E(\tilde{X}_t \mid \mathcal{F}^Y_t)$, where $E$ is the expectation under the measure $P$. Then

$$F_{Y_{t+1}}(y \mid \mathcal{F}^Y_t) = \sum_{i=1}^{M} \sum_{j=1}^{M} \langle \tilde{X}^Y_t, \mathbf{e}_{(j-1)M+i} \rangle \int_{-\infty}^{y - \mu_{ij}} \phi_{ij}(x)\,dx,$$

and

$$f_{Y_{t+1}}(y \mid \mathcal{F}^Y_t) = \sum_{i=1}^{M} \sum_{j=1}^{M} \langle \tilde{X}^Y_t, \mathbf{e}_{(j-1)M+i} \rangle\, \phi_{ij}(y - \mu_{ij}),$$

where $\langle \cdot, \cdot \rangle$ denotes the scalar product in $\mathbb{R}^{M^2}$.

Using a version of Bayes' rule, a recursive filter for $\tilde{X}^Y_t$ can be obtained as follows:

$$\tilde{X}^Y_{t+1} := E(\tilde{X}_{t+1} \mid \mathcal{F}^Y_{t+1}) = \frac{\displaystyle\sum_{i=1}^{M} \sum_{j=1}^{M} \langle \tilde{X}^Y_t, \mathbf{e}_{(j-1)M+i} \rangle\, \phi_{ij}(y_{t+1} - \mu_{ij})\, \mathbf{A}\mathbf{e}_{(j-1)M+i}}{\displaystyle\sum_{i=1}^{M} \sum_{j=1}^{M} \langle \tilde{X}^Y_t, \mathbf{e}_{(j-1)M+i} \rangle\, \phi_{ij}(y_{t+1} - \mu_{ij})}. \qquad (6.47)$$

This filtered estimate $\tilde{X}^Y_{t+1}$ is optimal among all linear estimates in the sense of mean-square loss. This is left as an exercise.
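A compact numerical sketch of one pass of the recursion (6.47) for $M = 2$ (so $M^2 = 4$): reweight the current filter by the Gaussian likelihood of the new observation, push it forward through $\mathbf{A}$, and normalise. The matrix `A`, the per-state `mu` and `sigma`, and the stand-in observation series are all hypothetical; states are flattened as $(j-1)M + i$ with 0-based indices.

```python
import numpy as np

M = 2
# Transition matrix of the embedded 4-state chain (columns sum to one).
A = np.array([[0.9, 0.0, 0.7, 0.0],
              [0.1, 0.0, 0.3, 0.0],
              [0.0, 0.4, 0.0, 0.2],
              [0.0, 0.6, 0.0, 0.8]])
mu = np.array([0.08, 0.01, 0.05, -0.04])     # mu_ij per flat state
sigma = np.array([0.10, 0.20, 0.15, 0.30])   # sigma_ij per flat state

def filter_step(x_prev, y):
    """One application of Eq. (6.47): x_prev is the previous filtered
    distribution; y is the newly observed logarithmic return y_{t+1}."""
    lik = np.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    unnorm = A @ (lik * x_prev)              # numerator of (6.47)
    return unnorm / unnorm.sum()             # divide by the denominator

rng = np.random.default_rng(0)
x = np.full(4, 0.25)                         # uniform initial filter
for y in rng.normal(0.05, 0.12, size=200):   # stand-in return series
    x = filter_step(x, y)
```

Since the columns of `A` sum to one, normalising the pushed-forward vector is equivalent to dividing by the denominator of (6.47), and the output remains a probability vector at every step.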