Essays in Risk Management: Conditional Expectation with Applications in Finance and Insurance


Conditional Expectation with Applications in Finance and Insurance

Mateusz Maj

Faculty of Economic, Political and Social Sciences and Solvay Business School, Vrije Universiteit Brussel

Faculty of Sciences, Université Libre de Bruxelles

A thesis submitted for the degree of

Docteur en Sciences de l'Université Libre de Bruxelles

&

Doctor in de Toegepaste Economische Wetenschappen van de Vrije Universiteit Brussel

June 2012


Conditional Expectation with Applications in Finance and Insurance

Supervisors:

prof. dr. Griselda Deelstra (Université Libre de Bruxelles)
prof. dr. Steven Vanduffel (Vrije Universiteit Brussel)

Members of the Ph.D. commission:

prof. dr. Carole Bernard (University of Waterloo)
prof. dr. Ann De Schepper (Universiteit Antwerpen)
prof. dr. Griselda Deelstra (Université Libre de Bruxelles)
prof. dr. Pierre Patie (Université Libre de Bruxelles)
prof. dr. Jean-Marie Reinhard (Université Libre de Bruxelles)
prof. dr. Paul Van Goethem (Vrije Universiteit Brussel)
prof. dr. Steven Vanduffel (Vrije Universiteit Brussel)


The length of the way makes us think that we have arrived at our journey’s end - Baron de Montesquieu.

This dissertation is the result of more than 5 years of academic research I conducted during my doctoral studies at Katholieke Universiteit Leuven, Vrije Universiteit Brussel and Université Libre de Bruxelles. It was a long and winding but incredibly interesting journey that would not have been possible without the passionate and continued support of advisors, colleagues, friends and family.

First and foremost, I would like to thank my supervisor Steven Vanduffel from Vrije Universiteit Brussel, who encouraged me to start this Ph.D. and gave me the opportunity to combine academic activities with business projects. Steven, thank you for the research environment you created, for your time, ideas, criticism, constructive discussions and all the support I received from you. Furthermore, I would like to thank my co-supervisor Griselda Deelstra from Université Libre de Bruxelles for her kind encouragement and useful remarks on the dissertation. Next, I would like to thank Jan Dhaene from Katholieke Universiteit Leuven for engrossing discussions on various aspects of risk management. It was a short but interesting cooperation. Finally, I am grateful to all my colleagues, the co-authors of my papers and all the members of my Ph.D. committee for their support and comprehensive discussions. As always, none of them should be held responsible for the errors that remain.


nights and weekends. I am also very grateful to my parents and my sister, who gave me the freedom and opportunity to develop my own interests and who supported me in all my pursuits. This undertaking would not have been possible without their love. Special acknowledgements are also given to Jacek, who introduced me to the world of mathematics (requiescat in pace). This section would be incomplete without mentioning my family-in-law, who always supported and encouraged me from afar to go forward.

Finally, I would like to thank all friends and comrades who made my life in Leuven and Belgium (and beyond...) enjoyable and sociable. Adam, Cami, Luci, Mada, Vika, Henrik, Maciek, Kuba, Valerie, Marysia, Fernando, Omid, Elham, Dominik, Aldona, Daniel, Agnieszka, Michał, Justa, Paweł, Dzidek, Tuśka, Bocian, Zosia, Adam, Patyk - big thanks for your optimism, zest for life, hospitality, all the good times and our travels that allowed me to recharge my batteries.

Thank you all.


In this work we study two problems motivated by Risk Management:

the optimal design of financial products from an investor's point of view and the calculation of bounds and approximations for sums involving non-independent random variables. The element that interconnects these two topics is the notion of conditioning, a fundamental concept in probability and statistics which appears to be a useful device in finance. In the first part of the dissertation, we analyse structured products that are now widespread in the banking and insurance industry. These products typically protect the investor against bearish stock markets while offering upside participation when the markets are bullish. Examples of these products include capital guaranteed funds commercialised by banks, and equity-linked contracts sold by insurers. The design of these products is complex in general and it is vital to examine to which extent they are actually interesting from the investor's point of view and whether they cannot be dominated by other strategies. In the academic literature on structured products the focus has been almost exclusively on the pricing and hedging of these instruments and less on their performance from an investor's point of view. In this work we analyse the attractiveness of these products. We assess the theoretical cost of inefficiency when buying a structured product and describe the optimal strategy explicitly where possible. Moreover, we examine the cost of this inefficiency in practice.

We extend the results of Dybvig (1988a, 1988b) and Cox & Leland (1982, 2000), who investigated the inefficiency of path-dependent pay-offs in the context of a complete, one-dimensional market. In the dissertation we consider this problem in one-dimensional Lévy and multidimensional Black-Scholes markets and provide evidence that path-dependent pay-offs should not be preferred by decision makers with a fixed investment horizon, and that they should buy path-independent structures instead. In these market settings we also derive the optimal contract that provides the given distribution to the consumer and, in the case of risk-averse investors, we are able to propose two ways of improving the design of financial products. Finally, we illustrate the theory with a few well-known securities and strategies, e.g. dollar cost averaging, buy-and-hold investments and widely used portfolio insurance strategies. The second part of the dissertation considers the problem of finding the distribution of a sum of non-independent random variables. Such dependent sums appear quite often in insurance and finance, for instance in the case of the aggregate claim distribution or the loss distribution of an investment portfolio. An interesting avenue to cope with this problem consists in using so-called convex bounds, studied by Dhaene et al. (2002a, 2002b), who applied these to sums of log-normal random variables. In their papers they have shown how these convex bounds can be used to derive closed-form approximations for several risk measures of such a sum. In the dissertation we prove that, unlike in the log-normal case, the construction of a convex lower bound in explicit form appears to be out of reach for general sums of log-elliptical risks. We then show how to construct stop-loss bounds and use these to obtain mean-preserving approximations for general sums of log-elliptical distributions in explicit form.


Contents viii

1 Introduction 1

2 Basic Concepts 5

2.1 Probability Theory Preliminaries . . . 5

2.2 Decision Making Under Uncertainty . . . 10

2.3 Risk Ordering . . . 12

2.4 Comonotonicity . . . 16

2.5 Principles of Asset Pricing . . . 19

2.5.1 Stochastic processes preliminaries . . . 19

2.5.2 Asset pricing . . . 23

2.5.3 Black-Scholes markets . . . 28

2.5.4 Multidimensional Black-Scholes markets . . . 30

2.5.4.1 Description . . . 30

2.5.4.2 The market portfolio . . . 31

2.5.4.3 Pricing in a Multidimensional Black-Scholes Market . . . 34

2.5.4.4 Fair pricing with Growth Optimal Portfolio . . . 36

2.5.5 Lévy markets . . . 38

2.5.5.1 Esscher transform . . . 39

3 The suboptimality of path-dependent pay-offs in Lévy markets 43

3.1 Introduction . . . 43

3.2 Inefficiency of Path-dependent Pay-offs . . . 45

3.2.1 Main Results . . . 45

(10)

3.2.2 Inefficiency of Geometric Averaging . . . 48

3.3 Optimal Path-independent Pay-offs . . . 49

3.4 Numerical Illustrations . . . 52

3.4.1 Click fund . . . 52

3.4.2 Constant Proportion Portfolio Insurance (CPPI) . . . 54

3.5 Final Remarks . . . 57

4 An explicit option-based strategy that outperforms Dollar Cost Averaging 59

4.1 Introduction . . . 59

4.2 Dominating DCA in Lévy Markets . . . 62

4.3 The Case of Brownian Motion . . . 66

4.3.1 The dominating pay-off . . . 66

4.3.2 Is the dominating pay-off unique? . . . 68

4.3.3 Discussing the performance of DCA . . . 69

4.3.4 Some motivation to support DCA . . . 71

4.4 DCA with a Minimal Guarantee . . . 72

4.5 The Continuous Setting . . . 76

4.6 Final Remarks . . . 78

5 Improving the Design of Financial Products in a Multidimensional Black-Scholes Market 81

5.1 Introduction . . . 81

5.2 Optimal pay-offs . . . 83

5.2.1 The case of profit-seeking investors . . . 84

5.2.2 The case of risk-averse investors . . . 87

5.2.3 Examples . . . 90

5.3 Illustrations . . . 94

5.3.1 Buy-and-hold strategies . . . 94

5.3.2 Margrabe option . . . 98

5.3.3 Constant Proportion Portfolio Insurance . . . 100

5.3.3.1 Model setup . . . 100

5.3.3.2 A single-asset CPPI . . . 101

(11)

5.3.3.3 A multi-asset CPPI . . . 101

5.4 Conclusions . . . 104

6 Bounds and Approximations for Sums of Dependent Log-Elliptical Random Variables 107

6.1 Introduction . . . 107

6.2 Elliptical and Spherical Distributions . . . 111

6.2.1 Definition of elliptical distributions . . . 111

6.2.2 Spherical distributions . . . 114

6.2.3 Conditional distributions . . . 116

6.3 Log-Elliptical Distributions . . . 117

6.4 Convex Order Bounds for Sums of Random Variables . . . 121

6.4.1 Convex Order Bounds . . . 121

6.5 Convex Order Bounds for log-Elliptical Sums . . . 122

6.6 Closed-Form Approximations for log-Elliptical Sums . . . 127

6.6.1 Approximations based on stop-loss order . . . 127

6.6.2 Optimal choice of Λ . . . 129

6.7 Numerical Illustrations . . . 130

6.8 Concluding Remarks . . . 134

Appendix A to Chapter 5 137

A.1 Moments of elliptical distributions . . . 137

A.2 Multivariate densities of elliptical distributions . . . 138

Bibliography 143


Introduction

This chapter is a brief introduction to the problems studied in this thesis; it delineates the thesis motivation, its scope and its contribution to the existing literature.

The purpose of this dissertation is twofold. First, the text investigates the optimal design of financial products from an investor's point of view. Secondly, this study examines bounds and approximations for sums involving non-independent random variables. The element that interconnects these two topics is the notion of conditioning, a fundamental concept in probability and statistics which appears to be a useful device in finance. This idea, indeed, is near-ubiquitous in many research areas. Multiple important concepts in statistics are defined in terms of conditional probability, e.g. maximum likelihood estimation, sufficient statistics and significance levels, among others. The density operator in quantum mechanics that describes the statistical state of a quantum system is analogous to the conditional expectation operator in probability theory (Gustafson & Sigal (2006)). The concept of a martingale is used for describing a fair game. Loosely speaking, it means that the expected outcome at time t > s, conditional on the information available at time s, is equal to the observation at time s. Martingale theory is always behind the scenes in Stochastic Control, Dynamic Programming, Game Theory and Asset Pricing. In this thesis we extensively use martingale theory for stochastic asset pricing and we also make use of the conditional expectation operator E(X|Y), which can be interpreted as the best estimate of the random variable X given the available information about Y. In that sense, by adding extra information about Y we reduce the uncertainty regarding X. Finally, it is interesting to


note that the conditional expectation can be considered as 'an approximation' of the original random variable. This idea of considering conditional expectations to cope with difficult random variables, such as sums of random variables, can be traced back to Curran (1994) and Rogers & Shi (1995). They showed, in the context of a Black-Scholes framework, how this could be used to derive sharp bounds for the value of an Asian option. In our study we will employ this technique for deriving bounds for sums of dependent log-elliptical random variables.
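To make the conditioning idea concrete, here is a small illustrative simulation (our own sketch, not from the thesis; the correlation value and sample size are arbitrary). For a sum S = e^{Z_1} + e^{Z_2} of two dependent lognormals, conditioning on Λ = Z_1 + Z_2 yields the closed form E[S | Λ] = 2 exp(Λ/2 + (1 − ρ)/4), which has the same mean as S but is less variable, exactly the kind of convex lower bound exploited for log-elliptical sums:

```python
import math
import random

random.seed(11)

# Sketch (not from the thesis): convex lower bound for S = exp(Z1) + exp(Z2),
# where (Z1, Z2) is bivariate standard normal with correlation rho. With
# L = Z1 + Z2 one has E[Zi | L] = L/2 and Var(Zi | L) = (1 - rho)/2, giving
# the closed form E[S | L] = 2 * exp(L/2 + (1 - rho)/4).
rho, n = 0.5, 200_000
s_vals, bound_vals = [], []
for _ in range(n):
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho * rho) * random.gauss(0, 1)
    lam = z1 + z2
    s_vals.append(math.exp(z1) + math.exp(z2))
    bound_vals.append(2.0 * math.exp(lam / 2.0 + (1.0 - rho) / 4.0))

mean = lambda v: sum(v) / len(v)
var = lambda v: mean([t * t for t in v]) - mean(v) ** 2

print(abs(mean(s_vals) - mean(bound_vals)) < 0.05)  # True: the means agree
print(var(bound_vals) < var(s_vals))                # True: the bound is less variable
```

By Jensen's inequality the conditional expectation E[S | Λ] precedes S in convex order, so it is a tractable, less volatile stand-in for the intractable sum.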

In the first part of the dissertation, we analyse structured products that are now widespread in the banking and insurance industry. These products typically protect the investor against bearish stock markets while offering upside participation when the markets are bullish. A structured product is a pre-packaged financial instrument which combines standard financial products such as stocks and bonds with at least one derivative. As such, they greatly enlarge the set of investments that are available to investors. Examples of these products include capital guaranteed funds commercialised by banks, and equity-linked contracts sold by insurers. The design of these products is complex in general. Financial companies, life insurers and pension providers themselves very often use these products in the management of their assets and liabilities. For example, the so-called Constant Proportion Portfolio Insurance (CPPI), which aims to protect the value of a company's investment portfolio while taking benefit of increasing markets, can be seen as yet another example of a structured product.

These investment strategies are in general complicated and it is vital to examine to which extent they are actually interesting from the investor's point of view and whether they cannot be dominated by other strategies. In the academic literature on structured products the focus has been almost exclusively on the pricing and hedging of these instruments and less on their performance from an investor's point of view. In this work we analyse the attractiveness of these products. We assess the theoretical cost of inefficiency when buying a structured product and describe the optimal strategy explicitly where possible. Moreover, we examine the cost of this inefficiency in practice. We try to determine what kind of investors are willing to buy complex structured products and whether it is possible to propose new designs that improve the currently available designs from the consumer's perspective.


Dybvig (1988a, 1988b) introduced an interesting concept to compare strategies without having to refer explicitly to the individual’s preferences. In the context of a complete, one-dimensional market he showed that there could be several strategies with the same distribution at maturity but with different initial costs. A strategy is cost-efficient if it is not possible to find another strategy that provides the same distribution of wealth at maturity at a strictly lower cost. The key result is that amongst all pay-offs that have the same terminal wealth distribution, there is one path-independent pay-off that will have a strictly lower cost than all others. Clearly this pay-off will be preferred by all profit seeking investors.

For more details, we refer to Dybvig (1988a, 1988b) and Bernard et al. (2011a) who provide an explicit representation of the optimal pay-off to achieve a given distribution. Using other techniques, Cox & Leland (1982, 2000) showed that path-dependent strategies are never optimal for risk-averse investors who have a fixed investment maturity. We also refer to the work of Vanduffel (2005a, p.27) who proved this result in an elegant way by invoking conditional expectations.

In this thesis we first give a brief overview of the relevant probabilistic, stochastic and financial concepts that are germane to our studies. In Chapter 3 we extend the previously mentioned results to one-dimensional Lévy markets and we provide evidence that path-dependent pay-offs should not be preferred by decision makers with a fixed investment horizon, and that they should buy path-independent structures instead. Chapter 4 examines dollar cost averaging (DCA), a widely employed investment strategy in financial markets. This is a very good example of a policy that is sub-optimal from the point of view of risk-averse decision makers with a fixed investment horizon T > 0. In this chapter we propose an optimal strategy that outperforms DCA in Lévy markets. We also discuss a market governed by a Brownian motion in more detail, analyse DCA in the presence of a minimal guarantee, explore the continuous setting and discuss the (non-)uniqueness of the dominating strategy. In Chapter 5 we extend some of the previously presented results to a multidimensional Black-Scholes market, i.e. when the price processes are governed by correlated Geometric Brownian Motions. In such a market, we again discuss optimal contracts for investors who prefer more to less and have a fixed investment horizon T > 0. We derive the optimal contract that provides the given distribution to the consumer and, in


the case of risk-averse investors, we are able to propose two ways of improving the design of financial products. Finally, we illustrate the theory with a few well-known securities and strategies, e.g. buy-and-hold investments and widely used portfolio insurance strategies.

The second part of the dissertation, consisting of Chapter 6, considers the problem of finding the distribution of a sum of non-independent random variables.

Such dependent sums appear quite often in insurance and finance, for instance in the case of the aggregate claim distribution or the loss distribution of an investment portfolio. In classical risk theory, the individual risks are typically assumed to be mutually independent, mainly because the computation of the aggregate claims becomes more tractable in this case. For special families of individual claim distributions, one may determine the exact form of the distribution of the aggregate claims. Note that most standard actuarial methods for determining the aggregate claims distribution are only applicable when the individual risks are assumed to be mutually independent. However, there are situations where the independence assumption is questionable, for instance when the individual risks are influenced by the same economic or physical environment. Therefore a lot of effort has been made recently to develop new techniques enabling one to determine the distribution function of sums of dependent random variables. In general, this task is difficult or even impossible because the dependence between the components involved in the sum is unknown or too cumbersome to work with.

An interesting avenue to cope with this problem consists in using so-called convex bounds, studied by Dhaene et al. (2002a, 2002b), who applied these to sums of log-normal random variables. In their papers they have shown how these convex bounds can be used to derive closed-form approximations for various risk measures of such a sum.

In Chapter 6, we extend the above-mentioned results to sums of general log-elliptical distributions. Firstly, we prove that, unlike in the log-normal case, the construction of a convex lower bound in explicit form appears to be out of reach for general sums of log-elliptical risks. Secondly, we show how to construct stop-loss bounds and we use these to construct mean-preserving approximations for general sums of log-elliptical distributions in explicit form.


Basic Concepts

2.1 Probability Theory Preliminaries

In this section we give a brief overview of the probabilistic concepts that will be used throughout the thesis. There is, of course, no shortage of excellent books introducing probability theory; a few among them are Billingsley (1995), Chow & Teicher (1988), Chung (2001), Rosenthal (2006), Shiryaev (1996) and Williams (1991).

We start with a model for an experiment involving randomness that has the form of a probability space (Ω,F,P) where

• Ω is a non-empty set, called the sample space.

• F is a collection of subsets of Ω closed under all countable set operations. This collection is a σ-field and its elements are called events.

• P : F → [0, 1] is the probability measure such that

– P is countably additive, i.e. if A_i ∈ F, i = 1, 2, . . ., is a countable collection of pairwise disjoint sets, then P(∪_{i=1}^∞ A_i) = ∑_{i=1}^∞ P(A_i),

– P(Ω) = 1.

A random variable X is a measurable function mapping Ω to the real numbers, i.e. X : Ω → R, such that X^{−1}((−∞, x]) ∈ F for any x ∈ R, where

X^{−1}((−∞, x]) = {ω ∈ Ω | X(ω) ≤ x}.

Next we define the cumulative distribution function (cdf) F_X, which contains all information about the random variable, as:

F_X(x) = P(X ≤ x), ∀x ∈ R. (2.1)

A distribution function F_X(·) : R → R has the following properties:

• F_X(x) is non-decreasing.

• F_X(x) is right-continuous.

• lim_{x→−∞} F_X(x) = 0.

• lim_{x→∞} F_X(x) = 1.

If the derivative of the distribution function exists, then we say that the cumulative distribution function is absolutely continuous. In this case we write f_X(x) = F′_X(x), and f_X(x) is called the probability density function (pdf), or density for short.

We often make use of the inverse of the distribution function F_X, which we define as:

F_X^{−1}(p) = inf{x ∈ R : F_X(x) ≥ p}, 0 < p < 1. (2.2)

F_X^{−1}(p) is also called the quantile function Q_p(X), and it can be used to translate results obtained for the uniform distribution to other distributions.
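As a quick illustration of this 'inverse transform' idea (a sketch of ours, not part of the thesis), the quantile function of the Exponential(λ) distribution, F_X^{−1}(p) = −ln(1 − p)/λ, turns Uniform(0, 1) draws into Exponential draws:

```python
import math
import random

random.seed(42)

def exp_quantile(p, lam=1.0):
    # Quantile function of the Exponential(lam) distribution:
    # F^{-1}(p) = -ln(1 - p) / lam
    return -math.log(1.0 - p) / lam

# Inverse transform sampling: if U ~ Uniform(0,1), then F^{-1}(U) ~ Exponential(lam)
samples = [exp_quantile(random.random()) for _ in range(100_000)]
mean_est = sum(samples) / len(samples)
print(round(mean_est, 1))  # close to the true mean 1/lam = 1.0
```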

Further, for t = (t_1, t_2, ..., t_n) and x = (x_1, x_2, ..., x_n) ∈ R^n, let F_t(x) denote the multivariate distribution function of the random vector (X_{t_1}, X_{t_2}, ..., X_{t_n}):

F_t(x) = Pr(X_{t_1} ≤ x_1, X_{t_2} ≤ x_2, ..., X_{t_n} ≤ x_n). (2.3)

When (X_{t_1}, X_{t_2}, ..., X_{t_n}) is an absolutely continuous random vector we denote its density, in case it exists, by f_t(x). When (X_{t_1}, X_{t_2}, ..., X_{t_n}) is discrete we define f_t(x) as

f_t(x) = Pr(X_{t_1} = x_1, X_{t_2} = x_2, ..., X_{t_n} = x_n). (2.4)


The next important concept is the characteristic function, defined for z ∈ R as:

φ_X(z) = E[e^{izX}] = ∫_{−∞}^{∞} e^{izx} dP. (2.5)

The resulting complex-valued function is called the Fourier-Stieltjes transform of the random variable X and, like the Laplace transform, it determines the distribution. The characteristic function could be replaced by m_X(z), the moment generating function (mgf), related by φ_X(z) = m_X(iz); the characteristic function has the advantage that it always exists, because the function e^{izX} is bounded. The moment generating function of X_t will be denoted by m_t(u). We also have that m_t(z) = (m_1(z))^t (under a mild continuity condition; see Breiman (1968), Section 14.4) and we will use the short-hand notation m(z) instead of m_1(z).
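A small numerical sanity check (our own sketch, not from the thesis): for a standard normal X the characteristic function is φ_X(z) = e^{−z²/2}, which a Monte Carlo estimate of E[e^{izX}] reproduces:

```python
import cmath
import math
import random

random.seed(2)

# Estimate phi_X(z) = E[exp(izX)] for X ~ N(0,1) and compare with the
# known closed form exp(-z^2 / 2).
n = 100_000
z = 1.0
phi_mc = sum(cmath.exp(1j * z * random.gauss(0, 1)) for _ in range(n)) / n
phi_exact = math.exp(-z * z / 2.0)
print(abs(phi_mc - phi_exact) < 0.02)  # True: the estimate matches the closed form
```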

In the remainder of the thesis we will always tacitly assume that all f_t(x) and m_t(z) mentioned exist.

A random variable X is said to be infinitely divisible if, for any n ≥ 1, there exist i.i.d. random variables X_1^{(n)}, . . . , X_n^{(n)} such that

X =_d X_1^{(n)} + . . . + X_n^{(n)}. (2.6)

The Poisson distribution, the negative binomial distribution and the Gamma distribution are examples of infinitely divisible distributions. This concept plays an important role in probability limit theorems and in the theory of Lévy processes that will be introduced later in this chapter.
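For example (an illustrative sketch of ours, using Knuth's sampling algorithm, which is not discussed in the thesis), infinite divisibility of the Poisson distribution means a Poisson(3) variable has the same distribution as a sum of n i.i.d. Poisson(3/n) variables:

```python
import math
import random

random.seed(0)

def poisson(lam):
    # Knuth's algorithm for drawing a Poisson(lam) variate
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Infinite divisibility: Poisson(3) =_d sum of n i.i.d. Poisson(3/n) variables
n, lam, trials = 10, 3.0, 50_000
direct = [poisson(lam) for _ in range(trials)]
summed = [sum(poisson(lam / n) for _ in range(n)) for _ in range(trials)]
print(round(sum(direct) / trials, 1), round(sum(summed) / trials, 1))  # both near 3.0
```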

Finally, we discuss the idea of conditioning and introduce the notions of conditional probability and conditional expectation, the latter being, loosely speaking, the estimate of a random variable in the presence of partial information.

We start with two continuous random variables X and Y with joint probability density function f_{X,Y}(x, y), marginal density functions f_X(x), f_Y(y) and the necessary condition that f_Y(y) > 0. The conditional probability density function of X given Y can be defined as:

f_{X|Y}(x|y) = f_{X,Y}(x, y) / f_Y(y). (2.7)

We can then easily define the mean of the above-mentioned probability distribution and call it the conditional expectation:

E(X | Y = y) = ∫ x f_{X|Y}(x|y) dx. (2.8)

The more formal definition of the conditional expectation is presented below.

Definition 2.1.1 (Conditional expectation). Let X be an integrable random variable on the probability space (Ω, F, P) and let G be a sub-σ-field contained in F. Then there exists an almost surely unique random variable E[X|G], called the conditional expectation of X given G, which satisfies the following conditions:

• E[X|G] is G-measurable.

• E[X|G] satisfies:

∫_G E[X|G] dP = ∫_G X dP, ∀G ∈ G. (2.9)

This general definition is based on the observation that E[X|Y] depends on the sub-σ-field σ(Y) generated by the random variable Y, rather than on the actual values of Y.
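The following simulation sketch (ours, with an arbitrarily chosen model) illustrates two consequences of this definition: the tower property E[E[X|Y]] = E[X] (taking G = σ(Y)) and the fact that conditioning reduces variance. Here X = Y + Z with Y, Z independent standard normals, so E[X | Y] = Y:

```python
import random

random.seed(1)

# X = Y + Z with Y, Z independent standard normals; conditionally on Y the
# remaining noise Z has mean zero, hence E[X | Y] = Y.
n = 200_000
pairs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
xs = [y + z for y, z in pairs]
cond = [y for y, _ in pairs]  # E[X | Y] evaluated path by path

mean = lambda v: sum(v) / len(v)
var = lambda v: mean([t * t for t in v]) - mean(v) ** 2

print(abs(mean(cond) - mean(xs)) < 0.02)  # True: tower property E[E[X|Y]] = E[X]
print(var(cond) < var(xs))                # True: Var(E[X|Y]) = 1 < Var(X) = 2
```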

Next we present the properties of the conditional expectation. We assume that G is a sub-σ-algebra of F and that the random variables X and Y are F-measurable. The constants a, b, c are real numbers and all conditional expectations below are assumed to exist.

The conditional expectation has the following properties:


• Monotonicity: X ≤ Y a.s. ⇒ E(X|G) ≤ E(Y|G) a.s.

• Linearity: E(aX + bY + c | G) = aE(X|G) + bE(Y|G) + c a.s.

• Monotone convergence theorem: X_n ≥ 0, X_n ↗ X a.s. ⇒ E(X_n|G) ↗ E(X|G) a.s.

• Fatou's lemma: X_n ≥ 0 ⇒ E(lim inf_{n→∞} X_n | G) ≤ lim inf_{n→∞} E(X_n|G) a.s.

• Dominated convergence theorem: if X_n → X a.s. and |X_n| ≤ Y for some integrable random variable Y, then E(X_n|G) → E(X|G) a.s.

• Jensen's inequality: f convex ⇒ E(f(X)|G) ≥ f(E(X|G)) a.s.


2.2 Decision Making Under Uncertainty

The standard framework in economics for making decisions under uncertainty is the Expected Utility Theory of von Neumann & Morgenstern (1947). This is the most canonical and best-known among all theories that construct the decision maker's preferences.

The expected utility paradigm is a member of a much wider decision making class where preferences are assumed to be law-invariant, i.e. decision makers only care about the distribution function of terminal wealth. This class is very general and contains a wide range of decision-making theories, including the above-mentioned expected utility theory, the mean-variance model (Merton (1971)), goal reaching (initiated by Kulldorff (1993) and Heath (1993)), Yaari's dual model (Yaari (1987)) and behavioral Cumulative Prospect Theory (Kahneman & Tversky (1979) and Tversky & Kahneman (1992)).

In the utility paradigm, every decision maker assigns a utility u(w) to each possible wealth level w. This real-valued function u(·) is called the utility function.

When such a decision maker has to choose between random wealths X and Y, she compares the expected utilities E[u(X)] and E[u(Y)], provided these exist, for some specific utility function u, and chooses the random wealth that gives rise to the highest expected utility. Von Neumann & Morgenstern (1947) have also proven that such a utility function exists if and only if the decision maker's preferences obey a set of axioms. Note that while these axioms imply the existence of a utility function, they do not determine its exact specification. Different decision makers can have different utility functions and, consequently, make different decisions.

However, when utility functions satisfy 'reasonable' conditions it is possible to identify situations where all decision makers will make the same decisions. A first class of 'reasonable' decision makers appears when assuming that more wealth is preferred to less, which is akin to saying that the utility function u(·) is increasing. This gives rise to the following definition.

Definition 2.2.1 (Class of profit-seeking decision makers). A decision maker will be said to be profit seeking if her utility function u(·) is strictly increasing.


Besides assuming that decision makers maximize expected utility, it makes sense to accept the hypothesis that every reasonable decision maker prefers a certain gain above a random gain with the same mean. By virtue of Jensen's inequality this means that the utility function is concave. Indeed, Jensen's famous inequality states that for any concave function u(·) it holds that

E[u(X)] ≤ u(E[X]), (2.10)

so that the certain wealth E[X] is preferred to a variable income X in this case. This leads to the following definition:

Definition 2.2.2 (Class of risk-averse decision makers). A decision maker will be said to be risk-averse if her utility function u(·) is strictly increasing and concave.

Let us now consider the conditional expectation E[X | Z = z], which intuitively provides the best estimate for X given the available information Z = z. By varying z we obtain the random variable E[X | Z], which is a function of Z only. Adding information reduces uncertainty, and intuitively one expects that E[X | Z] is more appealing to risk-averse agents than X itself. Indeed, from the law of total expectations and Jensen's inequality (2.10) one derives the following result immediately: for any random variables X and Z, risk-averse decision makers with utility function u(·) will prefer the random variable E[X | Z] to X itself, i.e.

E[u(E[X | Z])] ≥ E[u(X)]. (2.11)
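Inequality (2.11) can be checked by simulation. The following sketch (our illustration, with an arbitrary model and the concave utility u(w) = √w) shows that a risk-averse agent indeed assigns a higher expected utility to E[X | Z] than to X:

```python
import math
import random

random.seed(7)

# Wealth X = Z * W, where Z is uniform on {1, 2} and W is an independent
# positive shock with E[W] = 1, so that E[X | Z] = Z.
n = 100_000
u = math.sqrt  # concave, increasing utility
eu_x = eu_cond = 0.0
for _ in range(n):
    z = random.choice([1.0, 2.0])
    w = random.uniform(0.5, 1.5)
    eu_x += u(z * w)   # utility of the wealth itself
    eu_cond += u(z)    # utility of the conditional expectation E[X | Z]
print(eu_cond / n >= eu_x / n)  # True: E[u(E[X|Z])] >= E[u(X)]
```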


2.3 Risk Ordering

In this section we present notions of orderings between random variables, a very useful concept in probability theory and statistics. Stochastic ordering is a mathematical concept that allows one to compare random variables; it is a well-established topic of research that has attracted a lot of attention in the academic community. It appears in various fields such as economics, insurance, finance, queueing theory and epidemiology, and is especially useful when we only possess partial information about the model distribution at hand.

For a given class F of functions f : R → R we say that a random variable X is smaller than a random variable Y in the ≤_F sense if for all functions f in F it holds that E[f(X)] ≤ E[f(Y)] (provided these expectations exist). If we take for F the class of non-decreasing functions we find the well-known stochastic dominance ordering, whereas if F contains all convex functions we find the so-called convex order. This leads to the following definitions.

Definition 2.3.1 (Stochastic ordering - first order stochastic dominance). A random variable X is said to dominate the random variable Y in the stochastic order sense, written as X ≥_st Y (or X ≥_FSD Y), if and only if the distribution function of Y always exceeds that of X:

F_X(x) ≤ F_Y(x), ∀x ∈ R. (2.12)
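For illustration (a sketch of ours, not from the thesis): X ~ N(1, 1) dominates Y ~ N(0, 1) in the first order stochastic dominance sense, since F_X(t) = Φ(t − 1) ≤ Φ(t) = F_Y(t) for every t, which the empirical cdfs confirm:

```python
import bisect
import random

random.seed(5)

n = 50_000
xs = sorted(random.gauss(1, 1) for _ in range(n))  # X ~ N(1, 1)
ys = sorted(random.gauss(0, 1) for _ in range(n))  # Y ~ N(0, 1)

def ecdf(sorted_samples, t):
    # Empirical distribution function at threshold t
    return bisect.bisect_right(sorted_samples, t) / len(sorted_samples)

# F_X(t) <= F_Y(t) on a grid of thresholds (up to a small Monte Carlo tolerance)
dominates = all(ecdf(xs, t) <= ecdf(ys, t) + 0.02 for t in range(-4, 5))
print(dominates)  # True
```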

This criterion can be seen as a quantile ordering and can equivalently be stated in terms of the quantile function:

X ≥_FSD Y ⇔ ∀p ∈ (0, 1): Q_p(X) ≥ Q_p(Y). (2.13)

When X and Y represent returns, X ≥_st Y means that any utility-maximizing investor (with an increasing utility function) prefers the return X to the return Y.

Let us now introduce the stop-loss premium E[(X − d)_+], defined as:

E[(X − d)_+] = ∫_d^∞ F̄_X(x) dx, (2.14)

where F̄_X = 1 − F_X is the survival function of X. The stop-loss premium can be interpreted as a measure for the weight of the upper tail of the distribution of X from d on. In actuarial science, it is common to replace a random variable by another one which is 'less attractive', hence 'safer', and with a simpler structure, so that its distribution function is easier to determine. The notion of 'less attractive' can be translated in terms of the stop-loss and convex orders, defined below.

Definition 2.3.2 (Stop-loss order). A random variable X is said to precede another random variable Y in stop-loss order, written as X ≤_sl Y, if

E[(X − d)_+] ≤ E[(Y − d)_+], for all d. (2.15)

It can be proven that

X ≤_sl Y ⇔ E[v(X)] ≤ E[v(Y)] (2.16)

holds for all increasing convex functions v(x), which also explains why the stop-loss order is also called the increasing convex order, denoted by ≤_icx. In decision theory, X ≤_sl Y means that any risk-averse decision maker will prefer the risk (or loss) X over the risk (or loss) Y. If in addition the random variables X and Y have equal means, we obtain the convex order.
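A quick numerical check of the stop-loss order (our own sketch, with arbitrarily chosen distributions): for X ~ Exp(1) the stop-loss premium has the closed form E[(X − d)_+] = e^{−d}, and X precedes the heavier risk Y ~ Exp(1/2) (mean 2) in stop-loss order:

```python
import math
import random

random.seed(3)

n = 200_000
xs = [random.expovariate(1.0) for _ in range(n)]  # X ~ Exp(1), mean 1
ys = [random.expovariate(0.5) for _ in range(n)]  # Y ~ Exp(1/2), mean 2

def stop_loss(samples, d):
    # Monte Carlo estimate of the stop-loss premium E[(X - d)_+]
    return sum(max(s - d, 0.0) for s in samples) / len(samples)

print(abs(stop_loss(xs, 1.0) - math.exp(-1.0)) < 0.01)  # True: matches e^{-d}
print(all(stop_loss(xs, d) <= stop_loss(ys, d)
          for d in [0.0, 0.5, 1.0, 2.0]))               # True: X precedes Y
```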

Definition 2.3.3 (Convex order). A random variable X is said to precede another random variable Y in convex order, written as X ≤_cx Y, if X ≤_sl Y and E(X) = E(Y).

It can be proven that

X ≤_cx Y ⇔ E[v(X)] ≤ E[v(Y)] (2.17)

for all convex functions v(x). The convex ordering reflects the common preferences of all risk-averse decision makers when choosing between random variables with equal mean. This holds both in the classical utility theory of von Neumann & Morgenstern and in Yaari's dual theory for decision making under risk (see for instance Denuit et al. (1999) for more details).


For instance, if we assert that decision makers prefer more wealth to less, then we find in both the Expected Utility Theory of von Neumann & Morgenstern (1947) and the Dual Theory of Choice under Risk of Yaari (1987) that X ≤_st Y implies that the random gain Y will be preferred over X by all these decision makers. Indeed, X ≤_st Y means that 1 − F_X(x) ≤ 1 − F_Y(x) for all x, so that the random gain Y provides 'more chance' of a higher gain.

It is also well-known that in both mentioned theories X ≤_cx Y means that X will be preferred over Y by those decision makers who prefer a certain gain above a random gain with equal expectation. In both theories such decision makers are called risk-averse. Indeed, X ≤_cx Y intuitively means that X is less volatile or 'more stable' than Y, and since it also holds that E[X] = E[Y], risk-averse decision makers will prefer X over Y; see also Wang & Young (1998) for more information.

An ordering concept related to the stop-loss order is the increasing concave ordering, which is useful in the context of returns rather than losses. It is also called second order stochastic dominance and can be defined as follows:

Definition 2.3.4 (Second order stochastic dominance). A random variable X is said to dominate the random variable Y in the second order stochastic dominance sense, written as X ≥_SSD Y, if and only if

∀t ∈ R: E[(t − X)+] ≤ E[(t − Y)+]. (2.18)

In terms of the quantile function this criterion reads as follows:

X ≥_SSD Y ⇔ ∀q ∈ (0,1): ∫_0^q Q_p(X) dp ≥ ∫_0^q Q_p(Y) dp. (2.19)

In a financial setting X ≥_SSD Y means that any risk-averse decision maker will prefer the return X over the return Y. Also note that X ≥_SSD Y holds if and only if −Y ≥_sl −X.

For more details and properties of stochastic ordering relations we refer for example to Shaked & Shanthikumar (1994). In our context these orderings are of interest because they can be related to theories for decision making with regard to random variables X and Y representing returns or risks.
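The quantile characterisations (2.13) and (2.19) lend themselves to direct numerical checks. The sketch below uses hypothetical Gaussian examples and a midpoint Riemann sum for the lower quantile integrals; the helper `lower_quantile_integral` is our own name.

```python
# Sketch (hypothetical Gaussian examples): check first- and second-order
# stochastic dominance via the quantile characterisations (2.13) and (2.19),
# approximating the quantile integrals with a midpoint Riemann sum.
from statistics import NormalDist

def lower_quantile_integral(dist, q, n=1000):
    """Midpoint-rule approximation of  int_0^q Q_p(X) dp."""
    h = q / n
    return sum(dist.inv_cdf((i + 0.5) * h) for i in range(n)) * h

# FSD: X ~ N(1,1) dominates Y ~ N(0,1), so Q_p(X) >= Q_p(Y) for all p.
X_fsd, Y_fsd = NormalDist(1, 1), NormalDist(0, 1)
ps = [i / 100 for i in range(1, 100)]
assert all(X_fsd.inv_cdf(p) >= Y_fsd.inv_cdf(p) for p in ps)

# SSD: equal means, X less spread than Y, so X >=_SSD Y by (2.19).
X_ssd, Y_ssd = NormalDist(0, 1), NormalDist(0, 2)
qs = [i / 10 for i in range(1, 10)]
assert all(lower_quantile_integral(X_ssd, q) >=
           lower_quantile_integral(Y_ssd, q) for q in qs)
```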


2.4 Comonotonicity

We introduce here the concept of comonotonicity, which has proved to be very useful in many financial and actuarial applications when one has to determine the distribution of aggregated risks. A typical example is a portfolio of n investment positions (a collection of stocks and bonds, a book of derivatives, a collection of risky loans, etc.) facing potential losses L1, L2, ..., Ln over a given reference period, e.g. one month or one year. The total potential loss L for this portfolio is then given by

L = ∑_{k=1}^{n} L_k. (2.20)

As the returns on the different investment positions will in general be non-independent, it is clear that L will be a sum of non-independent random variables.

In insurance, we can consider a portfolio of n insurance risks X1, X2, ..., Xn. The aggregate claim amount S is defined to be the sum of these individual positions:

S = ∑_{k=1}^{n} X_k, (2.21)

where generally the risks are non-negative random variables, i.e. X_k ≥ 0. Knowledge of the distribution of this sum provides essential information for the insurance company and can be used as an input in the calculation of premiums and reserves.

In these applications, we are very often facing a situation where only the individual marginal risk distributions are known. The standard and mathematically tractable approach in this situation is the assumption of mutual independence between the risks, which unfortunately does not comply with reality and may lead to false conclusions and underestimation of risk. In order to overcome this problem, Dhaene et al. (2002a) introduced the concept of comonotonicity, which is an extreme form of positive dependence that leads to prudent or conservative decisions and can be used to determine easy-to-compute and accurate upper and lower bounds of the aggregate risk distribution.

Consider an n-dimensional random vector X = (X_1, ..., X_n)^T with multivariate distribution function given by F_X(x) = Pr(X_1 ≤ x_1, . . . , X_n ≤ x_n), for any x = (x_1, · · · , x_n)^T. It is well-known that this multivariate distribution function satisfies the so-called Fréchet bounds:

max( ∑_{k=1}^{n} F_{X_k}(x_k) − (n − 1), 0 ) ≤ F_X(x) ≤ min( F_{X_1}(x_1), · · · , F_{X_n}(x_n) ), (2.22)

see Hoeffding (1940) or Fréchet (1951).

Definition 2.4.1 (Comonotonicity). A random vector X is said to be comonotonic if its joint distribution is given by the Fréchet upper bound, i.e.,

F_X(x) = min( F_{X_1}(x_1), · · · , F_{X_n}(x_n) ). (2.23)

Alternative characterisations of comonotonicity of a random vector are given in the following theorem, the proof of which can be found in Dhaene et al. (2002a).

Theorem 2.4.2 (Characterisation of Comonotonicity). Suppose X is an n-dimensional random vector. Then the following statements are equivalent:

1. X is comonotonic.

2. X =_d ( F_{X_1}^{-1}(U), ..., F_{X_n}^{-1}(U) ) for U ∼ Uniform(0,1), where F_{X_k}^{-1}(·) denotes the quantile function.

3. There exists a random variable Z and non-decreasing functions h_1, ..., h_n such that

X =_d (h_1(Z), ..., h_n(Z)). (2.24)

From now on we will use the superscript c to denote comonotonicity of a random vector. Hence, the vector X^c = (X_1^c, ..., X_n^c) is a comonotonic random vector with the same marginals as the vector (X_1, ..., X_n). The former vector is called the comonotonic counterpart of the latter.
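Characterisation 2 of Theorem 2.4.2 gives a constructive recipe: feed one common uniform through each marginal quantile function. The sketch below does this for two made-up Gaussian marginals on a fine discretised uniform grid and checks that the resulting joint distribution attains the Fréchet upper bound (2.23).

```python
# Sketch: build the comonotonic counterpart (F1^{-1}(U), F2^{-1}(U)) of
# Theorem 2.4.2(2) from a single uniform U and check that its joint
# distribution attains the Frechet upper bound min(F1(x1), F2(x2)).
from statistics import NormalDist

F1, F2 = NormalDist(0, 1), NormalDist(2, 3)        # hypothetical marginals
N = 20000
U = [(i + 0.5) / N for i in range(N)]              # discretised Uniform(0,1)
X1 = [F1.inv_cdf(u) for u in U]
X2 = [F2.inv_cdf(u) for u in U]

def joint_cdf(x1, x2):
    """Empirical Pr(X1 <= x1, X2 <= x2) on the discretised grid."""
    return sum(a <= x1 and b <= x2 for a, b in zip(X1, X2)) / N

for x1, x2 in [(0.0, 2.0), (-1.0, 4.0), (1.5, 0.0)]:
    frechet_upper = min(F1.cdf(x1), F2.cdf(x2))
    assert abs(joint_cdf(x1, x2) - frechet_upper) < 1e-3
```

The key point is that both coordinates are non-decreasing functions of the same uniform, so the event {X1 ≤ x1, X2 ≤ x2} collapses to {U ≤ min(F1(x1), F2(x2))}.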

Consider the comonotonic random sum

S^c = X_1^c + · · · + X_n^c. (2.25)


In Dhaene et al. (2002a), it is proven that each quantile of S^c is equal to the sum of the corresponding quantiles of the marginals involved:

F_{S^c}^{-1}(q) = ∑_{k=1}^{n} F_{X_k}^{-1}(q), 0 < q < 1. (2.26)

Furthermore, they showed that in case all marginal distributions F_{X_k} are strictly increasing, the stop-loss premiums of a comonotonic sum S^c can easily be computed from the stop-loss premiums of the marginals:

E[(S^c − d)+] = ∑_{k=1}^{n} E[(X_k − d_k)+], (2.27)

where the d_k's are determined by

d_k = F_{X_k}^{-1}(F_{S^c}(d)). (2.28)

This result can be generalised to the case of marginal distributions that are not necessarily strictly increasing, see Dhaene et al. (2002a).
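Quantile additivity (2.26) is exact and easy to confirm for Gaussian marginals: with a common uniform U, the comonotonic sum ∑_k F_{X_k}^{-1}(U) is again Gaussian with mean ∑μ_k and standard deviation ∑σ_k. A minimal sketch with made-up marginals:

```python
# Sketch: verify the quantile additivity (2.26) of a comonotonic sum for
# Gaussian marginals. With a common uniform U, S^c = sum_k F_{X_k}^{-1}(U)
# is again Gaussian with mu = sum mu_k and sigma = sum sigma_k.
from statistics import NormalDist

marginals = [NormalDist(0, 1), NormalDist(1, 2), NormalDist(-0.5, 0.5)]
Sc = NormalDist(sum(m.mean for m in marginals),
                sum(m.stdev for m in marginals))   # distribution of S^c

for q in [0.01, 0.25, 0.5, 0.75, 0.995]:
    lhs = Sc.inv_cdf(q)                            # F_{S^c}^{-1}(q)
    rhs = sum(m.inv_cdf(q) for m in marginals)     # sum of marginal quantiles
    assert abs(lhs - rhs) < 1e-9
```

Note that the standard deviations add (not the variances), which is exactly the extreme positive dependence that makes the comonotonic sum a prudent upper bound.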


2.5 Principles of Asset Pricing

2.5.1 Stochastic processes preliminaries

Stochastic processes play a fundamental role in financial modeling and in this section we will briefly outline some important facts useful in asset pricing. Standard references are Björk (1996), Cerny (2009), Durret (1996), Hunt & Kennedy (2004), Karatzas & Shreve (1991), Merton (1990) and Shreve (2004), among others.

Definition 2.5.1 (Stochastic process). A stochastic process indexed by t ∈ R+, taking its values in (R, B), is a family of measurable mappings (X_t)_{t∈R+} from a probability space (Ω, F, P) into (R, B). The measurable space (R, B) is called the state space and B is the Borel σ-field on R.

A stochastic process X = (X_t)_{t≥0} is stationary if for any integer k ≥ 1 and real numbers 0 ≤ t_1 ≤ t_2 ≤ · · · ≤ t_k < ∞ the distribution of the random vector (X_{t_1+t}, X_{t_2+t}, . . . , X_{t_k+t}) does not depend on t. Let us now define an increment of the process (X_t)_{t≥0} between times s and t, t > s, as the difference X_t − X_s. A stochastic process (X_t)_{t≥0} is said to have stationary increments if the law of the increment X_t − X_s depends only on t − s, hence is invariant by translation in time.

A non-decreasing family F_t of σ-fields on Ω parametrized by t ≥ 0 is called a filtration if

F_s ⊂ F_t ⊂ F for 0 ≤ s < t. (2.29)

The filtration is the information structure that completely specifies the evolution of information in time; hence F_t represents the information present at time t. A stochastic process X is called (F_t)_{t≥0}-adapted if X_t is F_t-measurable for all t ≥ 0, meaning that X_t is known at time t. The natural filtration (F_t^X)_{t≥0} of a continuous stochastic process (X_t)_{t≥0} is the smallest filtration such that the process is adapted.

Definition 2.5.2 (Martingale). A stochastic process X = (X_t)_{t≥0} is a martingale with respect to a filtration (F_t)_{t≥0} if

• X is (Ft)t≥0-adapted.


• E(|Xt|)<∞ for all t≥0.

• E(Xt|Fs) = Xs, P-a.s., for every pair s,t such that 0≤s≤t.

The third property is of vast importance and it means that the best forecast of an unobserved future value of a martingale is its last observation. Martingales are very useful in the context of asset pricing because if we can convert financial assets into martingales then we can model them consistently as riskless assets under the equivalent martingale measure.

The process X = (X_t)_{t≥0} is called a supermartingale if the last martingale property is replaced with E(X_t|F_s) ≤ X_s, P-a.s., 0 ≤ s ≤ t, in which case X is a process with negative drift. We obtain a process with positive drift, called a submartingale, when we replace the third martingale property with E(X_t|F_s) ≥ X_s, P-a.s., 0 ≤ s ≤ t.

Let us now introduce the Wiener process, a continuous stochastic process with stationary, independent increments, first used by Bachelier (1900) for modeling options on French government bonds.

Definition 2.5.3 (Wiener process). We define the standard Wiener process W = (Wt)t≥0 as an (Ft)t≥0-adapted process with Gaussian stationary independent increments and continuous sample paths for which

W_0 = 0, E(W_t) = 0, Var(W_t − W_s) = t − s, for all t ≥ 0 and s ∈ [0, t].

The Wiener process is also known as Brownian motion.
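The defining moments of Definition 2.5.3 can be checked by simulation. The sketch below builds W_t from W_s plus an independent Gaussian increment (all parameter values are made up) and verifies E[W_t] = 0 and Var(W_t − W_s) = t − s within Monte Carlo error.

```python
# Sketch: simulate standard Brownian increments and check the defining
# properties E[W_t] = 0 and Var(W_t - W_s) = t - s on simulated paths.
import random
from statistics import fmean, pvariance

random.seed(7)
n_paths, s, t = 200000, 0.3, 1.0
W_s = [random.gauss(0, s ** 0.5) for _ in range(n_paths)]         # W_s ~ N(0, s)
increments = [random.gauss(0, (t - s) ** 0.5) for _ in range(n_paths)]
W_t = [a + b for a, b in zip(W_s, increments)]                    # W_t = W_s + (W_t - W_s)

assert abs(fmean(W_t)) < 0.01                       # E[W_t] = 0
assert abs(pvariance(W_t) - t) < 0.02               # Var(W_t) = t
assert abs(pvariance(increments) - (t - s)) < 0.02  # Var(W_t - W_s) = t - s
```

Because the increment over (s, t] is independent of W_s, the variances add: Var(W_t) = s + (t − s) = t, which is what the second assertion confirms.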

A more general class of stochastic processes are Lévy processes, of which Brownian motion is a special case. They have been widely used in recent years by many researchers in financial and actuarial modeling and in a natural way they are related to infinitely divisible probability distributions.

Definition 2.5.4 (Lévy process). We say that the càdlàg stochastic process X = (X_t)_{t≥0} is a Lévy process if


• X0 = 0 a.s.;

• X has independent and stationary increments;

• X is stochastically continuous, i.e. for all a > 0 and for all s ≥ 0: lim_{t→s} P(|X_t − X_s| > a) = 0.

The distribution of an increment over [s, s + t], s, t ≥ 0, i.e. X_{t+s} − X_s, has (ϕ_{X_1}(u))^t as its characteristic function.

An important property of the characteristic function of a Lévy process is that it can be expressed in the form e^{tψ(z)} for some continuous function ψ(z). More precisely we have the following.

Theorem 2.5.5. If X is a Lévy process then there exists a unique function ψ ∈ C(R, C) such that ψ(0) = 0 and

ϕ_{X_t}(z) = e^{tψ(z)}, t ≥ 0, z ∈ R, (2.30)

where the function ψ is called the characteristic exponent of X and ψ_{X_t}(z) = tψ(z).

The Lévy process provides a natural generalisation of the sum of independent and identically distributed (i.i.d.) random variables and this class can be characterised in terms of a triplet (γ, σ², ν), which leads us to the celebrated Lévy-Khintchine representation (see for instance Schoutens (2003)):

ψ_{X_t}(u) = t ( iγu − (σ²/2)u² + ∫_{−∞}^{∞} ( exp(iux) − 1 − iux I_{|x|<1} ) ν(dx) ), (2.31)

where u ∈ R, γ ∈ R, σ² ≥ 0 and ν is a measure on R \ {0} with ∫_{−∞}^{∞} (1 ∧ x²) ν(dx) < ∞, and I is an indicator function. The triplet parameters are a drift, a diffusion component and a jump component. This representation suggests that it is easier to work with Lévy processes via Laplace transforms and, for numerical calculations, with inverted Laplace transforms.


We also state here an important result on the existence of exponential moments for Lévy processes (Sato (1999), Theorem 25.17).

Proposition 2.5.6. Let X be a Lévy process on R with characteristic triplet (γ, σ², ν). The exponential moment E[e^{zX_t}], z ∈ R, is finite if and only if

∫_{|x|≥1} e^{zx} ν(dx) < ∞. (2.32)

In this case

E[e^{zX_t}] = e^{tψ(−iz)}, (2.33)

where ψ is the characteristic exponent of X.

Below we define the Laplace exponent L, which will be needed later in the derivations:

L(z) = ψ(−iz), z ≤ 0. (2.34)

Since L(iu) = ψ(u) for all u ∈ R, L(z) is well defined at least for z ∈ C with Re(z) = 0, and E[e^{zX_t}] = e^{tL(z)}.
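For the simplest Lévy process, a Brownian motion with drift (triplet (γ, σ², ν = 0)), the Laplace exponent reduces to L(z) = γz + σ²z²/2, and the identity E[e^{zX_t}] = e^{tL(z)} can be checked by Monte Carlo. All parameter values below are made up.

```python
# Sketch (hypothetical triplet): for a Brownian motion with drift, a Levy
# process with triplet (gamma, sigma^2, nu = 0), the Laplace exponent is
# L(z) = gamma*z + sigma^2*z^2/2, and E[exp(z*X_t)] = exp(t*L(z)).
import math
import random

gamma, sigma, t, z = 0.1, 0.3, 2.0, -0.5     # z <= 0 as in (2.34)
L = gamma * z + 0.5 * sigma ** 2 * z ** 2    # Laplace exponent L(z)

random.seed(1)
n = 200000
X_t = [random.gauss(gamma * t, sigma * math.sqrt(t)) for _ in range(n)]
mc = sum(math.exp(z * x) for x in X_t) / n   # Monte Carlo E[exp(z X_t)]

assert abs(mc - math.exp(t * L)) < 0.01      # matches exp(t * L(z))
```

For processes with jumps the integral term of (2.31) contributes to L(z) as well, but the exponential structure of the moment is unchanged.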

The most commonly used process in continuous stock price modeling is the Geometric Brownian Motion (GBM), with the SDE of the form

dS_t = S_t(μ dt + σ dW_t) (2.35)

and the solution given by:

S_t = S_0 exp( (μ − σ²/2)t + σW_t ), (2.36)

where (W_t)_{t≥0} is a Wiener process and μ and σ > 0 are drift and volatility constants.
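The closed-form solution (2.36) allows exact simulation of S_t in one step. The sketch below (with made-up parameters) draws terminal values and checks the lognormal mean E[S_t] = S_0 e^{μt} by Monte Carlo.

```python
# Sketch: simulate the exact GBM solution (2.36) at time t and check the
# lognormal mean E[S_t] = S_0 * exp(mu * t) by Monte Carlo (made-up parameters).
import math
import random

S0, mu, sigma, t = 100.0, 0.05, 0.2, 1.0
random.seed(42)
n = 200000
paths = [S0 * math.exp((mu - 0.5 * sigma ** 2) * t
                       + sigma * random.gauss(0, math.sqrt(t)))
         for _ in range(n)]

mc_mean = sum(paths) / n
target = S0 * math.exp(mu * t)
assert abs(mc_mean - target) / target < 0.01     # E[S_t] = S_0 e^{mu t}
```

Note that the −σ²/2 correction in the exponent is exactly what makes the expectation equal S_0 e^{μt} despite the convexity of the exponential.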

In the setting of a Lévy market, the exponential model for the asset S = (S_t)_{t≥0} is determined through:

S_t = S_0 exp(X_t),

where (X_t)_{t≥0} is a Lévy process and S_t denotes the asset price at time t.

The underlying assumption of the GBM model is the normality of the log-returns, which plays a central role in many useful financial theories, including the seminal Markowitz optimal portfolio model (Markowitz (1952)), the Capital Asset Pricing Model (Sharpe (1964), Treynor (1961), Lintner (1965)) and, as a convenient choice, Value at Risk calculations (Jorion (2006)). The normality of the asset returns combined with the continuity of the trajectories exhibited by the geometric Brownian model is very often inappropriate since it ignores the fat tails and the fact that real asset prices exhibit jumps. This is especially true when we consider high frequency asset returns, i.e. daily, weekly or monthly. However, many studies showed that when choosing a model for low frequency data (quarterly, yearly returns) the multivariate normality can be justified because a central limit theorem effect takes place in these situations (McNeil et al. (2005)).

2.5.2 Asset pricing

In this section we provide a brief description of asset pricing methods. Before we start evaluating our financial strategies we have to define the underlying economy in which we are going to work. The financial market is arbitrage-free; it is equipped with the physical (empirical) probability measure P; there exists a money market account B = {B_t = B_0 e^{rt}, t ≥ 0} with a constant risk-free interest rate r > 0; and there are n traded risky assets S^1, S^2, . . . , S^n governed by the price processes (S_t^i)_{t≥0} (i = 1, 2, . . . , n). We will formulate all the results in continuous time in the case that the market is frictionless and there are no taxes, no transaction costs, no dividends, no restrictions on borrowing or short sales, and the risky assets are perfectly divisible. In particular we will present the basic classical one-dimensional Black-Scholes model as well as the extension to pricing in a multidimensional Black-Scholes market. We will also introduce market models based on Lévy processes that give more flexibility and potential accuracy when modeling high frequency stochastic returns.

The key concept in asset pricing is arbitrage, which is, loosely speaking, a trading strategy that generates profits from nothing with no risk involved. Given the absence of arbitrage in the economy it follows that the value of an asset or derivative is the value of the portfolio that replicates it. In order to replicate the derivative X we need to find a self-financing strategy, where there is no money inflow into or outflow from the trading portfolio other than the initial amount at time zero. Moreover, the portfolio value changes only because the underlying asset prices change.

Let us first consider a financial security with stochastic pay-off H_g,

H_g = g(S_{t_i} | 0 ≤ t_i ≤ T, i = 1, 2, ..., n), (2.37)

for some function g. This pay-off depends on the dynamics of the stochastic process (S_t)_{t≥0}. It is well-known that the absence of arbitrage opportunities essentially amounts to determining the price C(H_g) for H_g by taking the discounted expectation of H_g, not with respect to the physical (market) probability measure P, but with respect to another probability measure Q called the equivalent martingale measure or risk neutral measure. We will use the notation E_Q when expectations are taken with respect to this new probability measure Q. Furthermore, Q has to be determined such that the discounted process (e^{−rt}S_t)_{t≥0} becomes a martingale, which implies that for all t ≥ 0:

E_Q[e^{−rt} S_t] = S_0, t ≥ 0. (2.38)

The bridge that connects the physical and the risk-neutral measure is the Radon-Nikodym derivative. Let us first define the equivalence of probability measures.

Definition 2.5.7 (Measure equivalence). Let P and Q be two probability measures on the measurable space (Ω, F). The measure Q is absolutely continuous with respect to the measure P (with respect to the σ-algebra F), written P ≺ Q, if for all F ∈ F

P(F) = 0 ⇒ Q(F) = 0. (2.39)

If both P ≺ Q and Q ≺ P then P and Q are said to be equivalent (with respect to F), written P ∼ Q.


The Radon-Nikodym theorem shows how the measures are connected.

Theorem 2.5.8 (Radon-Nikodym theorem). Let us consider two equivalent probability measures P and Q defined on (Ω, F). Then there exists a strictly positive random variable Z ∈ L¹(Ω, F, P), a.s. unique, such that for all F ∈ F

Q(F) = ∫_F Z dP = E_P[Z · I_F]. (2.40)

Further, Z^{−1} ∈ L¹(Ω, F, Q) and

P(F) = ∫_F Z^{−1} dQ = E_Q[Z^{−1} · I_F]. (2.41)

Conversely, given any strictly positive random variable Z ∈ L¹(Ω, F, P) with E_P[Z] = 1 there exists some unique equivalent measure Q ∼ P for which equations (2.40) and (2.41) hold (see Hunt & Kennedy (2004)).

Now we can define the Radon-Nikodym derivative:

Definition 2.5.9 (Radon-Nikodym derivative). The measurable function

Z = dQ/dP a.s. (2.42)

is called the Radon-Nikodym derivative of Q with respect to P.

For a random variable X it follows that E_Q[X] = E_P[XZ]. We will now state the Girsanov theorem, which is a useful tool when replacing the empirical measure P with the equivalent probability measure Q.
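The identity E_Q[X] = E_P[XZ] is easiest to see on a finite sample space, where Z = dQ/dP is just the ratio of state probabilities. A minimal sketch with a made-up three-state economy:

```python
# Sketch: a finite-state illustration of E_Q[X] = E_P[X Z] with the
# Radon-Nikodym derivative Z = dQ/dP on a three-point sample space.
states = ["up", "flat", "down"]
P = {"up": 0.5, "flat": 0.3, "down": 0.2}      # physical measure
Q = {"up": 0.3, "flat": 0.3, "down": 0.4}      # equivalent measure
Z = {w: Q[w] / P[w] for w in states}           # Radon-Nikodym derivative

X = {"up": 110.0, "flat": 100.0, "down": 80.0}  # a random variable (payoff)

E_Q_direct   = sum(Q[w] * X[w] for w in states)
E_P_weighted = sum(P[w] * Z[w] * X[w] for w in states)

assert abs(sum(P[w] * Z[w] for w in states) - 1.0) < 1e-12   # E_P[Z] = 1
assert abs(E_Q_direct - E_P_weighted) < 1e-9
```

Both measures put positive mass on every state, so P ∼ Q and Z is strictly positive, exactly as Theorem 2.5.8 requires.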

Theorem 2.5.10 (Girsanov theorem). Let (W_t^P)_{t≥0} be a Brownian motion under the P measure with natural filtration (F_t)_{t≥0} generated by W_t. Consider an (F_t)_{t≥0}-adapted stochastic process (X_t)_{t≥0} satisfying Novikov's condition

E_P[ exp( (1/2) ∫_0^t X_s² ds ) ] < ∞ (2.43)

and consider the Radon-Nikodym derivative process:

Z_t = (dQ/dP)_t = exp( ∫_0^t X_s dW_s^P − (1/2) ∫_0^t X_s² ds ). (2.44)

It follows then that under the equivalent probability measure Q the process defined by

W_t^Q = W_t^P − ∫_0^t X_s ds (2.45)

is a standard Brownian motion for t ≥ 0.

Let us consider now a contingent claim with dynamics modeled by a geometric Brownian motion with constant drift:

dS_t = μS_t dt + σS_t dW_t^P. (2.46)

It is easy to show that the process (S_t)_{t≥0} is a P-martingale if and only if μ = 0.

Hence, with the help of the Girsanov theorem, we have:

dS_t = μS_t dt + σS_t dW_t^P = μS_t dt + σS_t dW_t^P + σS_t X_t dt − σS_t X_t dt
     = (μ + σX_t) S_t dt + σS_t (dW_t^P − X_t dt)
     = (μ + σX_t) S_t dt + σS_t dW_t^Q.

As a result S_t is a Q-martingale if and only if X_t = −μ/σ, and the change of measure with the given Radon-Nikodym derivative is as follows:

Z_t = exp( −(μ/σ) W_t^P − (1/2)(μ²/σ²) t ). (2.47)

Hence this constant process (X_t)_{t≥0} defines, on (Ω, F_T), the equivalent measure Q (with respect to the original measure P) under which the process (S_t)_{t≥0} is a martingale.
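The measure change (2.47) can be illustrated by simulation: generate S_T under P, weight each path by Z_T, and the drift μ disappears, so E_P[Z_T S_T] = E_Q[S_T] = S_0. All parameter values below are made up.

```python
# Sketch: the change of measure (2.47) in action. Simulating S_T under P and
# reweighting by the Radon-Nikodym derivative Z_T makes (S_t) a Q-martingale,
# so E_P[Z_T S_T] should recover S_0 (made-up parameters).
import math
import random

S0, mu, sigma, T = 1.0, 0.08, 0.2, 1.0
theta = -mu / sigma                      # the constant process X_t of the text
random.seed(3)
n = 200000
total = 0.0
for _ in range(n):
    W_T = random.gauss(0, math.sqrt(T))                       # P-Brownian motion
    S_T = S0 * math.exp((mu - 0.5 * sigma ** 2) * T + sigma * W_T)
    Z_T = math.exp(theta * W_T - 0.5 * theta ** 2 * T)        # dQ/dP on F_T
    total += Z_T * S_T

assert abs(total / n - S0) < 0.005       # E_Q[S_T] = E_P[Z_T S_T] = S_0
```

Algebraically, Z_T S_T = S_0 exp((μ − σ²/2 − θ²/2)T + (σ + θ)W_T) with θ = −μ/σ, whose P-expectation is exactly S_0; the simulation just confirms it.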

For more information about the theory on arbitrage-free pricing please refer to the famous papers of Harrison & Kreps (1979) and Harrison & Pliska (1981).

We recall from before that the price C(H_g) of the financial pay-off H_g is given by the discounted expected value of the future pay-off at time T under the unique risk-neutral measure Q:

C(H_g) = e^{−rT} E_Q[H_g]. (2.48)

Now we will introduce two concepts, the numéraire and the state-price density, that are very popular in continuous-time finance and can be suggested as a viable alternative to the standard pricing method with probability measure change presented previously.
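Before turning to these alternatives, the risk-neutral pricing recipe of (2.48) can be illustrated numerically: under Q the asset grows at the risk-free rate, so a European call can be priced by Monte Carlo and compared with the Black-Scholes closed form. All market parameters below are hypothetical.

```python
# Sketch illustrating (2.48): Monte Carlo pricing of a European call under the
# risk-neutral GBM dynamics, compared with the Black-Scholes closed form
# (hypothetical market parameters).
import math
import random
from statistics import NormalDist

S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
Phi = NormalDist().cdf

# Black-Scholes closed form for a European call.
d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs_price = S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

# Risk-neutral Monte Carlo: C = e^{-rT} E_Q[(S_T - K)+].
random.seed(11)
n = 200000
payoff = 0.0
for _ in range(n):
    S_T = S0 * math.exp((r - 0.5 * sigma ** 2) * T
                        + sigma * random.gauss(0, math.sqrt(T)))
    payoff += max(S_T - K, 0.0)
mc_price = math.exp(-r * T) * payoff / n

assert abs(mc_price - bs_price) < 0.15
```

The only difference from simulation under P is that the drift μ has been replaced by r, which is precisely the effect of the measure change of the previous subsection.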

Definition 2.5.11 (Numéraire). A numéraire is a strictly positive (F_t)_{t≥0}-adapted stochastic process (N_t)_{t≥0} that can be taken as a unit of reference when pricing an asset or a claim.

The numéraire is a unit of account in which other assets can be denominated. For instance, we can use the currency of the country as the unit of measurement. The relative price process of the asset (S_t)_{t≥0} is then defined by

Ŝ_t = S_t / N_t, t ∈ R+. (2.49)

The equivalent martingale measure Q^N, dependent on the numéraire choice, is equivalent to the physical measure P and has to yield martingale prices such that the relative price process is a martingale under Q^N. For instance, the risk-neutral probability is associated with the money market account and therefore the relative price process is simply the discounted security price.

Let us now present, in the Black-Scholes market setting, another important concept, the numéraire portfolio, which was first introduced by Long (1990).

The numéraire portfolio is a strictly positive self-financing portfolio which, taken as a numéraire, makes any portfolio a martingale with respect to the physical measure P. If this numéraire exists it is essentially unique and can be obtained as the optimal portfolio in a utility maximization problem with logarithmic utility.

The logarithmic utility is myopic¹, which means that the numéraire portfolio is instantaneously mean-variance efficient. This is a very interesting observation because it links the numéraire concept with the mean-variance efficiency that is a cornerstone of standard portfolio theory.

¹ The myopic utility function is given by u(x) = x^{1−α}/(1−α) for α ≠ 1, with log utility as the limiting case for α = 1. For a myopic utility function the investment horizon has no effect on choices as long as returns are i.i.d., and hence for an n-period investment portfolio optimization problem, the same diversification strategy is optimal for any choice of n.


Another interesting concept is the state-price density process, also called the pricing kernel or deflator, which we introduce below.

Definition 2.5.12 (State-price process). If the stochastic process (ξ_t)_{t≥0} is such that the process (ξ_t S_t)_{t≥0} is a martingale with respect to the filtration (F_t)_{t≥0} and the measure P, then we call

ξ_t = e^{−rt} (dQ/dP)_t, t ≥ 0,

a state-price process.

The idea of the state-price density is closely linked to the concept of Arrow-Debreu contingent contracts (see Arrow (1953), Debreu (1959)) that pay one unit of numéraire in one specific state of nature and nothing in any other state. For this reason Arrow-Debreu prices are also known as state prices, and the continuous-state equivalent of Arrow-Debreu securities constitutes a state-price density.

These contracts are building blocks of the modern financial asset pricing theories.

The pricing concepts that we presented are closely related and in many cases it can be shown that they are equivalent. We will present them in the Black-Scholes market setting. For instance, the inverse of the numéraire portfolio defines the state-price density; for more details see Long (1990). If, in risk-neutral pricing, one replaces the bank account discount factors by the numéraire discount factors then the discounted price process is a martingale under the physical measure P. Hence one replaces the change of measure (P to Q) by a change of numéraire. In one of the next sections we will present a strong relation between the Growth Optimal Portfolio, which maximizes the expected value of the logarithm of the terminal wealth, and the market portfolio. It will be shown that if N ∈ Π is a numéraire portfolio then N is also a growth optimal portfolio, see Long (1990). The opposite is not true: the existence of a growth-optimal portfolio does not imply the existence of a numéraire portfolio.

2.5.3 Black-Scholes markets

Option pricing models date back at least to Bachelier, who is credited as the first person to model Brownian motion. In his doctoral thesis, The Theory of Speculation (Bachelier (1900, 2006)), he proposed how to evaluate options on French government bonds. Nowadays the most famous option pricing model


is the Black-Scholes-Merton model developed by Samuelson (1965, 1973), Black and Scholes (1973) and Merton (1973) for the pricing of a European option on a stock. The model is formalised in continuous time and assumes the existence of a constant continuously compounded risk-free rate r > 0. It also assumes that the price process of the risky asset S = (S_t)_{t≥0} is modeled by the geometric Brownian motion, which evolves according to the following stochastic differential equation:

dS_t / S_t = μ dt + σ dB_t, (2.50)

where B_t is a standard Brownian motion under the physical measure P, μ is a drift parameter and σ denotes the volatility of the price process. The solution to the price process equation is given by

S_T = S_0 exp( (μ − (1/2)σ²)T + σB_T ). (2.51)

As we discussed before there are different, equivalent methods of pricing.

The Black-Scholes model was developed such that the market model is complete.

Hence there exists only one equivalent martingale measure Q. Given the Girsanov theorem, it is easy to prove that under the Q measure the price process follows a geometric Brownian motion with the same volatility as the original process (under the physical measure P) but with the instantaneous risk-free rate as the instantaneous return. Hence the pricing formula for the wealth obtained at time T by using a self-financing strategy X is as follows:

C(X_T) = e^{−rT} E_Q[X_T]. (2.52)

Equivalently, for pricing purposes, we could use the concept of the state-price process that allows us to evaluate the price under the physical probability measure P. Given the unique state-price process

ξ_t = exp( −rt − (1/2)((μ − r)/σ)² t − ((μ − r)/σ) W_t ), (2.53)

where W_t is a standard Brownian motion under the physical measure P, the cost
