We introduce a class of stochastic volatility models (X_t)_{t≥0} for which the absolute moments of the increments exhibit anomalous scaling: E(|X_{t+h} − X_t|^q) scales as h^{q/2} for q < q*, but as h^{A(q)} with A(q) < q/2 for q > q*, for some threshold q*. This multi-scaling phenomenon is observed in time series of financial assets. If the dynamics of the volatility are given by a mean-reverting equation driven by a Lévy subordinator and the characteristic measure of the Lévy process has power-law tails, then multi-scaling occurs if and only if the mean reversion is superlinear.
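As an illustrative sketch (our own, not the authors' construction), the exponent A(q) can be estimated as the log-log slope of the empirical q-th absolute moment of increments against the lag. For plain Brownian motion this recovers A(q) ≈ q/2 at every q, whereas a multiscaling process would show slopes strictly below q/2 for q > q*:

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline: plain Brownian motion, for which E|X_{t+h} - X_t|^q ~ h^{q/2}
# for every q, i.e. A(q) = q/2 and no multiscaling.
n, dt = 2**18, 1e-3
X = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))

def scaling_exponent(X, q, lags):
    """Estimate A(q) as the log-log slope of the q-th absolute moment of increments."""
    moments = [np.mean(np.abs(X[lag:] - X[:-lag]) ** q) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(moments), 1)
    return slope

lags = np.unique(np.logspace(0, 3, 20).astype(int))
for q in (1.0, 2.0, 4.0):
    print(q, scaling_exponent(X, q, lags))   # each slope should be close to q/2
```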
Modelling the evolution of financial variables is an important issue in financial econometrics. Stochastic dynamic models can describe many features of financial variables more accurately, but there is often a trade-off between modelling accuracy and complexity. The degree of complexity is further increased by the use of latent factors, which are usually introduced in time series analysis to capture the heterogeneous time evolution of the observed process. The presence of unobserved components makes maximum likelihood inference more difficult to apply. The Bayesian approach is therefore preferable, since it can handle general state space models and simplifies simulation-based approaches to parameter estimation and latent-factor filtering. The main aim of this work is to produce an updated review of Bayesian inference approaches for latent factor models. Moreover, we provide a review of simulation-based filtering methods from a Bayesian perspective, focusing, through some examples, on stochastic volatility models.
pendent standard one-dimensional Brownian motions, ρ ∈ [−1, 1] is the correlation between the Brownian motions respectively driving the asset price and the process (Y_t)_{t∈[0,T]}, which solves a one-dimensional autonomous stochastic differential equation. The volatility process is (f(Y_t))_{t∈[0,T]}, where the transformation function f is usually taken positive and strictly monotonic in order to ensure that the effective correlation between the stock price and the volatility keeps the same sign (the function σ usually takes nonnegative values). In the literature, the development of specific discretization schemes for stochastic volatility models has received little attention. We mention nevertheless the work of Kahl and Jäckel, who discussed various numerical integration methods and proposed a simple scheme with strong convergence of order 1/2, like the standard Euler scheme, but with a smaller multiplicative constant. The numerical integration of the CIR process and of the Heston model has also received particular attention because of the inadequacy of the Euler scheme, due to the fact that both f and σ are equal to the square root function (see for example Deelstra and Delbaen, Alfonsi, Kahl and Schurz, Andersen, Berkaoui et al., Ninomiya and Victoir, Lord et al., Alfonsi). An exact simulation technique for the Heston model was also proposed by Broadie and Kaya.
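A minimal Euler–Maruyama sketch of such a model; the drift-free asset dynamics, the choice f(y) = exp(y), and all parameter values are our own illustrative assumptions, not taken from the works cited:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative SV model (assumptions, not from the cited papers):
#   dS_t = f(Y_t) S_t dW_t,   dY_t = -kappa Y_t dt + nu dB_t,
#   d<W,B>_t = rho dt,  with f(y) = exp(y) positive and strictly increasing.
def euler_sv(S0, Y0, kappa, nu, rho, T, n):
    dt = T / n
    S, Y = np.empty(n + 1), np.empty(n + 1)
    S[0], Y[0] = S0, Y0
    for i in range(n):
        z1, z2 = rng.normal(size=2)
        dW = np.sqrt(dt) * z1
        dB = np.sqrt(dt) * (rho * z1 + np.sqrt(1 - rho**2) * z2)  # correlated increments
        vol = np.exp(Y[i])            # f(y) = exp(y): keeps the volatility positive
        S[i + 1] = S[i] + vol * S[i] * dW
        Y[i + 1] = Y[i] - kappa * Y[i] * dt + nu * dB
    return S, Y

S, Y = euler_sv(S0=100.0, Y0=np.log(0.2), kappa=2.0, nu=0.3, rho=-0.7, T=1.0, n=1000)
print(S[-1], np.exp(Y[-1]))
```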
As we shall observe, in addition to their use in the MLE approach, the filters above can be applied to direct estimation of the parameters via a Joint Filter (JF). The JF simply involves estimating the parameters together with the hidden state via a dimension augmentation; in other words, one treats the parameters as hidden states. After choosing initial conditions and applying the filter to an observation data set, one then disregards a number of initial points and averages over the remaining estimates. This initial rejected period is known as the “burn-in” period. We will test various representations, or state space models, of stochastic volatility models such as Heston’s. The concept of observability will be introduced in this context. We will see that parameter estimation is not always accurate given a limited amount of daily data.
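The augmentation idea can be sketched on a toy discrete-time SV model with one unknown persistence parameter; the model, the jitter step, and all numerical settings below are our own illustrative assumptions rather than the setup used in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discrete-time SV model (all settings illustrative):
#   h_t = phi * h_{t-1} + sig_eta * eta_t,   y_t = exp(h_t / 2) * eps_t
T, N = 500, 5000
phi_true, sig_eta = 0.95, 0.2

h = np.zeros(T)
for t in range(1, T):
    h[t] = phi_true * h[t - 1] + sig_eta * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)

# Joint Filter: augment the state with the unknown phi; a small artificial
# jitter lets the parameter particles move between time steps.
part_h = rng.normal(0.0, 1.0, N)
part_phi = rng.uniform(0.5, 1.0, N)
phi_est = np.zeros(T)
for t in range(T):
    part_phi = np.clip(part_phi + 0.005 * rng.normal(size=N), -0.999, 0.999)
    part_h = part_phi * part_h + sig_eta * rng.normal(size=N)
    # weight by the measurement density y_t | h_t ~ N(0, exp(h_t))
    w = np.exp(-0.5 * (y[t] ** 2 * np.exp(-part_h) + part_h)) + 1e-300
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                 # multinomial resampling
    part_h, part_phi = part_h[idx], part_phi[idx]
    phi_est[t] = part_phi.mean()

# discard a burn-in period, then average the remaining estimates
print(phi_est[100:].mean())
```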
In this work, we follow a third Bayesian approach, which has been developed recently and which has proved efficient for general dynamic models. This is sequential simulation-based filtering, called particle filtering, which is particularly useful in financial applications when processing data sequentially. As a new observation becomes available, the hidden states and the parameters of the dynamic model can be updated and a new prediction can be performed. Particle filters also allow model diagnostics and parameter inference. For a review of the state of the art see Doucet, de Freitas and Gordon. Pitt and Shephard improve standard Sequential Importance Sampling filtering techniques by introducing the Auxiliary Particle Filter (APF). They apply the APF to stochastic volatility models and find that the method performs better than other simulation-based techniques and that it is particularly sensitive to outliers. Kim, Shephard and Chib and Chib, Nardari and Shephard apply particle filters for stochastic volatility extraction but not for parameter estimation. Polson, Stroud and Müller apply a practical filter for sequential parameter estimation and state filtering. They show the superiority of their method when compared to the APF with the sequential parameter learning algorithm due to Storvik. Lopes and Marino and Lopes apply the APF to an MSSV model for sequential parameter learning and state filtering.
0 + ∫_0^t K_n(t − s)θ(s) ds with the initial conditions S_0^n = S_0 and V_0^{n,i} = 0. Note that the factors (V^{n,i})_{1≤i≤n} share the same dynamics except that they mean revert at different speeds (γ_i^n)_{1≤i≤n}. Relying on existence results for stochastic Volterra equations in [1, 2], we provide in Theorem 3.1 the strong existence and uniqueness of the model (S^n, V^n) under some general conditions. Thus the approximation (1.4) is uniquely well-defined. We can therefore deal with simulation, pricing and hedging problems under these multi-factor models by using standard methods developed for stochastic volatility models.
Stochastic volatility models for asset returns are popular among practitioners and academics because they can generate implied volatility surfaces that match option price data to a great extent. They resolve the shortcomings of the Black–Scholes model [12], where the return has constant volatility. Among the most widely used stochastic volatility models is the Heston model [33], where the squared volatility of the return follows an affine square-root diffusion. European call and put option prices in the Heston model can be computed using Fourier transform techniques, which have their numerical strengths and limitations; see for instance Carr and Madan [15], Bakshi and Madan [9], Duffie et al. [23], Fang and Oosterlee [28], and Chen and Joslin [16].
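To illustrate the transform approach, here is a sketch of Gil-Pelaez-style Fourier inversion. For brevity it plugs in the Black–Scholes characteristic function, where a closed-form price is available as a check; the same inversion routine applies to the Heston model once its characteristic function is substituted. The function names and numerical settings are our own:

```python
import numpy as np
from math import log, exp, sqrt, pi, erf

def bs_cf(u, S0, r, sigma, T):
    """Characteristic function of log(S_T) in the Black-Scholes model."""
    m = log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * m - 0.5 * sigma**2 * T * u**2)

def call_via_fourier(S0, K, r, sigma, T, cf=bs_cf, umax=100.0, n=20000):
    """Gil-Pelaez-style inversion: C = S0 * P1 - K * exp(-rT) * P2."""
    k = log(K)
    du = umax / n
    u = (np.arange(n) + 0.5) * du               # midpoint grid on (0, umax)
    phi = cf(u, S0, r, sigma, T)
    P2 = 0.5 + np.sum((np.exp(-1j * u * k) * phi / (1j * u)).real) * du / pi
    denom = cf(np.array([-1j]), S0, r, sigma, T).real[0]   # = S0 * exp(rT)
    phi1 = cf(u - 1j, S0, r, sigma, T) / denom
    P1 = 0.5 + np.sum((np.exp(-1j * u * k) * phi1 / (1j * u)).real) * du / pi
    return S0 * P1 - K * exp(-r * T) * P2

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# sanity check against the closed-form Black-Scholes price
S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
closed_form = S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
print(call_via_fourier(S0, K, r, sigma, T), closed_form)
```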
Modelling the volatility of financial returns has been the purpose of many investigations; see Clark (1973), Nelson (1991), Taylor (1986, 1994), Andersen (1994), and others. Many versions of stochastic volatility models exist in the literature. Here we are interested in a discrete-time version of the volatility model first introduced by Taylor (1986). This model is a particular case of the Stochastic Autoregressive Volatility (SARV) model introduced by Andersen (1994).
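A minimal simulation sketch of this type of discrete-time model (the parameter values below are illustrative, not taken from Taylor (1986)): returns come out serially uncorrelated while squared returns do not, reproducing volatility clustering.

```python
import numpy as np

rng = np.random.default_rng(2)

# Taylor-type discrete-time SV model (illustrative parameter values):
#   h_t = mu + phi * (h_{t-1} - mu) + sig_eta * eta_t   (log-volatility)
#   y_t = exp(h_t / 2) * eps_t                          (return)
T = 20000
mu, phi, sig_eta = -1.0, 0.97, 0.15

h = np.full(T, mu)
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig_eta * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)

def acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# returns are serially uncorrelated while squared returns are not:
print(acf(y, 1), acf(y ** 2, 1))
```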
One of the early examples of stochastic volatility models is Clark. He suggested that asset price movements should be tied to the rate at which transactions occur. To accomplish this he made a distinction between transaction time and calendar time. This framework has hitherto been relatively unexploited to study derivative security pricing. We study the arbitrage pricing restrictions in economies where trade takes place according to a (discrete) transaction clock which differs from the standard calendar clock. In transaction time, calendar-time ticks are stochastic. Risk-free bonds with calendar-time maturities are traded. There is a single state variable whose process in transaction time is binomial. We are interested in obtaining unique prices for derivatives using arbitrage arguments. In other words, we are investigating conditions for markets to be complete in the sense of Harrison and Pliska.
3 Stochastic Volatility versus ARCH Option Pricing Models
The loss of the homogeneity property in usual discrete-time statistical models like ARCH-type models is not as damaging as it appears, for several reasons. First, Nelson (1990) has shown that the distinction between “stochastic volatility” (where the source of randomness in the underlying asset volatility is exogenous) and endogenous volatility is not robust to temporal aggregation. ARCH-type discrete-time models may converge towards stochastic volatility models in continuous time as the time interval goes to zero. Second, a number of studies (see Drost and Werker (1996), Meddahi and Renault (1996) and Ghysels, Harvey and Renault (1996)) have shown that the class of GARCH processes which is robust to temporal aggregation, namely the weak GARCH class (see Drost and Nijman (1993)), is a sub-class of stochastic volatility models. In particular, when we sample a continuous-time SV model in discrete time, we obtain a weak GARCH model. Therefore, ARCH-type models and SV models are not competitors (as was commonly believed) but rather complements, since the ARCH model offers a useful discrete-time filter for SV models.
presented in 3.1.3 allowed us to derive some sufficient conditions for non-explosion. These conditions have been translated in terms of the model parameters so that relevant market quantities such as zero-coupon bonds and the moments of zero-coupon bonds are well defined. Proposition 14 gives the exact statement for the existence of the general Fourier/Laplace transform of the model state variables. We were also able to derive some constraints on the model parameters so that the state variables verify an ergodicity property; Proposition 15 gives the exact statement. Given the affine structure of the model, and once the domain of existence of the MRDE (or at least a subset of this domain) was established, it was straightforward to apply Fourier inversion, the Fast Fourier Transform (FFT) based method, and series expansion methods of Gram–Charlier type for pricing vanilla instruments. These methods are efficient in terms of their computational cost, and provide reasonably accurate results for pricing caplets. We are more reserved about using these methods for pricing swaptions. Not completely satisfied by transform-based and series expansion methods, we have investigated an alternative route for pricing vanilla instruments. The paper of Bergomi and Guyon [BG12] gives a very straightforward way to derive an expansion of European option prices and of the implied volatility in stochastic volatility models at order 2 in the volatility of volatility. This approach seemed natural given that we view the model as a perturbation of LGM. This expansion methodology cannot be directly applied to ATSM. One of the key elements used to derive the expansion is the fact that the payoff of the option does not depend on the volatility of the underlying. However, this is not true in ATSM. Even in the simple Vasicek model, the yield curve depends on the volatility of the spot rate, and therefore the payoffs of caplets and swaptions also depend on the volatility of the spot rate. Using a change of variable we were able to eliminate the dependence of the payoff on the volatility. With respect to transform methods and series expansion methods, an expansion of the smile is complementary. On the one hand, it is less accurate for calculating a single price, since we only compute the expansion up to order 2 in the volatility of volatility. On the other hand, it is more tractable for a first calibration of the model and gives a good approximation of key quantities of the smile. It also confirms our intuitions on the main drivers of the smile and on the role of the model parameters and factors in terms of volatility statics and dynamics.
* Corresponding Author: Nour Meddahi, CIRANO, 2020 University Street, 25th floor, Montréal, Qc, Canada H3A 2A5. Tel.: (514) 985-4026. Fax: (514) 985-4039. Email: email@example.com. This is a revision of a part of Meddahi and Renault (1996), “Aggregations and Marginalization of GARCH and Stochastic Volatility Models”. Some other results of this manuscript are now included in two companion papers, Meddahi and Renault (2000a) and Meddahi and Renault (2000b), entitled “Temporal and Cross-Sectional Aggregations of Volatility in Mean Models” and “Conditioning Information in Volatility Models” respectively. The authors thank Torben Andersen, Bryan Campbell, Marine Carrasco, Ramdam Dridi, Feike Drost, Jean-Marie Dufour, Ola Elarian, Rob Engle, Jean-Pierre Florens, René Garcia, Ramazan Gençay, Christian Gouriéroux, Stéphane Gregoir, Joanna Jasiak, Tom McCurdy, Theo Nijman, Enrique Sentana, Neil Shephard, Bas Werker, Jean-Michel Zakoian, two referees and a co-editor, and the participants of the Econometric Society meetings at Istanbul (1996) and Pasadena (1997) and the Fourth Workshop of Financial Modeling and Econometric Analysis, Tilburg, December 1996, for their helpful comments. They also acknowledge fruitful discussions during seminars at CEMFI, CORE, CREST, North Carolina (Triangle seminar), Montréal, and Oxford. The authors are solely responsible for any remaining errors. The first author acknowledges FCAR and MITACS for financial support.
tributions of volatility blocks as Metropolis–Hastings proposal distributions. We illustrate using daily return data for ten currencies. We report results for univariate stochastic volatility models and two multivariate models.
In the third chapter, “The information content of realized volatility”, we evaluate the information contributed by (variations of) realized volatility to the estimation and forecasting of volatility when prices are measured with and without error, using a stochastic volatility model. We consider the viewpoint of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity which contains information about it. We use Bayesian Markov Chain Monte Carlo (MCMC) methods to estimate the models, which allow the formulation of the posterior densities of in-sample volatilities and the predictive densities of future volatilities. We then compare the volatility forecasts and hit rates from predictions that use and do not use the information contained in realized volatility. This approach contrasts with most of the empirical realized volatility literature, which most often documents the ability of realized volatility to forecast itself. Our empirical applications use daily index returns and foreign exchange rates during the 2008–2009 financial crisis.
The class of stochastic volatility models has its roots and applications in finance and financial econometrics. Indeed, volatility plays a central role in the analysis of many phenomena in these domains. Many versions of stochastic volatility models exist in the literature. Here we are
the wet season. They observe that broken cloud fields create a bimodal distribution for the relative change: shaded areas receive attenuated solar irradiance, while sunlit areas may receive higher irradiance than under a clear sky. This effect is caused by radiation scattering and reflections from neighboring clouds. Conducting a spectral analysis on the time series of measured surface irradiance, they observe that clouds are responsible for two different regimes according to their types and density, causing either large- or small-scale fluctuations. This study highlights the effect of clouds and has certainly influenced the development of subsequent models for solar irradiance.
We now introduce a stochastic process built on the same premise, that is, a mean mass-balance principle at a given ∆t. This model will have (1) as a fluid limit as ∆t goes to 0. This latter model suitably features the geometry of the chemostat but, as a limit model, cannot feature all its natural scales. The proposed stochastic models will respect both the geometry and the natural scales of the chemostat. We first establish a pure jump process representation of the chemostat at a microscopic scale, then we derive a diffusion process representation which is valid at mesoscopic and macroscopic scales.
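As a generic illustration of what such a pure jump (Gillespie-type) representation looks like, here is a minimal birth/washout sketch; the reactions and rates are hypothetical placeholders, not the chemostat dynamics derived in the text:

```python
import numpy as np

rng = np.random.default_rng(4)

def gillespie(N0, b, D, K, t_max):
    """Exact simulation of a jump chain with division at rate b*N*(1 - N/K)
    and washout at rate D*N (rates are hypothetical, for illustration only)."""
    t, N = 0.0, N0
    times, states = [0.0], [N0]
    while t < t_max and N > 0:
        rate_birth = b * N * max(0.0, 1.0 - N / K)
        rate_wash = D * N
        total = rate_birth + rate_wash
        t += rng.exponential(1.0 / total)           # time to the next event
        N += 1 if rng.random() < rate_birth / total else -1
        times.append(t)
        states.append(N)
    return np.array(times), np.array(states)

# the corresponding fluid limit has equilibrium N* = K * (1 - D/b) = 500 here
times, states = gillespie(N0=100, b=2.0, D=1.0, K=1000, t_max=50.0)
print(states[-1])
```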
model by means of non-Gaussian processes. Instead of Brownian motion, disturbances are generated by Lévy processes (see Barndorff-Nielsen and Shephard, 2001; Eraker et al., 2003; Chernov et al., 2003; Duffie et al., 2003).
Various methods have been proposed to incorporate long memory. Breidt et al. (1998) and Harvey (1998) build discrete-time models with fractional integration; Comte and Renault (1998) propose a continuous-time model with fractional Brownian motion. Chernov et al. (2003) consider models in which stochastic volatility is driven by various factors (components). Such models generate price dynamics with slow decay in the sample ACF, a characteristic of long memory models, even though the data generating processes themselves do not possess this property (LeBaron, 2001a). In Barndorff-Nielsen and Shephard (2001) the long memory effect is produced by a superposition of an infinite number of non-negative non-Gaussian OU processes, which incorporates long-range dependence simultaneously with jumps. Besides, long-range dependence in stochastic volatility can be achieved using regime-switching models (So et al., 1998; Liu, 2000; Hwang et al., 2007).
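A minimal simulation sketch of the superposition mechanism, using a finite (here three-component) Gaussian OU approximation; in Barndorff-Nielsen and Shephard (2001) the components are non-negative, non-Gaussian, and infinite in number, so this only illustrates the slow ACF decay, not the full construction:

```python
import numpy as np

rng = np.random.default_rng(3)

# Superposing OU processes with widely different mean-reversion speeds yields
# an autocorrelation that decays far more slowly than any single component's
# exponential decay (speeds below are illustrative).
T, dt = 100000, 0.1
lams = [0.02, 0.2, 2.0]

def simulate_ou(lam, n, dt):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = x[t - 1] - lam * x[t - 1] * dt + np.sqrt(2 * lam * dt) * rng.normal()
    return x

components = np.array([simulate_ou(lam, T, dt) for lam in lams])
superposed = components.mean(axis=0)

def acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# the fastest component has essentially forgotten its past at this lag,
# while the superposition has not
print(acf(components[-1], 25), acf(superposed, 25))
```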