To date, no example shows how the existence and the dynamics of asset price bubbles depend on fundamentals such as endowments and the asset structure (dividends, asset supply, and borrowing limits). Our main contribution is to fill this gap. More precisely, Section 4.3 of the present paper provides several models (without any restriction on fundamentals) where bubbles exist. Notice that studying these models is not easy because we have to work with a dynamical system that is non-stationary and has infinitely many parameters (which are our model's fundamentals). We prove that when the benchmark economy has low interest rates and verifies the seesaw property, bubbles are more likely to exist in equilibrium if (1) asset supply is low, (2) the borrowing limits of agents are low, (3) the level of heterogeneity (proxied by the differences between agents' fundamentals such as endowments, initial asset holdings, and rates of time preference) is high, and (4) asset dividends are low with respect to agents' endowments. Consequently, our results suggest that bubbles may appear if (i) the agents' endowments grow asymmetrically, and (ii) there is a shortage of financial assets (i.e., there is a low supply and assets provide low dividends). We also
One usual explanation is that bad bubbly episodes are associated with credit booms, whereas benign ones are not (Jorda et al., 2015). During the upward phase of a bad episode, private agents accumulate a lot of debt. Consequently, the bursting depresses investment and output as firms and households enter a deleveraging process (Mian and Sufi, 2014). In this paper, I explore an alternative theory and formalize the idea advanced by Summers (2013) and Krugman (2013): under particular circumstances, large financial bubbles are necessary to sustain investment and output. Indeed, an asset bubble provides agents with a greater supply of liquidity, allowing them to overcome shortages of "fundamental" liquidity, i.e., the sum of public and private debt. In normal times, the bursting of the bubble can be accommodated by a drop in the nominal interest rate with virtually no aggregate consequences. But when the shortage of fundamental liquidity is severe, the natural interest rate – the one consistent with full employment – becomes permanently negative after the bubble bursts. If inflation expectations are sticky and the central bank hits the zero lower bound (ZLB), the interest rate is pegged above its natural value: the economy enters a liquidity trap, and output falls short of its potential. Therefore, the usual explanation mistakes the cure – the bubble – for the disease – the shortage of fundamental liquidity.
Hausdorff dimension of the set of points where the process exhibits this exponent. The fundamental property of this spectrum is that it is the Legendre transform of the scaling function of the process, which describes the behavior of the moments of the process as the time step tends to zero. We have then defined three models exploiting multifractal properties: the Markov-Switching Model in discrete time and in continuous time, and the MMAR. The MSM in discrete time introduces the concept of Markov components of the volatility, which can change value at each time step with different probabilities. These probabilities can be associated with the frequencies of switching. The product of these different components is proportional to the volatility, so the volatility is subject to the cyclic variation of these components. This mimics the volatility of an asset in the real world, which is subject to the superposition of economic cycles of different lengths: long periods of calm can locally contain shorter periods of relative unrest, just as we can observe long periods of market uncertainty. With the construction of the MMAR, we have used the mathematical properties of the binomial cascade to build a trading time, a deformation of time defined as the cumulative distribution function of a binomial measure. This trading time simulates the accelerations and slowdowns of trading activity, and can be fully randomized in the case of a canonical measure. We obtain the MMAR log-price by compounding this trading time with a standard Brownian motion. Eventually, with the continuous-time MSM, we overcame the problem of defining the model on a time grid (as in the discrete-time MSM), since the Poisson arrivals on the Markov components take place at random times.
This model can have a finite number of components, or countably many components, in which case we define a trading time that converges weakly towards a multifractal process. To obtain the log-price, we can compound this trading time with a fractional Brownian motion. Moreover, we have shown for these three models that the properties of volatility clustering and long memory hold.
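As a concrete illustration, the MMAR construction described above can be sketched in a few lines of Python: a randomized binomial cascade yields a trading time θ(t) as the cumulative mass of the measure, and the log-price is a Brownian motion run on that deformed clock. The depth k, the multiplier m0, and the use of a standard (rather than fractional) Brownian motion are illustrative choices, not parameters taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def binomial_cascade(k, m0=0.6):
    """Randomized binomial cascade: at each of k stages, every dyadic
    cell splits its mass m0 / (1 - m0) between its two halves, with
    the order of the two multipliers chosen at random."""
    mass = np.array([1.0])
    for _ in range(k):
        left = np.where(rng.random(mass.size) < 0.5, m0, 1 - m0)
        mass = np.column_stack((mass * left, mass * (1 - left))).ravel()
    return mass

k = 12
n = 2 ** k
mass = binomial_cascade(k)                         # multifractal measure on n dyadic cells
theta = np.concatenate(([0.0], np.cumsum(mass)))   # trading time: CDF of the measure

# Compound a standard Brownian motion with the trading time:
# X(t) = B(theta(t)), simulated directly on the deformed clock.
dB = rng.normal(0.0, np.sqrt(np.diff(theta)))
log_price = np.concatenate(([0.0], np.cumsum(dB)))

print(log_price.shape)
```

Because the Brownian increments inherit their variance from the increments of θ, periods where the cascade concentrates mass produce bursts of volatility, which is exactly the clustering mechanism described above.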
This thesis has three chapters in which I develop tools for comparisons and dynamic analysis of linear asset pricing models.
In the first chapter, I introduce the notion of dynamically useless factors: factors that may be useless (uncorrelated with asset returns) in some periods while relevant in others. This notion bridges the literature on classical empirical asset pricing and the literature on useless factors, both of which assume that the relevance of a factor remains constant through time. In this new framework, I propose a modified Fama-MacBeth procedure to estimate time-varying risk premia from conditional linear asset pricing models. At each date, my estimator consistently estimates the conditional risk premium of every useful factor and is robust to the presence of the dynamically useless ones. I apply this methodology to the Fama-French five-factor model and find that, with the exception of the market, all the factors of this model are dynamically useless, although they remain useful 90% of the time.
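For reference, the classical two-pass Fama-MacBeth procedure that the chapter modifies can be sketched on simulated data as follows. This is the textbook version with constant betas and risk premia, not the author's robust time-varying estimator; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated panel: T periods of excess returns on N assets driven by K factors.
T, N, K = 240, 25, 2
betas_true = rng.normal(1.0, 0.3, size=(N, K))
lam_true = np.array([0.5, 0.3])                  # true per-period risk premia
factors = lam_true + rng.normal(0.0, 1.0, size=(T, K))
returns = factors @ betas_true.T + rng.normal(0.0, 0.5, size=(T, N))

# Pass 1: time-series regression of each asset's returns on the factors.
X = np.column_stack((np.ones(T), factors))
coefs, *_ = np.linalg.lstsq(X, returns, rcond=None)
betas_hat = coefs[1:].T                          # estimated loadings, shape (N, K)

# Pass 2: period-by-period cross-sectional regressions of returns on betas.
Z = np.column_stack((np.ones(N), betas_hat))
lams = np.array([np.linalg.lstsq(Z, returns[t], rcond=None)[0] for t in range(T)])

lam_hat = lams.mean(axis=0)[1:]                  # time-averaged risk-premium estimates
lam_se = lams.std(axis=0, ddof=1)[1:] / np.sqrt(T)
print(lam_hat, lam_se)
```

The useless-factor problem arises in pass 2: if a column of betas_hat is pure noise, the cross-sectional regression is ill-conditioned, which is the failure mode the chapter's modified procedure is designed to detect period by period.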
acquire (additional) private information at times when there are many mispricings in the market. In other words, she would pay to acquire private information when the benefit of private information is high.
How can a trader know when there are more opportunities in the market? First of all, let us consider the role of information. It is plausible that stocks about which there is more information (whether public or private) are better priced. Let us take the example of stocks covered by many analysts and stocks not covered by any analyst. Because analysts search for information about the stocks they cover, make earnings estimates, and give recommendations (on a five-point scale: strong buy, buy, hold, underperform, sell), one would expect stocks covered by many analysts to be well priced. Well-covered stocks incorporate a large quantity of public information. They are usually bigger firms that trade often, without frictions. On the other hand, neglected stocks incorporate less public information: there are no (or very few) analysts covering them. It is still possible that they incorporate some public information (for example, if there is public information about a well-covered company in the mining sector, this should also affect the price of a neglected company in the same sector). Neglected stocks may also incorporate private information, but it is difficult to know how much private information exists about a stock at a given time. So unless one knows what type of information, and how much of it, exists about neglected stocks, one's prior is that neglected stocks should behave very much like covered stocks and so move up and down with them. An alternative view could be that neglected stocks are always riskier than covered stocks (because investors fear adverse selection), but in this case we should see a risk premium: a portfolio that goes long neglected stocks and short covered stocks should earn a positive premium. The data do not support this view: a neglected-minus-covered portfolio that is size neutral and equally weighted has an annual mean of -0.64%.
This portfolio has a monthly standard deviation of 3.53%, suggesting that neglected stocks comove with other neglected stocks, and covered stocks with other covered stocks. This comovement is important: it means that neglected stocks all move up and down at the same time, possibly driven by the same factor(s).
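A size-neutral, equally weighted neglected-minus-covered portfolio of the kind described above can be sketched as follows. The returns, size buckets, and coverage flags are simulated placeholders, not the paper's data; the point is only the construction (long neglected, short covered, within each size quintile).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical monthly returns for 100 stocks, pre-sorted into 5 size
# quintiles of 20 stocks; within each quintile, half are covered by analysts.
n_months, n_stocks = 120, 100
rets = rng.normal(0.01, 0.05, size=(n_months, n_stocks))
quintile = np.repeat(np.arange(5), 20)        # size bucket of each stock
covered = np.arange(n_stocks) % 2 == 0        # analyst-coverage flag

# Size-neutral long-short: within each size quintile, go long the equally
# weighted neglected stocks and short the covered ones; average over quintiles.
legs = []
for q in range(5):
    in_q = quintile == q
    neglected = rets[:, in_q & ~covered].mean(axis=1)
    cov = rets[:, in_q & covered].mean(axis=1)
    legs.append(neglected - cov)
nmc = np.mean(legs, axis=0)                   # neglected-minus-covered return series

print(round(nmc.mean() * 12, 4), round(nmc.std(ddof=1), 4))
```

Averaging the long-short spread within size buckets is what makes the portfolio "size neutral": any size premium cancels inside each quintile before the legs are combined.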
We also plot the rescaled velocity dX/dT as a function of X in figure 8d. While the collapse is again fairly good, some deviations are observed. At long times (X, T ≫ 1), the velocity of the smallest bubbles tends to be smaller than the prediction of our model. As they move away from the tip of the wedge, the bubbles recover a spherical shape. They are no longer confined, and thus do not feel the driving gradient of the gap thickness. The motion then stops, except for a possible very slow drift under gravity due to the slight slope of the upper confining plate. At short times (X, T ≪ 1), figure 8d shows significant scatter in the dimensionless bubble velocity. Typically, dX/dT is within a factor of about 2 of the prediction of the model. As for drops, we expect our model to be valid under the assumptions R ≪ x ≪ x∗
Models of heterogeneous beliefs can generate rich implications for trading and asset pricing (see Suleyman Basak 2005 for a recent survey). When studying such models, aggregation often makes it difficult to compute equilibrium outcomes. In this paper, we introduce a flexible framework to model heterogeneous beliefs in the economy, which we refer to as "affine" disagreement about fundamentals. Affine processes (see Darrell Duffie, Jun Pan, and Kenneth Singleton 2000) are appealing because they provide a large degree of flexibility in modeling the conditional means, volatilities, and jumps of various quantities of interest while remaining analytically tractable. Our affine heterogeneous-beliefs framework further allows for stochastic disagreement among agents about growth rates and volatility dynamics, as well as the likelihood of jumps and the distribution of jump sizes.
MIT and NBER December 2008
We characterize equilibria with endogenous debt constraints for a general equilibrium economy with limited commitment in which the only consequence of default is losing the ability to borrow in future periods. First, we show that equilibrium debt limits must satisfy a simple condition that allows agents to exactly roll over existing debt period by period. Second, we provide an equivalence result, whereby the resulting set of equilibrium allocations with self-enforcing private debt is equivalent to the allocations that are sustained with unbacked public debt or rational bubbles. In contrast to the classic result by Bulow and Rogoff (AER, 1989), positive levels of debt are sustainable in our environment because the interest rate is sufficiently low to provide repayment incentives.
This result can be related to previous ones obtained in the finance literature with rank-dependent utility or Choquet utility: Tallon (1997) and Epstein and Wang (1994) also obtain the existence of multiple equilibria. The originality of this work is to obtain the result in a production economy with capital and a bubbly asset. In an exchange economy, the dynamics of asset prices is completely governed by the history of exogenous shocks. In a production economy with capital, the dynamics of asset prices also depends on the dynamics of capital accumulation. As simple parametric forms are used in the model, the dynamics of the bubbly asset can be explicitly determined: the price converges with oscillations toward its long-run value. Bosi and Seegmuller (2010) have also developed a framework in which there exists an indeterminate bubbly equilibrium. In their model, indeterminacy is due to frictions introduced via a cash-in-advance constraint with financial market imperfections. In this work, indeterminacy is obtained in a model in which the only financial market imperfection comes from incomplete markets associated with RDU preferences.
This paper shows that the scope for the existence of rational bubbles can be extended when uncertainty and rank-dependent expected utility are introduced. In the framework of an overlapping generations model à la Diamond (1965), the seminal article by Tirole (1985) proves that bubbles can arise in economies for which the return on capital at steady state is below the growth rate of output. The bubbleless economy must be in a state of overaccumulation that corresponds to dynamic inefficiency. Weil (1987) proposes a model of stochastic bubbles using the same framework as Tirole, and finds existence conditions that are even stronger. Different authors have introduced rational bubbles in richer frameworks with endogenous growth (e.g., Grossman and Yanagawa, 1993, and Olivier, 2000). But the existence of bubbles remains linked to the same condition between the growth rate and the interest rate. As empirical observations suggest that this condition is not fulfilled in general (see Abel et al., 1989), rational bubbles seem unlikely to arise. They may perhaps not be the pertinent explanation of the bubble phenomena that are actually observed.
The innovation of our modeling approach is to draw upon three separate strands of literature: international asset pricing, open-economy macroeconomics, and international trade, while differing from each of them in some important dimensions. On the one hand, while encompassing a rich financial market structure, the overwhelming majority of international asset pricing models assume that there is a single commodity in the world, implying that the real exchange rate has to be equal to unity. Nontrivial implications for exchange rates in such a framework have been obtained either by introducing barriers to trade into a real model, or by exogenously specifying a monetary policy and focusing on the nominal exchange rate. On the other hand, the international economics literature typically concentrates either on how different patterns of international trade in goods affect the real exchange rate, or on how the nominal exchange rate is linked to bond markets, typically overlooking the implications for equity markets. Ours is a two-country, two-good model where the countries trade in goods as well as in stocks and bonds. To our knowledge, it is the first asset pricing model in which the terms of trade, the exchange rate, and asset prices are jointly determined in equilibrium, thus marrying dynamic asset pricing with Ricardian trade theory.
Abstract— The objective of this publication is to analyze how the variations observed in the world coffee market price pass through to the retail coffee price in Belgium. This research takes place in the framework of a coffee value chain analysis. The coffee price cycles seem to have faded in recent years, at the world market level as well as at the Belgian retail level, and in recent months these prices have risen strongly owing to speculators' interest in commodity markets. We observe that when the world coffee price falls, the reduction passes through to the Belgian retail price less rapidly than an increase does; moreover, the correlation is stronger when the world price rises than during periods of decline. We also notice that the variability of the world coffee price from 1998 to 2007, which has tended to diminish in recent years, is
s only, such that we can represent the price as the risk-adjusted expectation of dividends of an investor who finds it optimal to hold exactly D̄ units of the asset when the state is z.
Equation (7) generalizes the result that the risk-neutral expectation of dividends differs from the Bayesian posterior conditional on the same public information z. The risk-neutral expectation of dividends processes the price signal twice: once as a public price signal, and once as the private signal of the threshold investor who finds it optimal to purchase exactly D̄ units of the asset. The intuition for this characterization is the same as the one given in section 3, i.e., shifts in fundamentals or noise-trader demand result in price adjustment, due to market clearing, over and above the mere information content of the price. In the expression for the equilibrium price, these two effects are represented by the sufficient statistic z appearing twice in the conditioning set, once through the price signal and once through the marginal investor's private information. This wedge between the market expectation of dividends and the Bayesian posterior is thus a necessary characteristic of any model with noisy information aggregation through asset prices.
My experimental setup, inspired by the adverse selection problem faced by market makers, is further motivated by a number of additional reasons. First, given that modern market making takes place on electronic exchanges at very high frequency, an astonishing amount of data is available. This limits the problem of over-fitting and makes it possible to train deep neural networks featuring a large number of parameters. Another consequence of the increasing speed of market making activity has been the surge of academic interest in the impact of high-frequency traders (HFTs) on the functioning of financial markets. The debate has focused mainly on their effect on liquidity. On the one hand, HFT firms argue that their activity increases market liquidity by reducing bid-ask spreads, a claim that is supported, at least under normal market conditions, by empirical evidence. On the other hand, investors and some observers in the financial press blame HFTs for back-running block trades by large institutions. One of the concerns is that, since large trades need to be split into smaller child executions over a non-trivial time frame, HFTs may quickly extract private information from the first part of the order and trade in the same direction as the institutional investors, thus increasing effective transaction costs and imposing a non-trivial externality on the originator. Alternatively, for liquidity-motivated trades, HFTs may recognize their presence in the order flow, start trading in the same direction, and revert the position before the end of the block, speculating on and adding to the temporary component of the price impact. This predatory behavior is costly for institutions, since it increases their effective cost of trading.
My analysis sheds light on this important issue by studying the behavior of my model, a forecasting tool which is closely related to the algorithmic activity of HFTs, during the executions of large block trades by institutional investors.
Patrick Maillé
Abstract. The Progressive Second Price mechanism (PSP), recently introduced by Lazar and Semret to share an infinitely divisible resource among users through pricing, has been shown to satisfy very interesting properties. Indeed, the incentive compatibility of that scheme, and its convergence to an efficient resource allocation, were established using the framework of game theory. Therefore, that auction-based allocation and pricing scheme seems particularly well suited to solving congestion problems in telecommunication networks, where the resource to share is the available bandwidth on a link. This paper aims at supplementing the existing results by highlighting some properties of the different equilibria that can be reached. We precisely characterize the possible outcomes of the PSP auction game in terms of the players' bid prices: when the bid fee (the cost of a bid update) tends to zero, the bid prices of all users at equilibrium get close to the so-called market clearing price of the resource. Therefore, observing an equilibrium of the PSP auction game gives accurate information about the market clearing price of the resource.
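To make the limiting object concrete, the market clearing price referred to above is the price at which aggregate demand exhausts the shared capacity. A minimal sketch, assuming hypothetical users with linear marginal valuations θ_i(q) = a_i − b_i·q (this computes only the clearing price, not the PSP bid dynamics or its exclusion-compensation payments):

```python
# Hypothetical users with linear marginal valuations theta_i(q) = a_i - b_i*q
# share a link of capacity C.  The market clearing price p* is the price at
# which aggregate demand sum_i max(0, (a_i - p) / b_i) equals C.

def aggregate_demand(p, users):
    return sum(max(0.0, (a - p) / b) for a, b in users)

def market_clearing_price(users, capacity, tol=1e-9):
    # Demand is continuous and non-increasing in p, so bisection applies.
    lo, hi = 0.0, max(a for a, _ in users)   # demand vanishes at p = max a_i
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if aggregate_demand(mid, users) > capacity:
            lo = mid                         # price too low: demand exceeds capacity
        else:
            hi = mid
    return 0.5 * (lo + hi)

users = [(10.0, 1.0), (8.0, 2.0), (6.0, 1.0)]    # (a_i, b_i) for three users
C = 5.0
p_star = market_clearing_price(users, C)
print(round(p_star, 4))  # → 6.0
```

The abstract's result says that, as the bid fee vanishes, equilibrium PSP bid prices concentrate around this p*, so an observed equilibrium reveals the clearing price without computing it directly.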
vis-à-vis the UK. Our model implies that stock market prices, bond prices, and exchange rates are described by a three-latent-factor model with time-varying coefficients, where the latent factors correspond to our demand shock and two supply shocks. The model, as it is, has too many degrees of freedom to fit the data and hence is unlikely to be rejected. Therefore, we test the model using two approaches. First, we take the model literally and estimate the structural equations derived in the theoretical section. We extract the latent factors from this exercise and test how they perform out of sample in forecasting macroeconomic variables. Our model provides economic meaning to each of the factors, and we compare them with the macroeconomic variables they should be associated with. In the second approach, we estimate a simplified version of the model in which only a minimal structure is used for identification: we set the coefficients to be constant through time and impose some sign restrictions. We then test whether the pattern of interconnections between the asset and foreign exchange markets implied by the model is indeed found in the data, and whether the signs of the responses of the markets to demand and supply shocks are consistent with the theory.
An algorithm was developed to automate the determination of the number N of bubbles present in the cell at a given time. In brief, on each frame of the sequence a threshold is applied to highlight the bubble edges. Then, a particle-analyzer algorithm is used to count the number of bubbles present in the cell as a function of the number of oscillations p accomplished by the cell. In Figure 2, the values of N obtained with the program are compared with those counted manually. The results from the program show a systematic deviation of about 10% from the manual readings. This deviation is considered to be within a reasonable confidence interval.
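The counting step can be sketched as a threshold followed by a connected-component count. The sketch below is a minimal stand-in for the particle analyzer used in the experiment, run on a synthetic frame; the threshold value, the minimum component area, and the image itself are illustrative assumptions.

```python
import numpy as np
from collections import deque

def count_bubbles(img, threshold=0.5, min_area=10):
    """Threshold the frame, then count 4-connected components of
    above-threshold pixels, ignoring specks smaller than min_area."""
    binary = img > threshold
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                # Flood-fill this component and measure its area.
                area, q = 0, deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count

rng = np.random.default_rng(3)
frame = rng.normal(0.1, 0.02, size=(120, 120))   # dark background noise
yy, xx = np.mgrid[0:120, 0:120]
for cy, cx, r in [(30, 40, 6), (70, 25, 9), (90, 95, 7)]:  # three bright bubbles
    frame[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 0.9

print(count_bubbles(frame))  # → 3
```

The min_area filter plays the role of the particle analyzer's size cutoff: without it, single bright noise pixels would each be counted as a bubble, which is one plausible source of the systematic deviation from the manual counts.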