Three essays in financial frictions and international macroeconomics


Academic year: 2021



Three Essays on Financial Frictions and International Macroeconomics

Thesis by Alexandre Kopoin
Doctorate in Economics, Philosophiæ Doctor (Ph.D.)
Québec, Canada


Résumé

This thesis examines the role of financial frictions, which arise from information asymmetries in financial markets, in the propagation of shocks and in economic fluctuations. The first essay uses targeted dynamic factor modeling to assess the contribution of national and international data to forecasting the GDP of Canadian provinces. The results indicate that using national and international data, chiefly U.S. data, significantly improves forecast quality. Moreover, this effect is present and significant at short-term horizons (under one year) but disappears for long-term forecasts. This suggests that shocks originating at the national or international level are transmitted to the provinces, and hence reflected in regional timeseries, fairly quickly.

While the first essay addresses the transmission of shocks with a non-structural macro-econometric model, the last two essays develop dynamic stochastic general equilibrium (DSGE) models to analyze the effects of financial frictions on the propagation of national and international shocks.

The second essay thus presents a DSGE model in an international setting with financial frictions. The theoretical framework includes the financial accelerator mechanism and the bank capital and exchange rate channels. The results suggest that the exchange rate channel, long ignored, plays a central role in the propagation of shocks. Moreover, with all three transmission channels active, national and international shocks play an important role in explaining macroeconomic fluctuations. The results also show that better-capitalized economies respond better to an adverse shock than economies with weaker bank capitalization. These findings underscore the importance of bank capital from an international perspective and can enrich the global debate on banking regulation.


Finally, the third essay analyzes the effect of cross-border banking activities on the propagation of national and international shocks and on the synchronization of business cycles. The results suggest that the presence of cross-border banking amplifies the effect of productivity and monetary policy shocks. However, the impact on consumption is weaker because of the smoothing opportunities afforded by cross-country deposits. Moreover, the results show that bilateral correlations between macroeconomic aggregates become larger in the presence of cross-border banking activities.


Abstract

This dissertation investigates the role of financial frictions stemming from asymmetric information in financial markets in the transmission of shocks and the fluctuations in economic activity. Chapter 1 uses targeted factor modeling to assess the contribution of national and international data to the task of forecasting provincial GDPs in Canada. Results indicate that using national and especially US-based series can significantly improve the forecasting ability of targeted factor models. This effect is present and significant at shorter-term horizons but fades away at longer-term horizons. These results suggest that shocks originating at the national and international levels are transmitted to Canadian regions, and thus reflected in the regional timeseries, fairly rapidly.

While Chapter 1 uses a non-structural, econometric model to tackle the issue of the transmission of international shocks, the last two chapters develop structural Dynamic Stochastic General Equilibrium (DSGE) models to assess spillover effects in the transmission of national and international shocks.

Chapter 2 presents an international DSGE framework with credit market frictions to assess issues regarding the propagation of national and international shocks. The theoretical framework includes the financial accelerator, bank capital and exchange rate channels. Results suggest that the exchange rate channel, which has long been ignored, plays an important role in the propagation of shocks. Furthermore, with these three channels present, domestic and foreign shocks have an important quantitative role in explaining domestic aggregates. In addition, results suggest that economies whose banks remain well-capitalized when affected by adverse shocks experience less severe downturns. These results highlight the importance of bank capital in an international framework and can be used to inform the worldwide debate over banking regulation.

In Chapter 3, I develop a two-country DSGE model in which banks grant loans to domestic as well as to foreign firms, to study the effects of these cross-border banking activities on the transmission of national and international shocks. Results suggest that cross-border banking activities amplify the transmission of productivity and monetary policy shocks. However, the impact on consumption is limited because of the cross-border saving possibilities between the countries. Moreover, results suggest that under cross-border banking, bilateral correlations become greater than in the absence of these activities. Overall, results demonstrate sizable spillover effects of cross-border banking in the propagation of shocks and suggest that cross-border banking is an important source of the synchronization of business cycles.


Contents

Résumé iii

Abstract v

Contents vii

List of Tables ix

List of Figures xi

Acknowledgments xvii

Avant-propos xix

Introduction 1

1 Forecasting Regional GDP with Factor Models: How Useful are National and International Data? 5

1.1 Introduction . . . 6

1.2 Dynamic factor model with targeted predictors . . . 7

1.3 Data description and experiments . . . 14

1.4 Design of the comparison exercise . . . 16

1.5 Results . . . 18

1.6 Concluding Remarks . . . 20

2 Bank Capital, Credit Market Frictions and International Shocks Transmission 25

2.1 Introduction . . . 26

2.2 The General Macroeconomic Environment . . . 28

2.3 Aggregation and Competitive Equilibrium . . . 42

2.4 Model Calibration . . . 44

2.5 Findings . . . 46

2.6 Concluding Remarks . . . 52

3 Cross-border Banking, Spillover Effects and International Business Cycles 63


3.1 Introduction . . . 64

3.2 Two-country DSGE Model with Financial Frictions . . . 67

3.3 Model Simulations . . . 86

3.4 Concluding Remarks . . . 94

Conclusion 103

Bibliography 105

A Bank Capital, Credit Market Frictions and International Shocks Transmission 113

A.1 General structure of the model . . . 113

A.2 Structure of goods distribution . . . 114

A.3 Proof of Proposition 1 . . . 114

B Cross-border Banking, Spillover Effects and International Business Cycles 117

B.1 Proof of Proposition 3 . . . 117

B.2 Analytical Expressions for the Variables Appearing in the Credit Contracts . . . 118


List of Tables

1.1 National and international dataset . . . 17

1.2 Forecasting Performance: hard threshold with t = 1.28 . . . 21

1.3 Forecasting Performance: hard threshold with t = 1.65 . . . 21

1.4 Forecasting Performance: hard threshold with t = 2.58 . . . 22

1.5 Forecasting Performance: soft threshold with ND = 30 . . . 22

1.6 Forecasting Performance: soft threshold with NQ = 110 and NO = 75 . . . 23

2.1 Parameter Calibration . . . 47

2.2 Steady-state values and ratios . . . 48

3.1 Correlation of change in stock prices . . . 65

3.2 Parameter Calibration: Baseline model . . . 89

3.3 Steady-state values and ratios: Baseline model . . . 90

3.4 Predicted Bilateral Correlations (home productivity shock) . . . 101


List of Figures

1.1 Structure of datasets examined . . . 16

2.1 Global economic downturns (Output and Financial index) . . . 29

2.2 IRF from a decrease in bank capital . . . 53

2.3 IRF from a negative technology shock (panel A) . . . 54

2.4 IRF from a negative technology shock (panel B) . . . 55

2.5 IRF from a monetary policy shock (panel A) . . . 56

2.6 IRF from a monetary policy shock (panel B) . . . 57

2.7 IRF from a negative foreign output shock . . . 58

2.8 IRF from a foreign monetary policy shock . . . 59

2.9 IRF from a negative technology shock . . . 60

2.10 IRF from a monetary policy shock . . . 61

2.11 IRF from a negative government spending shock . . . 62

3.1 International business cycles synchronization . . . 66

3.2 Timing of events within each country . . . 69

3.3 Chained financial contract . . . 70

3.4 Effect of an increase in the home financing relative to the foreign financing . . . 74

3.5 IRF of real variables from a home productivity shock . . . 95

3.6 IRF of financial variables from a home productivity shock . . . 96

3.7 IRF of real variables from a home monetary policy shock . . . 97

3.8 IRF of financial variables from a home monetary policy shock . . . 98

3.9 IRF of real variables from a foreign monetary shock . . . 99

3.10 IRF of financial variables from a foreign monetary shock . . . 100

A.1 General structure of the model . . . 114


To God, the Great Architect of the Universe, and to my parents, Assi Kopoin Ludovic and Yapi Chiepo Colette, for always being there for me and supporting me during hard times.


If we knew what it was we were doing, it would not be called research, would it?


Acknowledgments

I am deeply indebted to several people for the realization of this thesis. First, I would like to thank my advisor, Kevin Moran, and my co-advisor, Jean-Pierre Paré, for their excellent guidance, support, and all the cogent and critical comments they provided throughout the stages of this thesis. Professor Kevin Moran and Dr. Jean-Pierre Paré gave me the opportunity to accomplish such an educational feat, guided me at every step, and supported me patiently throughout my graduate studies, for which I cannot thank them enough. I am also grateful to all the other macroeconomics professors and senior macroeconomists at Laval University, UQÀM, University of Sherbrooke, the International Monetary Fund, the Bank of Canada, the Department of Finance of Canada, and the Ministry of Finance of Quebec for their comments and feedback on my work and seminar presentations. Special thanks to Benoît Carmichael, Sylvain Dessy, Dalibor Stevanovic, Alessandro Barattieri, Césaire Meh, Bruno Feunou, Jean-Sebastien Fontaine, Stéphane Chrétien, Rodrigo Sekkel, Jean-François Rouillard, Alexander Ueberfeldt, Raymond Fournier, Daniel Floréa, Isabelle Poulin, Fulbert Tchana Tchana, Gilles Belanger and Malik Shukayev.

I would like to thank conference participants at Laval University, UQÀM, the Bank of Canada, the Organisation for Economic Co-operation and Development (OECD), the Department of Finance of Canada, the Canadian Economics Association Conference, the Journées CIRPÉE, and the Congrès annuel de la société canadienne de science économique, as well as the discussants of my work at conferences: Dalibor Stevanovic, Alessandro Barattieri and Jean-François Rouillard. I acknowledge financial support from CIRPÉE, CIRANO and the FQRSC (Fonds québécois de recherche sur la société et la culture). This thesis has also benefited from ongoing feedback from my colleagues and relatives during my Ph.D. program. I thank, in particular, Assamoa Kopoin, François Kapuku, Marie-Claire Kashama, Pierre Valère Nketcha, Bouba Housseini, Firmin Vlavonou, Blache Akpoué, André-Marie, Ludovic Kashama, Tapé Bernadin, Yasmina Koné and Jonathan Morneau-Couture for their support and for sharpening my critical thinking skills.


Many thanks go to my sweet wife and best friend, Candice Kapuku, who has always been a great source of inspiration and whose generous encouragement has amplified my own research. I am also thankful to the entire faculty and the Economics Department. Finally, I would also like to deeply thank my parents, who offered me unfailing support and instilled in me a spirit of hard work and curiosity.


Avant-propos

The chapters of this thesis are articles that have been published in, or are to be submitted to, peer-reviewed journals.

Chapter 1 is an article written with my thesis advisor, Kevin Moran, and my co-advisor, Jean-Pierre Paré. This article, of which I am the principal author, was published in Economics Letters, vol. 121, no. 2, November 2013, pages 267–270.

The second article was also written with my thesis advisor, Kevin Moran, and my co-advisor, Jean-Pierre Paré. It is undergoing some revisions before being submitted to a peer-reviewed journal. I am the principal author of this article.

The third article, of which I am also the principal author, was written under the supervision of my thesis advisor, Kevin Moran, and my co-advisor, Jean-Pierre Paré. It too is undergoing some revisions before being submitted to a peer-reviewed journal.


Introduction

The celebrated Modigliani-Miller theorem argues that in a frictionless economy, the capital structure of firms does not affect the dynamics of the real economy. According to this theorem, real economic activity should be totally disconnected from financial activities, because financing conditions are irrelevant to investment decisions.1 However, considerable empirical evidence and the 2007-09 financial crisis have made it clear that macroeconomic models need to allocate a more prominent role to financial frictions for a better understanding of the dynamics of business cycles. Townsend (1979) and Bernanke et al. (1999) are among the earliest studies. In an international framework, empirical evidence (e.g. Iacoviello and Minetti (2006)) shows that changes in the composition of foreign and domestic credit help explain one of the important empirical facts in international business cycles: the co-movement of output across countries, which standard open-economy real business cycle models (e.g. Backus et al. (1995)) fail to predict.

The contribution of this thesis consists of assessing the role played by financial frictions in an international framework, using non-structural and structural macroeconomic models. Specifically, I investigate the extent to which financial frictions affect the propagation of national and international shocks. Chapter 1 uses a new class of non-structural models − targeted dynamic factor modeling − popularized by Stock and Watson (2002a,b) and Bai and Ng (2008), to assess the contribution of national and international data to the task of forecasting provincial GDPs in Canada. Results suggest that shocks originating at the national and international levels are transmitted to Canadian regions and thus reflected in the regional timeseries fairly rapidly. While Chapter 1 uses a non-structural, econometric model to tackle the issue of the transmission of international shocks, Chapters 2 and 3 develop a structural, Dynamic Stochastic General Equilibrium (DSGE) framework with financial frictions to assess spillover effects in the transmission of national and international shocks.

1. According to this theorem, any tightening of credit standards by banks should lead firms to substitute their financing from external to internal sources, so that investment decisions would remain unaffected.


In Chapter 2, I present an international DSGE framework with credit market frictions and an active bank capital channel to assess issues regarding the transmission of domestic and foreign shocks. In Chapter 3, I study the linkage between the size of cross-border banking activities and the international propagation of real and financial shocks, as well as the synchronization of international business cycles. Specifically, I develop a two-country DSGE model with the bank capital and financial accelerator channels, in which banks grant loans to home and foreign firms through an international credit contract. I show that under cross-border banking activities, bilateral correlations become greater than in the absence of those activities.

The introduction of financial frictions into business cycle models long predates the Great Recession. In seminal work, Townsend (1979) considers a two-agent model in which the borrower's project has exogenously random output and verification of the project is costly for the lender but costless for the borrower. In this model, the borrower has an incentive to misreport the project output, and the optimal financial contract resembles a debt contract: the borrower repays a fixed amount in good states and declares bankruptcy in bad states. Subsequent empirical evidence showed that financial factors (e.g., the decline in bank credit) played a dominant role in the dynamics of business cycles. Bernanke and Gertler (1989) develop an overlapping generations model in which financial frictions arise from costly state verification à la Townsend (1979). They show that financial frictions can enhance the propagation of productivity shocks. More specifically, shocks affecting entrepreneurial net worth (as in a debt-deflation) can initiate fluctuations.
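Concretely, the debt-like contract under costly state verification can be written as a repayment schedule. The notation below is illustrative (mine, not the thesis'): $\omega$ is realized project output, $\bar{\omega}$ the contractual repayment, and $\mu\,\omega$ the lender's verification cost.

```latex
% Sketch of the costly-state-verification repayment schedule:
% the lender verifies (and seizes output, net of the verification
% cost) only when the borrower declares default.
\[
  \text{lender receives} \;=\;
  \begin{cases}
    \bar{\omega}, & \omega \ge \bar{\omega}
      \quad \text{(fixed repayment, no verification)} \\[4pt]
    (1-\mu)\,\omega, & \omega < \bar{\omega}
      \quad \text{(default: verify and seize output)}
  \end{cases}
\]
```

Because verification happens only in default, the borrower has no incentive to misreport output in good states, which is why the optimal contract resembles standard debt.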

Carlstrom and Fuerst (1997) embed the core model of Bernanke and Gertler (1989) into a computable general equilibrium framework and analyze the quantitative effects of financial frictions on business fluctuations. Bernanke et al. (1999) incorporate the dynamic structure of Carlstrom and Fuerst (1997) into a new Keynesian framework with sticky prices and study the effects of monetary shocks on business cycles. They find that financial frictions help explain both the strength of the economy's response to monetary policy and the tendency for policy effects to persist even after interest rates have returned to normal, as commonly observed in VAR analyses.

However, these studies focus on asymmetric information on the demand side of credit markets only. Building on the principal-agent setting of Holmstrom and Tirole (1997), Chen (2001) analyzes the dynamic interactions among bank capital, asset prices and economic activity. In these models, entrepreneurs can choose among projects with different riskiness and finance their projects using bank loans. At the same time, banks can monitor only some of the entrepreneurs' project choices.


Thus, entrepreneurial net worth is essential for loans and total investment. As the monitoring activities of banks are costly and unobservable, banks must keep a minimum amount of their own capital in order to assure their depositors that they will monitor the entrepreneurs' projects. In the event of a negative productivity shock, a fall in asset prices affects both bank capital and entrepreneurial net worth. Thus, bank loans and entrepreneurs' investment are squeezed by a higher bank capital-asset ratio for lending and a stricter collateral requirement for borrowing.

More recently, Smets and Wouters (2007), Christiano et al. (2010), Dib (2010) and Meh and Moran (2010) embed the Holmstrom and Tirole (1997) framework within a dynamic general equilibrium model. In these models, tighter monetary policy raises the opportunity cost of banks' external funds, and this increase is passed on to borrowers' investment costs. Banks seek to reduce this effect by asking borrowers to finance their projects with more of their own net worth. Bank lending must therefore decrease to satisfy this market condition, leading to a contraction in economic activity.

In Chapter 2, our setting of the financial contract is based on Meh and Moran (2010) and Dib (2010), in which banks have an incentive to monitor the entrepreneurs' projects. In these papers, there are two main ways in which financial markets are linked to real economic activity: the financial accelerator mechanism and the bank capital channel. One contribution of Chapter 2 is to identify the exchange rate channel as another important transmission channel. I show that the exchange rate channel, together with the financial accelerator and the bank capital channel, has an important quantitative role in explaining domestic business cycle fluctuations.

One important contribution of Chapter 3 is to reconcile three strands of the literature: cross-border banking activities, international business cycle fluctuations, and international business cycle synchronization. Devereux and Yetman (2010) study a two-country economy in which investors hold assets in the domestic and the foreign country but are exposed to leverage constraints. They find that if international financial markets are highly integrated, productivity shocks will be propagated through investors' financial portfolios. In turn, this generates a strong output co-movement between the two countries. Mendoza and Quadrini (2010) consider a two-country model with a different degree of financial development in each country, as captured by households' ability to insure against income shocks. They investigate cross-country spillover effects of shocks to bank capital.


Kollmann et al. (2011) consider a two-country environment with a global banking sector. When a shock erodes the capitalization of global banks, it reduces credit supply and depresses economic activity in both countries. In particular, banks' losses raise bank intermediation costs in both countries, triggering synchronized business fluctuations. Ueda (2012) constructs a two-country model in which financial intermediaries stipulate chained credit contracts domestically and abroad (that is, they engage in cross-border lending by undertaking cross-border borrowing from investors). His analysis reveals that negative shocks to one country propagate to the other, strengthening international co-movement.

I calibrate the model in Chapter 3 to match U.S. and Canadian data. I show that following a positive technology shock or a tightening of home monetary policy, cross-border banking activities tend to amplify the transmission of the shock in both the domestic and the foreign country. Moreover, results suggest that under cross-border banking, predicted bilateral correlations become greater than in the absence of international banking activities. Finally, results show sizable spillover effects of cross-border banking on the dynamics of shock propagation and the synchronization of business cycles between the U.S. and Canada.

The remainder of this dissertation is organized as follows. In Chapter 1, I use targeted factor modeling to assess the contribution of national and international data to the task of forecasting provincial GDPs in Canada. Chapter 2 presents an international DSGE framework with credit market frictions and an active bank capital channel to assess issues regarding the transmission of domestic and foreign shocks. In Chapter 3, I study the linkage between the size of cross-border banking and the international propagation of real and financial shocks.


Chapter 1

Forecasting Regional GDP with Factor Models: How Useful are National and International Data?

Abstract

We assess the contribution of national (country-wide) and international data to the task of forecasting the real GDP of Canadian provinces. Using the targeted predictors approach of Bai and Ng (2008) [Journal of Econometrics 146:304-317], we find that larger datasets containing regional, national and international data help improve forecasting accuracy for horizons below the one-year-ahead mark, but that beyond that horizon, relying on provincial data alone produces the best forecasts. These results suggest that shocks originating at the national and international levels are transmitted to Canadian regions and reflected in the regional timeseries fairly rapidly.

JEL Classification: C33, C53, C83.


1.1 Introduction

Forecasting regional economic timeseries is an important policy goal for provincial and state agencies, or for regional branches of central banks. To pursue that goal, data from the same region as the variables to be predicted constitute a natural first source of information, but national (country-wide) and international data could also prove important.

Does regional forecasting require all these data? This question is relevant because incorporating the information contained in several hundred regional, national, and international timeseries may prove computationally intensive. Further, even factor models, specifically designed to manage large numbers of potential predictors, have been shown to lose forecasting ability in some settings where additional timeseries are incorporated into already large datasets (Boivin and Ng, 2006). To analyze this question, this paper uses a targeted factor modeling approach to assess whether national and international data help forecast provincial GDPs in Canada. The targeted factor approach, introduced by Bai and Ng (2008), preselects timeseries before estimating the common factors used to forecast. It is less susceptible to the problems documented in Boivin and Ng (2006) because it focuses on timeseries likely to contain relevant information for the variable one wishes to forecast.

We conduct several forecasting exercises for the real GDP of two Canadian provinces.1 We start by using only provincial timeseries to forecast GDP. Next, we add data from other provinces, from Canada as a whole, and from the United States, and compare forecasting ability to the one achieved using only provincial data at each step.
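The stepwise comparison just described is a pseudo-out-of-sample exercise. The sketch below (Python, with toy data and two hypothetical forecast rules of my own, not the paper's actual factor models) illustrates the expanding-window RMSE comparison that underlies such exercises:

```python
import numpy as np

def rmse(errors):
    return np.sqrt(np.mean(np.square(errors)))

def pseudo_oos_rmse(y, forecast_fn, h=1, first_train=40):
    """Expanding-window pseudo-out-of-sample evaluation: at each date t,
    forecast_fn sees only data up to t and predicts y_{t+h}."""
    errors = [y[t + h] - forecast_fn(y[:t + 1])
              for t in range(first_train, len(y) - h)]
    return rmse(np.array(errors))

# Toy persistent series; compare a no-change forecast with the sample mean
rng = np.random.default_rng(2)
y = np.zeros(160)
for t in range(1, 160):
    y[t] = 0.9 * y[t - 1] + rng.standard_normal()

naive_rmse = pseudo_oos_rmse(y, lambda hist: hist[-1])    # random-walk rule
mean_rmse = pseudo_oos_rmse(y, lambda hist: hist.mean())  # unconditional mean
print(naive_rmse < mean_rmse)  # persistence favors the no-change rule here
```

In the paper's setting, the two forecast rules would instead be factor models estimated on the smaller and larger datasets, and the RMSEs compared at each horizon.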

Our results indicate that national and US series can significantly improve the forecasting ability of targeted factor models for Canadian provincial GDPs. This effect is present only at short-term horizons, however: beyond the one-year-ahead mark, relying on provincial data alone produces the best forecasts. These results suggest that shocks originating at the national and international levels are transmitted to Canadian regions fairly rapidly, and thus regional timeseries encompass all relevant information within a short period following a shock.

Although forecasting research typically emphasizes national variables, a small but growing literature studies how best to predict important regional data. Among those, Rapach and Strauss (2010, 2012), for the United States, examine employment forecasting at the state level; Lehman and Wohlrabe (2012), for Germany, study regional GDPs; and Kwan and Cotsomitis (2006), for Canada, analyze provincial household spending. Our paper provides an important contribution to this growing literature, and our results about the significance of national and international data may guide future research on this theme.

1. The two provinces, Ontario and Quebec, are economically the most important in Canada. We focus our analysis on these provinces because of the limited availability of quarterly real GDP measures for other provinces.

Our analysis is also related to contributions that analyze the role of international data in forecasting country-level data (Cheung and Demers, 2007; Schumacher, 2010; Eickmeier and Ng, 2011). These papers find that using larger datasets that include international data can help improve factor models' ability to forecast aggregates like national GDP. Our work extends the scope of these results by showing that regional forecasting can also benefit from the use of larger datasets that include countrywide and international data.2

The remainder of this paper is organized as follows. Section 1.2 describes the targeted factor approach. Section 1.3 presents the data used and our forecasting experiments. Section 1.4 details the design of the comparison exercise. Section 1.5 presents our main findings, while Section 1.6 concludes.

1.2 Dynamic factor model with targeted predictors

This section sets up the theoretical framework of the dynamic factor model and the shrinkage methods used to select the targeted predictors. The first subsection provides the general formalization of the dynamic factor model; in the subsequent subsections, we present the targeting methods and the estimation methodology we employ.

1.2.1 Dynamic factor model

Influential papers by Stock and Watson (2002a,b) have helped popularize the use of factor models for forecasting. These models synthesize the information contained in a large number of timeseries into a few factors, which are then used to help forecast the variable of interest. They have been shown to possess significant forecasting ability in a wide variety of settings and are now part of the forecaster's standard toolkit.

Consider $T$ timeseries observations for $N$ cross-section units, which we denote by $X_t = [X_{1t} \cdots X_{nt} \cdots X_{Nt}]'$. In this formalization, $X_t$ is an $N$-dimensional multiple timeseries ($N \times 1$ vector) of predictor variables, observed for $t = 1, \dots, T$.

We are interested in $y_{T+h}$, the $h$-step-ahead out-of-sample forecast of $y_t$, which may or may not be part of $X_t$. Let $X = [X_1 \cdots X_t \cdots X_T]'$ be the $T \times N$ matrix of observed variables.

2. Conversely, an interesting literature exemplified by Hernandez-Murillo and Owyang (2006) analyzes how information contained in regional data can help forecast countrywide aggregates.


Assumption [1] $X_{nt}$ is a finite realization of a real-valued stochastic process $X = \{X_{it},\, i \in \mathbb{N},\, t \in \mathbb{N}\}$ indexed by $\mathbb{N} \times \mathbb{N}$, where the $N$-dimensional vector process $X_t$ is stationary and standardized, with finite first and second moments for any $N$.

Under Assumption 1, and following Geweke (1977), Sargent and Sims (1977), and Forni et al. (2005), the dynamic factor model has the following representation:
\[
X_{it} = \lambda_{i0}' f_t + \lambda_{i1}' f_{t-1} + \cdots + \lambda_{iq}' f_{t-q} + e_{it} = \sum_{s=0}^{q} \lambda_{is}' f_{t-s} + e_{it}, \tag{1.2.1}
\]
where $X_{it}$ is the observed data for the $i$th cross-section at time $t$ ($i = 1, \dots, N$; $t = 1, \dots, T$), $f_t$ is an $r \times 1$ vector of common factors, $\lambda_{is}$ is an $r \times 1$ vector of loadings and $e_{it}$ is the idiosyncratic component of $X_{it}$. The scalars $r$ and $q$ represent the estimated number of factors and the number of associated lags. Using the $N$-dimensional timeseries form with $T$ observations, equation (1.2.1) can be rewritten as

\[
X_t = \Lambda F_t + e_t, \qquad t = 1, 2, \dots, T, \tag{1.2.2}
\]

where $\Lambda = [\Lambda_0, \Lambda_1, \dots, \Lambda_q]$, $e_t = (e_{1t}, e_{2t}, \dots, e_{Nt})'$ and $\Lambda_j$ is given by:
\[
\Lambda_j = \left[ (\lambda_1^j)', (\lambda_2^j)', \dots, (\lambda_N^j)' \right] =
\begin{pmatrix}
\lambda_{11}^j & \lambda_{12}^j & \cdots & \lambda_{1r}^j \\
\lambda_{21}^j & \lambda_{22}^j & \cdots & \lambda_{2r}^j \\
\vdots & \vdots & \ddots & \vdots \\
\lambda_{N1}^j & \lambda_{N2}^j & \cdots & \lambda_{Nr}^j
\end{pmatrix} \in M_{N \times r}(\mathbb{R}), \qquad 1 \le j \le q.
\]

In the same way, $F_t$ is given by:
\[
F_t = [f_t, f_{t-1}, \dots, f_{t-q}]' =
\begin{pmatrix}
(f_{1t} \; f_{1t-1} \; \cdots \; f_{1t-q})' \\
(f_{2t} \; f_{2t-1} \; \cdots \; f_{2t-q})' \\
\vdots \\
(f_{rt} \; f_{rt-1} \; \cdots \; f_{rt-q})'
\end{pmatrix} \in M_{r \times (q+1)}(\mathbb{R}).
\]

We also use the matrix notation to set out the representation of the dynamic factor model, since it is often more convenient. For this purpose, we use the following mild technical assumption, similar to Assumption [1]:

Assumption [2] Let $F_t$ and $\Lambda$ respectively be the matrix form of the dynamic common factors and the loadings matrix. Then, we assume that $\frac{1}{T} \sum_{t=1}^{T} F_t F_t' \xrightarrow{p} \Sigma_F$ as $T \to \infty$, for a positive definite matrix $\Sigma_F$, and $\Lambda'\Lambda / N \xrightarrow{p} \Sigma_\Lambda$ as $N \to \infty$, for an $r(q+1) \times r(q+1)$ positive definite non-random matrix $\Sigma_\Lambda$.

Under Assumption [2], the matrix form of the dynamic factor model is given by:
\[
X = F \Lambda' + e, \tag{1.2.3}
\]
where $\Lambda$ and $F$ are matrices of size $N \times r(q+1)$ and $T \times r(q+1)$ respectively. These unobservable common factors are often interpreted as the driving forces in the economy. Under the form (1.2.3), each dataset ($T \times N$ matrix) can be represented as the sum of two components: a common component, driven by a few factors shared by all variables in the dataset, and an idiosyncratic component specific to each variable.
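In practice, the factors in the static form above are typically estimated by principal components, and the estimated factors then enter a forecasting regression for $y_{t+h}$, as in Stock and Watson (2002a,b). The sketch below is a minimal Python illustration with toy data (function names and the simulated panel are mine, not the chapter's code):

```python
import numpy as np

def pc_factors(Z, r):
    """Principal-component estimates for the static form X = F Lambda' + e.

    Z is a T x N standardized panel. Returns F (T x r), normalized so that
    F'F / T = I_r, and the implied loadings Lambda = Z'F / T (N x r).
    """
    T = Z.shape[0]
    eigvals, eigvecs = np.linalg.eigh(Z @ Z.T)
    idx = np.argsort(eigvals)[::-1][:r]        # keep the r largest eigenvalues
    F = np.sqrt(T) * eigvecs[:, idx]
    return F, Z.T @ F / T

def diffusion_index_forecast(y, X, r=1, h=1):
    """Two-step forecast: extract factors, then regress y_{t+h} on F_t."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each series
    F, _ = pc_factors(Z, r)
    reg = np.column_stack([np.ones(len(F)), F])
    beta, *_ = np.linalg.lstsq(reg[:-h], y[h:], rcond=None)
    return reg[-1] @ beta                      # forecast of y_{T+h}

# Toy panel driven by one persistent common factor; y inherits its dynamics
rng = np.random.default_rng(3)
T, N = 120, 40
f = np.zeros(T)
for t in range(1, T):
    f[t] = 0.8 * f[t - 1] + rng.standard_normal()
X = np.outer(f, rng.standard_normal(N)) + 0.5 * rng.standard_normal((T, N))
y = f + 0.2 * rng.standard_normal(T)
fcst = diffusion_index_forecast(y, X, r=1, h=1)
print(np.isfinite(fcst))
```

The eigendecomposition of the $T \times T$ matrix $ZZ'$ is the convenient route when $T < N$; the factors are identified only up to sign and rotation, which is immaterial for forecasting.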

Our interest in the dynamic factor model is to assess the role of international information when forecasting Canadian provincial GDPs. For this purpose, we follow refinements to the standard factor method that aim to reduce the influence of uninformative predictors.

1.2.2 The targeted predictors methods

Since factor model theory was developed for a large number of observable variables and observations, the common view is to use as much data as available. Under this approach, the dataset with the largest possible number of potential predictors could, at first, be expected to produce the best estimates of the common factors $F_t$ and thus the superior forecasts. However, note that in the standard dynamic factor model, factors are extracted from the same large dataset regardless of the variable to be forecasted. This methodology does not, therefore, take into account the specificity of each variable of interest and may exclude potential predictors that are particularly relevant for a specific variable but have limited explanatory power for the whole dataset. This inconsistency was first documented in Boivin and Ng (2005) and Boivin and Ng (2006), who showed that expanding the sample size simply by adding data that bear little information about the common components $F_t$ does not necessarily improve forecasts. Consequently, a factor model estimated from a large set of variables may be dominated by another estimated from a smaller set of well-chosen predictors. Taking this effect into account, we discuss and implement three classes of procedures to preselect the targeted subset of variables.3

3. In real-time forecasting exercises, Boivin and Ng (2006) show that factors extracted from as few as 40 pre-screened time series often yield better results than using all 147 series in their database.


The statistical shrinkage method

The implementation of a statistical shrinkage method in a static factor model has already been the subject of several studies in the medical sciences, in particular by Bair et al. (2006) in a study on cancer. We consider the implementation of the statistical shrinkage method used in Bai and Ng (2008). This statistical method, also known as hard thresholding, uses a formal statistical test to select the best subset of predictors for each target variable and each forecasting horizon.

Consider the following unrestricted linear regression equation for each forecasting horizon $h$:

$$Y_{t+h} = \alpha_h + \sum_{i=1}^{N} \beta_i^h X_{it} + \sum_{k=1}^{p} \rho_k^h Y_{t-k} + \epsilon_{t+h}, \qquad k \in \{1, 2, 3, \dots, p\}, \qquad (1.2.4)$$

and denote by $\hat\alpha_h$, $\hat\beta_h$ and $\hat\rho_h$ the ordinary least squares (OLS) estimates of $\alpha_h$, $\beta_h$ and $\rho_h$, and let $t_i(\beta_i^h)$ be the t-statistic for the null hypothesis that $\beta_i^h$ is zero in the unrestricted regression model. Based on the value of $t_i(\beta_i^h)$, the statistical shrinkage method determines whether the $i$th variable should be included in the pool used to extract the principal components.

The following five-step algorithm describes the implementation procedure.

Step 1. For each $i = 1,\dots,N$, perform the regression (1.2.4) of the variable of interest $Y_{t+h}$ on $W_{t-p}$ and the observable variable $X_{it}$. In practice, $W_{t-p}$ includes a constant and four lags of the variable of interest $Y_t$. For each regression, compute $t_i(\beta^h)$, the t-statistic associated with the estimated coefficient of $X_{it}$.

Step 2. Rank the marginal contribution, or marginal predictive power, of each observable $X_{it}$ by sorting $|t_1(\beta^h)|, |t_2(\beta^h)|, \dots, |t_N(\beta^h)|$ for each $h$.

Step 3. Let $Z_\tau^*$ be the number of series whose $|t_i(\beta^h)|$ exceeds a threshold significance level $\tau$.

Step 4. Let $X_t(\tau) = \{X_{[1t]}, X_{[2t]}, \dots, X_{[Z_\tau^* t]}\}$ be the corresponding set of $Z_\tau^*$ targeted predictors. Then, estimate the common factors from $X_t(\tau)$ by the method of principal components analysis.

Step 5. After selecting the number of the common factors as well as their lags, estimate the forecasting equation (1.2.4).


In the implementation of the statistical shrinkage, we specify three thresholds to select the set of informative variables, as in Bai and Ng (2008): the first threshold retains variables with $|t| > 1.28$, the second those with $|t| > 1.65$, and the third those with $|t| > 2.58$. These cutoff points are the critical values of the two-tailed $t$ test at the 10%, 5% and 1% levels.
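The hard-thresholding rule above can be sketched as follows, using simple univariate regressions to compute each candidate predictor's t-statistic (the data, variable names and signal strengths are illustrative, not those of the chapter):

```python
import math
import random

random.seed(1)
n = 100

# illustrative data: y depends on x1 only; x2 is uninformative noise
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * x1[i] + random.gauss(0, 0.5) for i in range(n)]

def t_stat(x, y):
    """t-statistic of the slope in a univariate OLS regression with intercept."""
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((x[i] - mx) * (y[i] - my) for i in range(n))
    beta = sxy / sxx
    resid = [y[i] - my - beta * (x[i] - mx) for i in range(n)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)  # residual variance
    return beta / math.sqrt(s2 / sxx)

# hard thresholding: keep only predictors whose |t| exceeds the cutoff tau
tau = 2.58
candidates = {"x1": x1, "x2": x2}
selected = [name for name, x in candidates.items() if abs(t_stat(x, y)) > tau]
```

The informative series survives the cutoff; factors would then be extracted by principal components from the `selected` pool only.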

Consider the stochastic selection matrix $S_N$ that preselects only the potential predictors with a t-statistic higher than the fixed threshold $\tau$. Algebraically, the stochastic selection matrix $S_N$ is defined as:

$$S_N^h(\beta) = \begin{pmatrix} I(|t_1(\beta_1^h)| > \tau) & 0 & \cdots & 0 \\ 0 & I(|t_2(\beta_2^h)| > \tau) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & I(|t_N(\beta_N^h)| > \tau) \end{pmatrix} \qquad (1.2.5)$$

Note that with the stochastic selection matrix, the state-space representation of the targeted dynamic factor model may be expressed as:

$$X \times S_N^h(\beta) = \tilde F \tilde\Lambda' + \tilde e, \qquad (1.2.6)$$

where $\tilde F$ and $\tilde\Lambda$ are the factor and loadings matrices obtained by principal component analysis from the subspace of relevant variables.

The weakness of this statistical shrinkage method is that the informative pool of variables is selected with a discrete decision rule. As a consequence, the method can be sensitive to small changes in the data or in the value of the shrinkage threshold. The following selection methods (LASSO and LARS), also known as soft thresholding methods, perform the subset selection simultaneously.

The least absolute shrinkage and selection operator (LASSO)

The LASSO method, proposed by Tibshirani (1996), is a popular technique for model selection and estimation in linear regression models. It drops uninformative predictors via a penalized regression and is widely used in the statistics literature. Denote by $\beta_{LASSO}$ the estimates obtained from a regression of the target variable $Y^h$ on all available potential regressors. The LASSO method solves the following quadratic programming problem:

$$\hat\beta_{LASSO} = \arg\min_\beta \frac{1}{2}\|Y^h - X\beta\|_2^2 \quad \text{subject to} \quad \sum_{j=1}^{N} |\beta_j| < \tau, \qquad (1.2.7)$$


where $\tau > 0$ is the tuning parameter that controls the amount of shrinkage. In the statistical literature, the LASSO estimator is a special case of the bridge estimators proposed by Hoerl and Kennard (1970), which solve the following general programming problem:

$$\hat\beta_{Bridge} = \arg\min_\beta \frac{1}{2}\|Y^h - X\beta\|_2^2 \quad \text{subject to} \quad \sum_{j=1}^{N} |\beta_j|^\eta < \tau. \qquad (1.2.8)$$

Algebraically, the LASSO estimator employs an $L_1$-type penalty function ($\eta = 1$) on the regression coefficients, which tends to produce sparse models. When we consider an $L_2$-type penalty function instead, we obtain the well-known ridge estimator. Hence, the main difference between the ridge and LASSO estimators is the use of an $L_2$-type rather than an $L_1$-type penalty function. As a consequence, the shrinkage under LASSO can set some estimates exactly to zero, whereas ridge shrinks coefficients towards zero but never sets them to zero exactly. As discussed in Bai and Ng (2008), a large value of $\eta$ tends to favour models with regression coefficients of small but non-zero values, or coefficients with small absolute values from a short-tailed density, while a small value of $\eta$ favours models either with many coefficients set to zero or from a long-tailed density. Since the LASSO penalty function is convex, but not strictly convex, it tends to pick one of a group of correlated predictors without taking the rest into account.
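The role of the $L_1$ penalty in producing exact zeros can be illustrated with a textbook coordinate-descent solver for the Lagrangian form of (1.2.7) (a minimal sketch with illustrative data and penalty value, not the estimator used in the chapter):

```python
import random

def soft_threshold(z, gamma):
    """The soft-thresholding operator behind the L1 penalty."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2)||y - X b||^2 + lam * sum_j |b_j|."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the partial residual (j left out)
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                      for k in range(p) if k != j)) for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

random.seed(2)
n = 100
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
y = [2.0 * X[i][0] + random.gauss(0, 0.5) for i in range(n)]

beta = lasso_cd(X, y, lam=60.0)
```

With a sufficiently large penalty, the coefficient on the uninformative second column is set exactly to zero, while a ridge ($L_2$) penalty would only shrink it towards zero.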

The least angle regression (LARS)

The Least Angle Regression (LARS) relates to the classic model-selection method known as forward stagewise selection, described in Efron et al. (2004). Here, we briefly describe the procedure; for more details, see Efron et al. (2004).

The LARS method, or forward stagewise selection method, is an iterative procedure whereby the $(k+1)$th potential predictor is added to the informative set of predictors if it has the maximum correlation with the residual vector from the $k$th step. After $k$ steps, this results in a set of $k$ informative predictors that are then used to perform the principal components analysis. Let $\hat\phi_k = X_{[1,\dots,k]}\hat\beta$ be the current estimate of $Y$ with the first $k$ selected predictors, and denote by $\hat c = X'(Y - \hat\phi_k)$ the current correlation.

Formally, we start with all parameters equal to zero, select the predictor having the largest absolute correlation with the target variable $Y$, say $X_j$, and perform a simple linear regression of $Y$ on $X_j$. This leaves a residual vector orthogonal to $X_j$, and we repeat the selection process. The next step is taken in the direction of the greatest correlation between the current residual and the $N - k$ remaining predictors.


Define the active set $\mathcal{A}$ as the set of indices corresponding to variables with the largest absolute correlations:

$$\hat C = \max_j \{|\hat c_j|\} \quad \text{and} \quad \mathcal{A} = \left\{ j : |\hat c_j| = \hat C \right\}.$$

We also define the active matrix corresponding to the active set $\mathcal{A}$ as

$$X_{\mathcal{A}} = (s_j X_j)_{j \in \mathcal{A}}, \quad \text{where } s_j = \mathrm{sign}(\hat c_j).$$

Let $G_{\mathcal{A}} = X_{\mathcal{A}}' X_{\mathcal{A}}$ and $\Delta_{\mathcal{A}} = \left(1_{\mathcal{A}}' G_{\mathcal{A}}^{-1} 1_{\mathcal{A}}\right)^{-1/2}$, where $1_{\mathcal{A}}$ is a vector of ones of length $|\mathcal{A}|$, the size of $\mathcal{A}$. A unit equiangular vector with the columns of the active set matrix $X_{\mathcal{A}}$ can be defined as

$$\mu_{\mathcal{A}} = X_{\mathcal{A}} \varpi_{\mathcal{A}}, \quad \text{where } \varpi_{\mathcal{A}} = \Delta_{\mathcal{A}} G_{\mathcal{A}}^{-1} 1_{\mathcal{A}},$$

so that $X_{\mathcal{A}}' \mu_{\mathcal{A}} = \Delta_{\mathcal{A}} 1_{\mathcal{A}}$ and $\|\mu_{\mathcal{A}}\|_2 = 1$. For the next step, LARS updates $\hat\phi$ as

$$\hat\phi = \hat\phi + \hat\gamma \mu_{\mathcal{A}},$$

where $\hat\gamma$ is the smallest positive number such that one and only one new index joins the active set $\mathcal{A}$. Following Efron et al. (2004), $\hat\gamma$ is given by

$$\hat\gamma = \min_{j \in \mathcal{A}^c}{}^{+} \left\{ \frac{\hat C - \hat c_j}{\Delta_{\mathcal{A}} - a_j}, \frac{\hat C + \hat c_j}{\Delta_{\mathcal{A}} + a_j} \right\},$$

where $\min^+$ indicates that the minimum is taken over only positive components within each choice of $j$, and $a_j$ is the $j$th component of the vector $a = X'\mu_{\mathcal{A}}$.
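The selection logic can be sketched with its simpler forward-selection cousin, which at each step adds the predictor most correlated in absolute value with the current residual (this omits the equiangular update of full LARS; the data are illustrative):

```python
import math
import random

def corr(a, b):
    """Sample correlation between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    sab = sum((a[i] - ma) * (b[i] - mb) for i in range(n))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((x - mb) ** 2 for x in b))
    return sab / (sa * sb)

def forward_select(X_cols, y, k):
    """Greedy forward selection: repeatedly add the predictor with the
    largest absolute correlation with the current residual."""
    n = len(y)
    resid = list(y)
    active = []
    for _ in range(k):
        cors = [abs(corr(x, resid)) if j not in active else -1.0
                for j, x in enumerate(X_cols)]
        j_best = cors.index(max(cors))
        active.append(j_best)
        # regress the residual on the chosen column and keep the new residual
        x = X_cols[j_best]
        mx, mr = sum(x) / n, sum(resid) / n
        b = sum((x[i] - mx) * (resid[i] - mr) for i in range(n)) / \
            sum((xi - mx) ** 2 for xi in x)
        resid = [resid[i] - mr - b * (x[i] - mx) for i in range(n)]
    return active

random.seed(3)
n = 120
x_cols = [[random.gauss(0, 1) for _ in range(n)] for _ in range(3)]
y = [3.0 * x_cols[1][i] + random.gauss(0, 0.5) for i in range(n)]

active = forward_select(x_cols, y, k=2)
```

The truly informative column is picked first; the resulting active set would then feed the principal components step.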

1.2.3 Estimation methods and the optimal number of factors

After the selection step of the targeted predictors, the common factor matrix is estimated along with the loadings matrix. Stock and Watson (2002a), Forni et al. (2005) and several other authors use the principal component analysis method to estimate the loadings matrix and the common factors. In their papers, they show that the common factors can always be estimated by asymptotic principal component analysis when a large number of observations is available. In this framework, the size of the available data is relatively large; therefore, following the result


in Stock and Watson (2002b), we use a principal components-ordinary least squares estimator to estimate the factors and the loadings matrix. In this case, the maximal number of factors that can be estimated using this method is $\min\{N_1, T\}$, where $N_1$ is the number of relevant variables and $T$ the number of observations of the observable variables. This method involves the eigenvalue and eigenvector decomposition of the spectral density matrix of $X_t$. The spectral density matrix of $X_t$, estimated over the frequencies $-\pi < \omega < \pi$, can be decomposed into the spectral density of the unobservable common factors and that of the idiosyncratic component. The decomposition of the spectral density matrix of $X_t$ is given by

$$\Sigma(\omega) = \Sigma_X(\omega) \oplus \Sigma_e(\omega), \qquad (1.2.9)$$

where $\Sigma_X(\omega) = \lambda(e^{-i\omega}) \Sigma_F \lambda(e^{-i\omega})'$ is the spectral density matrix of the unobservable common factors. The principal components-ordinary least squares estimators of the factors and the loadings $\Lambda$ are the matrices that minimize the following residual sum of squares:

$$V(s) = \min_{\Lambda, F} \left\{ \sum_{i=1}^{N_1} \sum_{t=1}^{T} (X_{it} - \lambda_i' F_t)^2 \;\middle|\; F'F/T = I_s, \; s = r(q+1) \right\}, \qquad (1.2.10)$$

The principal components-ordinary least squares method is computationally convenient, even for very large $N_1$. Moreover, it can be generalized to accommodate some data irregularities, such as missing observations, using the expectation-maximization (EM) algorithm. According to Stock and Watson (2002a), the system should then be expressed in state-space form and estimated with a Kalman filter. As formulated in the minimization program, the estimated factors are linear combinations of the variables in a selected subspace of relevant variables, where the coefficients can be positive or negative, reflecting the sign of the correlation between the variable of interest and each factor. In practice, factors are extracted sequentially: the first factor is the one that explains most of the variability of the data, the second explains the largest share of the remaining variability, and so on.
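The principal-component step can be sketched with a power iteration that extracts the leading eigenvector of $X'X$, a deliberately minimal stand-in for the full principal components-OLS estimation (the data are illustrative):

```python
import math
import random

def leading_loading(X, n_iter=200):
    """Power iteration on the p x p matrix X'X: returns the first
    principal-component loading vector (up to sign)."""
    n, p = len(X), len(X[0])
    # S = X'X
    S = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
         for a in range(p)]
    v = [1.0] * p
    for _ in range(n_iter):
        w = [sum(S[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

random.seed(4)
n = 150
# two series driven by the same factor -> near-equal loadings
f = [random.gauss(0, 1) for _ in range(n)]
X = [[f[i] + random.gauss(0, 0.05), f[i] + random.gauss(0, 0.05)]
     for i in range(n)]

v = leading_loading(X)
# the estimated factor is the corresponding linear combination of the data
factor = [X[i][0] * v[0] + X[i][1] * v[1] for i in range(n)]
```

The estimated factor is exactly the kind of linear combination of the (selected) variables described above, with weights given by the loading vector.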

1.3 Data description and experiments

We study regional data from two economically important Canadian provinces, Quebec and Ontario. For these provinces, the "regional-only" datasets contain 373 and 70 time series, respectively, covering the sample 1983Q1 to 2011Q1 at a quarterly frequency. In each dataset, the available time series include the real GDP for each province, as well as various real activity indicators, monetary and financial indicators, GDP components on the expenditure side, and


retail trade and price indicators. We also use aggregate (country-wide) data for Canada and international (US) data, arranged in two additional datasets containing 480 and 199 time series, respectively. The aggregate datasets also include various GDP components, price, employment, and financial data. The four separate datasets are labeled Qc, On, Ca, and Us. With these four subsets of regional, national and international data, we construct five subsamples of data comprising all the combinations including Quebec and Ontario's macroeconomic data, as given in Figure 1.1. The distribution of series by group and by region is given in Table 1.1. Since our goal is to assess the relative contribution of international information in forecasting provincial GDPs, we introduce the concept of a qualitative selection method.

Contrary to the statistical selection methods proposed by Bai and Ng (2008), the qualitative selection method that we propose involves keeping in the informative dataset only the variables that are geographically close to the target region. This qualitative selection method allows us to aggregate our five subsamples of data into three clusters. For the first cluster, the qualitative selection method consists of keeping only the regional macroeconomic data in the informative dataset. The second cluster consists of adding to the regional data only one block of national or international data. The last cluster comprises data from Quebec, Ontario and Canada, as described in Figure 1.1.

After the clustering step, a transformation of the data is standard in order to work with stationary factor models. Indeed, as pointed out by Nelson and Plosser (1982), many of the macroeconomic time series used contain $I(1)$ components, and following Marcellino et al. (2003), the data are preprocessed in three stages before being used for the state-space representation. First, all the series are seasonally adjusted using a linear approximation to X-11. Second, the series are screened for large outliers and each outlying observation is recoded as missing data. Augmented Dickey-Fuller, Phillips-Perron and KPSS tests are performed to suitably transform all the series. More generally, after determining the integration order of all variables, series are transformed according to their order of integration.4 Overall, all series are seasonally adjusted, screened for outliers and appropriately differenced to obtain stationary time series.

After these transformations, series are further standardized to have a sample mean of zero and a sample variance of 1. The standardization follows the procedure $\hat X_t = (X_t - \bar X)/\sigma_X$, where $\bar X$ and $\sigma_X$ denote respectively the mean and the standard deviation of the variable $X$. Standardization is standard in the estimation of dynamic factor models and puts the series on the same scale by ignoring the specific dimension of each variable. To obtain the predicted value of the variable of interest, we simply apply the opposite transformation to the forecasted standardized variable, i.e. $X_t = \sigma_X \hat X_t + \bar X$.
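The standardization and its inverse transformation can be written directly; the following sketch (with illustrative values) checks the round trip $\hat X_t = (X_t - \bar X)/\sigma_X$ and $X_t = \sigma_X \hat X_t + \bar X$:

```python
import math

def standardize(x):
    """Return z-scores together with the sample mean and standard deviation."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    return [(v - mean) / sd for v in x], mean, sd

def unstandardize(z_hat, mean, sd):
    """Map a forecast made on the standardized scale back: X = sd * z + mean."""
    return [sd * z + mean for z in z_hat]

x = [2.0, 4.0, 6.0, 8.0]
z, mean, sd = standardize(x)
x_back = unstandardize(z, mean, sd)
```

In the forecasting exercise, the inverse map is applied to the model's standardized forecasts rather than to the in-sample values, but the transformation is identical.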

Figure 1.1: Structure of datasets examined

First cluster (benchmarks): {Qc} {On}
Second cluster: {Qc+Ca} {Qc+On} {Qc+Us} {On+Qc} {On+Ca} {On+Us}
Third cluster: {Qc+Ca+On} {On+Qc+Ca}

Notes: In this data scheme, Qc ≡ Quebec economic indicators, Ca ≡ Canada economic indicators, On ≡ Ontario economic indicators, Qc+Ca ≡ Quebec and Canada economic indicators, Qc+On ≡ Quebec and Ontario economic indicators, Qc+Us ≡ Quebec and US economic indicators, Qc+Ca+On ≡ Quebec, Canada and Ontario economic indicators.

1.4 Design of the comparison exercise

A standard way to quantify out-of-sample forecasting performance is the mean squared error of prediction (MSEP) for each forecast horizon. This statistic is frequently used to assess the performance of regressions based on principal components estimation and is close to the loss function based on the mean squared forecast error proposed by Christoffersen and Diebold (1998). Since our goal is to assess the relative contribution of international information when using the selection methods, we use the targeted dynamic factor model estimated with Quebec and Ontario's GDPs as the benchmark. Thus, the predictive accuracy of a selection method is evaluated by comparing the estimated residuals of the model estimated using each level of international data


Table 1.1: National and international dataset

Dataset:            On    Qc    Qc+Ca   Qc+On   Qc+Us   On+Us   On+Ca   Qc+On+Ca
Number of series:   70    373   853     443     572     269     550     923

Notes: This table reports the number of series in each class of data segmentation and for each region. The national data include only data from the Quebec economy, and the international macroeconomic indicators include data from Ontario, Canada and the US.

with those obtained with the dynamic factor model estimated with the regional data (Quebec and Ontario). At each level of data, the target model and the benchmark model are estimated using the same selection methods. This process allows us to control both for the dataset and for selection-method errors, and to isolate the relative contribution of the international information. In addition, we use both direct and sequential forecasting strategies to control for the forecasting process. The direct strategy consists of estimating the parameters from the following linear projection using an OLS regression:

$$\hat y^h_{d,T+h|T} = \hat A_h(L)\, y_T \qquad (1.4.1)$$

Note that under the direct forecast method, the forecasts of the initial series and of the transformed series are related by: $\hat y_{d,T+h|T} = \hat y^h_{d,T+h|T}$ if $y_t$ is $I(0)$; $\hat y_{d,T+h|T} = y_T + \hat y^h_{d,T+h|T}$ if $y_t$ is $I(1)$; and $\hat y_{d,T+h|T} = y_T + h\Delta y_T + \hat y^h_{d,T+h|T}$ if $y_t$ is $I(2)$.

For the sequential forecast method, the parameters are estimated recursively by OLS and the forecasts of $y_t$ are constructed recursively as:

$$\hat y^h_{s,T+h|T} = \hat A(L)\, \hat y^h_{s,T+h-1|T} \qquad (1.4.2)$$

The forecasts of the initial series are then recovered as: $\hat y_{s,T+h|T} = \hat y^h_{s,T+h|T}$ if $y_t$ is $I(0)$; $\hat y_{s,T+h|T} = y_T + \sum_{l=1}^{h} \hat y^l_{s,T+l|T}$ if $y_t$ is $I(1)$; and $\hat y_{s,T+h|T} = y_T + \sum_{i=1}^{h} \sum_{l=1}^{i} \hat y^l_{s,T+l|T}$ if $y_t$ is $I(2)$.

The forecast performance comparison is performed in a simulated out-of-sample framework, where the MSEP calculations are done using a fully recursive methodology. In the first step, both the targeted dynamic factor model and the standard factor model are estimated on data from 1983:Q2 to 2004:Q4, and a simulated real-time forecasting is done from 2005:Q1 through 2011:Q1-h. In the second step, the estimation sample is augmented by one quarter and the corresponding h-step-ahead forecast is computed. With $m \equiv d$ for a direct forecasting strategy and $m \equiv s$ for a sequential forecasting strategy, the relative mean


squared error of prediction (RMSEP) of a forecasting model is calculated with respect to the MSEP of the benchmark model for each horizon $h$ following this equation:

$$RMSEP = \frac{MSEP_{(Reg+Int)}}{MSEP_{(Reg)}} \qquad (1.4.3)$$

where $y_t$ denotes the series of interest after the stationarity and standardization transformations of the initial variable. $Reg+Int$ denotes the dynamic factor model estimated using both regional and international data, whereas $Reg$ is the dynamic factor model estimated using only the regional data. Then, the predictive accuracy of a selection method is evaluated by comparing the residuals of the model estimated using each level of international data with the residuals of the dynamic factor model estimated with the regional data. At each level of data, the target model and the benchmark model are estimated using the same selection methods.
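The recursive out-of-sample evaluation and the relative MSEP can be sketched with two toy forecasting rules standing in for the benchmark and the augmented factor models (everything here is illustrative):

```python
def msep(actuals, forecasts):
    """Mean squared error of prediction."""
    return sum((a - f) ** 2 for a, f in zip(actuals, forecasts)) / len(actuals)

def recursive_forecasts(y, start, rule):
    """Expanding-window one-step-ahead forecasts: at each origin T,
    only y[:T] is used to forecast y[T]."""
    return [rule(y[:T]) for T in range(start, len(y))]

y = [1.0, 1.5, 1.2, 1.8, 1.6, 2.0, 1.9, 2.3, 2.1, 2.5]
start = 5

naive = lambda hist: hist[-1]              # stand-in for the "Reg" benchmark
mean4 = lambda hist: sum(hist[-4:]) / 4.0  # stand-in for the "Reg+Int" model

actuals = y[start:]
m_naive = msep(actuals, recursive_forecasts(y, start, naive))
m_mean4 = msep(actuals, recursive_forecasts(y, start, mean4))
rmsep = m_mean4 / m_naive  # a ratio below 1 would favour the larger model
```

The expanding-window loop mimics the simulated real-time design: each forecast uses only information available at its origin, and the RMSEP is the ratio of the two resulting MSEPs.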

All forecasting models are specified and estimated as a linear projection of an $h$-step-ahead variable, $y^h_t$, onto predictors including the common factors as well as its lags. Based on Forni et al. (2005), the general forecast model is specified as:

$$y^h_{T+h|T} = \alpha_h + \sum_{l=1}^{r} \sum_{m=0}^{q} \beta^h_{lm} f_{l,T-m} + \sum_{j=1}^{p} \rho^h_j y_{T-j+1} + \upsilon^h_{T+h|T}. \qquad (1.4.4)$$

1.5 Results

Tables 1.2 to 1.6 present our results. Each table illustrates the implications of a specific implementation of the targeting approach in Bai and Ng (2008). In all tables, the entries report the mean squared error (MSE) of an individual forecasting experiment, relative to the MSE achieved using only the provincial data. As the sequential strategy tends to outperform the direct one, we only report results from the sequential strategy. Entries lower than 1 thus suggest that adding other regional, country-wide, or international data helps produce better forecasts for the provincial GDP. For example, the first element of Table 1.2, 0.823, indicates that forecasting Quebec's GDP one quarter ahead using data from both Quebec and Canada lowers the MSE by almost 18% relative to the case using only information from Quebec. The symbols *, ** and *** indicate statistically significant differences in forecasting ability according to the Giacomini and White (2006) test.

1.5.1 Hard Thresholding

The first targeting method advocated by Bai and Ng (2008) constructs the factors using only time series whose individual significance for the targeted variable is higher than a threshold t-statistic $t^*$. Tables 1.2, 1.3, and 1.4 depict results obtained using $t^* = 1.28$, $1.65$, and $2.58$, respectively (these correspond to 10%, 5% and 1% critical values for two-tailed t-tests).

Table 1.2 illustrates the case where $t^* = 1.28$. The main result of the table is that for one-quarter-ahead and two-quarter-ahead forecasts ($h = 1$ and $h = 2$), larger datasets, containing data from other provinces, country-wide aggregates or the US, can improve the performance of our regional forecasting model in a statistically significant way. In addition, improvements are economically meaningful, reaching 19% (a relative MSE of 0.81) when forecasting Quebec's GDP and up to 38% for Ontario's GDP.5 However, additional information from the other province, from Canada-wide variables or from the United States does not improve forecasting performance for forecasting horizons of four quarters ahead or above: instead, relying on provincial data alone is sufficient.

Table 1.3 (for $t^* = 1.65$) and Table 1.4 ($t^* = 2.58$) confirm these results. Forecasts for Quebec and Ontario's GDP can be improved by the use of extra-regional information for $h = 1$ and $h = 2$, but for longer-term horizons, regional data alone obtain the best results. For Quebec's GDP, some of the improvements in short-term forecasting accuracy are important and reach 25% in some cases.

Taken together, the results in Tables 1.2-1.4 are coherent with the message from Boivin and Ng (2006): larger datasets can, but do not necessarily, improve the forecasting performance of factor models. In our setting, national and international data appear to contain relevant information for forecasting at short-term horizons but not at longer-term ones. One interpretation of this general result is that shocks originating at the national and international levels are reflected relatively rapidly in regional indicators, because of close economic integration (intranational and international) in Canada.6

1.5.2 Soft Thresholding

Tables 1.5 and 1.6 report results arrived at by the LARS-EN soft thresholding approach. Recall that this selects the variables to be included in the pool from which factors are estimated by pursuing a system-wide approach. Two parameters, $\kappa_1$ and $\kappa_2$, need to be calibrated. Since the choice of $\kappa_1$ can be relabeled as the maximum number of variables included in the pool, we first pursue a conservative strategy where $N_D = 30$ variables are selected from each provincial dataset. Meanwhile, $\kappa_2$ is set to 0.25 following Bai and Ng (2008). The results of this experiment are

5. The relatively small size of our Ontario-only dataset might explain the larger improvements.

6. Similar results are obtained by Schumacher (2010), who shows that international data can help forecast national (German) GDP better, but only at horizons of less than a year. Beyond that horizon, the information contained in national data is enough to produce the best forecasts.


reported in Table 1.5. Next, Table 1.6 reports the results of allowing considerably more variables to enter the pool: 110 for the Quebec dataset and 75 for Ontario.

Overall, the results in Tables 1.5 and 1.6 are consistent with those from Tables 1.2-1.4: at horizons shorter than four quarters ahead, adding national and international variables can improve the forecasting performance of factor models with targeted predictors; for longer horizons, by contrast, no improvement is gained from larger datasets. However, even for horizons at which larger datasets help, Tables 1.5 and 1.6 report more modest improvements in forecasting ability than those depicted in Tables 1.2 to 1.4. This may seem puzzling, as the soft thresholding approach is designed to offer a more flexible and thus more efficient approach. We interpret this finding as suggesting that soft thresholding does indeed noticeably improve forecasting ability, but by helping the provincial-only dataset, it lessens the need to add information from national and international data. Said otherwise, absolute forecasting accuracy is always best with soft thresholding, but because it improves the performance of provincial datasets more, the relative MSEs reported in the tables are closer to 1.

1.6 Concluding Remarks

Regional economic performance has important implications for state, provincial or regional policy makers. However, the development of forecasting tools aimed at predicting regional economic variables has lagged behind the rapid advancements that have taken place in the literature on national-level forecasting methods. The present paper contributes to bridging that gap and reports that a factor modeling approach with targeted predictors and data drawn from international, national and regional datasets can significantly improve forecasting accuracy for regional GDPs in Canada. This improved performance is present only at short horizons, however, suggesting that longer-term regional forecasting can be conducted with regional data alone, given a careful approach and a relatively large regional dataset.


Table 1.2: Forecasting Performance: hard threshold with t* = 1.28
Relative to forecasts using regional data only

                            Forecasting horizon
Dataset          h = 1      h = 2      h = 4     h = 8     h = 12
Panel A: Forecasting Quebec's GDP
Qc + Ca          0.823***   0.892***   1.067     1.058     0.994
Qc + On          0.911***   1.001**    1.039     1.049     0.992
Qc + Us          0.871***   0.982      1.072     1.037     0.992
Qc + Ca + On     0.814***   1.004      1.073*    1.072     0.992
Panel B: Forecasting Ontario's GDP
On + Ca          0.660***   0.874***   1.055*    1.021     1.001
On + Qc          0.858***   0.847***   1.036*    1.934     1.010
On + Us          0.691***   0.798***   1.009     1.033     1.002
On + Ca + Qc     0.819***   0.822***   0.930     1.032     0.994

Note: Each entry is the ratio of the mean-squared error of the forecasts obtained with a targeted factor model using the larger dataset to that obtained when the forecasts are obtained with data from Quebec only (first panel) or Ontario only (second panel). Entries under 1 suggest the larger dataset has superior forecasting performance. The symbols *, ** and *** indicate rejection of the null of equal predictive accuracy at the 10%, 5% and 1% level, respectively, according to Giacomini and White (2006). The model is estimated and forecasts are computed separately for each horizon using a rolling window of 21 periods.

Table 1.3: Forecasting Performance: hard threshold with t* = 1.65
Relative to forecasts using regional data only

                            Forecasting horizon
Dataset          h = 1      h = 2      h = 4      h = 8     h = 12
Panel A: Forecasting Quebec's GDP
Qc + Ca          0.748***   0.849***   1.016      1.110*    0.993
Qc + On          0.909***   0.956***   1.034      1.074     0.992*
Qc + Us          0.889***   1.015      1.076**    1.058     0.993
Qc + Ca + On     0.743***   0.993      1.090*     1.103*    0.991
Panel B: Forecasting Ontario's GDP
On + Ca          0.739***   0.918***   1.049      1.019     1.000
On + Qc          0.906***   0.909***   1.068***   1.009     0.999
On + Us          0.730***   0.854***   1.166***   1.081*    0.997
On + Ca + Qc     0.762***   0.904***   1.030      1.015     1.000


Table 1.4: Forecasting Performance: hard threshold with t* = 2.58
Relative to forecasts using regional data only

                            Forecasting horizon
Dataset          h = 1      h = 2      h = 4      h = 8     h = 12
Panel A: Forecasting Quebec's GDP
Qc + Ca          0.810***   0.954***   1.054***   1.073     1.001
Qc + On          0.900***   0.995***   1.026***   1.042     1.002
Qc + Us          0.774***   0.983***   1.146***   1.105     1.007
Qc + Ca + On     0.978***   1.007***   1.059***   1.085     1.003*
Panel B: Forecasting Ontario's GDP
On + Ca          0.739***   0.918***   1.049      1.019     1.000
On + Qc          0.906***   0.909***   1.068***   1.009     0.999
On + Us          0.730***   0.854***   1.166***   1.081*    0.997
On + Ca + Qc     0.762***   0.904***   1.030      1.015     1.000

Table 1.5: Forecasting Performance: soft threshold with ND = 30
Relative to forecasts using regional data only

Targeting Method: soft threshold with ND = 30 (LARS-EN)

                            Forecasting horizon
Dataset          h = 1      h = 2      h = 4      h = 8      h = 12
Panel A: Forecasting Quebec's GDP
Qc + Ca          0.972***   0.996***   1.005      1.026***   0.991
Qc + On          0.934***   0.944***   1.028*     0.994      0.992*
Qc + Us          0.929***   1.004**    1.147***   1.065*     0.982
Qc + Ca + On     0.893***   1.085      1.014*     1.083      1.007
Panel B: Forecasting Ontario's GDP
On + Ca          0.841***   0.898***   1.086***   1.081**    1.002
On + Qc          0.981***   0.975***   0.922      1.128***   0.969
On + Us          0.950***   0.905**    1.331***   1.432***   1.047
On + Ca + Qc     0.884***   0.881***   1.066***   1.052      0.999


Table 1.6: Forecasting Performance: soft threshold with NQ = 110 and NO = 75
Relative to forecasts using regional data only

                            Forecasting horizon
Dataset          h = 1      h = 2      h = 4      h = 8      h = 12
Panel A: Forecasting Quebec's GDP
Qc + Ca          0.936***   1.014***   0.918      1.011      1.033***
Qc + On          1.077      1.076***   0.864***   1.016      0.984
Qc + Us          0.881***   0.912**    0.912      1.012      0.988
Qc + Ca + On     0.945***   1.035***   0.928      1.009      1.031***
Panel B: Forecasting Ontario's GDP
On + Ca          0.977***   1.031***   1.054***   1.063***   0.982
On + Qc          0.981***   0.975***   0.922      1.004      0.969
On + Us          0.919***   0.841***   1.163***   1.317***   1.068
On + Ca + Qc     0.951***   1.028***   1.021***   1.025*     0.993


Chapter 2

Bank Capital, Credit Market Frictions and International Shocks Transmission

Abstract

Recent empirical evidence suggests that the state of banks' balance sheets plays an important role in the transmission of monetary policy and other shocks. This paper presents an open-economy DSGE framework with credit market frictions and an active bank capital channel to assess issues regarding the transmission of domestic and foreign shocks. The theoretical framework includes the financial accelerator mechanism developed by Bernanke et al. (1999), the bank capital channel and the exchange rate channel. Our simulations show that the exchange rate channel plays an amplification role in the propagation of shocks. Furthermore, with these three channels present, domestic and foreign shocks have an important quantitative role in explaining domestic aggregates such as output, consumption, inflation and total bank lending. In addition, the results suggest that economies whose banks remain well-capitalized when affected by an adverse shock experience less severe downturns. Our results highlight the importance of bank capital in an international framework and can be used to inform the worldwide debate over banking regulation.

J.E.L. Classification: E44, E52, G21


2.1

Introduction

The recent financial turmoil, which started with the meltdown of the U.S. subprime mortgage market, spread rapidly around the world and affected the world's economic system through a series of cross-country contagion mechanisms. As a consequence, GDP dropped around the world and financial markets malfunctioned globally. Figure 2.1 illustrates these recent global downturns in the United States, Canada, Japan and the United Kingdom. The high degree of interdependence between the real economy and the financial markets in several countries simultaneously suggests a strong degree of international transmission of domestic and external shocks. This high interconnectedness between economies and financial markets may be viewed as a consequence of financial market integration, the globalization of trade, and the growing volume of cross-border assets held by economic agents.

Recent empirical and theoretical evidence has highlighted the importance of credit market imperfections in the transmission of shocks (Bernanke et al. (1999), Christiano et al. (2010), Gertler and Kiyotaki (2011), Meh and Moran (2010), and Dib (2010)). In these papers, credit market imperfections can be of two types: (i) corporate balance sheet (financial accelerator) channel models, which focus on the demand side of the credit market, and (ii) bank balance sheet channel models, which focus on the supply side of the credit market. Together, they suggest that the financial health of banks and firms may significantly alter the transmission of monetary policy and other shocks.

This evidence underscores the need to develop a general equilibrium model with real-financial linkages in an international framework. Indeed, understanding and quantifying these real-financial linkages is an important step towards identifying the best policy response to international developments. For example, understanding these linkages would allow Canadian authorities to examine whether international trade in goods and financial markets can explain the observed spillover effects of U.S. business cycles on the Canadian economy. In addition, a better understanding of these linkages will allow central banks to assess the contribution of internal and external sources to the fluctuations observed in various OECD countries.

While the international transmission mechanism and the bank capital channel have both generated a large body of research with well-established contributions, the analysis of these two issues simultaneously has received less attention. This paper aims to bridge this gap by proposing a Dynamic Stochastic General Equilibrium (DSGE) model for a small open economy with an active bank balance sheet channel, used to analyze the relative contribution of the bank balance sheet channel, the exchange rate channel, and the financial accelerator channel to the propagation of internal and external shocks. Specifically, this paper contributes to the growing literature aimed at

