In policy circles, increasing attention has been given to fat-tail events, particularly since the outbreak of the recent crisis and during the ensuing uncertainty surrounding the political and economic environment. Many argue that recent events can hardly be explained by models that are based on a Gaussian shock structure (Mishkin, 2011). This has been recognised by recent efforts in the DSGE literature, including Curdia et al. (2013) and Chib and Ramamurthy (2014), who found evidence that models with a multivariate t-distributed shock structure are strongly favoured by the data over standard Gaussian models. This project seeks to complement these efforts by focusing on VAR models with Student-t distributed shocks (Student, 1908). We build on previous work on univariate models with Student-t distributed shocks by Geweke (1992, 1993, 1994, 2005) and Koop (2003), and on the seminal paper of Zellner (1976) on the Bayesian treatment of multivariate regression models. In addition, we draw on the DSGE literature (Fernandez-Villaverde and Rubio-Ramirez, 2007; Justiniano and Primiceri, 2008; Liu et al., 2011) by incorporating stochastic volatility in the error structure, because solely focusing on fat tails while ignoring lower-frequency changes in the volatility of the shocks (as in Ascari et al., 2012) tends to bias the results towards finding evidence in favour of fat tails, as pointed out by Curdia et al. (2013). Our work is therefore closely related to Curdia et al. (2013), although the focus of our analysis is very different. Our main aim is to investigate whether allowing for fat tails and stochastic volatility can improve the empirical performance of Bayesian VARs, both in terms of model fit and forecasting performance.
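The Geweke-style treatment of Student-t shocks relies on the representation of the t-distribution as a scale mixture of normals, which is what keeps Gibbs sampling tractable in these models. A minimal sketch of that representation (the function name and settings are ours, for illustration only, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def student_t_shocks(T, n, nu, scale_chol):
    """Draw T multivariate Student-t shocks with nu degrees of freedom as a
    scale mixture of normals: eps_t = z_t / sqrt(w_t), with mixing weights
    w_t ~ Gamma(nu/2, rate nu/2) shared across the n components of z_t."""
    w = rng.gamma(nu / 2.0, 2.0 / nu, size=T)       # mixing weights, mean one
    z = rng.standard_normal((T, n)) @ scale_chol.T  # Gaussian part
    return z / np.sqrt(w)[:, None]                  # fat-tailed shocks

eps = student_t_shocks(100_000, 2, nu=5.0, scale_chol=np.eye(2))
```

Conditional on the latent weights the model is Gaussian, so standard conjugate VAR posterior steps apply; each margin above has variance nu/(nu-2).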
data correlations, and correlations do not indicate causal relationships. The second version, which Sims (1986) mentions, is pointed out by Sargent (1984), who observes that VAR models usually incorporate policy variables, treating them as random variables. However, choosing a policy seems to be a unique choice, which has a deterministic character. According to Sims (1986), both versions of the argument, if interpreted properly, are correct. For Sims (1986) it is impossible to use a statistical model for analysing policy decisions without uncovering the underlying economic interpretation and going beyond mere correlations. Finding such an interpretation is called identification. Sims (1986) further explains the terms “reduced form” and “structure” in the context of the VAR model. The reduced form is a model which describes how the historical data are generated, and the structure, or structural model, is used for decision making. For identification it is necessary to connect the reduced and the structural form. Over the years, several methods of identification in the context of the VAR model have been established; the main focus of the thesis lies on the classical Cholesky decomposition and the more modern sign restrictions approach. To identify the structural form, the Cholesky decomposition was first used by Sims in his early work from 1986. The idea behind this approach is that not all variables have contemporaneous effects on the other variables. At the beginning, various practitioners thought that this was the only identification strategy. This was clearly a misconception, because there are many other possible restrictions that can identify the system (Christiano, 2012).
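The recursive identification described above amounts to factoring the reduced-form residual covariance into a lower-triangular impact matrix. A minimal sketch (function name ours, assuming only the standard Cholesky factorisation):

```python
import numpy as np

def cholesky_structural_shocks(residuals):
    """Recover structural shocks from reduced-form VAR residuals under a
    recursive (Cholesky) identification: u_t = B eps_t with B lower
    triangular, so variables ordered earlier do not respond
    contemporaneously to shocks ordered later."""
    sigma = np.cov(residuals, rowvar=False)  # reduced-form covariance
    B = np.linalg.cholesky(sigma)            # lower-triangular impact matrix
    eps = residuals @ np.linalg.inv(B).T     # implied unit-variance structural shocks
    return B, eps
```

Because B depends on the ordering of the variables, the economic plausibility of the recursion, not the algebra, is the substantive identifying assumption.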
Our second main contribution is to derive estimators of the joint credible set under the same loss function used in deriving the estimator of , building on the work of Bernardo (2011). It is well known that conventional Bayesian error bands, defined as vectors of upper and lower quantiles of the marginal impulse response posterior distributions, misrepresent the estimation uncertainty about estimates of impulse response vectors. This point is neither new nor controversial. It is well documented in the Bayesian literature, and it applies to both point-identified and set-identified models (e.g., Sims and Zha 1999; Inoue and Kilian 2013). The only reason that the use of pointwise Bayesian error bands has persisted in applied work has been the lack of an easy-to-implement alternative that can be applied to a wide range of structural VAR models. In this paper, we propose a new approach to the construction of joint credible sets for structural impulse responses that can be adapted to any of the loss functions of interest.
Structural VAR models are frequently identified using sign restrictions on impulse responses. Moving beyond the popular but restrictive Normal-inverse-Wishart-Uniform prior, we develop a methodology that can handle almost any prior distribution on contemporaneous responses. We then propose a new sampler that explores the posterior just as efficiently as the existing algorithm for the Normal-inverse-Wishart-Uniform case. We use this flexible and tractable framework to combine sign restrictions with information on the volatility of the data, giving less prior mass to impulse effects that are inconsistent with the data from a training sample. This approach sharpens posterior bands and makes sign restrictions more informative. We apply the methodology to the oil market and show that oil supply shocks have a non-negligible effect on oil price dynamics.
The second half of this thesis takes a different approach. It moves away from structural modelling and ventures into the empirical realm of data-driven models, where non-linearities are once more introduced by means of time-varying parameters. Chapter three, titled “The credibility of Hong Kong’s currency board system: Looking through the prism of MS-VAR models with time-varying transition probabilities”, is a natural continuation of the issue of credibility, addressing a limitation of the MS-DSGE models. Due to their complexity, the probabilities governing the switching parameters have to be constant. This drawback has yet to be resolved in the literature and imposes a serious limitation in scenarios where self-fulfilling expectations fuel the crises. Believing that a regime change may be near could very well influence the likelihood of a shift. This calls for endogenising the transition probabilities between states, which can be achieved in a Markov-switching VAR framework (MS-VAR). The advantage of this setup is that one can pose a set of questions: What captures a loss of credibility in a system? Which are the trigger variables? Does the damage to the confidence in the exchange rate regime stem from fears in the global financial markets, or does it come solely from domestic volatility? In this chapter, we construct a conditional volatility index for Hong Kong and show that uncertainty on the domestic stock market, as well as the swings of the foreign exchange market for the domestic currency, has predictive power over investors’ confidence. Global uncertainty indicators, in contrast, remain uninformative.
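Endogenising the transition probabilities typically means letting the probability of staying in a regime depend on an observed covariate through a link function. A toy sketch under that assumption (a logistic link and a two-state chain; the coefficients and covariate are hypothetical, not estimates from the chapter):

```python
import numpy as np

def transition_matrix(z_t, a, b):
    """Two-state Markov-switching transition matrix whose staying
    probabilities depend on an observed covariate z_t (e.g. a volatility
    index) through a logistic link. a and b are length-2 coefficient
    vectors, one pair per regime."""
    p_stay = 1.0 / (1.0 + np.exp(-(a + b * z_t)))   # P(remain in state i)
    return np.array([[p_stay[0], 1.0 - p_stay[0]],
                     [1.0 - p_stay[1], p_stay[1]]])

# hypothetical coefficients: rising volatility makes regime 0 less stable
P = transition_matrix(0.5, a=np.array([1.0, 0.5]), b=np.array([-2.0, 1.5]))
```

With b = 0 the matrix collapses to the constant-probability case that the MS-DSGE models are restricted to.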
I extend the VAR model of Bjørnland and Jacobsen (2010) by adding a credit variable to the model. This is motivated by the key role of credit measures in the indicators of financial instability proposed by the BIS and Norges Bank. In addition, the inclusion of a
credit variable may capture possible multidimensional links between credit, house prices and interest rates. I also identify a monetary policy shock in a large set of competing models and identifying assumptions. Each model has its shortcomings, and there are various conflicting views as to which type of assumptions is most plausible. The VAR literature on this topic also shows that results may change drastically depending on the model specification. Overall, the evidence supports Bjørnland and Jacobsen’s finding and suggests that the effect of monetary policy on house prices is quite large. In contrast, the response of household credit to a monetary policy shock seems modest. These results are relatively robust across different identification assumptions, although the effect of monetary policy on house prices is larger when simultaneous effects are allowed for. The rest of the paper is organised as follows. The next section describes the VAR models and how the monetary policy shock is identified. In Section 3, some data issues are discussed. The results are presented in Section 4, while Section 5 concludes.
Note: This paper has previously been circulated with the title “Predictive Likelihood Comparisons with DSGE and DSGE-VAR Models”. We are particularly grateful to Marta Bańbura, who specified the large Bayesian VAR model we have used in the paper. We are also grateful for discussions with Gianni Amisano (ECB), Michal Andrle (IMF), Jan Brůha (Czech National Bank), Herman van Dijk (Tinbergen Institute), Juha Kilponen (Suomen Pankki), Bartosz Maćkowiak (ECB), Frank Schorfheide (University of Pennsylvania), and Mattias Villani (Linköping University), and for comments from members of the Working Group on Econometric Modelling and participants of the Tinbergen Institute workshop on “Recent Theory and Applications of DSGE Models” at Erasmus University Rotterdam and the CEF 2012 conference in Prague. The opinions expressed in this paper are those of the authors and do not necessarily reflect the views of the European Central Bank or the Eurosystem. Any remaining errors are the sole responsibility of the authors.
two-country cointegrated VAR models, the limitation on the number of variables is even more binding, because it becomes very difficult, as the dimension of the model increases, to impose meaningful long-run structure on the unrestricted cointegration relations. To overcome this problem we introduce a
modelling approach suggested by Aoki (1981) for dynamic macroeconomic modelling in empirical research. Aoki showed, for a system of linear differential equations, that when symmetry is assumed in the two-country model, the variables can be transformed into a set of country averages and country differences, and that these two sets, being orthogonal to each other, can be analysed separately. We apply this idea to determine the long-run properties of our empirical model. This solution halves the size of the sets of variables for the cointegration part, offering the possibility to study much larger macro dynamics. We assume advanced economies to behave similarly in the long run. There is no reason why large economies like the UK or the US should differ in their aggregate behaviour in a systematic way. In contrast, the speed of adjustment to equilibria may vary markedly, due to unequal sizes of the countries or structural differences. We allow for different contemporaneous effects, different short-run dynamics, and different speeds of adjustment for the two economies, but require the long-run equilibria to be symmetric across the two countries, just as the international parity conditions are specified. The symmetry assumption in the long run allows us to apply the method proposed by Aoki (1981), and applied by others such as Turnovsky (1986), to determine the long-run properties of the model in two smaller subsystems. It makes the cointegration analysis feasible in smaller subsystems. Symmetry is rejected for the short run; thus, for the given cointegration vectors, further modelling of the short run is based on the full two-country system.
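The Aoki rotation itself is a simple linear change of variables; a sketch of the transformation and its inverse (function names are ours for illustration):

```python
import numpy as np

def aoki_transform(x_home, x_foreign):
    """Rotate two-country data into country averages and country
    differences (Aoki 1981). Under long-run symmetry the two blocks are
    orthogonal and can be analysed separately, halving the dimension of
    the cointegration analysis. Inputs are (T x k) arrays containing the
    same variables for the two countries."""
    avg = 0.5 * (x_home + x_foreign)
    diff = 0.5 * (x_home - x_foreign)
    return avg, diff

def aoki_inverse(avg, diff):
    """Recover the original country variables from the rotated blocks."""
    return avg + diff, avg - diff
```

The rotation is invertible, so nothing is lost: after determining the long-run relations block by block, the short-run dynamics can still be modelled on the full two-country system.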
1 Introduction and Literature Review
Since the pioneering work of , vector-autoregressive (VAR) models have become one of the most widely applied models for the analysis of multivariate time series. These models have proven to be useful for describing and forecasting the dynamic behavior of economic and financial time series. Common econometric software such as EViews and JMulti, for instance, allows for modeling VARs. According to , the t-values of the estimated parameter matrices of VAR models have their standard asymptotic distributions if the lag order is chosen to be equal to or larger than two, even if the variables employed are I(1), as shown by  and . This property makes time series analysis in a VAR framework a flexible tool for researchers.
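Estimating a VAR(p) reduces to equation-by-equation least squares on the lagged variables. A minimal sketch of this standard step (the function is ours, a bare-bones illustration rather than what any particular software package does):

```python
import numpy as np

def fit_var(y, p):
    """Estimate a VAR(p) by equation-by-equation least squares.
    y is a (T x k) array; returns the intercept vector and the list of
    lag coefficient matrices A_1, ..., A_p (each k x k)."""
    T, k = y.shape
    # regressor matrix: a constant plus p lags of all variables
    X = np.hstack([np.ones((T - p, 1))] +
                  [y[p - j - 1:T - j - 1] for j in range(p)])
    Y = y[p:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    c = coef[0]                                        # intercepts
    A = [coef[1 + j * k:1 + (j + 1) * k].T for j in range(p)]
    return c, A
```

Each column of `coef` holds one equation's coefficients, so OLS here coincides with multivariate least squares for the whole system.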
Structural VAR models are frequently identified using sign restrictions on contemporaneous impulse responses. We develop a methodology that can handle a set of prior distributions that is much larger than the one currently allowed for by traditional methods. We then develop an importance sampler that explores the posterior distribution just as conveniently as with traditional approaches. This makes the existing trade-off between careful prior selection and tractable posterior sampling disappear. We use this framework to combine sign restrictions with information on the volatility of the variables in the model, and show that this sharpens posterior inference. Applying the methodology to the oil market, we find that supply shocks have a strong role in driving the dynamics of the price of oil and in explaining the drop in oil production during the Gulf war.
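The traditional approach that this methodology generalises draws orthogonal rotations of a Cholesky factor and keeps those satisfying the sign restrictions (the rotation algorithm of Rubio-Ramirez et al. 2010). A sketch of that baseline accept/reject step, shown only for context (the function and the uniform-rotation prior are the standard benchmark, not the paper's new sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_sign_identified_impact(sigma, sign_pattern, max_tries=10_000):
    """Accept/reject draw of an impact matrix B with B B' = sigma that
    satisfies sign restrictions on contemporaneous responses. sigma is the
    reduced-form covariance; sign_pattern is a (k x k) array of +1, -1,
    or 0 (unrestricted)."""
    L = np.linalg.cholesky(sigma)
    k = sigma.shape[0]
    for _ in range(max_tries):
        Q, R = np.linalg.qr(rng.standard_normal((k, k)))
        Q = Q * np.sign(np.diag(R))        # Q uniform w.r.t. the Haar measure
        B = L @ Q                          # candidate impact matrix
        if np.all(sign_pattern * B >= 0):  # zeros in the pattern always pass
            return B
    raise RuntimeError("no draw satisfied the sign restrictions")
```

The implicit uniform prior over rotations is exactly the restriction the abstract's larger class of priors relaxes.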
The issue of informational insufficiency has been addressed by augmenting small-scale VAR models with latent factors (see Bernanke et al., 2005). These latent factors are extracted from a large set of informational series and serve to alleviate informational deficiency issues. The issue of identification has been addressed via the use of Proxy VAR models (see Stock and Watson, 2012; Mertens and Ravn, 2013). These models have the advantage that, contrary to the widely used recursively identified models, they do not rely on short-run exclusion restrictions, which tend to be hard to defend. Instead, Proxy VAR models employ external instruments which, if they are appropriately chosen, lead to a more credible identification scheme. However, until now the informational content of Bayesian Proxy VAR models has received little attention.
CESifo Working Paper No. 8153
The Econometrics of Oil Market VAR Models
Oil market VAR models have become the standard tool for understanding the evolution of the real price of oil and its impact on the macroeconomy. As this literature has expanded at a rapid pace, it has become increasingly difficult for mainstream economists to understand the differences between alternative oil market models, let alone the basis for the sometimes divergent conclusions reached in the literature. The purpose of this survey is to provide a guide to this literature. Our focus is on the econometric foundations of the analysis of oil market models, with special attention to the identifying assumptions and methods of inference. We not only explain how the workhorse models in this literature have evolved, but also examine alternative oil market VAR models. We help the reader understand why the latter models have sometimes generated unconventional, puzzling or erroneous conclusions. Finally, we discuss the construction of extraneous measures of oil demand and oil supply shocks that have been used as external or internal instruments for VAR models.
This paper develops a global VAR (GVAR) model to study the transmission of shocks between the US, the UK, the euro area and Japan. We first estimate cointegrated VAR models for the different countries/regions, including a set of rest-of-the-world variables in each to take account of first-round effects of shocks. The set of foreign variables is chosen to reduce dimensionality while allowing for the existence of key international parity relations derived from the Dornbusch-Frankel model. We are able to replicate a set of economically meaningful relations, but these I(1) country models all have large roots when the rank is set according to the economic prior. This points to the existence of temporary, yet persistent, disequilibria during the sample period, likely a result of asset-price bubbles. Lowering the rank when linking the country models ensures that the combined model is stable. The GVAR allows us to assess the dynamic effects of liquidity and asset-price spill-overs between countries, taking second-round effects of shocks into account. Shock analysis shows that stock markets have a tendency to move in sync across regions, whereas this is not always the case for housing markets. For simulations of the credit crunch, however, we argue that the GVAR should be used with care.
(ii) Large VAR dimension, lag length, near-I(2)-ness, and weak mean reversion are all complicating factors for the use of asymptotic results.
In more detail, Table 1 records the acceptance frequency at the 5% significance level of the trace test, using p-values from the Gamma approximation of Doornik (1998); the null is that the rank is less than or equal to r against the alternative of unrestricted rank up to p, where the true rank equals p/2. For T = 1000 and p = 6, the tests behave as expected. When p = 12, they tend to favour lower rank values for slow mean reversion and higher ranks for near-I(2) behaviour.
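The sequential testing logic behind these acceptance frequencies can be sketched as follows (a bare illustration of the decision rule; the trace statistics and critical values are taken as given rather than computed, and the function name is ours):

```python
def select_rank(trace_stats, crit_values):
    """Sequential trace-test rank selection: test H0: rank <= r for
    r = 0, 1, ..., p-1 and stop at the first r at which H0 cannot be
    rejected. trace_stats and crit_values are length-p sequences of
    trace statistics and their critical values at the chosen level."""
    for r, (stat, cv) in enumerate(zip(trace_stats, crit_values)):
        if stat < cv:            # cannot reject rank <= r
            return r
    return len(trace_stats)      # all nulls rejected: full rank
```

Size distortions of the individual tests therefore propagate directly into the selected rank, which is why slow mean reversion pushes the procedure towards under-selection and near-I(2) behaviour towards over-selection.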
The upper panel of the figure shows that the ratio of volatilities is similar across models and exhibits nonlinear patterns over time. For the parameter of the contemporaneous relation, however, there are marked differences across models. In particular, estimates of the DC-CMSV model exhibit nonlinear dynamics, which are linked to the movement of the ratio of volatilities. Moving to the lower panel, estimated correlations and covariances of the CMSV model exhibit systematic differences across alternative orderings. The estimates diverge especially when the ratio of volatilities suddenly moves substantially. This happens because the CMSV model cannot properly capture the nonlinear dynamics of the parameter of the contemporaneous relation. Moreover, the estimated correlations and covariances of the DC-CMSV model lie somewhere between the alternative estimates of the CMSV model. According to Property 4, this is to be expected, as estimates of the CMSV model can be regarded as upper and lower bounds of the true comovement parameters when data is generated from a dynamic correlation model.
The estimation of the VAR model in step 1 may be performed using least squares (see, e.g., Lütkepohl (2005)) or some other suitable estimation method. In practice, bias-corrected least squares is used in many applications. The bias correction can be based either on the asymptotic mean bias formula presented by Nicholls & Pope (1988) and Pope (1990) or on the more generally applicable bootstrap estimator described by Kilian (1998b). The second approach is less often included in Monte Carlo comparisons of methods for constructing confidence bands due to the high computational complexity related to the implementation of the double bootstrap.
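The bootstrap bias correction can be sketched in its simplest scalar form (an AR(1) slope without intercept; a deliberately stripped-down illustration of the Kilian (1998b)-style idea, not the full VAR procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_bias_corrected_ar1(y, n_boot=500):
    """Bootstrap bias correction for an AR(1) slope: estimate by least
    squares, measure the mean bias of the estimator on residual-bootstrap
    replications generated from the fitted model, and subtract it."""
    x, z = y[:-1], y[1:]
    phi_hat = x @ z / (x @ x)                  # OLS slope (no intercept)
    resid = z - phi_hat * x
    boot = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.choice(resid, size=len(y))     # resample residuals
        yb = np.empty(len(y))
        yb[0] = y[0]
        for t in range(1, len(y)):
            yb[t] = phi_hat * yb[t - 1] + e[t]
        xb, zb = yb[:-1], yb[1:]
        boot[b] = xb @ zb / (xb @ xb)          # re-estimate on bootstrap data
    bias = boot.mean() - phi_hat
    return phi_hat - bias                      # bias-corrected estimate
```

The double bootstrap arises when this corrected estimator is itself nested inside an outer bootstrap for confidence bands, which is the computational burden the paragraph refers to.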
We analyze and compare the patterns of economic growth and development in China, Korea, and Japan in the post-war period. The geographical proximity and cultural affinity between the three countries, as well as the key role of the development state in the economies, suggest that an analytical comparison would be a meaningful and valuable exercise. Furthermore, Korea and Japan are two of the few economies that have jumped from middle income to high income in a short period and thus offer potentially valuable lessons for China. China is following a structural change that Korea and Japan underwent decades ago. We use Cobb–Douglas production functions to assess the long-run equilibrium relationships between per capita GDP, capital, and labor as well as the features of structural change by means of cointegrated vector autoregressive (CVAR) models. We show that such equilibrium relationships cannot be rejected for all three countries, while the evidence is stronger for China and Korea than for Japan. Our hypothesis tests show that the estimated Cobb–Douglas production functions display coefficients of capital and employment that sum up to one and broken linear trends that can be attributed to structural breaks and (changes in) total factor productivity (TFP) growth. We observe a striking similarity between the Korean and the Chinese experience, which gives some optimism that China may be capable of graduating to high income, like Korea.
or the speed of convergence is slow, ordinary least squares (OLS) can be used (the standard approach in GVAR models). In the paper we relate the concept of sparsity/density to a number of conditions regarding the weights which imply the choice of the corresponding estimation method.
Our third contribution is to carefully relate the concepts of global and local dominant units and, again, to discuss the implications for the choice of estimation method. Units that have one or more sizeable weights in the matrix W are said to dominate other units either locally or globally, depending on the number of sizeable weights. The concept of a global dominant unit was introduced in the early stages of the GVAR literature (Pesaran et al. (2004)), while the notion of a local dominant unit is more recent (Chudik and Straub (2017)). Given the difficulties they pose for estimation, global dominant units have been addressed either by direct exclusion from estimation (e.g. in the early GVAR studies of Pesaran et al. (2004) and Dees et al. (2007)) or by direct modelling via global common factors (e.g. as in the infinite-dimensional VAR (IVAR) methodology of Chudik and Pesaran (2011, 2013)). While dominant units have not been prevalent in the spatial econometrics literature, we show that this is partly a matter of terminology and that in this context the concept of local dominance is indeed relevant. We also show that local dominance structures that are mutual (e.g. two countries affecting each other) imply endogeneity, while non-mutual structures do not.
Open Market Committee (FOMC) to lower policy interest rates twice in January 2008 and the collapse of Lehman Brothers in mid-September 2008), as well as following the London bombings on July 7, 2005, the FOMC’s unexpected policy interest rate hike on May 10, 2006, the U.S. bond market flash crash in mid-October 2014, and the August 2015 financial market troubles in China. We therefore argue that a more carefully specified econometric model can be quite instrumental in overcoming the major obstacle to the accurate measurement of connectedness across assets and/or financial institutions. By allowing the VAR parameters to vary over time, the proposed TVP-VAR model relieves the researcher of the need to roll a fixed-length sample window in order to capture the dynamics of connectedness. The methodology we propose incorporates the benefits of Bayesian shrinkage for estimating high-dimensional systems, without the need to rely on computationally intensive simulation methods. The resulting dynamic connectedness index and directional connectedness measures would not be subject to the persistence observed in rolling-sample-window estimation.
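The connectedness index whose time variation the TVP-VAR tracks is built from the generalized forecast error variance decomposition of a fitted VAR. A static sketch for the VAR(1) case, under the standard Diebold-Yilmaz-style definitions (the function name and the VAR(1) simplification are ours):

```python
import numpy as np

def connectedness_index(A, sigma, horizon=10):
    """Total connectedness index from the generalized forecast error
    variance decomposition of a VAR(1). A is the (k x k) coefficient
    matrix, sigma the shock covariance; returns the share (in per cent)
    of forecast error variance due to cross-variable spillovers."""
    k = A.shape[0]
    psis = [np.linalg.matrix_power(A, h) for h in range(horizon)]  # MA terms
    num = np.zeros((k, k))
    den = np.zeros(k)
    for psi in psis:
        num += (psi @ sigma) ** 2 / np.diag(sigma)  # generalized shares
        den += np.diag(psi @ sigma @ psi.T)         # total FEV per variable
    theta = num / den[:, None]
    theta /= theta.sum(axis=1, keepdims=True)       # rows sum to one
    off_diag = theta.sum() - np.trace(theta)        # cross-variable part
    return 100.0 * off_diag / k
```

In the TVP-VAR setting, A and sigma are replaced by their time-t estimates, yielding the dynamic index without any rolling window.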
In chapter two, we introduce an encompassing framework based on generalized linear restrictions that facilitates estimation of many differently specified ACR cointegrated models. An EM algorithm for estimating the parameters under these restrictions is presented, and its performance is evaluated through a small Monte Carlo simulation study. We further discuss testing based on likelihood ratio statistics for two separate cases: a regular and an irregular case. The irregular case refers to statistics where nuisance parameters are unidentified under the null hypothesis, while the regular case refers to tests where no such problems occur. We show for the regular case that the asymptotic theory developed in chapter one can be applied to give convergence in distribution of the likelihood ratio test. In the irregular case, a uniform central limit theory is needed. While such theory is not provided here, we conjecture that it exists, as it does for other, similar models. Given the non-standard asymptotic distributions, simulation-based techniques are required for inference, and we propose a bootstrap algorithm to simulate the distributions of the test statistics. The performance of the bootstrap algorithm is investigated through simulations. Chapter three considers an application of the ACR cointegrated framework to the prices of two of the major crude oil benchmarks, West Texas Intermediate (WTI) and Brent. Moreover, the chapter discusses an alteration of the ACR cointegrated model: in chapter one, the asymptotic theory is derived under the assumption that the constant in the cointegration relations is not included in the switching probability functions. Chapter three shows that, with a few modifications to the theory of chapter one, such a specification is indeed easily covered. The results from the empirical analysis support the presence of non-linearities related to a decoupling of the WTI from historical benchmarks observed around 2011.
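The generic structure of such a bootstrap for a test statistic with a non-standard null distribution can be sketched as follows (a skeleton only; the model-specific simulation and statistic from the ACR framework are abstracted into user-supplied callables):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pvalue(stat_obs, simulate_null, compute_stat, n_boot=999):
    """Bootstrap p-value for a test statistic whose asymptotic distribution
    is non-standard: simulate data under the null hypothesis, recompute the
    statistic on each replication, and locate the observed value within the
    simulated distribution. simulate_null(rng) returns one dataset drawn
    under the null; compute_stat maps a dataset to the statistic."""
    sims = np.array([compute_stat(simulate_null(rng)) for _ in range(n_boot)])
    # add-one adjustment keeps the p-value strictly positive
    return (1 + np.sum(sims >= stat_obs)) / (1 + n_boot)
```

For the irregular case, `simulate_null` would need to impose the null with the unidentified nuisance parameters handled as the chapter prescribes; the skeleton itself is agnostic to that choice.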
Evidence in favor of non-linearities is less pronounced when this period is excluded from the sample, and we use a linear cointegrated VAR to find that the WTI historically, and until 2011, has been weakly exogenous.