Confidence intervals based on cluster-robust covariance matrices can be constructed in many ways. In addition to conventional intervals obtained by inverting Wald (t) tests, the paper studies intervals obtained by inverting LM tests, studentized bootstrap intervals based on the wild cluster bootstrap, and restricted bootstrap intervals obtained by inverting bootstrap Wald and LM tests. It also studies the choice of an auxiliary distribution for the wild bootstrap, a modified covariance matrix, proposed some years ago, based on transforming the residuals, and new wild bootstrap procedures based on the same idea. Some procedures perform extraordinarily well even when the number of clusters is small.
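The restricted wild cluster bootstrap the abstract refers to can be illustrated with a minimal sketch: a null-imposing bootstrap p-value for a slope coefficient, with Rademacher auxiliary weights drawn once per cluster. The intercept-plus-slope setup, variable names, and simulated data are illustrative assumptions, not the paper's actual procedures.

```python
# A minimal sketch of a restricted wild cluster bootstrap test of H0: slope = 0.
# Assumptions: a single regressor with intercept, Rademacher auxiliary weights,
# an unstudentized statistic; the paper's procedures are more refined.
import numpy as np

def ols_slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def wild_cluster_bootstrap_p(x, y, clusters, B=999, seed=0):
    rng = np.random.default_rng(seed)
    slope = ols_slope(x, y)
    # Impose the null: restricted residuals from the intercept-only fit
    resid0 = y - y.mean()
    ids = np.unique(clusters)
    count = 0
    for _ in range(B):
        # One Rademacher weight per cluster, applied to that cluster's residuals
        w = rng.choice([-1.0, 1.0], size=len(ids))
        wmap = dict(zip(ids, w))
        y_star = y.mean() + resid0 * np.array([wmap[c] for c in clusters])
        if abs(ols_slope(x, y_star)) >= abs(slope):
            count += 1
    return (count + 1) / (B + 1)

# Toy panel: 10 clusters of 20 observations, no true relationship
rng = np.random.default_rng(1)
clusters = np.repeat(np.arange(10), 20)
x = rng.normal(size=200) + rng.normal(size=10)[clusters]  # cluster-correlated
y = rng.normal(size=200) + rng.normal(size=10)[clusters]
p = wild_cluster_bootstrap_p(x, y, clusters)
```

Drawing the auxiliary weight at the cluster level (rather than per observation) is what preserves the within-cluster dependence of the residuals in each bootstrap sample.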
We propose double bootstrap methods to test the mean-variance efficiency hypothesis when multiple portfolio groupings of the test assets are considered jointly rather than individually. A direct test of the joint null hypothesis may not be possible with standard methods when the total number of test assets grows large relative to the number of available time-series observations, since the estimate of the disturbance covariance matrix eventually becomes singular. The suggested residual bootstrap procedures based on combining the individual group p-values avoid this problem while controlling the overall significance level. Simulation and empirical results illustrate the usefulness of the joint mean-variance efficiency tests.
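The idea of combining group-level p-values into one joint test can be sketched in miniature. The version below calibrates the minimum p-value by simulation under an assumed independence of the groups; the paper's residual bootstrap instead recomputes the group tests on resampled data, which handles dependence.

```python
# A simplified sketch of p-value combination via a calibrated minimum-p statistic.
# Assumption: under the joint null the group p-values are independent Uniform(0,1);
# the paper's double bootstrap avoids this simplification.
import numpy as np

def min_p_combined(p_values, B=10000, seed=0):
    """Overall p-value for the joint null, calibrating min(p) by simulation."""
    rng = np.random.default_rng(seed)
    k = len(p_values)
    observed = min(p_values)
    # Simulate the null distribution of the minimum of k uniform p-values
    sims = rng.uniform(size=(B, k)).min(axis=1)
    return (np.sum(sims <= observed) + 1) / (B + 1)

# Three portfolio groupings with individual p-values 0.03, 0.40, 0.75
p_joint = min_p_combined([0.03, 0.40, 0.75])
```

The combined p-value exceeds the smallest individual one (here roughly 1 − 0.97³ ≈ 0.087 versus 0.03), which is exactly the multiplicity correction that controls the overall significance level.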
This paper proposes and discusses an instrumental variable estimator that can be of particular relevance when many instruments are available and/or the number of instruments is large relative to the total number of observations. Intuition and recent work (see, e.g., Hahn, 2002) suggest that parsimonious devices used in the construction of the final instruments may provide effective estimation strategies. Shrinkage is a well-known approach that promotes parsimony. We consider a new shrinkage 2SLS estimator. We derive a consistency result for this estimator under general conditions, and via Monte Carlo simulation show that this estimator has good potential for inference in small samples.
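A shrinkage-type 2SLS estimator of the general flavour described above can be sketched as a ridge first stage followed by the usual second stage. The ridge penalty, the data-generating process, and the omission of intercepts are all illustrative assumptions; this is not the paper's estimator.

```python
# Hedged sketch of a shrinkage 2SLS: ridge regression in the first stage to form
# fitted instruments, then the IV second stage. The penalty value and simulated
# design are illustrative assumptions only.
import numpy as np

def shrinkage_2sls(y, x, Z, lam=1.0):
    """Second-stage slope using a ridge first stage (no intercepts, for brevity)."""
    k = Z.shape[1]
    # First stage: ridge coefficients (Z'Z + lam*I)^{-1} Z'x
    pi_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ x)
    x_hat = Z @ pi_hat
    # Second stage: IV estimate using the shrunken fitted values as instrument
    return float((x_hat @ y) / (x_hat @ x))

# Many instruments relative to the sample size
rng = np.random.default_rng(0)
n, k = 100, 40
Z = rng.normal(size=(n, k))
u = rng.normal(size=n)
x = Z[:, 0] + 0.5 * u + rng.normal(size=n)  # endogenous regressor
y = 2.0 * x + u                             # true slope is 2
beta = shrinkage_2sls(y, x, Z, lam=10.0)
```

Shrinking the first-stage coefficients toward zero is one way to tame the many-instrument bias that plain 2SLS suffers when k is large relative to n.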
We review several exact sign-based tests that have been recently proposed for testing orthogonality between random variables in the context of linear and nonlinear regression models. The sign tests are very useful when the data at hand contain few observations, are robust against heteroskedasticity of unknown form, and can be used in the presence of non-Gaussian errors. These tests are also flexible since they do not require the existence of moments for the dependent variable, and there is no need to specify the nature of the feedback between the dependent variable and the current and future values of the independent variable. Finally, we discuss several applications where the sign-based tests can be used to test for multi-horizon predictability of stock returns and for market efficiency.
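The basic mechanics of an exact sign test of orthogonality can be shown in a few lines: under the null, the signs of the cross-products between the regressor and the future dependent variable behave like fair coin flips, so the count of positive signs is Binomial(n, 1/2). The variable names and toy data are illustrative assumptions.

```python
# Minimal sketch of an exact sign test of orthogonality between x and y.
# Assumption: under H0 the signs of the nonzero cross-products x_i * y_i are
# i.i.d. with probability 1/2, so their count is exactly Binomial(n, 1/2).
from math import comb

def sign_test_p(x, y):
    """Exact two-sided p-value based on the signs of the cross-products."""
    products = [xi * yi for xi, yi in zip(x, y) if xi * yi != 0]
    n = len(products)
    s = sum(1 for v in products if v > 0)
    # Two-sided p-value from the exact Binomial(n, 1/2) distribution
    tail = sum(comb(n, j) for j in range(min(s, n - s) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Toy example: 8 of 10 cross-products are positive
x = [1, 1, 1, 1, 1, 1, 1, 1, -1, -1]
y = [1] * 10
p = sign_test_p(x, y)  # 2 * (C(10,0)+C(10,1)+C(10,2)) / 2**10 = 0.109375
```

Because the null distribution is exact for any sample size and requires no moment conditions, the test retains its level with few observations and heavy-tailed errors, which is precisely the setting the abstract describes.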
In this paper, we propose a novel entropy-based resampling scheme valid for non-stationary data. In particular, we identify the reason for the failure of the original entropy-based algorithm of Vinod and López-de Lacalle (2009) to be the perfect rank correlation between the actual and bootstrapped time series. We propose the Maximum Entropy Block Bootstrap, which preserves the rank correlation locally. Further, we also introduce the Maximum non-extensive Entropy Block Bootstrap to allow for fat-tailed behaviour in time series. Finally, we show the good finite-sample properties of the proposed methods via a Monte Carlo analysis in which we bootstrap the distribution of the Dickey-Fuller test.
Standard kernel density estimation methods are very often used in practice to estimate density functions. They work well in numerous cases. However, they are known not to work so well with skewed, multimodal, and heavy-tailed distributions. Such features are usual with income distributions, which are defined over the positive support. In this paper, we show that a preliminary logarithmic transformation of the data, combined with standard kernel density estimation methods, can provide a much better fit of the density.
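The log-transform idea can be sketched directly: fit a standard Gaussian KDE to the log-incomes, then map the estimate back to the original scale with the change-of-variables formula f_X(x) = f_Y(log x) / x. The simulated lognormal sample stands in for real income data.

```python
# Sketch of KDE after a preliminary log transformation. The data are simulated
# (an assumption standing in for real incomes); the back-transformation uses
# the change of variables f_X(x) = f_Y(log x) / x on the positive support.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.0, sigma=0.8, size=2000)  # skewed, positive

kde_log = gaussian_kde(np.log(incomes))  # standard KDE on the log scale

def density(x):
    """Back-transformed density estimate on the original income scale."""
    x = np.asarray(x, dtype=float)
    return kde_log(np.log(x)) / x

grid = np.linspace(incomes.min(), incomes.max(), 500)
fx = density(grid)
# Trapezoid-rule mass over the sample range (should be close to 1)
total = float(np.sum((fx[1:] + fx[:-1]) * np.diff(grid)) / 2)
```

Working on the log scale symmetrizes the skew, so a single global bandwidth is adequate, and the back-transformed estimate is automatically zero below the origin, avoiding the boundary bias a direct KDE suffers on positive data.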
The natural rate of interest is an unobservable entity and its measurement presents some important empirical challenges. In this paper, we use identification-robust methods and central bank real-time staff projections to obtain estimates for the equilibrium real rate from contemporaneous and forward-looking Taylor-type interest rate rules. The methods notably account for the potential presence of endogeneity, under-identification, and errors-in-variables concerns.
Our applications are conducted on Canadian data. The results reveal some important identification difficulties associated with some of our models, reinforcing the need to use identification-robust methods to estimate such policy functions. Despite these challenges, we are able to obtain fairly comparable point estimates for the real equilibrium interest rate across our different models, and, in the case of the best-fitting model, remarkably precise estimates.
In this paper we study the selection of the number of primitive shocks in exact and approximate factor models in the presence of structural instability. The empirical analysis shows that the estimated number of factors varies substantially across several selection methods and over the last 30 years in standard large macroeconomic and financial panels. Using Monte Carlo simulations, we suggest that structural instability, in the form of time-varying factor loadings, can alter the estimation of the number of factors, thereby providing an explanation for the empirical findings.
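One standard selection method of the kind compared above is a Bai–Ng-type information criterion, which trades off the residual variance left by k principal-component factors against a penalty in k. The sketch below uses one common form of the criterion on a simulated stable panel; the criterion's exact normalization and the simulated design are assumptions.

```python
# Hedged sketch: selecting the number of factors with a Bai-Ng-type information
# criterion (the ICp1 form) on a simulated panel with 3 true factors. Details of
# the penalty follow one common convention; the paper compares several methods.
import numpy as np

def ic_num_factors(X, kmax=8):
    """Estimate the number of factors in a T x N panel by minimizing ICp1."""
    T, N = X.shape
    X = X - X.mean(axis=0)  # demean each series
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    best_k, best_ic = 0, np.inf
    for k in range(1, kmax + 1):
        common = (U[:, :k] * s[:k]) @ Vt[:k, :]   # k-factor common component
        V = ((X - common) ** 2).mean()             # residual variance V(k)
        penalty = k * (N + T) / (N * T) * np.log(N * T / (N + T))
        ic = np.log(V) + penalty
        if ic < best_ic:
            best_k, best_ic = k, ic
    return best_k

# Stable panel: T = 200 periods, N = 100 series, 3 strong factors
rng = np.random.default_rng(0)
T, N, r = 200, 100, 3
F = rng.normal(size=(T, r))
L = rng.normal(size=(N, r))
X = F @ L.T + rng.normal(size=(T, N))
k_hat = ic_num_factors(X)
```

With stable loadings and strong factors the criterion recovers the true number; the paper's point is that time-varying loadings break exactly this kind of recovery.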
We analyze factor models based on the Arbitrage Pricing Theory (APT), using identification-robust inference methods. Such models involve nonlinear reduced-rank restrictions whose identification may raise serious non-regularities and lead to a failure of standard asymptotic theory. We build confidence sets for structural parameters based on inverting Hotelling-type pivotal statistics. These confidence sets provide much more information than the corresponding tests. Our approach may be interpreted as a multivariate extension of the Fieller method for inference on mean ratios. We also introduce a formal definition for a redundant factor, linking the presence of such factors to unbounded confidence sets, and we document their perverse effects on minimum-root-based model tests.
Results are applied to multifactor asset-pricing models with Canadian data, the Fama-French-Carhart benchmarks and monthly returns of 25 portfolios from 1991 to 2010. Despite evidence of weak identification, several findings deserve notice when data are analyzed over ten-year subperiods. With equally weighted portfolios, the three-factor model is rejected before 2000, but weakly supported thereafter. In contrast, the three-factor model is not rejected with value-weighted portfolios. Interestingly, in this case, the market factor is priced before 2000 along with size, while both Fama-French factors are priced thereafter. The momentum factor severely compromises identification, which calls for caution in interpreting existing work documenting momentum effects on the Canadian market. This empirical analysis underscores the practical usefulness of our analytical confidence sets.
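The test-inversion logic behind these confidence sets is easiest to see in the classical scalar Fieller problem the abstract invokes: the confidence set for a ratio of means is the collection of ratio values not rejected by a t-test. The grid, sample sizes, and simulated data below are illustrative assumptions; the paper's multivariate construction is far more general.

```python
# Hedged sketch: the scalar Fieller confidence set for a mean ratio, obtained by
# inverting the t-test of H0: mu_y - theta * mu_x = 0 over a grid of theta.
# Grid range, data, and level are illustrative assumptions.
import numpy as np
from scipy.stats import t as t_dist

def fieller_set(x, y, level=0.95, grid=None):
    """Values of theta not rejected by the t-test that mean(y - theta*x) = 0."""
    if grid is None:
        grid = np.linspace(-10, 10, 2001)
    n = len(x)
    crit = t_dist.ppf(0.5 + level / 2, df=n - 1)
    accepted = []
    for theta in grid:
        d = y - theta * x
        tstat = d.mean() / (d.std(ddof=1) / np.sqrt(n))
        if abs(tstat) <= crit:
            accepted.append(theta)
    return np.array(accepted)

rng = np.random.default_rng(0)
x = 2.0 + 0.1 * rng.normal(size=50)  # true mean ratio mu_y / mu_x = 1.5
y = 3.0 + 0.1 * rng.normal(size=50)
cs = fieller_set(x, y)
```

Because the set is whatever survives the inversion, it can be an interval, a half-line, or the whole grid; the unbounded case is exactly what the abstract associates with a redundant (unidentified) factor.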