Abstract
Bias-corrected bootstrap confidence intervals explicitly account for the bias and skewness of the small-sample distribution of the impulse response estimator, while retaining asymptotic validity in stationary autoregressions. Monte Carlo simulations for a wide range of bivariate models show that in small samples bias-corrected bootstrap intervals tend to be more accurate than delta method intervals, standard bootstrap intervals, and Monte Carlo integration intervals. This conclusion holds for VAR models estimated in levels, as deviations from a linear time trend, and in first differences. It also holds for random walk processes and cointegrated processes estimated in levels. An empirical example shows that bias-corrected bootstrap intervals may imply economic interpretations of the data that are substantively different from standard methods. © 1998 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology
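The abstract describes bias-corrected bootstrap confidence intervals for impulse responses but gives no algorithmic detail. As an illustration only, here is a minimal sketch of the general idea for the simplest possible case, an AR(1) with impulse response phi**h: a first bootstrap pass estimates the small-sample bias of the OLS coefficient, and a second pass builds a percentile interval from bias-corrected replicates. All function names, the clipping rule for stationarity, and the two-stage layout are assumptions of this sketch, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_ar1(y):
    """OLS estimate of phi in y_t = phi * y_{t-1} + e_t (no intercept)."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def bias_corrected_bootstrap_ci(y, horizon=4, reps=999, alpha=0.10):
    """Percentile interval for the AR(1) impulse response phi**horizon,
    using a simple bootstrap bias correction of phi (illustrative only)."""
    phi_hat = ols_ar1(y)
    resid = y[1:] - phi_hat * y[:-1]
    resid = resid - resid.mean()
    n = len(y)

    def simulate(phi):
        # Resample centered residuals and rebuild a series of length n.
        e = rng.choice(resid, size=n)
        ys = np.empty(n)
        ys[0] = y[0]
        for t in range(1, n):
            ys[t] = phi * ys[t - 1] + e[t]
        return ys

    # Stage 1: bootstrap estimate of the small-sample bias of phi_hat.
    phis = np.array([ols_ar1(simulate(phi_hat)) for _ in range(reps)])
    bias = phis.mean() - phi_hat
    phi_bc = np.clip(phi_hat - bias, -0.999, 0.999)  # keep stationary

    # Stage 2: resample around the bias-corrected estimate and
    # bias-correct each replicate before computing the impulse response.
    irfs = []
    for _ in range(reps):
        phi_star = ols_ar1(simulate(phi_bc))
        irfs.append(np.clip(phi_star - bias, -0.999, 0.999) ** horizon)
    lo, hi = np.quantile(irfs, [alpha / 2, 1 - alpha / 2])
    return phi_bc ** horizon, (lo, hi)

# Simulate a persistent AR(1) (phi = 0.9) with a short sample, where the
# downward OLS bias that motivates the correction is most visible.
phi_true, n = 0.9, 120
e = rng.standard_normal(n)
y = np.empty(n)
y[0] = e[0]
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + e[t]

point, (lo, hi) = bias_corrected_bootstrap_ci(y)
print(point, lo, hi)
```

In a VAR setting the same logic applies to the companion-matrix coefficients, but with the added bookkeeping of multivariate residual resampling and orthogonalized responses; this scalar sketch only conveys the two-stage structure.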
