Evaluating a Model by Forecast Performance
- 6 December 2005
- journal article
- Published by Wiley in Oxford Bulletin of Economics and Statistics
- Vol. 67 (s1), pp. 931-956
- https://doi.org/10.1111/j.1468-0084.2005.00146.x
Abstract
Although out-of-sample forecast performance is often deemed to be the 'gold standard' of evaluation, it is not in fact a good yardstick for evaluating models in general. The arguments are illustrated with reference to a recent paper by Carruth, Hooker and Oswald [Review of Economics and Statistics (1998), Vol. 80, pp. 621-628], who suggest that the good dynamic forecasts of their model support the efficiency-wage theory on which it is based.
This publication has 32 references indexed in Scilit:
- The Use and Abuse of Real-Time Data in Economic Forecasting, The Review of Economics and Statistics, 2003
- Data mining reconsidered: encompassing and the general-to-specific approach to specification search, The Econometrics Journal, 1999
- Unemployment Equilibria and Input Prices: Theory and Evidence from the United States, The Review of Economics and Statistics, 1998
- Evidence on Structural Instability in Macroeconomic Time Series Relations, Journal of Business & Economic Statistics, 1996
- The Demand for M1 in the USA: A Reply to James M. Boughton, The Economic Journal, 1993
- The Demand for M1 in the United States: A Comment on Baba, Hendry and Starr, The Economic Journal, 1993
- Oil and the Macroeconomy since World War II, Journal of Political Economy, 1983
- The behaviour of inconsistent instrumental variables estimators in dynamic systems with autocorrelated errors, Journal of Econometrics, 1979
- Forecasting with Econometric Methods: A Comment, The Journal of Business, 1978
- Investigating Causal Relations by Econometric Models and Cross-spectral Methods, Econometrica, 1969