ON THE IMPLICATIONS OF SPECIFICATION UNCERTAINTY IN FORECASTING*

Abstract
Forecasters typically select a statistical forecasting model from among a set of alternative models. Subsequently, forecasts are generated with the chosen model and reported to management (forecast consumers) as if specification uncertainty did not exist (i.e., as if the chosen model were the “true” model of the forecast variable). In this note, a well‐known Bayesian model‐comparison procedure is used to illustrate some of the ambiguities and distortions of forecasts that do not reflect specification uncertainty. It is shown that a single selected forecasting model (however chosen) will generally misstate measures of forecast risk and lead to point and interval forecasts that are misplaced from a decision‐theoretic point of view.
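The abstract's central claim — that a single selected model understates forecast risk relative to a model-averaged forecast — can be sketched numerically. The following is a minimal illustration, not the paper's procedure: it assumes two hypothetical candidate models with made-up posterior probabilities, point forecasts, and forecast variances, and computes the Bayesian model-averaged (mixture) forecast mean and variance.

```python
# Illustrative sketch (hypothetical numbers): Bayesian model averaging
# of two candidate models' predictive distributions.
post_prob = [0.6, 0.4]   # posterior model probabilities (assumed)
means = [10.0, 14.0]     # each model's point forecast (assumed)
variances = [4.0, 4.0]   # each model's forecast variance (assumed)

# Model-averaged point forecast: probability-weighted mean.
bma_mean = sum(p * m for p, m in zip(post_prob, means))

# Mixture variance = expected within-model variance
#                  + between-model dispersion of point forecasts.
bma_var = sum(p * (v + (m - bma_mean) ** 2)
              for p, m, v in zip(post_prob, means, variances))

print(bma_mean)  # 11.6
print(bma_var)   # 7.84 > 4.0: either single model understates risk
```

The between-model term `(m - bma_mean) ** 2` is exactly the component of forecast risk that vanishes when one model is reported as if it were the true model, which is the distortion the note highlights.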
