ON THE IMPLICATIONS OF SPECIFICATION UNCERTAINTY IN FORECASTING*
- 1 January 1982
- journal article
- Published by Wiley in Decision Sciences
- Vol. 13 (1), 176-184
- https://doi.org/10.1111/j.1540-5915.1982.tb00141.x
Abstract
Forecasters typically select a statistical forecasting model from among a set of alternative models. Subsequently, forecasts are generated with the chosen model and reported to management (forecast consumers) as if specification uncertainty did not exist (i.e., as if the chosen model were the “true” model of the forecast variable). In this note, a well-known Bayesian model-comparison procedure is used to illustrate some of the ambiguities and distortions of forecasts that do not reflect specification uncertainty. It is shown that a single selected forecasting model (however chosen) will generally misstate measures of forecast risk and lead to point and interval forecasts that are misplaced from a decision-theoretic point of view.
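To make the abstract's point concrete, the following is a minimal sketch (not taken from the paper) of the contrast it describes: forecasts from a single selected model versus forecasts from a posterior-probability-weighted mixture over the candidate models. The two candidate Gaussian models, the simulated data, and the BIC approximation to the posterior model probabilities are all illustrative assumptions, standing in for whatever model set and Bayesian model-comparison procedure a forecaster actually uses.

```python
# Sketch: single selected model vs. posterior-weighted mixture of model
# predictives. Models, data, and the BIC weight approximation are
# illustrative assumptions, not the paper's own setup.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative data: an AR(1) series.
T = 40
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.normal(scale=1.0)

# Candidate models with Gaussian errors (variances estimated by ML):
# M1: constant mean, y_t = mu + e_t
# M2: AR(1) via OLS, y_t = phi * y_{t-1} + e_t
def fit_mean(y):
    mu = y.mean()
    resid = y - mu
    sigma2 = resid.var()
    loglik = stats.norm.logpdf(resid, scale=np.sqrt(sigma2)).sum()
    return loglik, 2, mu, sigma2          # (loglik, #params, point forecast, pred. variance)

def fit_ar1(y):
    x, z = y[:-1], y[1:]
    phi = (x @ z) / (x @ x)
    resid = z - phi * x
    sigma2 = resid.var()
    loglik = stats.norm.logpdf(resid, scale=np.sqrt(sigma2)).sum()
    return loglik, 2, phi * y[-1], sigma2

fits = [fit_mean(y), fit_ar1(y)]

# Posterior model probabilities via the BIC approximation to the marginal
# likelihood, with equal prior model probabilities assumed.
n = len(y)
bic = np.array([-2 * ll + k * np.log(n) for ll, k, *_ in fits])
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()

# "Single selected model" forecast: act as if the highest-probability model
# were the true one and report its 90% predictive interval.
best = int(np.argmax(w))
mu_b, s_b = fits[best][2], np.sqrt(fits[best][3])
single_interval = stats.norm.interval(0.90, loc=mu_b, scale=s_b)

# Mixture predictive: sample each model's Gaussian predictive in proportion
# to its posterior weight, then read off the point forecast and quantiles.
draws = np.concatenate([
    rng.normal(m, np.sqrt(s2), size=int(20000 * wi))
    for wi, (_, _, m, s2) in zip(w, fits)
])
mix_point = draws.mean()
mix_interval = np.percentile(draws, [5, 95])

print("posterior model weights:", np.round(w, 3))
print("selected-model 90% interval:", np.round(single_interval, 3))
print("mixture point / 90% interval:", round(mix_point, 3), np.round(mix_interval, 3))
```

Unless one model carries essentially all of the posterior probability, the mixture's point forecast and interval differ from those of the single selected model, which is the sense in which a lone chosen model misstates forecast risk.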