Abstract
This paper reviews the growing literature on population forecasting to examine a curious paradox: despite continuing refinements in the specification of models used to represent population dynamics, simple exponential growth models, it is claimed, continue to outperform these more complex models in forecasting exercises. Shrinking a large complex model in order to simplify it typically involves two processes: aggregation and decomposition. Both processes are known to introduce biases into the resulting representations of population dynamics. Thus it is difficult to accept the conclusion that simple models outperform complex ones. Moreover, assessments of forecasting performance are notoriously difficult to carry out, because they inevitably depend not only on the models used but also on the particular historical periods selected for examination. For example, the accuracy of the Census Bureau's forecasting efforts apparently has improved during the past two decades. How much of this improvement is due to improved methods, and how much to decreased variability in the components of change? Clearly one needs to introduce the dimension of “degree of difficulty” into each assessment. This paper reviews some of the recent debate on the simple-versus-complex modeling issue and links it to the questions of model bias and distributional momentum impacts.
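For concreteness, the simple exponential growth model referred to above takes the familiar textbook form (stated here as a standard illustration, not as a formula quoted from the paper itself):

$$P(t) = P(0)\,e^{rt},$$

where $P(t)$ is the population at time $t$ and $r$ is a constant growth rate estimated from past observations. The more complex alternatives at issue disaggregate this single rate, for example by age, region, or component of change (births, deaths, migration).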
