Assessing Forecast Skill through Cross Validation

Abstract
This study explains the method of cross validation for assessing the forecast skill of empirical prediction models. Cross validation provides a relatively accurate measure of an empirical procedure's ability to produce a useful prediction rule from a historical dataset. The method works by omitting observations and then measuring “hindcast” errors from attempts to predict these missing observations from the remaining data. The idea is to remove the information about the omitted observations that would be unavailable in real forecast situations and to determine how well the chosen procedure selects prediction rules when such information is deleted. The authors examine the methodology of cross validation and its potential pitfalls in practical applications through a set of examples. The concepts behind cross validation are quite general and need to be considered whenever empirical forecast methods, regardless of their sophistication, are employed.
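To make the procedure concrete, the following is a minimal sketch of the leave-one-out variant of cross validation described in the abstract. It assumes, purely for illustration, that the empirical prediction rule is a simple least-squares linear regression; the paper itself treats the idea more generally. Each observation is omitted in turn, the rule is refit on the remaining data, and the hindcast error at the omitted point is recorded.

```python
import numpy as np

def loocv_hindcast_errors(x, y):
    """Leave-one-out cross validation for a simple linear prediction rule.

    For each observation i, refit the regression y ~ a + b*x on the
    remaining n-1 points, then hindcast the omitted y[i] and record the
    error. The omitted point contributes no information to the fit,
    mimicking a real forecast situation.
    """
    n = len(x)
    errors = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i                 # mask omitting point i
        b, a = np.polyfit(x[keep], y[keep], 1)   # refit on remaining data
        errors[i] = y[i] - (a + b * x[i])        # hindcast error at point i
    return errors

# Synthetic example (hypothetical data): a weak linear signal plus noise.
rng = np.random.default_rng(0)
x = rng.normal(size=40)
y = 0.3 * x + rng.normal(size=40)

cv_errors = loocv_hindcast_errors(x, y)
fit_errors = y - np.polyval(np.polyfit(x, y, 1), x)  # in-sample residuals

print("in-sample RMSE:      ", np.sqrt(np.mean(fit_errors**2)))
print("cross-validated RMSE:", np.sqrt(np.mean(cv_errors**2)))
```

Comparing the two printed values illustrates the abstract's central point: the cross-validated error estimate is typically larger than the apparent in-sample error, because the latter reuses information that would be unavailable in a real forecast.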
