Abstract
A 10-km-grid-spacing version of NCEP's Eta Model was used to simulate 11 warm-season convective systems occurring over the U.S. upper Midwest. Quantitative precipitation forecasts (QPFs) from the model, valid for 6-h periods, were verified against 4-km-grid-spacing stage IV precipitation estimates. Verification was first performed on the model's 10-km grid by areally averaging the 4-km observations onto the model grid. To investigate and quantify the impact of the verification grid-box size on some standard skill scores, verification was also performed by averaging the 10-km model forecasts onto 30-km grid boxes and then areally averaging the observations onto the same 30-km grid. As a final test of the impact of the verifying grid-box size, the same 11 events were simulated with a 30-km version of the Eta Model, and verification was then performed on this 30-km grid. For all cases in both the 10- and 30-km versions of the model, 12 model variations were run, involving either (i) modifications to the initial conditions to better represent mesoscale features present at the initialization time or (ii) changes in moist physics. Equitable threat scores (ETSs) increased when verification occurred on a coarser grid, whether the coarser grid was created by averaging the 10-km model results or was the grid used in the 30-km model runs. This result suggests that it may be difficult to show improved skill scores as model resolution improves if verification is performed on the model's own increasingly fine grid. The use of different verification resolutions does not, however, change the general impact on ETSs of variations in model physics or initial conditions. The sensitivity of ETSs to verifying grid-box size does vary somewhat among model variants using different moist-physics formulations or initialization procedures.
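The two operations at the core of the verification, areal averaging of precipitation fields onto a coarser grid and computation of the ETS from the resulting yes/no contingency counts, can be summarized compactly. The ETS (Gilbert skill score) at a given threshold is ETS = (a - a_r) / (a + b + c - a_r), where a, b, and c are hits, false alarms, and misses, and a_r = (a + b)(a + c) / N is the number of hits expected by chance over N grid boxes. The Python sketch below illustrates both steps under simplifying assumptions (regular grids whose spacings nest evenly, a coarsening factor of 3 for the 10-km to 30-km step, and placeholder precipitation arrays); it is an illustration only, not the verification code used in the study.

    # Sketch of upscaling verification: block-average fine-grid fields onto a
    # coarser grid, then score threshold exceedance with the ETS.
    import numpy as np

    def block_average(field, factor):
        """Areally average a 2-D precipitation field onto a coarser grid
        whose boxes span `factor` x `factor` fine-grid cells."""
        ny, nx = field.shape
        ny_c, nx_c = ny // factor, nx // factor
        trimmed = field[:ny_c * factor, :nx_c * factor]
        return trimmed.reshape(ny_c, factor, nx_c, factor).mean(axis=(1, 3))

    def equitable_threat_score(forecast, observed, threshold):
        """Gilbert skill score (ETS) for exceedance of `threshold` (mm per 6 h)."""
        f = forecast >= threshold
        o = observed >= threshold
        hits = np.sum(f & o)
        false_alarms = np.sum(f & ~o)
        misses = np.sum(~f & o)
        n = f.size
        hits_random = (hits + false_alarms) * (hits + misses) / n  # chance hits
        return (hits - hits_random) / (hits + false_alarms + misses - hits_random)

    # Example: verify a 10-km forecast on a 30-km grid (coarsening factor of 3),
    # using placeholder random fields in place of model output and observations.
    fcst_10km = np.random.gamma(0.5, 2.0, size=(300, 300))
    obs_10km = np.random.gamma(0.5, 2.0, size=(300, 300))
    ets_30km = equitable_threat_score(block_average(fcst_10km, 3),
                                      block_average(obs_10km, 3),
                                      threshold=2.5)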