On the Reliability of Meta-Analytic Reviews

Abstract
The article addresses the issue of intercoder reliability in meta-analyses. The current practice of reporting a single mean intercoder agreement score in meta-analytic research introduces systematic bias and overestimates the true reliability. An alternative approach is recommended in which average intercoder agreement scores, or other reliability statistics, are calculated within clusters of coded variables. These clusters form a hierarchy in which the correctness of coding decisions at a given level is contingent on decisions made at higher levels. Two separate studies of intercoder agreement in meta-analysis are presented to assess the validity of the model.
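The cluster-wise reporting approach the abstract recommends can be sketched in code. The following is a minimal illustration, not the authors' procedure: it computes percentage agreement between two coders for each variable, then averages within each cluster of variables rather than pooling everything into one grand mean. All variable names, cluster labels, and data below are hypothetical.

```python
def cluster_agreement(codes_a, codes_b, clusters):
    """Mean intercoder percentage agreement within each cluster of variables.

    codes_a, codes_b: dicts mapping variable name -> list of values assigned
                      by coder A and coder B to the same sequence of studies
    clusters: dict mapping cluster name -> list of variable names
    Returns a dict mapping cluster name -> mean proportion agreement.
    """
    results = {}
    for cluster, variables in clusters.items():
        per_variable = []
        for var in variables:
            a, b = codes_a[var], codes_b[var]
            # Proportion of studies on which the two coders agree for this variable
            matches = sum(x == y for x, y in zip(a, b))
            per_variable.append(matches / len(a))
        # Average agreement across the variables in this cluster only
        results[cluster] = sum(per_variable) / len(per_variable)
    return results


# Hypothetical codings of three studies by two coders
codes_a = {"design": [1, 1, 2], "sample_type": [0, 1, 1], "effect_size": [3, 2, 2]}
codes_b = {"design": [1, 2, 2], "sample_type": [0, 1, 0], "effect_size": [3, 2, 1]}
clusters = {
    "study_descriptors": ["design", "sample_type"],
    "outcomes": ["effect_size"],
}
print(cluster_agreement(codes_a, codes_b, clusters))
```

Reporting one figure per cluster, instead of a single mean over all variables, keeps high-agreement descriptive variables from masking low agreement on the judgment-laden variables lower in the coding hierarchy.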