Abstract
Currently, schools are investing substantial funds in integrated learning systems (I.L.S.'s): networked, comprehensive basic-skills software from a single vendor. Although rational arguments can be made for the effectiveness of I.L.S.'s, districts want, and vendors are supplying, empirical evidence for decision making. This article reanalyzes the results reported in thirty evaluations of I.L.S.'s, using a common “effect size” statistic and correcting, where possible, for deficiencies in the original designs and reports. Some studies (including the most widely cited) substantially overstate I.L.S. effectiveness. On average, I.L.S.'s show a moderately positive effect on student achievement. However, the poor quality of most evaluations, and the likely bias in which results get reported at all, make the existing evidence too weak a platform for district purchasing decisions.
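The abstract does not specify which effect-size formula the reanalysis uses; the standard choice in evaluation syntheses of this kind is the standardized mean difference, sketched below as an illustration (the symbols are assumptions, not drawn from the article):

\[
d \;=\; \frac{\bar{X}_{\text{I.L.S.}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} \;=\; \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
\]

Here \(\bar{X}_{\text{I.L.S.}}\) and \(\bar{X}_{\text{control}}\) are the mean achievement scores of the treatment and comparison groups, and \(s_{\text{pooled}}\) pools their standard deviations. Expressing every study's outcome on this common scale is what allows results from thirty heterogeneous evaluations to be compared and averaged.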
