The benefits of imperfect diagnostic automation: a synthesis of the literature

Abstract
This literature review examines, in quantitative fashion, how the degree of imperfection or unreliability of diagnostic automation affects the performance of the human operator who consults both the automation and the raw data. Data from 20 studies were used to generate 35 data points comparing performance at varying levels of automation reliability with performance in a non-automated baseline condition. A regression analysis of benefits/costs relative to baseline revealed that benefits were a strong linear function of reliability. The analysis identified a reliability of 0.70 as the ‘crossover point’ below which unreliable automation was worse than no automation at all. It also revealed that performance was more strongly affected by reliability under high workload, implicating workload-imposed automation dependence in producing this relationship and suggesting that operators tend to protect the performance of concurrent tasks from the imperfection of diagnostic automation.
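
As an illustration of the kind of analysis described above, the following sketch fits a linear regression of performance benefit against automation reliability and solves for the reliability at which the fitted benefit crosses zero, i.e. the crossover point. The data values and variable names are invented for demonstration; they are not the authors' actual data points, code, or regression coefficients.

```python
# Hypothetical illustration of the regression described in the abstract.
# The (reliability, benefit) pairs below are invented for demonstration;
# they are NOT the 35 data points extracted from the 20 reviewed studies.
import numpy as np

reliability = np.array([0.55, 0.60, 0.67, 0.70, 0.75, 0.80, 0.87, 0.90, 0.95, 1.00])
benefit     = np.array([-0.12, -0.08, -0.02, 0.00, 0.04, 0.08, 0.13, 0.16, 0.20, 0.25])
# benefit > 0: performance better than the non-automated baseline
# benefit < 0: unreliable automation is worse than no automation at all

# Ordinary least-squares fit: benefit = slope * reliability + intercept
slope, intercept = np.polyfit(reliability, benefit, 1)

# Crossover point: the reliability at which the fitted benefit equals zero
crossover = -intercept / slope

print(f"fitted benefit = {slope:.3f} * reliability + {intercept:.3f}")
print(f"estimated crossover reliability: {crossover:.2f}")
```

With the invented numbers above the crossover lands near 0.70, mirroring the value reported in the review, but the purpose of the sketch is to show the procedure, not to reproduce the estimate.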