Measuring ineffectiveness
- 25 July 2004
- Conference paper
- Published by Association for Computing Machinery (ACM)
- pp. 562-563
- https://doi.org/10.1145/1008992.1009121
Abstract
An evaluation methodology that targets ineffective topics is needed to support research on obtaining more consistent retrieval across topics. Using average values of traditional evaluation measures is not an appropriate methodology because it emphasizes effective topics: poorly performing topics' scores are by definition small, and they are therefore difficult to distinguish from the noise inherent in retrieval evaluation. We examine two new measures that emphasize a system's worst topics. While these measures focus on aspects of retrieval behavior different from those captured by traditional measures, they are less stable than traditional measures, and their margin of error is large relative to the observed differences in scores.
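To make the contrast concrete, the following is a minimal, hypothetical Python sketch of the kind of worst-topics statistic the abstract alludes to: averaging per-topic scores over only the worst quartile of topics rather than over all topics. The function names, the quartile cutoff, and the example scores are illustrative assumptions, not the measures defined in the paper.

```python
def mean_score(topic_scores):
    """Traditional aggregate: the mean over all topics (e.g., MAP
    when the per-topic scores are average precision values)."""
    return sum(topic_scores) / len(topic_scores)

def worst_quartile_mean(topic_scores):
    """Worst-topics emphasis: the mean over only the lowest-scoring
    25% of topics, so the aggregate is driven by failures."""
    ranked = sorted(topic_scores)
    k = max(1, len(ranked) // 4)
    return sum(ranked[:k]) / k

if __name__ == "__main__":
    # Hypothetical per-topic average precision for a run over 8 topics.
    ap = [0.02, 0.05, 0.10, 0.30, 0.45, 0.55, 0.70, 0.83]
    print(f"mean over all topics:     {mean_score(ap):.3f}")
    print(f"mean over worst quartile: {worst_quartile_mean(ap):.3f}")
```

Note how the worst-quartile value sits near zero even when the overall mean looks respectable, which illustrates why small worst-topic scores are hard to separate from evaluation noise.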