Evaluating collaborative filtering recommender systems
- 1 January 2004
- journal article
- Published by Association for Computing Machinery (ACM) in ACM Transactions on Information Systems
- Vol. 22 (1) , 5-53
- https://doi.org/10.1145/963770.963772
Abstract
Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain, where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalence class were strongly correlated, while metrics from different equivalence classes were uncorrelated.
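The abstract's central empirical claim, that some accuracy metrics are strongly correlated across systems and therefore redundant, can be illustrated with a minimal sketch. The sketch below is not from the paper: the simulated per-user errors, the noise levels, and the helper names (`mae`, `rmse`, `pearson`) are all assumptions introduced here for illustration. It scores several hypothetical recommenders with two common accuracy metrics (MAE and RMSE) and then correlates the two metric vectors, the same style of analysis the article applies to many more metrics.

```python
import math
import random

def mae(errors):
    """Mean absolute error over a list of per-user prediction errors."""
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    """Root mean squared error over a list of per-user prediction errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def pearson(xs, ys):
    """Pearson correlation between two equal-length score vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
# Simulate per-user prediction errors for ten hypothetical recommenders,
# each with a different noise level (a stand-in for algorithm quality).
systems = [[random.gauss(0, 0.5 + 0.1 * k) for _ in range(500)]
           for k in range(10)]

maes = [mae(errs) for errs in systems]
rmses = [rmse(errs) for errs in systems]

# If MAE and RMSE belong to the same equivalence class, this correlation
# across systems should be high (typically near 1.0 in this setup).
print(pearson(maes, rmses))
```

Two metrics that rank the simulated systems almost identically add little evaluative information over one another, which is the practical point of grouping metrics into equivalence classes.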