Evaluating collaborative filtering recommender systems

If you are interested in recommender systems, you cannot miss “Evaluating collaborative filtering recommender systems” by Jonathan L. Herlocker, Joseph A. Konstan, Loren G. Terveen, and John T. Riedl.
You will need an ACM Web Account and some time (it runs 53 pages).

From the abstract:

Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalence class were strongly correlated, while metrics from different equivalence classes were uncorrelated.
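Below is a minimal Python sketch (not from the paper, and not its data) of the kind of metric-correlation analysis the abstract describes: score a handful of hypothetical recommender variants with several accuracy metrics, then correlate the metric scores across variants to see which metrics group together. The variant names, noise levels, and choice of metrics (MAE, RMSE, precision-at-10) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def mae(true, pred):
    # Mean absolute error between actual and predicted ratings.
    return np.mean(np.abs(true - pred))

def rmse(true, pred):
    # Root mean squared error; penalizes large errors more than MAE.
    return np.sqrt(np.mean((true - pred) ** 2))

def precision_at_k(true, pred, k=10, threshold=4.0):
    # Fraction of the top-k predicted items that were actually rated highly.
    top_k = np.argsort(pred)[::-1][:k]
    return np.mean(true[top_k] >= threshold)

# Hypothetical data: 500 ratings on a 1-5 scale, and predictions from five
# made-up recommender variants with increasing amounts of noise.
true_ratings = rng.integers(1, 6, size=500).astype(float)
variants = {
    f"variant_{i}": true_ratings + rng.normal(0.0, 0.5 + 0.3 * i, size=500)
    for i in range(5)
}

metrics = {"MAE": mae, "RMSE": rmse, "P@10": precision_at_k}
scores = np.array([[fn(true_ratings, pred) for pred in variants.values()]
                   for fn in metrics.values()])

# Pearson correlation between metrics across the variants. Metrics whose
# rows are strongly correlated (in magnitude) rank the variants the same
# way and behave like one equivalence class; the error metrics (MAE, RMSE)
# and the precision metric move in opposite directions, so their
# correlation is negative.
corr = np.corrcoef(scores)
for name, row in zip(metrics, np.round(corr, 2)):
    print(name, row)
```

In practice you would replace the synthetic noise with real test-set predictions from actual algorithms and cross-validation folds, which is closer to the analysis the authors report.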

2 thoughts on “Evaluating collaborative filtering recommender systems”

  1. Cai

    The article is really excellent and represents the current state of the art in research on RS evaluation frameworks. The entire ACM TOIS special issue on RS (Jan 2004), of which Herlocker’s work is a part, is excellent and recommended to everyone interested in the subject matter …

  2. Pingback: Web Dawn - Rebirth of the Social Marketplace

