The Economist on Collaborative Filtering

There is an article over at The Economist, "United we find", on Collaborative Filtering. It is worth noting that it also speculates on attacks against Recommender Systems. An interesting (and simple, as it should be) idea is the following:
Nolan Miller, of Harvard University’s Kennedy School of Government, and his colleagues (…) probabilistic techniques to determine whether a score is likely to be “honest”, by spotting unusual-looking patterns in scoring. Dozens of accounts created on the same day, all of which give high scores both to a bestseller and a new book, for example, might be an orchestrated attempt by a publisher to get fans of the former to buy the latter.

Fiddling the filters
A second concern about collaborative filtering is that as it grows in importance, people may increasingly try to manipulate it: publishers, for example, might start recommending their own books. Last November, Michael O’Mahony of University College, Dublin, published a paper demonstrating that even today’s most advanced collaborative filtering systems are not all that robust when subjected to malicious users seeking to subvert their ranking systems. None of the existing systems is explicitly designed to combat malicious use. Can such “recommendation spam” be prevented?

Nolan Miller, of Harvard University’s Kennedy School of Government, and his colleagues believe that it can, and have outlined a way to do it. Their scheme uses probabilistic techniques to determine whether a score is likely to be “honest”, by spotting unusual-looking patterns in scoring. Dozens of accounts created on the same day, all of which give high scores both to a bestseller and a new book, for example, might be an orchestrated attempt by a publisher to get fans of the former to buy the latter. Honest users are rewarded, and dishonest ones punished, through a points-based system akin to a loyalty scheme, so that honest users might earn discounts or store credit.

The scores used to compute recommendations are the ones corrected for honesty, not the original, potentially malicious scores. Dr Miller’s system is not yet ready for commercial application; it makes assumptions about the statistical distribution of people’s recommendations that may not correspond to their real-world behaviour, for example. But it points out a line of research that could preserve the integrity of collaborative-filtering systems under attack. If the rise of spam e-mail is any guide, it makes sense to think about such problems now, before they become widespread.

But even if the problems of privacy and dishonesty can be overcome, there may be a limit to how accurate the recommendations made by collaborative-filtering systems can be. This arises from the fact that people’s opinions change. You may enjoy a new album at first, and give it a good score, but change your mind after a few weeks once the novelty has worn off. But your old score still stands.
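As a side note, the pattern-spotting idea quoted above lends itself to a very rough sketch. The article gives no details of Dr Miller's actual method, so everything below (names, thresholds, the toy data) is made up for illustration: it simply flags clusters of accounts created on the same day that all give high scores to the same pair of items.

```python
from collections import defaultdict

# Toy rating records: (user_id, item_id, score, account_creation_date).
# All names, scores and thresholds here are hypothetical.
ratings = [
    ("u1", "bestseller", 5, "2005-03-01"),
    ("u1", "new_book",   5, "2005-03-01"),
    ("u2", "bestseller", 5, "2005-03-01"),
    ("u2", "new_book",   5, "2005-03-01"),
    ("u3", "other_book", 3, "2004-11-20"),
]

HIGH_SCORE = 4        # what counts as a "high" score (assumption)
MIN_CLUSTER_SIZE = 2  # how many same-day accounts look suspicious (assumption)

def suspicious_clusters(ratings, item_a, item_b):
    """Flag accounts created on the same day that all give high scores to
    both item_a and item_b -- a crude proxy for an orchestrated campaign."""
    created_on = {}
    high_rated = defaultdict(set)   # user -> items the user scored highly
    for user, item, score, created in ratings:
        created_on[user] = created
        if score >= HIGH_SCORE:
            high_rated[user].add(item)

    # Group the accounts that scored both target items highly by creation date.
    by_date = defaultdict(set)
    for user, items in high_rated.items():
        if item_a in items and item_b in items:
            by_date[created_on[user]].add(user)

    # Creation dates with many such accounts are flagged for closer inspection.
    return {date: users for date, users in by_date.items()
            if len(users) >= MIN_CLUSTER_SIZE}

print(suspicious_clusters(ratings, "bestseller", "new_book"))
# -> {'2005-03-01': {'u1', 'u2'}}
```

A real system would presumably combine many such signals in a probabilistic model rather than rely on a single hard threshold like this.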
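Similarly, the "scores corrected for honesty" step can be sketched under one assumption (mine, not the article's): that each rating comes with an estimated probability of being honest. Recommendations are then computed from honesty-weighted scores rather than the raw ones.

```python
# Toy ratings for one item, each carrying an estimated probability that the
# rating is honest. How that estimate is produced is the hard part and is not
# sketched here; the numbers below are made up.
ratings = [
    {"item": "new_book", "score": 5, "p_honest": 0.15},  # looks like a campaign
    {"item": "new_book", "score": 2, "p_honest": 0.95},
    {"item": "new_book", "score": 3, "p_honest": 0.90},
]

def honesty_weighted_score(ratings, item):
    """Average score for `item`, weighting each rating by its honesty estimate,
    so that suspect ratings contribute little to the recommendation."""
    relevant = [r for r in ratings if r["item"] == item]
    total_weight = sum(r["p_honest"] for r in relevant)
    if total_weight == 0:
        return None
    return sum(r["score"] * r["p_honest"] for r in relevant) / total_weight

print(honesty_weighted_score(ratings, "new_book"))
# About 2.7, whereas the raw mean of the three scores would be about 3.3.
```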
