
Paper accepted at AAAI05: “Controversial Users demand Local Trust Metrics: an Experimental Study on Epinions.com Community”

A paper of mine titled “Controversial Users demand Local Trust Metrics: an Experimental Study on Epinions.com Community” (pdf) got accepted for the Twentieth National Conference on Artificial Intelligence (AAAI-05)! Cool! The email I received this morning says “Your paper was one of 148 accepted to AAAI-05, out of 803 submissions. AAAI is a highly selective conference, and you are to be congratulated on your paper’s acceptance.” This means the acceptance rate is about 18%. Let me know if you like/dislike the paper or want to discuss its topic a bit. I think controversiality is an important theme, and I think there are too many papers that assume that every user/agent has a global goodness value that is the same for everyone (there are some users who are bad for everyone, and the goal of the technique is to spot them). This assumption is unrealistic: just think of Bush or Berlusconi … some people like them (yeah, I know it’s kinda incredible) and some others don’t. My paper hopefully provides some evidence for this intuitive phenomenon. You might also want to check other papers of mine.

Title: Controversial Users demand Local Trust Metrics: an Experimental Study on Epinions.com Community
Abstract: In today’s connected world it is possible and very common to interact with unknown people, whose reliability is unknown. Trust Metrics are a recently proposed technique for answering questions such as “Should I trust this user?”. However, most of the current research assumes that every user has a global quality score and that the goal of the technique is just to predict this correct value. We show, on data from a real and large user community, epinions.com, that such an assumption is not realistic because there is a significant portion of what we call controversial users, users who are trusted and distrusted by many. A global agreement about the trustworthiness value of these users cannot exist. We argue, using computational experiments, that the existence of controversial users (a normal phenomenon in societies) demands Local Trust Metrics, techniques able to predict the trustworthiness of a user in a personalized way, depending on the very personal view of the judging user.
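To make the idea concrete, here is a minimal sketch (not the metric or the data used in the paper; the usernames, thresholds and function names are made up for illustration) of how one might flag controversial users from a set of trust/distrust statements, and why a single global score serves them badly:

```python
from collections import defaultdict

# Toy trust statements: (source, target, value) with value = +1 (trust) or -1 (distrust).
# These are invented examples, not Epinions.com data.
statements = [
    ("alice", "carol", +1), ("bob", "carol", +1), ("dave", "carol", +1),
    ("alice", "eve", +1), ("bob", "eve", -1),
    ("dave", "eve", -1), ("frank", "eve", +1), ("grace", "eve", -1),
]

def controversial_users(statements, min_raters=3, min_minority=0.3):
    """Flag users who receive both many trust AND many distrust statements."""
    ratings = defaultdict(list)
    for _, target, value in statements:
        ratings[target].append(value)
    flagged = {}
    for target, values in ratings.items():
        if len(values) < min_raters:
            continue
        pos = sum(1 for v in values if v > 0) / len(values)
        minority = min(pos, 1 - pos)  # share of the less common judgement
        if minority >= min_minority:
            flagged[target] = minority
    return flagged

def global_prediction(statements, target):
    """A caricature of a global trust metric: one average score shown to every observer."""
    values = [v for _, t, v in statements if t == target]
    return sum(values) / len(values) if values else 0.0

print(controversial_users(statements))       # {'eve': 0.4} -> eve is controversial
print(global_prediction(statements, "eve"))  # -0.2, a poor fit for the two users who trust her
```

The point of the sketch is only this: for a user like “eve”, any single global value is wrong for a large share of the community, which is exactly why a local (personalized) trust metric is needed.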

Randomly-generated paper accepted for a conference!

Too funny, too sad. SCIgen is an Automatic Computer Science Paper Generator. The program (GPL-licensed and hence Free Software) generates random Computer Science research papers, including graphs, figures, and citations. I had been thinking about doing something like this for a long time, but wait … one of the random papers got accepted for a conference!!!
One useful purpose for such a program is to auto-generate submissions to “fake” conferences; that is, conferences with no quality standards, which exist only to make money. A prime example, which you may recognize from spam in your inbox, is SCI/IIIS and its dozens of co-located conferences (for example, check out the gibberish on the WMSCI 2005 website). Using SCIgen to generate submissions for conferences like this gives us pleasure to no end. In fact, one of our papers was accepted to SCI 2005! See Examples for more details.
The accepted paper is Rooter: A Methodology for the Typical Unification of Access Points and Redundancy by Jeremy Stribling, Daniel Aguayo and Maxwell Krohn, and the “authors” say: “We are currently working on the ‘camera-ready’, and received many donations to send us to the conference, so that we can give a randomly-generated talk.” Hey, researcher! You can cite it! After all, it is a published paper! Not the crappy stuff you find on blogs! Beware: never cite an online article, only articles published on old-fashioned paper at one of the millions of crappy, hyper-expensive conferences!
And, in case you want to cite a paper of mine, I just created “A Case for Randomized Algorithms” and “Comparing XML and Markov Models”, or you can just generate a new paper for me. Writing a paper is now easier than ever!!! I just need to click this link 8 more times and then I can spend one year on holiday, since I will already have produced a good amount of papers.
[I found the news on BoingBoing, a blog researchers should cite sometime…]

Review of “Quality Control in Scholarly Publishing”

Some weeks ago, I received an email from Stefano Mizzaro asking my opinion about his paper Quality Control in Scholarly Publishing: A New Proposal (pdf). In the meantime he came to Trento and we discussed it face to face, but I want to share here some quick comments I wrote on my wiki about the paper. I liked it: it is very clearly presented, and it addresses a real and increasingly important problem. The math is very clear, sound and makes sense. [Yes, he found me because of my blog and not because of my papers, and this keeps telling me something.] Read the comments to the paper.