I’ve just finished reading a very interesting paper, *Trust Management for the Semantic Web*, by Matthew Richardson, Rakesh Agrawal, and Pedro Domingos.

There is a lot of interesting theoretical analysis about trust and its propagation (the paper defines or explains concepts such as path algebra, generalized transitive closure algorithms, well-formed decomposable path problems, global invariance, cycle-indifferent combination functions, …).

Among the many interesting ideas, I particularly liked the following simple one, partially inspired (I think) by PageRank.

Let me first give you a short explanation of how PageRank (one of the algorithms behind google.com) works.

PageRank ideally performs a random walk from webpage to webpage, following a link at random and remembering how many times it has visited each page; this count is the PageRank of the page. Of course, a random surfer will visit yahoo.com far more often than my homepage, so yahoo.com will have a high PageRank while my homepage will have a lower one.

At every step there is a (small) probability of jumping to a random webpage, so that the random surfer does not get stuck in a clique of highly interconnected pages and every webpage has some chance of being visited.

It can be proven that the random surfer may start at any webpage and the final PageRanks will be the same; only the time needed to converge will differ.
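The random-walk description above can be sketched in a few lines of Python. The graph and page names here are made up for illustration; visit frequencies approximate the PageRank values:

```python
import random
from collections import Counter

# A toy web graph (hypothetical pages): each page lists its outgoing links.
graph = {
    "yahoo": ["news", "mail"],
    "news": ["yahoo"],
    "mail": ["yahoo"],
    "myhome": ["yahoo"],   # my homepage links out, but nobody links to it
}

def random_surfer(graph, steps=100_000, jump_prob=0.15, seed=0):
    """Simulate the random surfer; visit frequency approximates PageRank."""
    rng = random.Random(seed)
    pages = list(graph)
    page = rng.choice(pages)
    visits = Counter()
    for _ in range(steps):
        visits[page] += 1
        if rng.random() < jump_prob or not graph[page]:
            # Small probability of teleporting to a random page,
            # so the surfer never gets stuck in a clique.
            page = rng.choice(pages)
        else:
            # Otherwise follow one of the outgoing links at random.
            page = rng.choice(graph[page])
    return {p: visits[p] / steps for p in pages}

ranks = random_surfer(graph)
```

Here "yahoo" collects all the incoming links, so its visit frequency (its PageRank) comes out highest, while "myhome" is reached almost only through random jumps.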

If the description was not clear, you can read the explanation given by Google, by Webworkshop.net, or by Ian Rogers. Otherwise you can read the original paper, or you can even construct a web graph and try PageRank yourself.

So the authors of *Trust Management for the Semantic Web* propose something similar:

*Imagine a random knowledge-surfer hopping from user to user in search of beliefs. At each step, the surfer probabilistically selects a neighbor to jump to according to the current user’s distribution of trusts. (…) Further, choosing which user to jump to, the random surfer will, with probability delta (in [0,1]) ignore the trusts and instead jump directly back to the original user, i.*

So here is the very simple idea:

the random surfer starts at *i* (the user who wants to predict how much she should trust other, unknown peers), then jumps to other users according to the trust distribution (following higher-trust links with higher probability).

But (and here is the difference!) with probability *delta* it will not follow a trust link but will instead jump directly back to *i* (our starting user).

In this way you can choose how much the opinion of *i* is taken into account and, basically, how far away the random surfer is allowed to wander from *i*!!!
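A minimal sketch of this trust walk, assuming a made-up trust network and using visit frequency as the inferred trust (the paper's actual belief-combination machinery is richer than this):

```python
import random
from collections import Counter

# Hypothetical trust network: each user assigns trust weights to neighbors.
trusts = {
    "alice": {"bob": 0.7, "carol": 0.3},
    "bob":   {"carol": 1.0},
    "carol": {"alice": 0.5, "bob": 0.5},
}

def trust_walk(trusts, i, delta=0.3, steps=100_000, seed=0):
    """Random knowledge-surfer starting at user i.

    With probability delta the surfer jumps back to i; otherwise it
    follows a trust link, weighted by the current user's trusts.
    Visit frequencies give i's inferred trust in every user.
    """
    rng = random.Random(seed)
    user = i
    visits = Counter()
    for _ in range(steps):
        visits[user] += 1
        if rng.random() < delta:
            user = i  # jump directly back to the starting user
        else:
            neighbors, weights = zip(*trusts[user].items())
            user = rng.choices(neighbors, weights=weights)[0]
    return {u: visits[u] / steps for u in trusts}

inferred = trust_walk(trusts, "alice")
```

A larger *delta* keeps the surfer closer to *i*, so *i*'s own opinions dominate; a smaller *delta* lets trust propagate further through the network.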

As in the PageRank paper, there is also a probabilistic interpretation and an interpretation as operations on matrices.

Actually, for PageRank, performing the random walk is equivalent to computing the first eigenvector of the link matrix.
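As a sketch of that equivalence (with a made-up four-page link matrix): power iteration, which is just the random walk written as repeated matrix multiplication, converges to the principal eigenvector, and that eigenvector is the PageRank vector.

```python
import numpy as np

# Column-stochastic link matrix: entry (i, j) is the probability of
# moving from page j to page i by following a link.
M = np.array([
    [0.0, 1.0, 1.0, 1.0],   # every other page links to page 0
    [0.5, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],   # page 3 has no incoming links
])
n = M.shape[0]
d = 0.15  # probability of a random jump at each step
# Mix in the random jump: follow links with prob 1-d, teleport with prob d.
G = (1 - d) * M + d * np.ones((n, n)) / n

# Power iteration: start uniform and repeatedly apply G.
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = G @ r
# r now satisfies G @ r = r: it is the principal eigenvector of G
# (eigenvalue 1), i.e. the PageRank vector.
```

After convergence, applying the matrix once more leaves the vector unchanged, which is exactly the eigenvector condition.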

Ok, if you trust me, you should read it!