Reputation is in the eye of the beholder: on subjectivity and objectivity of trust statements

I eventually managed to get invited to the ENISA workshop “Security Issues in Reputation Systems” and to eema’s “European e-Identity Conference”. So I’ll be in Paris from Monday 11 until Wednesday 13, of course hosted by friendly CouchSurfers. The program is quite interesting; I’m especially looking forward to the keynote address by Kim Cameron, whose blog I’ve been reading for some time, and to a presentation by Alessandro Acquisti of CMU titled “Imagined communities: awareness, information sharing and privacy: the Facebook case”.
Let me know if you’ll be there; I’ll be happy to discuss trust, reputation, identity, whatever.
Since I was required to provide a position paper, I put up the following. The intention was to be a little provocative, but I don’t know if I succeeded. If you read it, let me know what you think. The position paper “Reputation is in the eye of the beholder: on subjectivity and objectivity of trust statements” can be read after the jump (i.e. click on “more” if present).

Reputation is in the eye of the beholder: on subjectivity and objectivity of trust statements.

by Paolo Massa

June 2007


This position paper has been prepared for the workshop "Next Generation Electronic Identity – eID beyond PKI" held at the ENISA/EEMA European eIdentity conference in Paris in June 2007.

Our contribution aims to encourage a discussion about the very basic assumptions behind the design of social networking systems, and to suggest that reputation and trust are not objective quantities on which everyone can and should agree, but rather reflections of subjective and personal beliefs that may differ from person to person. This latter assumption should guide the design of online systems powered by the next generation of electronic identities.


But first let us introduce some definitions of the basic concepts involved.

Social networking services (also called social software and, more recently, Web 2.0) allow users to create a profile. Users can generally upload a picture of themselves, add a textual description of themselves and their interests, and become “friends” with other users [1].
These systems put a special emphasis on the individuals in the system and on their profiles; this is why they are called “social”. Examples of this new paradigm are countless, but they can be grouped into the following categories: e-marketplaces (eBay, Yahoo! Auctions), opinion and activity sharing sites (Epinions, Amazon, but also Flickr, Del.icio.us, Last.fm), business/job networking sites (LinkedIn), social/entertainment sites (MySpace, Facebook, Friendster, Orkut, CouchSurfing), and news sites (Slashdot, Kuro5hin, Digg) [2]. The Web, the blogosphere, and the Semantic Web can be considered social networking services as well: the social structure of the Web is represented by the links between Web pages and is exploited by algorithms such as PageRank [3] for inferring the authority of Web pages; bloggers commonly include a list of blogs they read in the so-called blogroll, which likewise represents their social relationships; and the Semantic Web is exploring formats for expressing social relationships, such as FOAF [4] or the microformats XFN and VoteLinks [5]. Peer-to-peer (P2P) networks can also be considered social networking services, since they exploit the concept of friends and preferred peers [6].

In all these social settings it is possible, and indeed quite common, to interact with unknown people whose reliability is unknown. Reputation systems [7] and trust metrics [8] are techniques for answering questions such as “Should I trust this person?”. Based on the answer, the active user can decide whether or not to interact with the other user.

Although there is no agreement on definitions yet, reputation generally refers to a single value that represents what the community as a whole thinks about a certain user. The Oxford Dictionary summarizes it in this way: “reputation is what is generally said or believed about a person’s or thing’s character or standing”.

On the other hand, trust is a binary relationship between a truster and a trustee: it is expressed by a user (the truster) about another user (the trustee), based on the truster’s subjective evaluation of the trustee’s characteristics. We also call this quantity a trust statement. An example of a trust statement is “I, Alice, trust Bob as 0.8 in [0,1]”.

Trust and reputation are related to a context: my trust in Carol as a mechanic can be different from my trust in Carol as a violin player.

Both trust and reputation can be normalized to [0,1], with 0 as the minimum (no trust or no reputation) and 1 as the maximum (total trust or total reputation).

Formally, we have reputation(A) ∈ [0, 1] and trust(A, B) ∈ [0, 1].
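
To make the notation concrete, here is a minimal Python sketch of one possible way to represent these two quantities (the data layout and the averaging choice are our own illustrative assumptions, not a prescribed implementation):

```python
# Trust statements: (truster, trustee) -> value in [0, 1].
# "I, Alice, trust Bob as 0.8 in [0,1]" becomes one dictionary entry.
trust_statements = {
    ("Alice", "Bob"): 0.8,
    ("Alice", "Carol"): 0.9,
    ("Bob", "Carol"): 0.1,
}

def trust(a, b):
    """trust(A, B) in [0, 1], or None if A expressed no statement about B."""
    return trust_statements.get((a, b))

def reputation(b):
    """reputation(B) in [0, 1]: here the average of all trust statements
    received by B -- one simple aggregation among the many possible ones."""
    received = [v for (_, trustee), v in trust_statements.items() if trustee == b]
    return sum(received) / len(received) if received else 0.0

print(trust("Alice", "Bob"))   # 0.8 -- subjective, Alice's view only
print(reputation("Carol"))     # 0.5 -- one community-wide value
```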


Reputation systems and trust metrics have been attracting a lot of attention recently because, in the global village created by the Web and the Internet, they promise to reduce social complexity by letting users quickly get an impression of the trustworthiness of unknown users [9].

They usually work in the following way: first, they aggregate all the trust statements expressed by all the users into a global trust network; then they perform some computation in order to predict the reputation or trustworthiness of all the users. The computation ranges from simple averages for computing a global reputation (à la eBay, for instance) to trust propagation over the trust network for computing a global reputation (à la PageRank [3]) or a personalized trust score ([4], [10], [8]).
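
As an illustration of the two families of computation just mentioned, here is a hedged Python sketch of a simple average (eBay-like) and of a PageRank-style propagation over the trust network; both are toy versions under our own simplifying assumptions, not descriptions of the deployed systems:

```python
# trust_net[a][b] = trust statement from a to b, in [0, 1]
trust_net = {
    "Alice": {"Bob": 0.8, "Carol": 0.9},
    "Bob":   {"Carol": 0.1},
    "Carol": {"Alice": 0.5},
}

def average_reputation(net):
    """eBay-style global metric: reputation of B is the plain average
    of all the trust statements B received."""
    received = {}
    for truster, edges in net.items():
        for trustee, value in edges.items():
            received.setdefault(trustee, []).append(value)
    return {b: sum(vs) / len(vs) for b, vs in received.items()}

def propagated_reputation(net, damping=0.85, iterations=50):
    """PageRank-style global metric: reputation flows along trust edges,
    each statement weighted by its share of the truster's outgoing trust.
    (Dangling nodes simply leak mass; acceptable in a toy version.)"""
    users = set(net) | {b for edges in net.values() for b in edges}
    rank = {u: 1.0 / len(users) for u in users}
    for _ in range(iterations):
        new = {u: (1 - damping) / len(users) for u in users}
        for truster, edges in net.items():
            total = sum(edges.values())
            for trustee, value in edges.items():
                new[trustee] += damping * rank[truster] * (value / total)
        rank = new
    return rank

print(average_reputation(trust_net))    # one global value per user
print(propagated_reputation(trust_net)) # still one global value per user
```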


One of the main concerns about reputation systems and trust metrics is the fact that they can be attacked and gamed.

Users that are often called malicious can hijack their functioning in order to gain a personal advantage, which usually consists in being able to increase at will the reputation of an identity (reputation boosting) or to decrease it (reputation nuking). Usually reputation boosting is applied to one’s own identity or that of a friend, and reputation nuking to the identity of a competitor or enemy. There have been different recommendations for addressing these threats and making a trust metric attack-resistant ([2], [11]).

In this position paper we would like to take a slightly different conceptual approach: a system is attackable by definition if it is created under the assumption that there is one correct reputation value for everyone. In this case there will be incentives to game the system in order to influence this unique and global reputation value, so such a system is inherently attackable. If this assumption is dropped altogether, the threat is already significantly weakened by itself.


What we are suggesting here is to move from global trust metrics to local trust metrics.

While global trust metrics compute a global reputation value for every single user (coming to conclusions such as “the reputation of Carol is 0.4”), local trust metrics predict trustworthiness scores that are personalized from the point of view of every single user (coming to conclusions such as “Alice should trust Carol as 0.9” and “Bob should trust Carol as 0.1”).
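
A minimal sketch of a local trust metric, under our own simplifying assumption that trust propagates multiplicatively along paths up to a fixed horizon (real local trust metrics such as those in [8], [10] and [11] are more refined):

```python
from collections import deque

# trust_net[a][b] = trust statement from a to b, in [0, 1]
trust_net = {
    "Alice": {"Dave": 0.9},
    "Bob":   {"Eve": 0.9},
    "Dave":  {"Carol": 1.0},
    "Eve":   {"Carol": 0.11},
}

def local_trust(net, source, horizon=3):
    """Trustworthiness scores personalized to `source`: trust propagates
    multiplicatively along edges, breadth-first, up to `horizon` steps.
    Different sources legitimately obtain different scores."""
    scores = {source: 1.0}
    frontier = deque([(source, 0)])
    while frontier:
        user, depth = frontier.popleft()
        if depth == horizon:
            continue
        for neighbor, value in net.get(user, {}).items():
            candidate = scores[user] * value
            if candidate > scores.get(neighbor, 0.0):
                scores[neighbor] = candidate
                frontier.append((neighbor, depth + 1))
    return scores

print(local_trust(trust_net, "Alice")["Carol"])  # 0.9  -- Alice's view of Carol
print(local_trust(trust_net, "Bob")["Carol"])    # ~0.1 -- Bob's view of Carol
```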

Local trust metrics don’t try to average away differences of opinion but build on them. The assumption of local trust metrics is that every opinion is equally worthy and that there are no wrong opinions. If someone happens to disagree with the large majority who think that “George is trustworthy”, it is not useful for society at large to consider her opinions wrong or malicious.


Examples are countless. Without moving into the slippery domain of political ideas, it is easy to provide an example from the much-debated domain of peer-to-peer (P2P) file sharing. In a P2P network, Alice might consider “good” a peer that illegally shares a lot of just-released copyrighted movies (i.e. trust her), while Bob might consider the very same peer “bad” (i.e. distrust her). But there is no “correct” trust statement: every peer is free to believe and express what she prefers, based on her own personal and subjective belief system. Disagreements are a normal part of life and of social groups, someone might even argue the most productive part, so there is no positive utility in trying to squash differences of opinion.


While the argument provided here is anecdotal, we believe it is also rather self-evident; moreover, we have offered elsewhere evidence that trust statements are indeed subjective in at least one real-world setting.
In “Trust metrics on controversial users: balancing between tyranny of the majority and echo chambers” [8] we conducted an analysis of the Epinions.com trust network. Epinions is a Web site where users can write reviews about products and assign them a rating. Epinions.com also allows users to express their Web of Trust, i.e. “reviewers whose reviews and ratings they have consistently found to be valuable”, and their Block list, i.e. “authors whose reviews they find consistently offensive, inaccurate, or in general not valuable” [12]. These correspond to issuing a positive trust statement (T(A, B) = 1) and a negative trust statement (T(A, B) = 0) respectively.

We noted that on Epinions disagreement about the trustworthiness of other users is common, i.e. it often happens that someone places a certain user in their Web of Trust while someone else places the very same user in their Block list. Of course, neither of these opinions is wrong or malicious; they represent legitimate differences of evaluation. Simply put, there are users who are trusted by some and distrusted by others: we call them controversial users. In the Epinions dataset we evaluated, they amount to more than 20% of users!
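
The notion of controversial user is easy to operationalize; here is a small sketch assuming the Epinions-style encoding T(A, B) = 1 for Web of Trust and T(A, B) = 0 for Block list (the data layout is our own illustrative choice):

```python
def controversial_users(statements):
    """Given (truster, trustee, value) triples with value 1 (Web of Trust)
    or 0 (Block list), return the users receiving at least one positive
    AND at least one negative trust statement."""
    trusted, distrusted = set(), set()
    for _, trustee, value in statements:
        (trusted if value == 1 else distrusted).add(trustee)
    return trusted & distrusted

statements = [
    ("Alice", "George", 1),  # Alice places George in her Web of Trust
    ("Bob",   "George", 0),  # Bob places George in his Block list
    ("Alice", "Dave",   1),
]
print(controversial_users(statements))  # {'George'}
```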


On controversial users, local trust metrics are more effective by definition. However, we also performed an empirical comparison of local and global trust metrics that supports our claim [8]. Moreover, local trust metrics can be attack-resistant ([10], [4], [11]). For instance, if only the opinions of users directly trusted by the active user are considered, it is less easy for an attacker to influence the prediction the active user gets. As long as the active user does not explicitly trust one of the bogus profiles (and the users she trusts don’t do so either), the bogus profiles are not going to influence the computation of the trustworthiness values of unknown users. Moreover, the user is in control and can check which trusted users, if any, have been fooled into trusting a bogus profile.
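
A toy scenario may make this concrete: below, a cluster of bogus profiles boosts the invented identity “Mallory”. A global average is fully swayed, while a local metric that only assigns scores to users reachable from the active user’s own trust statements ignores the attack entirely (names and numbers are purely illustrative):

```python
# Bogus profiles all boost "Mallory"; Alice trusts none of them.
net = {
    "Alice":  {"Bob": 0.9},
    "Sybil1": {"Mallory": 1.0},
    "Sybil2": {"Mallory": 1.0},
    "Sybil3": {"Mallory": 1.0},
}

def reachable_via_trust(net, source):
    """Users reachable from `source` along explicit trust statements.
    A local trust metric only computes scores within this set."""
    seen, stack = {source}, [source]
    while stack:
        for neighbor in net.get(stack.pop(), {}):
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return seen

# Global view: Mallory's average received trust is a perfect 1.0.
received = [v for edges in net.values() for b, v in edges.items() if b == "Mallory"]
print(sum(received) / len(received))                   # 1.0 -- boosting succeeded

# Local view from Alice: no trusted path reaches Mallory.
print("Mallory" in reachable_via_trust(net, "Alice"))  # False -- attack ignored
```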


We would like to conclude this position paper by highlighting the two extremes of culture and society that can be induced by the basic assumptions behind the two different kinds of trust metrics: tyranny of the majority and echo chambers.

A global trust metric assumes that there are globally agreed-upon good peers and that peers who think differently from the average are malicious. This assumption encourages herd behavior and penalizes creative thinkers, black sheep, and original, unexpected opinions. What we would like to underline is the risk of a “tyranny of the majority”, a term coined in 1835 by Alexis de Tocqueville in his book Democracy in America [13].
The 19th-century philosopher John Stuart Mill, in his book On Liberty [14], also analyzes this concept with respect to social conformity. The term “tyranny of the majority” refers to the fact that the opinions of the majority within society are the basis of all rules of conduct within that society: on a particular issue, people align themselves for or against it, and the side of greatest volume prevails. So a minority, which by definition holds opinions different from those of the majority, has no way to be protected “against the tyranny of the prevailing opinion and feeling” [15]. However, we believe the minority’s opinions should be seen as an opportunity and a point of discussion, not as the “wrong” or “unfair” ratings they are often modeled as in simulations in research papers. Moreover, in digital systems such as online communities on the Web, automatic personalization is possible, so there is no need to make this assumption and to force all users to behave and think in the same way or else be considered “unfair” or “wrong”.


However, there is a risk at the opposite extreme as well, caused by emphasizing locality too much in trust propagation by a local trust metric: considering, for example, only the opinions of directly trusted users (friends) and stopping the propagation at distance 1. This risk is called the “echo chamber” or “daily me” [16]. Sunstein, in his book Republic.com [16], notes how “technology has greatly increased people’s ability to ‘filter’ what they want to read, see, and hear”. He warns that in this way everyone is able to listen to and watch only what she wants to hear and see, to encounter only the opinions of like-minded people, and never again be confronted with people with different ideas and opinions. The risk is a segmentation of society into micro-groups that tend to extremize their views, develop their own culture, and become unable to communicate with people outside their group. He argues that, in order to avoid these risks, “people should be exposed to materials that they would not have chosen in advance. Unplanned, unanticipated encounters are central to democracy itself” and that “many or most citizens should have a range of common experiences. Without shared experiences, (…) people may even find it hard to understand one another” [16].


It is easy to foresee that in the near future more and more people will rely on opinions formed on the basis of facts collected through online systems such as opinion and review sites, mailing lists, fora and bulletin boards, matchmaking sites of every kind, aggregators, and social software sites in general [2]. The assumptions on which these systems are built have, and will continue to have, a fundamental impact on the kinds of societies and cultures they shape.

The final, very open question is: “will we be able to find the correct balance between the two extremes described here, tyranny of the majority and echo chambers?”. This is surely not an easy task. We hope this paper can help a bit by providing some starting points for a fruitful and ongoing global discussion about these issues, so important for our common future.






This work is licensed under a Creative Commons Attribution-Share Alike 3.0 License.




[1] "Friends, Friendsters, and MySpace Top 8: Writing Community Into Being on Social Network Sites." danah boyd. First Monday 11(12), December 2006. http://www.firstmonday.org/issues/issue11_12/boyd/index.html

[2] “A Survey of Trust Use and Modeling in Current Real Systems”. Paolo Massa. In "Trust in E-services: Technologies, Practices and Challenges", Idea Group, Inc. (2006)

“The PageRank citation ranking: Bringing order to the Web”. L. Page, S. Brin, R. Motwani, and T. Winograd. Technical report, Stanford University, 1998.

“Trust networks on the Semantic Web”. Jennifer Golbeck, James Hendler, and Bijan Parsia. In Proceedings of Cooperative Intelligent Agents, 2003.

“Page-reRank: using trusted links to re-rank authority”. Paolo Massa. In the book “Search Engines”, ICFAI University Press.
http://www.gnuband.org/papers/page-rerank_using_trusted_links_to_re-rank_authority/

[6] “Tribler: A social-based Peer-to-Peer system” J.A. Pouwelse, P. Garbacki, J. Wang, A. Bakker, J. Yang, A. Iosup, D.H.J. Epema, M. Reinders, M. van Steen, H.J. Sips. 5th Int’l Workshop on Peer-to-Peer Systems (IPTPS). 2006

“Reputation Systems”. P. Resnick, R. Zeckhauser, E. Friedman, and K. Kuwabara. Communications of the ACM, 43(12), December 2000.

[8] “Trust metrics on controversial users: balancing between tyranny of the majority and echo chambers”. Paolo Massa, Paolo Avesani. Chapter in Special Issue on Semantics of People and Culture, International Journal on Semantic Web and Information Systems (IJSWIS) – 2007

It is interesting to note how the philosopher John Locke, already in 1690, provided what we have called a trust metric: “Probability then being to supply the defect of our knowledge, the grounds of it are these two following: First, the conformity of anything with our own knowledge, observation and experience. Secondly, The testimony of others, vouching their observation and experience. In the testimony of others is to be considered: (1) The number. (2) The integrity. (3) The skill of the witnesses. (4) The design of the author, where it is a testimony out of a book cited. (5) The consistency of the parts and circumstances of the relation. (6) Contrary testimonies.”
From “An Essay concerning Human Understanding” by John Locke, 1690. Harvester Press, Sussex.
This quotation can give an idea of how many different models for representing and exploiting trust have been suggested over the centuries.

[10] “Spreading Activation Models for Trust Propagation” Cai-Nicolas Ziegler, Georg Lausen. Proceedings of the IEEE International Conference on e-Technology, e-Commerce, and e-Service (2004)

[11] “Advogato’s trust metric,” R. Levien. 2000. http://www.advogato.org/trust-metric.html.

This is precisely what the Epinions.com Web of Trust FAQ states (http://www.epinions.com/help/faq/?show=faq_wot).

[13] Alexis de Tocqueville. Democracy in America. Doubleday, New York, 1840. The 1966 translation by George Lawrence.

John Stuart Mill. On Liberty. History of Economic Thought Books. McMaster University Archive for the History of Economic Thought, 1859.

This definition is extracted from Wikipedia (http://en.wikipedia.org/wiki/On_Liberty), which interestingly tries to find a balance between what different people think about every single topic by asking contributors to adopt a neutral point of view (NPOV). This seems to work well enough for now, possibly also because the people who self-elect for editing Wikipedia articles largely share a similar “culture”. However, the frequent “edit wars” (http://en.wikipedia.org/wiki/Wikipedia:Edit_war), particularly evident on highly sensitive and controversial topics, show that it is and will be hard to keep this global and theoretically unbiased point of view.

Cass Sunstein. Republic.com. Princeton University Press, 2001.

2 thoughts on “Reputation is in the eye of the beholder: on subjectivity and objectivity of trust statements”

  1. Matteo Dell'Amico

    There is a very good paper (I keep referencing it to everybody) with a theorem stating that “there is no symmetric sybilproof nontrivial reputation function”, where “sybilproof” refers to the sybil attack and “symmetric” formalizes the class of non-subjective reputation systems with no privileged users.

    I think it’s a very interesting paper to refer to when discussing attack-resilient reputation metrics.

    http://portal.acm.org/citation.cfm?doid=1080192.1080202

    BTW, when you refer to controversial users and tyranny of the majority, I can’t help thinking about Fabrizio De André’s words “per chi viaggia in direzione ostinata e contraria, col suo marchio speciale di speciale disperazione” (“for those who travel in a stubborn and contrary direction, with their special mark of special desperation”). I find it inspiring that the technologies we are working on can be used to somewhat mitigate this.

  2. paolo (post author)

    Hi Matteo, and thanks for the suggestion! I printed out the paper and … I put it with high priority on top of the “to read” list!
    And reading the abstract, it really seems to be what I need to cite when I start my ramblings on local trust metrics, personalization, tyranny of the majority and the like ;-)
    Or, as you mention, to give value to “chi viaggia in direzione ostinata e contraria, col suo marchio speciale di speciale disperazione”. A very appropriate quotation indeed!
