I’ve been interviewed by the Italian newspaper Corriere della Sera about Manypedia (and WikiTrip). If you know Italian, you can read the resulting article, titled “Every Wikipedia represents its own culture: even the concept of controversiality is controversial”, at corriere.it. The journalist liked to stress that both Manypedia and WikiTrip are open source, which I think is a good thing.
Wikisym was a great conference! Below you can find my presentation about the paper on collective memory building in Wikipedia. During the presentation, I provided evidence and suggested possible research lines to argue that it is becoming possible to study the history of current events by analyzing what is written about them by thousands of editors on Wikipedia.
WikiTrip lets you take a trip through the process of creation of any Wikipedia page, from any language edition of Wikipedia. It is an interactive web tool that empowers its users with an insightful visualization of two kinds of information about the Wikipedians who edited the selected page: their location in the world and their gender.
If you want to investigate, for example, where in the world the Wikipedians who edited the page “Peace” are, WikiTrip is the right tool. You can also check the origin of edits for the equivalent page in the Arabic Wikipedia, or for “Amani” in the Swahili Wikipedia. Moreover, if you have ever wondered whether a specific page was edited more by male or female Wikipedians, WikiTrip lets you explore this information as well. How many edits are performed on average by males and females respectively on Wikipedia? What is the page most edited by females? With WikiTrip you can explore your own ideas about these questions and more.
Both visualizations are available over time, so you can appreciate the evolution of the page over the years, from its creation up to the present.
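WikiTrip’s raw material is each page’s revision history. As a rough illustration of the aggregation step behind such a timeline, the sketch below buckets a page’s revisions (in the shape returned by the MediaWiki API’s `prop=revisions` query) per year, separating anonymous IP edits, whose location can be estimated with an IP geolocation database, from edits by registered users. This is a hypothetical sketch, not WikiTrip’s actual implementation.

```python
# Bucket a page's revision history per year, splitting anonymous (IP)
# edits -- geolocatable via an IP database -- from registered-user edits.
# Illustrative sketch only; NOT WikiTrip's actual code.
from collections import defaultdict

def edits_by_year(revisions):
    """Count anonymous vs. registered edits per year.

    Each revision is a dict shaped like the MediaWiki API's output, e.g.
    {"user": "1.2.3.4", "timestamp": "2011-02-11T18:30:00Z", "anon": ""}
    (the "anon" key is present only for anonymous edits).
    """
    counts = defaultdict(lambda: {"anon": 0, "registered": 0})
    for rev in revisions:
        year = rev["timestamp"][:4]          # ISO timestamps start with the year
        kind = "anon" if "anon" in rev else "registered"
        counts[year][kind] += 1
    return dict(counts)
```

Fed the revision list of a page like “Peace”, this yields the per-year counts on which a timeline visualization can be built.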
More information about WikiTrip is available in our whitepaper, but the best way to enjoy WikiTrip is at http://sonetlab.fbk.eu/wikitrip/.
We would love to hear which Wikipedia pages you found most interesting as visualized by WikiTrip, and of course we look forward to your feedback!
In a few hours I’ll start the long journey towards Mountain View, California, for the Wikisym conference, where I’m going to speak about Wiki-revolutions, presenting the paper “Collective memory building in Wikipedia: The case of North African uprisings”.
In the paper, we highlight the intense edit activity by Wikipedians on articles related to the protests and uprisings in countries such as Tunisia, Egypt, Libya, Syria, and Yemen, focusing mainly on the Egyptian revolution.
We cast the phenomenon as a process of collective memory building in which thousands of Wikipedia editors were involved as the traumatic events unfolded.
We explore and suggest possible directions for quantitative research on collective memory formation of traumatic and controversial events in Wikipedia.
I’m in a Wikisym session titled “Wikipedia as a Global Phenomenon”, in which I will have the pleasure to speak after Brian Keegan, who is addressing the same topic of how it is possible to analyze how Wikipedia editors cover recent events in real time; his paper is “Dynamics, Practices, and Structures in Wikipedia’s Coverage of the Tōhoku Catastrophes” (joint work with D. Gergle and N. Contractor).
Probably there will also be people from Ushahidi at Wikisym, such as Heather Ford, who recently announced WikiSweeper, a joint project with the Wikimedia Foundation to track breaking news trends on Wikipedia, so I think we will have wonderful exchanges of points of view and possibly future collaborations.
I’m really looking forward to what looks like a fabulous conference!
One of my colleagues at Fondazione Bruno Kessler (FBK) in Trento, Carlo Strapparava, was awarded $50,000 by Google as an incentive to continue his research, especially with the participation of young researchers. Carlo proposed algorithms for distinguishing some of the nuances and emotions expressed in written language.
“Dealing with the emotional, persuasive, or other aspects of creative language content in texts”, Strapparava explains, “is commonly considered to be off limits for any computational approach. Actually, these features are a key part of communicating, and it is important that research in the field of natural language processing deal with them. The usefulness of automatic recognition of these aspects is nowadays even greater, given the enormous daily production of texts on the web. Through these technologies, it will also be possible to predict the emotional or persuasive content of a text.”
As part of our investigation of the social side of Wikipedia in SoNet, Federico “fox” and I created Manypedia, a web mashup which I really like ;)
On Manypedia, you compare the Linguistic Points Of View (LPOV) of different language Wikipedias. For example (and this is just one of the many possible comparisons), are you wondering whether the communities of editors of the English, Arabic, and Hebrew Wikipedias are crystallizing different histories of the Gaza War? Now you can check the “Gaza War” page from the English and Arabic Wikipedias (both translated into English), or from the Hebrew Wikipedia (translated into English).
Manypedia uses the Google Translate API to automatically translate the compared page from a language you don’t know into one you do know. And this is not limited to English as the first language: for example, you can search for a page in the Italian Wikipedia (or in any of 56 language Wikipedias) and compare it with the same page from the French Wikipedia, translated into Italian. In this way you can check how a page differs in another language Wikipedia even if you don’t know that language. Sweet!
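Before anything can be translated, a tool like Manypedia has to find the equivalent page in the other language edition, which Wikipedia exposes as interlanguage links (`prop=langlinks` in the MediaWiki API). Below is a minimal sketch of that lookup step, operating on the JSON structure the API returns; the function name and sample data are illustrative, not Manypedia’s actual code.

```python
# Look up the title of the equivalent page in another language edition,
# given a MediaWiki API prop=langlinks response. Illustrative sketch only;
# NOT Manypedia's actual code.
def find_equivalent_title(api_response, target_lang):
    """Return the title of the page in target_lang, or None if the page
    has no interlanguage link to that edition."""
    pages = api_response["query"]["pages"]
    for page in pages.values():
        for link in page.get("langlinks", []):
            if link["lang"] == target_lang:
                return link["*"]  # the API stores the linked title under "*"
    return None
```

Once the equivalent title is known, the two pages can be fetched from their respective editions and one of them sent through the translation API.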
Well, the Gaza War is just one of the many topics that may have very different LPOVs across language Wikipedias. As a starting point, you can check the Wikipedia page “List of controversial issues”, which lists many controversial articles grouped into 15 main categories. It is actually interesting to compare the controversial-issues page on the English and Chinese Wikipedias (the English one is slightly more centered on topics important for US/Western culture, while the Chinese one lists pages such as “Anti-Japanese War”, “Nanjing Massacre”, “Taiwan”, “Human Rights in China”, “Falun Gong”, “Tiananmen Incident”, “Mao Zedong”, and “List of sites blocked by China”), or on the Catalan Wikipedia (where controversiality arises around what constitutes a country, the Catalan Countries, and Valencia).
In the top header of Manypedia there are some featured comparisons handpicked by us (and a random one is loaded on the main page), but you can actually search in real time for any page in any language Wikipedia. Currently we support 56 languages, so, for example, you can search for a page in the Arabic Wikipedia and compare it with the same page in the Hebrew Wikipedia translated into Arabic. Or Italian compared with French, Tagalog with Catalan, Hindi with Irish, Turkish with Yiddish, Persian with Swahili … well, you’ve got the idea ;)
Of course, if you have any suggestions or feedback, we would love to hear them in order to make Manypedia better and more useful.
You can contact us via Twitter (Manypedia@Twitter) or via Facebook (Manypedia@Facebook).
The successful candidate will join our group working on a project whose goal is to mine, analyze and computationally model the individual and collective behaviour in communities and social networks of Wikipedia users.
The ideal candidate should have:
* Ph.D. level education in a relevant discipline
* A good record of relevant research published in peer-reviewed conferences or journals
* Strong empirical and analytical orientation, with experience in handling large amounts of data coming from user action logs and social networks
* Experience with statistics and with analysis of complex networks
* Knowledge of at least one of Python, Perl, C/C++, R, Java
* Proficiency in both written and spoken English
* Prior experience in using social media for disseminating personal research
* Interdisciplinary background
* Experience with GNU/Linux systems
Type of contract: co.co.pro (collaboration contract). Initial appointments are for one year and renewal is based on performance. The gross salary offered will be approximately €22,500 and can be increased based on the experience and skills of the candidate.
Last week I attended the Hypertext 2011 conference in Eindhoven, where I presented the paper “Social networks of Wikipedia”, discussing two different algorithms for extracting networks of conversations from User Talk pages in Wikipedia and evaluating them against a manual coding of all the messages in the User Talk pages of the Venetian Wikipedia. The main point was to list the many details of Wikipedia practices and formatting styles you need to be aware of if you want to derive realistic results from quantitative analysis. The code of the algorithms is available as open source, as are some network datasets extracted from Wikipedia.
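In rough terms, both algorithms turn signed messages on User Talk pages into a directed network: a message left by user A on the talk page of user B becomes an edge from A to B. The sketch below shows only the final aggregation step, assuming the (sender, owner) pairs have already been parsed out of the wikitext, which is exactly the hard, detail-ridden part the paper is about. It is an illustration, not the released code.

```python
# Aggregate parsed talk-page messages into a weighted directed edge list.
# Each message is a (sender, talk_page_owner) pair. Illustrative sketch
# only; NOT the paper's released algorithm, which works on raw wikitext.
from collections import Counter

def build_talk_network(messages):
    """Return {(sender, owner): n_messages} for all cross-user messages."""
    edges = Counter()
    for sender, owner in messages:
        if sender != owner:  # skip users writing on their own talk page
            edges[(sender, owner)] += 1
    return dict(edges)
```

The resulting weighted edge list can then be loaded into any network analysis library to compute the conversation-network statistics.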
The conference was smaller than I expected, but interesting. There were some people working on Wikipedia, and I had many interesting conversations with them.
The best talk was hands down the one by Noshir Contractor, titled “From Disasters to WoW: Using Web Science to Understand and Enable 21st Century Multidimensional Networks”. He spoke about the many great projects he is working on, in an entertaining and funny style. The main methodological take-away message I got is that he looks at networks at the edge level, considering the “motivation” behind each edge (positive/negative links, in fact) and checking how well different established sociological theories, such as homophily, social balance, or winner-take-all, explain the topology of the network. For example, four networks extracted from four different kinds of interactions among the same users of a massively multiplayer online game (I think: who fights with whom, who is in a guild with whom, who exchanges messages with whom, who trades goods with whom) exhibit different patterns, and the particular shape of a given network can be explained by the balance of the motivations posited by the different theories. In particular, the “who trades goods with whom” network has special “motivations” influenced by the presence of so-called gold farmers: people, typically in China or other low-to-average-income countries, who play online games doing repetitive tasks in order to acquire in-game currency, which is then usually sold to other players for real-world currency. One of their papers on this topic, “Mapping Gold Farming Back to Offline Clandestine Organizations: Methodological, Theoretical, and Ethical Challenges”, won the Best Paper award at the recent Game Behind the Game conference. What really surprised me was hearing that he is working on Wikipedia as well!
In fact, in his keynote, Noshir presented some recent work he has been doing with one of his students, Brian Keegan, on Wikipedia’s coverage of breaking news such as the Japan earthquake. Interestingly, Michela Ferron and I wrote a paper titled “Wikipedia as a Lens for Studying the Real-time Formation of Collective Memories of Revolutions” in which we highlight the richness of the phenomenon of collective memory building on Wikipedia around the current North African revolutions (the Wikipedia pages get created a few minutes or days after the events and receive an incredible number of edits from many different users, which we interpret as a process of collective memory building) and we discuss research directions (more about this in a future blog post). Our article was recently accepted in the International Journal of Communication and we are of course delighted by that. Incidentally, the editor of IJoC is Manuel Castells, who will be giving a keynote at the upcoming ICWSM about … guess what? Social Media and Wiki-Revolutions: The New Frontier of Political Change. I guess it is really a hot topic nowadays, which is both comforting (we are doing cool stuff) and worrying (because these people are really good and it is hard to do better … but we will try ;)
Actually, in two weeks Noshir will come to Trento to give a one-week course on Social Network Analysis, which I’m really looking forward to attending; I hope to gather further insights through discussions with him.
Others presenting work about Wikipedia at the Hypertext conference included David Laniado and his colleagues from Barcelona Media, who presented “Co-authorship 2.0: Patterns of collaboration in Wikipedia”, an interesting analysis of networks of co-editing on Wikipedia compared with networks of scientific co-authorship. He was also there with a poster about “Automatically assigning Wikipedia articles to macro-categories”, a joint work with Jacopo Farina.
Another very interesting work, “Social Capital Increases Efficiency of Collaboration Among Wikipedia Editors”, was presented by Keiichi Nemoto of Fuji Xerox, who was working with Peter Gloor and Robert Laubacher of the MIT Center for Collective Intelligence. They found that the more cohesive and centralized the collaboration network of Wikipedia editors is, and the more network members were already collaborating before starting to work together on an article, the faster that article is promoted to good or featured status.
Overall it was good to discover interesting projects and meet good people working on Wikipedia, whom I hope I’ll keep meeting at future conferences.
Today I announce that on June 4th and 5th, Random Hacks of Kindness (RHOK) is coming to Trento too (precisely, to the research center where I work).
The goal is to put together hackers, coders, designers, engineers, programmers and geeks of every variety to tackle some real-world problems (disaster risk and climate change) and hack for humanity.
RHOK is going to take place in 15 locations around the globe, among them Trento, in fact the smallest of the participating cities. Below is the list of cities where RHOK will take place simultaneously, with their number of inhabitants (data from Wikipedia).
A very interesting interview with the director of Google News at NiemanLab.
Krishna Bharat ponders POVs (Points of View): “many perspectives coming together can be much more educational than singular points of view”. OK, I agree. “You really want the most articulate and passionate people arguing both sides of the equation.” OK. “Then, technology can step in to smooth out the edges and locate consensus.” Here, technology stepping in starts to become less agreeable. To do what? To tell me the truth? To tell me the most consensual representation of facts? “That is the opportunity that having an objective, algorithmic intermediary provides you”.
This is the point that I really don’t like. Should we rely on algorithmic objectivity to form our vision of world facts? Interestingly, this is how Google was “casting” its algorithm for many years: “PageRank relies on the uniquely democratic nature of the web” or “be based on impartial and objective relevance criteria”.
The interview goes on: “If you trust the algorithm to do a fair job and really share these viewpoints, then you can allow these viewpoints to be quite biased if they want to be”, and “Trusting in the algorithm means trusting in the tacit completeness of the automation it offers to readers.”
Now, I think it is a bit scary that a corporation asks you to trust the objective, algorithmic intermediary it provides to you (with the goal of making money, which is of course totally acceptable per se).
Actually, I agree with Ken Thompson, who in “Reflections on Trusting Trust” (Communications of the ACM, Vol. 27, No. 8, August 1984, pp. 761-763) claimed that “you can’t trust code that you did not totally create yourself”. (It is very pertinent also that in the paper the very next sentence is “Especially code from companies that employ people like me”.)
As a last point, I would like to say that I prefer to trust the transparent social process that happens, for example, on Wikipedia. On pages such as “Climate change”, hundreds of different editors participate and, even if Wikipedia policy asks them to write from a Neutral Point of View, it is undeniable that many of them have strong POVs. This is very visible on controversial pages such as those about the Israeli-Palestinian conflict, for example.
What I prefer about Wikipedia, over the objective, algorithmic intermediary provided by Google, is that the process is carried out by humans (this is not completely true, since there are many automatic bots on Wikipedia, but currently they mainly perform maintenance tasks) and, more importantly, that you can analyze the complete history of edits (and who made them) that brought each article to its current state. Moreover, if you don’t agree with the current framing of a concept, you can get involved and contribute your POV by editing the page or discussing it on the related talk page.
Let me highlight also how the FAQ about Neutral Point of View on Wikipedia clearly states that “the NPOV policy says nothing about objectivity. In particular, the policy does not say that there is such a thing as objectivity in a philosophical sense—a “view from nowhere” (to use Thomas Nagel’s phrase), such that articles written from that viewpoint are consequently objectively true.”
Let me conclude with the Italian poet Giacomo Leopardi, who in “La ginestra” (“Wild Broom”) was lamenting “le magnifiche sorti e progressive” (the “magnificent and progressive fate”) of the human race. I think we should all do a bit more of that lamenting than we currently do, instead of embracing algorithmic objectivity.