Tag Archives: paper

A walk on the CHIld side: our CHI paper got an Honourable Mention award!

Our paper “A Walk on the Child Side: Investigating Parents’ and
Children’s Experience and Perspective on Mobile
Technology for Outdoor Child Independent Mobility” was accepted at CHI 2019 and also received an Honourable Mention. Wow!

You can read the paper and download the pdf at the paper page. Enjoy!

Science is nothing more than a game: 8-Year-Olds Publish Scientific Bee Study

A study titled “Blackawton bees” has been published by the peer-reviewed journal “Biology Letters”. So far, nothing unusual.
The notable fact is that the authors are 25 children aged 8 to 10 (plus two older guys, a neuroscientist and a teacher). Source: Wired.

The project grew out of a lecture Beau Lotto, a neuroscientist, gave at the school where his son was a student. Lotto spoke about his research on human perception, bumblebees and robots, and then shared his ideas on how science is done: Science is nothing more than a game.

The principal finding of the paper is: ‘We discovered that bumble-bees can use a combination of colour and spatial relationships in deciding which colour of flower to forage from. We also discovered that science is cool and fun because you get to do stuff that no one has ever done before. (Children from Blackawton)’.

Lotto had trouble getting the paper published because of its lack of citations, and I think his comment to Wired is all too true and agreeable: “That’s what I tell my PhD students: Don’t do any reading. Figure out why you wake up in the morning, what you’re passionate about, and then read the literature. But don’t figure out what’s interesting based on what other people say.”
And the attitude of one of the authors (10 years old at most) really strikes a chord: “I thought science was just like math, really boring,” he said. “But now I see that it’s actually quite fun. When you’re curious, you can just make up your own experiment, so you can answer the question.” This is what science should be, but we adults sometimes tend to forget it.

Now for a brief review of the paper. It is written in a refreshing style, containing gems such as “Once upon a time…” and “the puzzle . . . duh duh duuuuhhh”, or considerations such as “Otherwise they might fail the test, and it would be a disaster.”

The paper, after the “Once upon a time…” entry, starts with “People think that humans are the smartest of animals, and most people do not think about other animals as being smart, or at least think that they are not as smart as humans. Knowing that other animals are as smart as us means we can appreciate them more, which could also help us to help them.”
They go on with “After talking about what it is like to create games and how games have rules, we talked about seeing the world in different ways by wearing bug eyes, mirrors and rolled-up books. We then watched the David Letterman videos of ‘Stupid Dog Tricks’, in which dogs were trained to do funny things.”
And they brag a little bit about themselves, which I think is good: “Next, we too had to learn to solve a puzzle that Beau (a neuroscientist) and Mr Strudwick (our headteacher) gave us (which took an artificial brain 10 000 trials to solve, but only four for us).”
Then they describe the real experiments they devised and conducted scientifically and report the results. “This experiment is important, because, as far as we know, no one in history (including adults) has done this experiment before.”

And they conclude with “Before doing these experiments we did not really think a lot about bees and how they are as smart as us. We also did not think about the fact that without bees we would not survive, because bees keep the flowers going. So it is important to understand bees. We discovered how fun it was to train bees. This is also cool because you do not get to train bees everyday. We like bees. Science is cool and fun because you get to do stuff that no one has ever done before. (Bees—seem to—think!)”

Image by mitikusa released on Flickr under Creative Commons license.

Influence of religion on altruism

An interesting blog post at “Experimental Turk, A blog on social science experiments on Amazon Mechanical Turk” by Gabriele Paolacci.

David Rand posted on Crowdflower about a great Amazon Mechanical Turk study he recently conducted along with John Horton on altruism (as measured by cooperative behavior in a Prisoner’s Dilemma; a minimal payoff sketch follows the findings below), which also used religious priming. The authors found that (rearranged from the original post):

1. A majority of Turkers cooperate in a Prisoner’s Dilemma. Thus even in the entirely anonymous and profit-motivated online labor market of AMT, many people still choose to help each other.

2. Reading a religious passage about the importance of charity makes religious Turkers more altruistic, but has no effect on Turkers who do not believe in god. This shows that Turkers respond in basically the same way as “normal” lab subjects, and is fairly intuitive. Those who believe in god are receptive to calls for generosity phrased in religious language, while non-believers aren’t.
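
For readers who have not seen it spelled out, here is a minimal sketch of how cooperation in a Prisoner’s Dilemma serves as a measure of altruism. The payoff values below are the standard textbook ones, not necessarily the stakes used in the study.

```python
# Illustrative Prisoner's Dilemma payoffs: (my move, their move) -> (my payoff, their payoff).
# These numbers are a textbook example, not the actual stakes used in the AMT study.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def payoff(my_move, their_move):
    """Return (my_payoff, their_payoff) for one round."""
    return PAYOFFS[(my_move, their_move)]

# "Altruism" here simply means choosing to cooperate even though defecting
# always yields a strictly higher individual payoff, whatever the other player does.
print(payoff("cooperate", "defect"))  # (0, 5): the lone cooperator is exploited
print(payoff("defect", "defect"))     # (1, 1): mutual defection is worse than mutual cooperation (3, 3)
```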

Review of “Taking up the mop: identifying future wikipedia administrators”

Paper by Moira Burke and Robert Kraut of Carnegie Mellon University, presented at CHI ’08, Conference on Human Factors in Computing Systems.

This paper presents a model of editors who have successfully passed the peer review process to become admins. The lightweight model is based on behavioral metadata and comments, and does not require any page text. It demonstrates that the Wikipedia community has shifted in the last two years to prioritizing policymaking and organization experience over simple article-level coordination, and that mere edit count does not lead to adminship.

In short, the authors compute lots of statistics for every single user and then run a regression on the binary variable “election successful, i.e. X became admin”; a minimal sketch of this step follows the feature list below. They treat Requests for Adminship (RfAs) before and after 2006 separately.

The stats they compute are:
Strong edit history
* Article edits ‡
* Months since first edit
Varied experience
* Wikipedia (policy) edits ‡
* WikiProject edits ‡
* Diversity score
* User page edits ‡
User interaction
* Article talk edits ‡
* User talk edits ‡
* Wikipedia talk edits
* Arb/mediation/wikiquette edits
* Newcomer welcomes
* “Please” in comments
* “Thanks” in comments
Helping with chores
* “Revert” in comments ‡
* Vandal-fighting (AIV) edits
* Requests for protection
* “POV” in comments
* Admin attention/noticeboard edits
* X for deletion/review edits ‡
* Minor edits (%)
Observing consensus
* Other RfAs
* Village pump
* Votes
Edit summaries / comments
* Commented (%)
* Avg. comment length (log2 chars)
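
To make the regression step concrete (as mentioned above), here is a minimal sketch of the kind of model being fit: a classifier predicting RfA success from per-candidate behavioral counts. The feature subset and toy numbers are invented for illustration, and I use logistic regression as a simple stand-in for the regression in the paper.

```python
# Sketch of the approach: regress RfA outcome on behavioral counts computed
# from each candidate's history. Toy data and the small feature subset are
# invented; the paper uses many more predictors (see the list above).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: article edits, policy (Wikipedia:) edits, article talk edits,
# user talk edits, months since first edit.
X = np.array([
    [12000,  40,  300,  500, 30],
    [ 3000, 400, 1200,  900, 24],
    [  800,   5,   20,   50,  6],
    [ 5000, 250,  700, 1100, 36],
])
y = np.array([0, 1, 0, 1])  # 1 = RfA succeeded, 0 = RfA failed

model = LogisticRegression(max_iter=1000).fit(X, y)

# The coefficients show which kinds of activity predict adminship; with real
# data, policy and talk edits should matter more than raw article edits.
print(model.coef_)

candidate = np.array([[4000, 300, 900, 1000, 28]])
print(model.predict_proba(candidate)[0, 1])  # estimated probability of passing RfA
```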
Conclusions
Merely performing a lot of production work is insufficient for “promotion” in Wikipedia. Candidates’ article edits were weak predictors of success. They also have to demonstrate more managerial behavior. Diverse experience and contributions to the development of policies and WikiProjects were stronger predictors of RfA success. This is consistent with findings that Wikipedia is a bureaucracy [1] and that coordination work has increased substantially [8][13].

However, future work is needed to examine more closely what the admins are doing. Future admins also use article talk pages and comments for coordination and negotiation more often than unsuccessful nominees, and tend to escalate disputes less often.

Although this research has shown that judges pay attention to candidates’ job-relevant behavior, and especially behavior that suggests the candidate will be a good manager and not just a good worker, it is silent about whether other factors identified in the organizational literature [9], such as social networks, irrelevant attributes, or strategic self-presentation, also affect the likelihood of success.

Indeed, recent evidence that Wikipedia admins use a secret mailing list to coordinate their actions toward others suggests that sponsorship may also play a role in promotion.

Future research in Wikipedia using techniques like those in the current paper can be used to test theories in organizational behavior about criteria for promotion. An important limitation of the current model is that it does not take the quality of contribution into account. We plan to improve the model by examining measures of length, persistence, and pageviews of edits, which are already being used in more processor-intensive models of existing admin behavior [7] and impact of edits [10].

Criteria for admins have changed modestly over time. Success rates were much higher (75.5%) prior to 2006, and collaboration via article talk pages helped more in the past (+15% for every 1000 article talk edits, compared to +6.3% today). The diversity score performs similarly prior to 2006 (+3.7% then, +2.8% now). However, participation in Wikipedia policy and WikiProjects was not predictive of adminship prior to 2006, suggesting the community as a whole is beginning to prioritize policymaking and organization experience over simple article-level coordination.
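
As a back-of-the-envelope reading of those numbers (my own arithmetic, assuming the quoted percentages behave as roughly linear effects over this range):

```python
# Rough reading of the reported effects; my arithmetic, assuming the quoted
# percentages scale roughly linearly with the number of edits.
def rfa_boost(article_talk_edits, effect_per_1000):
    """Approximate increase (in percentage points) in the probability of RfA success."""
    return (article_talk_edits / 1000.0) * effect_per_1000

print(rfa_boost(2000, 15))    # pre-2006:  ~+30 percentage points for 2000 article talk edits
print(rfa_boost(2000, 6.3))   # post-2006: ~+12.6 percentage points for the same activity
```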

If you want to read the details, you can read the PDF of the paper.
Credit: Picture by inju released under Creative Commons.

Review of “Feedback Effects between Similarity and Social Influence in Online Communities”

Today I presented to the other SoNetters a wonderful paper titled “Feedback Effects between Similarity and Social Influence in Online Communities” by David Crandall, Dan Cosley, Daniel Huttenlocher, Jon Kleinberg, and Siddharth Suri of Cornell University, presented at KDD 2008, the conference on knowledge discovery and data mining. My review is just below the slides I used for the presentation.

Besides the points already presented in the slides, here I add a few points relevant to our research on Wikipedia.

Social influence: People become similar to those they interact with
Interaction → similarity
Selection: People seek out similar people to interact with
Similarity → interaction

They considered registered users of the English Wikipedia who have a user discussion page (~510,000 users as of April 2, 2007). These users are responsible for 61% of edits to the roughly 3.4 million articles. They ignore actions by users without discussion pages, who tend to have very few social connections.

User’s activity vector v(t): number of times that he or she has edited each article up to that point in time t.
Similarity(u,v): similarity between activity vectors of user u and v.
Time of first meeting for two users u and v = time at which one of them first makes a post on the user discussion page of the other.
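
To make these definitions concrete, here is a minimal sketch of the activity vector and a similarity computation. I use cosine similarity between the edit-count vectors as an illustrative choice; the paper defines its own similarity measure over the two users’ edit histories.

```python
# Sketch of the activity-vector representation described above. Cosine
# similarity is my illustrative choice, not necessarily the paper's measure.
from collections import Counter
from math import sqrt

def activity_vector(edited_articles):
    """edited_articles: list of article titles a user has edited up to time t."""
    return Counter(edited_articles)

def cosine_similarity(u, v):
    common = set(u) & set(v)
    dot = sum(u[a] * v[a] for a in common)
    norm_u = sqrt(sum(c * c for c in u.values()))
    norm_v = sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

alice = activity_vector(["Bee", "Bee", "Flower", "Hive"])
bob = activity_vector(["Bee", "Flower", "Robot"])
print(cosine_similarity(alice, bob))  # grows as the two users edit the same articles
```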

From the paper: “In principle, we could also try to infer social interactions based on posting to the same article’s discussion page. Moreover, we found that using simple heuristics to infer interaction based on posts to article discussion pages produced closely analogous results to what we obtain from analyzing user discussion pages.”

They find that there is a sharp increase in the similarity between two editors just before they first interact (selection), with a continuing but slower increase that persists long after this first interaction (social influence).

They also create a model and estimate the unobservable parameters by maximum likelihood. The estimates are as follows (a toy simulation sketch follows the list):
* The parameter for the probability of communicating versus editing was 0.058 (i.e. out of every 100 actions, about 6 are talk posts while 94 are page edits). We can cite this, and we can even verify it across different Wikipedias and at different time slots.
* When considering article edits as actions, the article is chosen from one’s own interests with probability 0.35, from a neighbor’s interests with probability 0.081, from the overall interests of Wikipedia editors with probability 0.5, and by creating a totally new article with probability 0.069.
* When considering talk posts as actions, the user to communicate with is chosen at random from the overall set of users with probability 0.71, and from those who have engaged in a common activity with probability 0.29.
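
As noted above, a toy simulation of a single step of this model is useful for checking the numbers. The parameter names and the sampling scheme below are my own simplification, not the authors’ implementation.

```python
import random

# Toy, simplified step of the generative model sketched above. The probability
# values come from the estimates quoted in the post; everything else is my guess.
P_TALK = 0.058          # an action is a talk-page post rather than an article edit
P_OWN, P_NEIGHBOR, P_GLOBAL, P_NEW = 0.35, 0.081, 0.5, 0.069  # how the article is chosen
P_RANDOM_USER = 0.71    # talk partner picked at random vs. via shared activity

def one_action():
    if random.random() < P_TALK:
        partner = "random user" if random.random() < P_RANDOM_USER else "co-editor"
        return ("talk", partner)
    r = random.random()
    if r < P_OWN:
        return ("edit", "own interests")
    if r < P_OWN + P_NEIGHBOR:
        return ("edit", "neighbor's interests")
    if r < P_OWN + P_NEIGHBOR + P_GLOBAL:
        return ("edit", "overall interests")
    return ("edit", "new article")

actions = [one_action() for _ in range(10000)]
talk_share = sum(1 for kind, _ in actions if kind == "talk") / len(actions)
print(talk_share)  # should be close to 0.058, i.e. about 6 talks per 100 actions
```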

They also do some content analysis on 30 instances of two users meeting for the first time: they examined the content of the initial communication and any reply, looking for references to specific articles or other artifacts in Wikipedia, and also compared the edit histories of the two users.
Of the 30 messages, 26 referenced a specific article, image, or topic. In 21 cases, the users had both recently worked on the artifact that was the subject of conversation.
The gap between co-activity and communication was usually short, often less than a day, though it stretched back three months in one case.
Informally, communications tended to fall into a few broad categories: offering thanks and praise, making requests for help, or trying to understand the editing behavior of the other person.
This sample of interactions suggests that people most often come to talk to each other in Wikipedia when they become aware of the other person through recent shared activity around an artifact. Awareness then leads to communication, and often coordination.

A really wonderful paper!

My chapter in “Computing with Social Trust”

The book “Computing with Social Trust” is out. In it you can find a chapter by Paolo Avesani and myself about my PhD work on Trust in Recommender Systems. You can download my chapter or buy the dead-tree book from Amazon. Below you can find the table of contents. Enjoy!


Continue reading