Harnessing the Wisdom of Crowds in Wikipedia: Quality Through Coordination

Summary:

As the world’s largest online encyclopedia, Wikipedia serves countless users who retrieve information from its articles. Its success demonstrates one of the many conveniences the open web has brought. Its effectiveness can be measured, or at least illustrated, by the quality of its articles. When an article is inevitably shaped by many contributors and editors, a common expectation, or danger, is decreased effectiveness or quality due to the increased coordination required. In reality, however, the success of Wikipedia contradicts this pattern observed in other settings. The researchers employ longitudinal data to examine different coordination conditions that can influence the quality of collaboratively written articles. The authors found that the effectiveness of adding contributors depends critically on the degree and type of coordination those contributors use, as well as on the life cycle of the article and the interdependence of the tasks involved in editing it. Coordination is categorized into two classes, explicit and implicit, and the life cycle matters because the interdependence among editing tasks changes as an article matures. The results suggest that appropriate coordination among editors leads to better article quality even as the number of editors increases. While the authors were not certain why concentration of work predicts better article quality, they suspected it was because collaborative tasks were better distributed among the different contributors. The study’s findings may also apply to other free, open environments such as Linux and other open-source software projects.
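To make the analysis concrete, below is a minimal sketch (not the authors’ actual model; the column names, coefficients, and synthetic data are all hypothetical) of how one could test whether coordination moderates the effect of adding editors on article quality, using an ordinary least squares regression with an interaction term:

    # Minimal illustration, not the paper's actual model: does explicit
    # coordination (talk-page activity) moderate the effect of editor
    # count on article quality? All names and data here are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    num_editors = rng.poisson(20, size=n)
    talk_edits = rng.poisson(5, size=n)
    # Synthetic outcome: extra editors help more when coordination is high.
    quality = (0.1 * num_editors
               + 0.2 * talk_edits
               + 0.05 * num_editors * talk_edits
               + rng.normal(0, 5, size=n))
    df = pd.DataFrame({"quality": quality,
                       "num_editors": num_editors,
                       "talk_edits": talk_edits})

    # "num_editors * talk_edits" expands to both main effects plus their
    # interaction; a positive interaction coefficient means that adding
    # editors improves quality more when they coordinate more.
    model = smf.ols("quality ~ num_editors * talk_edits", data=df).fit()
    print(model.params)

Under this synthetic data the interaction coefficient comes out positive, which mirrors the paper’s qualitative finding that added editors are productive when they coordinate.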

Reflection:

This type of study is completely new to me, and I was quite happy reading it. From a pure computer science point of view, the method used in the study appears valid. It has been a while since I read something without a user-oriented evaluation or a proof-of-concept. I think the authors make a strong argument that coordination is a key component in the success of Wikipedia. As the authors noted themselves, some of the assumptions used in the longitudinal analysis are open to challenge. This use of Wikipedia reminds me of another article I read earlier this semester about a historian who found it hard to correct a commonly accepted but incorrect fact. The scholar gathered enough evidence to make a reasonable claim, but a Wikipedia editor refused to accept it, criticizing the lack of peer-reviewed results. The scholar waited about a year and a half until his new book came out so that he could cite some “peer-reviewed” literature to support his claim. To his surprise, his modification was reverted anyway. Another editor further questioned the book’s credibility and refused his modification request, despite the fact that more and more scholars in the field had begun to accept the corrected historical account. This is just one example that could challenge the authors’ assumptions.

A trend in reviewing conference papers is reproducibility. I’m totally new to this type of publication, but I think the authors need to make a stronger argument about the difficulty of the research, analysis, or data gathering.

Question:

What was the cost for the authors to assemble the data?