2/19/2020 – Jooyoung Whang – The Work of Sustaining Order in Wikipedia: The Banning of a Vandal

This paper examines how bots and humans interact and collaborate to moderate thousands of wiki pages and ban vandal users. To study the use of moderator bots, the authors apply a technique called trace ethnography, which follows the logs and records left behind by automated services to reveal how moderation decisions were made with the various tools. The authors explain how these tools support distributed cognition and enhance teamwork among otherwise isolated vandal fighters. According to the paper, vandalism warnings are logged on the potential vandal's user talk page, and future vandal fighters consult that log to decide how severe the next warning should be. Temporary bans work in a similar fashion: a ban request is posted to the administrators' request board, and the next time an administrator finds vandal activity by the same user, the ban is issued. The paper walks through a detailed use case to explain the process step by step.
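To make the escalating-warning workflow concrete, here is a minimal sketch in Python. The warning levels, message text, and ban-request format are my own assumptions for illustration and do not match Wikipedia's actual templates or tools; the point is only the pattern of reading prior warnings from the talk page, escalating, and eventually filing a ban request.

```python
# A rough sketch of the escalating-warning workflow described above.
# Levels, strings, and data structures are hypothetical, not Wikipedia's.

MAX_WARNING_LEVEL = 4  # assumed final warning level

def warn_or_request_ban(talk_page_warnings, admin_request_board, user):
    """Check how many warnings earlier vandal fighters logged on the user's
    talk page, issue the next warning level, or file a ban request once the
    final warning has already been given."""
    level = len(talk_page_warnings) + 1
    if level > MAX_WARNING_LEVEL:
        admin_request_board.append(f"{user}: vandalism after a final warning")
        return "ban requested"
    talk_page_warnings.append(f"level-{level} vandalism warning for {user}")
    return f"issued level-{level} warning"

# Example: a user with three prior warnings gets a final (level-4) warning,
# and the next offense produces a ban request for an administrator to act on.
warnings, board = ["w1", "w2", "w3"], []
print(warn_or_request_ban(warnings, board, "ExampleUser"))  # issued level-4 warning
print(warn_or_request_ban(warnings, board, "ExampleUser"))  # ban requested
```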

The paper was interesting in that it shone a light on another benefit that automation can bring to collaborative work. It emphasizes that the automated bots and their efficient reporting system created a decentralized network of human moderators by pre-processing and analyzing queued edits to form a ranked queue of potential vandal edits, ordered in part by previous warnings. Since many effective scheduling algorithms already exist, automated scheduling is a promising way to coordinate human teamwork. Wikipedia's system reminded me of a thread pool in modern software, except that each task is carried out by a human. A minimal sketch of such a ranked review queue is given below.
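The sketch below illustrates the thread-pool analogy under my own assumptions: bots score incoming edits and keep them in a priority queue, and human reviewers repeatedly pull the most suspicious edit. The scoring rule (prior warnings plus a crude heuristic score) is hypothetical and not the paper's or Wikipedia's actual algorithm.

```python
import heapq

def enqueue(queue, edit_id, prior_warnings, heuristic_score):
    """Add an edit to the review queue with a bot-computed suspicion score."""
    # heapq is a min-heap, so negate the score to pop the most suspicious edit first.
    priority = -(prior_warnings + heuristic_score)
    heapq.heappush(queue, (priority, edit_id))

def next_edit_for_review(queue):
    """Hand the most suspicious queued edit to the next available human reviewer."""
    _, edit_id = heapq.heappop(queue)
    return edit_id

queue = []
enqueue(queue, edit_id="rev-101", prior_warnings=3, heuristic_score=2)
enqueue(queue, edit_id="rev-102", prior_warnings=0, heuristic_score=1)
print(next_edit_for_review(queue))  # rev-101 is reviewed first
```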

Wikipedia’s vandal-fighting system makes good use of the respective affordances of humans and AI. Humans apply their linguistic and complex reasoning abilities to identify vandal edits, while the bots efficiently handle the many repetitive tasks, such as sorting edit queues and logging and retrieving warnings.

The following are questions I had while reading the paper:

1. At the end of the use case presented in the paper, an obsolete report filed after a user’s ban was automatically removed by the system. This is an example of resolving a race condition. Could other conflicts arise because of the order of edits? Would some of them be difficult for a bot to fix?

2. According to the paper, the age of the prior warnings on a potential vandal’s talk page does not seem to be taken into account when assigning a new warning. What if a user who had received four warnings decided to quit vandalizing, came back a few years later, and accidentally made an edit that was flagged as vandalism? The system would issue a temporary ban. Do you think this is fair?

3. According to the paper, vandal fighters can choose from a range of helper bots for their activity. These bots remain compatible with one another because they all rely on the talk pages provided by Wikipedia. Are there cases where the different types of bots could conflict with each other or cause problems?
