Summary
Geiger and Ribes examine the use of automated tools, or “bots,” to prevent vandalism on Wikipedia, the popular user-generated online encyclopedia. The authors detail how editors use assisted editing tools such as “Huggle,” and argue that these distributed-cognition coordination applications shape the creation and maintenance of Wikipedia as much as the traditional social roles of editors do. Teams of humans and bots work together to fight vandalism in the form of rogue edits. The authors note that bot-assisted edits grew from essentially 0% of all edits in 2006 to 12% in 2009, with editors performing even more edits with bot assistance. They then take a deep dive, using a method they term “trace ethnography,” into how editors came to ban a single vandal who committed 20 false edits to Wikipedia in an hour.
Personal Reflection
This work was eye-opening in showing exactly how Wikipedia editors leverage bots and other forms of distributed cognition to maintain order on Wikipedia. Furthermore, after reading this, I am much more confident in the accuracy of articles on the site (possibly to the chagrin of teachers everywhere). I was surprised by how easily attack edits were repelled by the Wikipedia editors, considering that hostile bot networks could be deployed against Wikipedia as well.
I also enjoyed the analogy between managing Wikipedia and navigating a naval vessel: both rely on significant amounts of distributed cognition to succeed. Showing how many roles are needed, and how the people in those roles must understand one another's jobs and collaborate, was quite effective.
Lastly, their trace ethnography of a single vandal was an effective way of portraying what is essentially daily life for these maintainers. I was somewhat surprised that only four people were involved before the user was banned; I had figured that each vandal took much longer to identify and remedy. Seeing how the process unfolded, with the vandal receiving repeated warnings before a (temporary) ban, and how the bots and humans worked together to reach that conclusion, was fascinating and not something I had seen written up in a paper before.
Questions
- One bot that this article didn’t look into is a Twitter bot that tracked all changes to Wikipedia made from IP addresses used by members of Congress (@CongressEdits). Its intended audience is not specifically Wikipedia’s editors, but how might it help them? How does this bot help the general public? (It has since been banned, in 2018.) How might a tool like this be abused?
- How might trace ethnography be used in other HCI applications? Does this approach make sense for domains other than a global community of editors?
- How can Huggle (or the other tools) be changed in order to tackle a different application, such as version control? Would it be better than current tools?
- Is there a way for vandals to exploit this system? That is, are there any weaknesses in human/bot collaboration in this case?
Hi, LLisle.
Your reflection is interesting because I thought exactly the opposite: that four people being involved to ban a user was too many. Considering that Wikipedia runs purely on enthusiastic volunteers, the fact that four unrelated people were able to come together to produce a result (that is, ban a user) seemed almost like a miracle. The fact that the whole process took only 15 minutes makes it all the more so. It’s sometimes very hard to even come up with a project proposal in a month with a team of uninterested individuals.
Regarding your fourth question, I can think of a case where vandals themselves use the bots to do harm. They may purposely undo a fix that other vandal fighters made using the bots, or let harmful edits stand without reporting them. This seems possible since I did not read of any special requirements for using the bots in the paper.