02/19/2020 – The Work of Sustaining Order in Wikipedia: The Banning of a Vandal – Sushmethaa Muhundan

This paper discusses the counter-vandalism process in Wikipedia, focusing on both the visible human efforts and the largely silent non-human efforts involved. Fully automated anti-vandalism bots are a key part of this process and play a critical role in managing Wikipedia's content. The actors involved range from fully autonomous software, to semi-automated programs, to user interfaces used by humans. A case study, an account of detecting and banning a vandal, is presented to highlight the importance and impact of bots and assisted editing programs. Vandalism-reverting software uses queuing algorithms paired with a ranking mechanism based on vandalism-identification algorithms. The queuing algorithm takes into account multiple factors, such as the kind of user who made the edit, the user's revert history, and the type of edit made. The software proves extremely effective at surfacing prospective vandals to reviewers. User talk pages are forums used to take action after an offense has been reverted. This largely invisible infrastructure has been critical in insulating Wikipedia from vandals, spammers, and other malevolent editors.
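The queuing-plus-ranking idea can be sketched in a few lines. This is a minimal illustration only: the weights, factor names, and scoring formula below are my own assumptions; the paper does not specify how tools like Huggle actually combine these signals.

```python
import heapq

# Hypothetical weights per user type; anonymous editors are treated
# as more suspicious than registered ones.
USER_TYPE_WEIGHT = {"anonymous": 0.4, "new": 0.2, "registered": 0.0}

def score_edit(user_type, prior_reverts, content_score):
    """Combine the factors the paper mentions: the kind of user,
    the user's revert history, and a content-based
    vandalism-identification score."""
    return (content_score
            + USER_TYPE_WEIGHT.get(user_type, 0.0)
            + 0.1 * prior_reverts)

def rank_edits(edits):
    """Return edit ids ordered most-suspicious-first for reviewers.

    `edits` is a list of (edit_id, user_type, prior_reverts,
    content_score) tuples."""
    # Negate the score: heapq is a min-heap, and we want the
    # highest-scoring (most suspicious) edits popped first.
    heap = [(-score_edit(u, r, c), edit_id) for edit_id, u, r, c in edits]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

Under this toy scoring, an anonymous editor with several past reverts surfaces ahead of a registered user's low-score edit, which matches the prioritization behavior the paper describes.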

I feel that the case study helps readers understand the internal workings of vandalism-reverting software, and it is a great example of handling a problem by leveraging the complementary strengths of AI and humans. It is interesting to note that the cognitive work of identifying a vandal is distributed across a heterogeneous network and unified using technology! This lends speed and efficiency and makes the entire system robust. I found it particularly interesting that ClueBot, after identifying a vandal, reverted the edit within seconds. The edit did not wait in a queue for a human or another bot to review it but was resolved immediately.

A pivotal feature of this ecosystem that I found fascinating is that domain expertise or skill is not required to handle such vandal cases. The only expertise required of vandal fighters is in the use of the assisted editing tools themselves, and the kinds of commonsensical judgment those tools enable. This widens the pool of prospective workers, since specialized domain experts are not required.

  • The queuing algorithm takes into account multiple factors, such as the kind of user who made the edit, the user's revert history, and the type of edit made. Apart from the factors mentioned in the paper, what other factors could be incorporated into the queuing algorithm to improve its efficiency?
  • What are some innovative ideas that can be used to further minimize the turnaround reaction time to a vandal in this ecosystem?
  • What other tools can be used to leverage the complementary strengths of humans and AI using technology to detect and handle vandals in an efficient manner?


02/19/2020 – The Work of Sustaining Order in Wikipedia: The Banning of a Vandal – Yuhang Liu

The authors of this paper examine the social role of software tools in Wikipedia, with a particular focus on automatic editing programs and assisted editing tools. The authors show by example that Wikipedia's content can be maliciously modified, which may have a harmful impact on society. Such damage can be repaired by administrators and by assisted software, and as the technology develops, this software plays an ever larger role in the repair work. Using trace ethnography, the authors show how these unofficial technologies have fundamentally changed the nature of editing and administration in Wikipedia. Specifically, vandal fighting can be analyzed as distributed cognition, emphasizing the role of non-human actors in decentralized activities that promote collective intelligence. Overall, this shows that software programs are not only used to enforce policies and standards; these tools can take coordinated but decentralized action and can play a more widespread and effective role in subsequent applications.

I think this paper gives a large number of examples, including the impact of edits on society, and uses analogies to explain its networked terms effectively; together these illustrate Wikipedia and the impact of these applications on people's lives very well. Among them, I think the author makes two main points.

  1. Robots and software, such as assisted editing tools, play an increasingly important role in life and work. For example, the article introduces two editing tools, Huggle and Twinkle, and describes their use in detail. Reading this introduction completely changed my concept of assisted editing tools: these unofficial tools can greatly help administrators complete maintenance work. This also leads to the concept of "trace ethnography". In my opinion, trace ethnography is a way of generating rich accounts of interaction by combining a fine-grained analysis of the various traces automatically recorded by a project's software with an ethnographically derived understanding of the tools, techniques, practices, and procedures that generate such traces. It can integrate the small traces people leave on the Internet, and I think it can play a vital role in monitoring people's behavior online. To maintain a healthy network environment, I even think we could use this capability more widely.
  2. The author describes vandal fighting as distributed cognition through an analogy with ship navigation. Users and machines each complete part of the judgment, and the results are then integrated across the network so that intentional vandalism becomes visible. I think this kind of thinking may greatly change the way people work in the future. The navigation work described does not require much professional knowledge; it only requires being able to read maps and use the instruments. Likewise, in future work, people may not need deep professional knowledge; they may only need to be able to understand the information and exercise the right judgment. This will definitely change the way people work.

Question:

  1. In the future, will it be possible to complete inspection and maintenance with robots and computers alone (without people)?
  2. Is it possible to apply the ideas of trace ethnography in other fields, such as monitoring cybercrime?
  3. Assisted editing tools reduce the expertise required of administrators. Will this change benefit these workers in the long run? Does easier job completion mean easier replacement by machines?
  4. The article mentions that we need to consider the social impact of such assisted editing tools. What kind of social impact do you think the various software tools in your life have?


02/18/20 – Akshita Jha – The Work of Sustaining Order in Wikipedia: The Banning of a Vandal

Summary:
“The Work of Sustaining Order in Wikipedia: The Banning of a Vandal” by Geiger and Ribes examines the role of software tools in the English Wikipedia, specifically autonomous and assisted editing. Wikipedia is a “free online encyclopedia, created and edited by volunteers around the world and hosted by the Wikimedia Foundation.” Bots are “fully-automated software agents that perform algorithmically-defined tasks involved with editing, maintenance, and administration in Wikipedia.” Different bots have different functions, ranging from simple tasks like correcting grammatical errors to more complicated tasks like detecting personal insults. The authors present a detailed case study: “The Banning of a Vandal”. They discuss “Huggle”, the most widely used assisted editing tool on Wikipedia, which queues edits for review. The user then has the option to perform a variety of actions, such as ‘revert’ or ‘warn’, on each edit that is displayed, but cannot choose which edit to review next. An anonymous user had been vandalizing multiple Wikipedia pages and was not discouraged by the warnings and comments from the moderators. Eventually, this rogue user was blocked through the combined network of moderators (vandal fighters) and bots, but the process was more cumbersome than expected. In addition to the quantitative and qualitative findings, the research also demonstrates the importance of trace ethnography for studying such sociotechnical systems.
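The escalation the case study traces, from reverting, to talk-page warnings, to a block, can be sketched roughly as follows. The four warning levels mirror Wikipedia's templated user warnings, but the function name, message strings, and exact threshold logic here are illustrative assumptions, not the actual behavior of Huggle or any bot.

```python
MAX_WARNING_LEVEL = 4  # talk-page warning templates escalate through four levels

warning_levels = {}  # user -> highest warning level issued so far

def revert_and_warn(user):
    """Revert the offending edit, then either post the next templated
    warning on the user's talk page or, once warnings are exhausted,
    report the user to administrators for blocking."""
    level = warning_levels.get(user, 0) + 1
    if level > MAX_WARNING_LEVEL:
        return f"report {user} to administrators for blocking"
    warning_levels[user] = level
    return f"warn {user} at level {level}"
```

The talk-page record is what lets independent vandal fighters coordinate: each moderator only makes a local judgment, but the accumulated warning level carries the collective history that eventually justifies the block.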

Reflections:
This is an interesting work. It was particularly insightful since I was unaware of the role of the many bots involved in Wikipedia editing. Bots and humans working cohesively have helped make Wikipedia the widely used resource it currently is. Making Wikipedia a free resource that allows editing by volunteers comes with a cost. This paper highlights the limitations of the Wikipedia bots and how a significant amount of effort from multiple moderators is needed to ban a vandal. Each moderator makes a local judgment, but the Wikipedia talk pages keep a record of all the warnings against a particular user. Certain kinds of vandalism, like inserting obscenities and profanities, are easy to detect. However, if a vandal deletes an important section of a Wikipedia page, identifying and rectifying that may require significant cognitive effort from moderators. An interesting question is how Wikipedia would be affected if it used a completely automated bot instead of the hybrid system it currently has. Would the bots be able to determine the significance of an edit or a change? How would that change the moderators' behaviors and actions? Since automated tools help determine the kinds of social activities that are possible on Wikipedia, would a completely automated bot significantly alter Wikipedia and user involvement? It would also be interesting to see whether we can use trace ethnography to study Reddit, another big sociotechnical system.

Questions:
1. How did such a network come into being?
2. Do you think certain kinds of Wikipedia pages are more susceptible than others to vandalism?
3. Will completely automated bots help?
4. Can we conduct such a case study for Reddit? Why? Why not?


02/19/2020 – Nurendra Choudhary – The Work of Sustaining Order in Wikipedia

Summary

In this paper, the authors discuss the problem of maintaining order in open-edit information corpora, specifically Wikipedia. They start by explaining Wikipedia's near-immunity to vandalism, achieved through a synergy between humans and AI. Wikipedia is open to all editors, and the team behind the system is highly technical. However, the authors study how this immunity depends on the community's social behavior. They show that vandal fighters form networks of people who identify vandals based on patterns of behavior. They are supported by AI tools, but banning a vandal is not yet a completely automated process. Banning a user requires individual editor judgments at a local level and a collective decision at a global level. This creates a heterogeneous network and emphasizes the corroboration of decisions by different actors.

As stated in the conclusion, “this research has shown the salience of trace ethnography for the study of distributed sociotechnical systems”. Here, trace ethnography combines the abilities of editors with the data traces of their actions to analyze vandalism in Wikipedia.

Reflection

It is interesting to see that Wikipedia’s vandal fighting involves such seamless cooperation between humans and AI. I think this is another case where AI can leverage human networks for support. More significantly, the tasks are not trivial and require human specialization, not just plain effort. Collaboration is also a significant part of AI’s capability. Human editors analyze articles in their local context; AI can efficiently combine the results and target the source of these errors by building a heterogeneous network of such decisions. Further, humans analyze these networks to ban vandals. This methodology applies the most important abilities of both humans and bots: the collaboration draws on the best attribute of humans, i.e., judgment, and of AI, i.e., pattern recognition. It also effectively pits this collaboration against vandals, who are independent actors or small networks of bad actors without access to the bigger picture.

The methodology uses distributed work patterns to accomplish the different tasks of editing and moral agency. Distributing the work lets humans handle the individual tasks. However, combining the results to draw logical inferences is not humanly possible, because the vast amount of data is incomprehensible to a single person. But humans can develop algorithms that machines apply at a larger scale to obtain such inferences. The inferences, in turn, do not have a fixed structure and require human intelligence to translate into the desired actions against vandalism. Given that most such vandalism is committed by independent individuals, a collaborative effort aided by AI can greatly turn the odds in the vandal fighters' favor, because AI gives humans access to a bigger picture that is incomprehensible to humans alone.

Questions

  1. If vandals gain access to the network, will they be able to destroy this synergy?
  2. If there is stronger motivation, such as political or monetary gain, will it give rise to a mafia-like network of bad actors? Will the current methodology still be valid in such a case?
  3. Do we need a trustworthiness metric for each Wikipedia page? Can a page be used as a reference for authoritative information?
  4. Wikipedia is a great example of crowdsourcing, and this is a great article on crowd control in such networks. Can this approach be extended to other crowdsourcing platforms like Amazon Mechanical Turk or to information blogs?
