02/19/2020 – Sukrit Venkatagiri – The Case of Reddit Automoderator

Paper: Shagun Jhaver, Iris Birman, Eric Gilbert, and Amy Bruckman. 2019. Human-Machine Collaboration for Content Regulation: The Case of Reddit Automoderator. ACM Transactions on Computer-Human Interaction (TOCHI) 26, 5: 31:1–31:35. https://doi.org/10.1145/3338243

Summary: This paper studies Reddit’s Automod, a rule-based moderation tool that automatically filters content on subreddits and can be customized by moderators to suit each community. The authors sought to understand how moderators use Automod and what advantages and challenges it presents. They found a need for audit tools to tune Automod’s performance, for a repository to share configurations and tools across communities, and for a better division of labor between human and machine decision making. The paper concludes with a discussion of the sociotechnical practices that shape the use of these tools, how the tools help moderators maintain their communities, and the challenges and limitations involved, as well as solutions that may help address them.

Reflection:

I appreciate that the authors were embedded within the Reddit community for over a year and that they provide concrete recommendations for creators of new and existing platforms, for designers and researchers interested in automated content moderation, for scholars of platform governance, and for content moderators themselves.

I also appreciate the deep and thorough qualitative nature of the study, along with the screenshots; however, the paper may be too long and too detailed in some respects, and I wish there were a “mini” version of it. The quotes themselves were compelling and exemplary of the problems users faced.

The finding that different subreddits configured and used Automod differently was interesting, and I wonder how much a moderator’s skills and background affect whether and in what ways they configure and use it. Lastly, the conclusion is very valuable, especially as it is targeted toward different groups within and outside of academia.

Two themes that emerged, “becoming/continuing to be a moderator” and “recruiting new moderators,” sound interesting, but I wonder why they were left out of the results. The paper does not offer any explanation for this.

Questions:

  1. How might subreddits differ in their use of Automod based on their moderators’ technical abilities?
  2. How can we teach people to use Automod better?
  3. What are the limitations of Automod? How can they be overcome through ML methods?

2 thoughts on “02/19/2020 – Sukrit Venkatagiri – The Case of Reddit Automoderator”

  1. Following up on question 2, I believe there is more we can do to teach and help people learn how to use Automod. Decentralizing the knowledge of how to use Automod is important for spreading it, as the subreddits studied by the team seemed to let a single person handle the majority of Automod operations. This could be done by creating a “REPL” (read-evaluate-print loop) environment with some sample testbed comments, or even an easier mode of use (see the sketch after these comments). From the YAML example configuration given in the paper, it is easy to see how extensible the rules can be and why people would normally be hesitant to learn them. A simpler, less complicated, but more restrictive configuration would, I believe, create a lower barrier to entry.

  2. In PhotoshopBattles, there are many regulations about images, such as sources or resolution. In Politics, there are regulations about link sources, and the sources must be on the whitelist defined in the configuration.
    I think the current system is good: Reddit gives moderators a wiki page where they can modify the configurations. Since the basic part covers most of the implementation of the regulations, it is not too hard for the moderators. But if they have expert programmers and want to edit something in the core, the tool also allows them to do so.
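To make the REPL suggestion in the first comment concrete, here is a minimal sketch of what such a testbed might look like. It is purely illustrative: the rule format, field names, and sample comments are made up for this example, and real Automod rules are written in YAML with far richer conditions and actions than this toy regex check.

```python
# Hypothetical sketch of a "REPL with sample testbed comments" for trying out
# moderation rules -- not Automod's actual engine or rule syntax.
import re

# A toy "rule": remove a comment if its body matches any listed regex.
toy_rule = {
    "action": "remove",
    "body_regex": [r"(?i)\bbuy now\b", r"(?i)free\s+giveaway"],
}

sample_comments = [
    "Check out this analysis of the new policy.",
    "BUY NOW and get a FREE giveaway!!!",
]

def apply_rule(rule, body):
    """Return the rule's action if any pattern matches the body, else 'approve'."""
    if any(re.search(pattern, body) for pattern in rule["body_regex"]):
        return rule["action"]
    return "approve"

if __name__ == "__main__":
    # First batch-test the rule against the sample comments...
    for body in sample_comments:
        print(f"{apply_rule(toy_rule, body):>8}: {body}")
    # ...then drop into a simple read-evaluate-print loop for ad-hoc testing.
    while (line := input("test comment (blank to quit)> ")):
        print(apply_rule(toy_rule, line))
```

A restricted front end like this, where moderators only fill in patterns and an action and immediately see the result on sample comments, is one way the “lower barrier to entry” idea above could work before they graduate to full YAML configurations.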
