Summary:
“Human-Machine Collaboration for Content Regulation: The Case of Reddit Automoderator” by Jhaver et al. examines the popular social media site Reddit and the unusual collaboration between its unpaid human moderators and automated moderation tools. Reddit moderators rely on a highly configurable automated program called ‘Automoderator’ (commonly ‘Automod’) to help decide which content should be removed from the site. The authors interview 16 Reddit moderators to understand how they benefit from Automod, and how they adapt and configure it to reflect their subreddit’s policies so that they can moderate effectively. The authors also offer insights that may benefit platform creators, designers of automated regulation systems, scholars of platform governance, and content moderators. They conclude that moderation on Reddit is a collaborative effort between humans and automated systems; this hybrid system works, but there is clearly room for improvement in how these tools are developed and deployed.
Reflections:
Online platforms can be a boon or a bane depending on how people choose to engage with them. Regulation seems necessary to ensure that low-quality posts (which can be treated as noise) do not drown out informative and worthwhile posts. However, this is a challenging task: deciding whether a post is appropriate for a subreddit places a great deal of responsibility on the moderator. In some cases the moderator might be a bot such as Automod; in other cases the platform relies on paid or unpaid volunteers. Reddit moderators are unpaid. The authors analysed five subreddits: r/photoshopbattles, r/space, r/oddlysatisfying, r/explainlikeimfive, and r/politics. It is interesting that some Reddit moderators prefer to implement moderation bots from scratch using the Reddit API (a minimal sketch of what this involves follows below), while others use tools built by fellow moderators; intriguingly, this sharing of tools creates a sense of community among moderators within the larger Reddit community. Most subreddits use Automod, which was initially created by Chad Birch using the Reddit API in January 2012.

A major drawback of this study is that all of the moderators the authors interviewed were male. It would be helpful to get the perspective of female moderators, if there are any, especially since Reddit’s user base is already disproportionately male. I also feel the authors should have selected r/AskHistorians as one of the subreddits for analysis, since it is widely known to be heavily moderated and content-driven. It would have been interesting, too, to dig into the comments that Automod marked as offensive but that were not; this would help improve the tool’s performance while revealing its limitations. Finally, one might wonder about the consequences as a subreddit community grows larger: there may be a need to re-examine the existing tools and how well they scale.
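To make concrete what building a bot “from scratch” with the Reddit API involves, here is a minimal sketch using the PRAW library (the Python Reddit API Wrapper). This is my own illustration, not code from the paper: the credentials, subreddit name, and banned-phrase list are all hypothetical placeholders.

```python
# Minimal sketch of a from-scratch moderation bot using PRAW.
# Credentials, the subreddit name, and the phrase list below are
# hypothetical placeholders, not anything described in the paper.
import re

import praw

BANNED_PHRASES = [r"\bbuy followers\b", r"\bfree crypto\b"]  # hypothetical rules

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_BOT_ACCOUNT",
    password="YOUR_PASSWORD",
    user_agent="toy-moderation-bot by u/YOUR_ACCOUNT",
)

# Stream new comments from the subreddit and remove any that match a rule.
for comment in reddit.subreddit("YOUR_SUBREDDIT").stream.comments(skip_existing=True):
    for pattern in BANNED_PHRASES:
        if re.search(pattern, comment.body, re.IGNORECASE):
            comment.mod.remove()  # requires the bot account to be a moderator
            break
```

Even this toy version hints at why many moderators prefer a shared, configurable tool like Automod: every subreddit writing its own streaming loop, credential handling, and rule list duplicates a lot of effort.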
Questions:
1. Do you agree that social media content should be moderated?
2. How should platforms safeguard the mental health of moderators?
3. What kinds of resources should be made available to moderators, given that they deal with sensitive content all the time?
For the first question: yes, I agree that content should be moderated. If no one governed the content of posts, there would be many useless or even illegal posts. As a result, users might feel uncomfortable and leave the discussion site, which would harm both the site and its users.
Admittedly, there may be bad moderators. However, users can supervise the moderators in turn: when a moderator does something bad, users can report them.
I also think feedback is important. Automod currently provides no feedback to moderators when it removes posts, which makes it hard for moderators to adjust its configuration accordingly.
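One way such feedback could work, sketched below, is for a removal tool to log which rule fired and what text it matched, so moderators can audit removals and spot overly broad patterns. The rule names, patterns, and log format here are my own assumptions for illustration, not features of the actual Automod.

```python
# Hypothetical sketch: a rule matcher that records *why* it removed a comment,
# so moderators can audit decisions and tune their rules. The rule names,
# patterns, and log format are illustrative assumptions, not real Automod behavior.
import csv
import re
from datetime import datetime, timezone

RULES = {  # hypothetical rules: name -> regex pattern
    "spam-links": r"\bbuy followers\b",
    "harassment": r"\bidiot\b",
}

def check_comment(comment_id: str, body: str, log_path: str = "removals.csv") -> bool:
    """Return True if the comment should be removed, logging the reason."""
    for rule_name, pattern in RULES.items():
        match = re.search(pattern, body, re.IGNORECASE)
        if match:
            with open(log_path, "a", newline="") as f:
                csv.writer(f).writerow([
                    datetime.now(timezone.utc).isoformat(),
                    comment_id,
                    rule_name,
                    match.group(0),  # the exact text that triggered removal
                ])
            return True
    return False

# A moderator reviewing removals.csv can see which rule fired and the matched
# text, making it easier to identify and fix rules that cause false positives.
print(check_comment("t1_example", "Don't be an idiot, just buy followers."))
```

A log like this would also support the kind of analysis I suggested above: reviewing comments the tool removed that should have stayed up.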