Reflection #3 – [01/25] – [John Wenskovitch]

This paper describes a study of antisocial behavior in online discussion communities, though I feel that labeling the behavior as “negative” rather than “antisocial” may be more accurate.  In this study, the authors examined the comment sections of CNN, Breitbart, and IGN, identifying users who created accounts and were subsequently banned during the 18-month study window.  Among other findings, the authors noted that these negative users write more poorly than other users, that they both start new discussions and reply to existing ones, and that they come in a variety of forms.  The authors also found that the response from the rest of the community influences the behavior of these negative users, and that they can predict with high accuracy whether a user will eventually be banned by evaluating only a small number (5-10) of the user’s posts.

Overall, I felt that this paper was very well organized.  I saw the mapping pattern discussed during Tuesday’s class, linking the stages of the data analysis process to the sections of the paper.  The data collection, preprocessing, and results were all presented clearly (though I had a visualization/data-presentation gripe: many of the subfigures were rendered far too small, with extra horizontal whitespace between them).  The results in particular were neatly organized by research finding, so the bolded introductory text made it clear what was being discussed.

One critique I have, which the authors did not address well, is that all three of the discussion communities they evaluated use the Disqus commenting platform.  In a way, this works to the authors’ advantage by providing a standard platform to evaluate.  However, near the end of the results, the authors note that “moderator features… constitute the strongest signals of deletion.”  It would be interesting to run a follow-up study on websites that use different commenting platforms, as their moderators may have access to different moderation tools.  I would be interested to know whether the specific actions taken by moderators have an effect similar to the community response, and whether these negative users respond differently to gentler moderation steps like shadowbanning or muting than to harsher steps like post deletion and temporary or permanent bans.  From research like this, commenting platform creators could modify their tools to support the actions that best mitigate negative behavior.
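To make that proposed comparison concrete, here is a minimal sketch of how such a follow-up analysis might tabulate behavior changes across moderation actions.  The log schema and toy records are entirely hypothetical on my part; no per-action moderation data of this kind exists in the paper.

```python
# Hypothetical sketch: comparing how users' behavior changes after different
# moderation actions.  The schema and records below are invented for
# illustration; the paper contains no per-action moderation data.
from collections import defaultdict

# (user_id, moderation_action, flagged posts before action, flagged posts after)
moderation_log = [
    ("u1", "shadowban",     8, 7),
    ("u2", "post_deletion", 5, 9),
    ("u3", "temp_ban",      6, 4),
    ("u4", "shadowban",     4, 3),
    ("u5", "post_deletion", 7, 8),
]

deltas = defaultdict(list)
for _user, action, before, after in moderation_log:
    deltas[action].append(after - before)

# Mean change in flagged posts per action type; negative suggests improvement.
for action, changes in sorted(deltas.items()):
    print(f"{action}: mean change = {sum(changes) / len(changes):+.2f}")
```

Even a simple per-action average like this would indicate whether gentler actions correlate with smaller behavioral shifts, though a real study would need to control for confounds such as which users receive which actions.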

In a similar vein, the authors have no way of knowing precisely how moderators located comments from these negative users to begin the punishment process.  I would be interested to know whether there is a cause-and-effect relationship between the community response and the moderator response (e.g., moderators look for heavily downvoted comments to delete and then ban users), or whether moderators simply keep track of problem users and evaluate every comment made by those users.  Unfortunately, this information would likely require moderator interviews or further knowledge of moderation tools and tactics, rather than something that could be scraped or easily provided by Disqus.

The “factors that help identify antisocial users” and “predicting antisocial behavior” sections were quite interesting in my opinion, because they suggest that problem users could be identified and moderated early on, rather than after they begin causing severe problems within the discussion communities.  The authors’ presentation of the inferential statistics here was well written and easy to follow.  Their discussion at the end of these sections regarding the generalizability of the classifiers was also pleasing to see, showing that negative users share enough common features that a classifier trained on CNN trolls can be applied elsewhere.
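To illustrate that prediction setup, here is a minimal sketch of a ban classifier with a cross-community evaluation.  The feature names, synthetic data, and logistic regression model are my own assumptions for illustration; the paper’s actual feature set and model are richer than this.

```python
# Hedged sketch: predicting a future ban from features of a user's first
# few posts, then testing transfer to a second community.  All features
# and data below are synthetic stand-ins, not the paper's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_community(n_users=1000):
    """Toy per-user features aggregated over each user's first ~10 posts."""
    frac_deleted = rng.beta(1, 9, n_users)   # share of posts moderators deleted
    mean_votes = rng.normal(0, 1, n_users)   # average up/down-vote score
    readability = rng.normal(0, 1, n_users)  # text-quality proxy
    X = np.column_stack([frac_deleted, mean_votes, readability])
    # Toy label: deletions and downvotes raise the probability of a ban.
    logit = 4 * frac_deleted - 0.8 * mean_votes - 0.3 * readability - 1.5
    y = rng.random(n_users) < 1 / (1 + np.exp(-logit))
    return X, y.astype(int)

# Train on one community and test on another to probe generalizability,
# mirroring the cross-domain evaluation discussed above.
X_train, y_train = make_community()   # e.g., CNN
X_other, y_other = make_community()   # e.g., IGN

clf = LogisticRegression().fit(X_train, y_train)
print("within-community AUC:", roc_auc_score(y_train, clf.predict_proba(X_train)[:, 1]))
print("cross-community AUC: ", roc_auc_score(y_other, clf.predict_proba(X_other)[:, 1]))
```

A high cross-community AUC in the real study is what supports the claim that the signals of antisocial behavior transfer across sites.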

Finally, I wanted to make note of the discussion under Data Preparation regarding the various ways that undesired behavior could be defined.  The discussion was helpful both from an explanatory perspective, describing negative tactics like baiting users, provoking arguments, and derailing discussions, and from a methodological perspective, clarifying which behaviors were being measured throughout the rest of the study.  However, I’m curious whether there are cases that the authors did not measure, or whether bans that do not actually reflect antisocial behavior may have been introduced into the data.  For example, several reddit communities are known for banning users who simply comment with differing political views.  Though I don’t want to visit Breitbart myself, second-hand information that I’ve heard about the community makes me suspect that a similar practice might exist there.  It was not clear to me whether the authors would have excluded comments and banned users from consideration in this study if, for example, those users simply expressed unwanted content (liberal views) in polite ways on a conservative website.  That still counts as “undesired behavior,” but I wouldn’t place it in the same tier as some of the other behaviors noted.

