Reflection #3 – 1/23 – Deepika Kishore Mulchandani

[1] Cheng, J., Danescu-Niculescu-Mizil, C., and Leskovec, J. 2015. Antisocial behavior in online discussion communities. In Proceedings of the Ninth International AAAI Conference on Web and Social Media (ICWSM).

Summary:

In this paper, Justin Cheng et al. study antisocial behavior in three online discussion communities: CNN.com, a general news site, Breitbart.com, a political news site, and IGN.com, a computer gaming site. The authors first characterize antisocial behavior by comparing the activity of users banned from the community (FBUs) with that of users who were never banned (NBUs). They then perform a longitudinal analysis, i.e., a study of users' behavior over their active tenure in the community. They also consider the readability of posts and each user's post deletion rate as features for training their model. After developing the model, they predict which users will be banned in the future. With their model, they need to observe only 5 to 10 of a user's posts to accurately predict whether that user will later be banned. They present two hypotheses and try to answer the following research questions: Do users become antisocial later? Does a community's reaction affect their behavior? Can antisocial users be identified early?
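
To make that prediction setup concrete, here is a minimal sketch, not the authors' exact pipeline: it trains an off-the-shelf classifier on user-level features summarizing a user's first few posts. All feature names and data below are hypothetical stand-ins for the paper's feature groups (post content, activity, community reaction, and moderator actions).

```python
# Minimal sketch (not the authors' exact pipeline): predict a future ban
# from features of a user's first 5-10 posts. Features and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Each row = one user, summarized over their first posts:
# [readability, posts_per_day, downvote_fraction, deletion_rate]  (hypothetical)
X = rng.random((1000, 4))
# Label: 1 if the user was eventually banned (FBU), 0 otherwise (NBU).
# Here the label is synthetically tied to deletion rate plus noise.
y = (X[:, 3] + 0.3 * rng.standard_normal(1000) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```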

Reflection:

Antisocial behavior is a worry whether it occurs online or in person. That said, this research is an indication of the progress being made toward alleviating the ill effects of such behavior. The authors mention four groups of features that help in recognizing antisocial users in a community. Of these, the one that is most salient in the authors' study is the 'moderator features'. Moderators delete posts and ban users in a community. They have a particular set of guidelines based on which they delete the posts they consider antisocial. This raises a few questions. Do moderators delete posts based only on the language of a post, or do factors like the number of down votes and whether the post was reported also affect the decision? The point of this question is to figure out which of these factors they weigh more heavily. It also opens up further questions, such as: Do moderator demographics (e.g., age) play a role in how offensive they find a post to be? The authors mention that there were more swear words in the posts written by FBUs. Moderators who are more tolerant of swear words may not delete the posts of potential FBUs.

I admire the efforts of the authors in studying the entire history of a particular user to identify patterns in that user's behavior over time. I also like the other features used by the authors. The activity features (e.g., time spent in a thread) are not that intuitive, yet they end up playing a significant role. The authors made an important observation that a model trained on one community performs relatively well on the other communities too (a sketch of such a cross-community check appears after this paragraph). Also, they reported that FBUs survived over 42 days on CNN, 82 days on Breitbart, and 103 days on IGN. These differences could be explained by the category of the online discussion community. One could expect an online community that hosts only political news to be more tolerant of antisocial behavior, by virtue of the fact that opposition is inherent in the news. Most of the posts in such a community could have down votes and replies to comments, which are both significant features of the model as well as factors that influence a moderator's decision. Thus, the question arises: 'Does the category of the online discussion community affect the ban of an antisocial user?' I also agree with the authors that it is difficult to track users who might instigate arguments but maintain NBU-like behavior. This could be a crucial research question to look into.
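
As a rough illustration of that cross-community observation, the sketch below fits a classifier on one community's users and evaluates it on another's, assuming both communities share the same hypothetical feature columns as in the earlier sketch. None of the data or numbers here come from the paper.

```python
# Hypothetical sketch of a cross-community check: train on one community,
# evaluate on another. All features, labels, and community data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_community(n, deletion_weight):
    """Synthetic stand-in for one community's user-level features and labels."""
    X = rng.random((n, 4))  # readability, activity, downvotes, deletion rate
    y = (deletion_weight * X[:, 3] + 0.3 * rng.standard_normal(n) > 0.6).astype(int)
    return X, y

X_cnn, y_cnn = make_community(800, 1.0)  # stand-in for "CNN" users
X_ign, y_ign = make_community(800, 0.9)  # stand-in for "IGN" users

clf = LogisticRegression().fit(X_cnn, y_cnn)          # train on one community...
auc = roc_auc_score(y_ign, clf.predict_proba(X_ign)[:, 1])  # ...test on another
print(f"Cross-community AUC: {auc:.2f}")
```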
