The paper for today’s discussion:
Cheng, Justin, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec (2015). "Antisocial Behavior in Online Discussion Communities." Proceedings of the Ninth International AAAI Conference on Web and Social Media, 61–70.
Summary
The paper focuses on analyzing antisocial behavior in three large online communities: CNN.com, Breitbart.com, and IGN.com. The authors describe undesirable conduct such as trolling, flaming, bullying, harassment, and other unwanted online interactions as antisocial behavior. They categorize users who display such behavior into two broad groups: Future-Banned Users (FBUs) and Never-Banned Users (NBUs). Using statistical modeling, the authors predict which individual users will eventually be banned from a community. They collected data from the above-mentioned sites via Disqus over a period of about 18 months, and based their measure of undesirable behavior on posts deleted by moderators. The main characteristics of FBU posts are:
- FBUs post more than the average user and contribute more posts per thread.
- Their posts are generally off-topic and tend to express negative emotion.
- Their posting quality decreases over time, possibly as a result of censorship (post deletion).
It was also found that community tolerance changes: a community becomes less tolerant of a user's posts over time.
The authors further classified FBUs into Hi-FBUs and Lo-FBUs, the names signifying how much post deletion occurs. Hi-FBUs exhibited strongly antisocial characteristics, and their post deletion rates were consistently high. For Lo-FBUs, deletion rates were low until the second half of their lives on the site, when they rose; Lo-FBUs begin to attract attention (of the negative kind) later in their tenure. The paper establishes four families of features for identifying antisocial users: post features, activity features, community features, and moderator features. With these, the authors were able to build a system that identifies undesirable users early in their lives on a site.
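The prediction task described above can be sketched in code. The following is a minimal illustration, not the authors' actual pipeline: it trains a plain logistic-regression classifier (in pure Python) on one hypothetical feature from each of the four families the paper names, using synthetic toy data. All feature names, weights, and data distributions here are illustrative assumptions.

```python
import math
import random

# One illustrative feature from each family the paper describes.
FEATURES = [
    "readability",       # post feature: text quality of early posts
    "posts_per_day",     # activity feature: posting rate
    "upvote_fraction",   # community feature: how others react
    "deleted_fraction",  # moderator feature: share of early posts deleted
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(rows, labels, lr=0.1, epochs=500):
    """Logistic regression via stochastic gradient descent."""
    w = [0.0] * len(FEATURES)
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that a user with feature vector x is an FBU."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy data: assume FBUs (label 1) tend toward low readability, high
# activity, few upvotes, and many deleted posts.
random.seed(0)
def make_user(is_fbu):
    return [
        random.gauss(0.3 if is_fbu else 0.7, 0.1),  # readability
        random.gauss(0.8 if is_fbu else 0.4, 0.1),  # posts_per_day
        random.gauss(0.2 if is_fbu else 0.6, 0.1),  # upvote_fraction
        random.gauss(0.6 if is_fbu else 0.1, 0.1),  # deleted_fraction
    ]

rows = [make_user(i % 2) for i in range(200)]
labels = [i % 2 for i in range(200)]
w, b = train_logreg(rows, labels)

fbu_score = predict(w, b, make_user(1))
nbu_score = predict(w, b, make_user(0))
```

A held-out user who looks like an FBU should score well above one who looks like an NBU, which is the sense in which the system can flag undesirable users early on.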
Reflection
This paper was an interesting read on how the authors conducted a data-driven study of antisocial behavior in online communities. The paper on identity and deception by Judith Donath had introduced us to online "trolls" and how their posts are unwelcome in the community, sometimes to the point of system administrators banning them. This paper delves further into the topic by analyzing the types of antisocial users.
One issue that comes to mind is how moderators are supposed to block users when the platform is anonymous. The paper on 4chan's popular board /b/, which was also assigned as a reading, focused on users posting anonymously in threads and on how much of the site attracted antisocial behavior. Is it possible to distinguish individual users, and ultimately block them from posting abusive content, on anonymous platforms?
One platform where I have witnessed such unwanted comments is YouTube. Google's famous platform has a comment section where anyone with a Google account can post their views. I recently read an article, "Text Analysis of YouTube Comments" [1], which examined videos from a few categories such as comedy, science, TV, and news & politics. It observed that news- and politics-related channels attracted the majority of the negative comments, whereas the TV category was mostly positive. This leads me to think that the subject of discussion matters as well: what kinds of topics generate the most antisocial behavior in discussion communities?
Social media in general has become a platform for cyberbullying and unwanted comments. If these users and their patterns can be detected, and such comments automatically filtered out as "antisocial," it would be a huge step in the right direction.
[1] https://www.curiousgnu.com/youtube-comments-text-analysis