Reflection #2 – [8/30] – [Deepika Rama Subramanian]

Cheng, Justin, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec. “Antisocial Behavior in Online Discussion Communities.”

Summary
In this paper, Cheng et al. exhaustively study antisocial behaviour in online communities. They divide their dataset into Future Banned Users (FBUs) and Never Banned Users (NBUs) in order to compare the two groups along the following factors – post content, user activity, community response, and the actions of community moderators. The paper suggests that posts by FBUs tend to be difficult to understand and full of profanity, and that these users tend to attract attention to themselves and engage in or instigate pointless arguments. Over time, even tolerant communities begin to penalise FBUs more harshly than they did at first. This may be because the quality of the FBUs' posts has degraded, or simply because the community no longer wants to put up with the user. The paper points out, after extensive quantitative analysis, that FBUs can be identified as early as ten posts into their contributions to a discussion forum.
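To make the early-detection idea concrete, here is a minimal sketch of the kind of classifier the paper describes: predicting whether a user will eventually be banned from features of their first ten posts. The feature names and the randomly generated data below are hypothetical stand-ins, not the authors' actual feature set, model, or dataset.

    # Sketch: predict FBU vs NBU from per-user features aggregated over
    # the first 10 posts. Features and data here are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_users = 1000

    # Hypothetical features: [mean readability score, fraction of posts
    # deleted by moderators, mean downvote ratio, posts per day]
    X = rng.normal(size=(n_users, 4))
    y = rng.integers(0, 2, size=n_users)  # 1 = future-banned (FBU), 0 = never-banned (NBU)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = LogisticRegression().fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"Held-out AUC: {auc:.2f}")  # near 0.5 here, since the demo data is random

With real features of the kind the paper measures, the held-out AUC is what tells you how reliably FBUs can be flagged after only a handful of posts.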

Reflection
As I read this paper, a few questions came to mind:
1. What was the basis for selecting their dataset? While trolling is prevalent in many communities, I wonder whether Facebook or Instagram might have been a better place to study, since trolling is at its most vitriolic when the perpetrator has some access to the target's personal data.
2. One of the bases for the classification was the quality of the text. However, several groups of people produce low-quality text for reasons other than trolling, e.g. non-native speakers of English, or teens who have taken to unsavoury variations of words like 'lyk' (like) and 'wid' (with).
3. Another characteristic attributed to anti-social users is that they lead other members of the community into pointless and meaningless discussions. I have been part of a group that was frequently led into pointless discussions by legitimate, well-meaning members of the community. In this community, 'Adopt a Pet', users are frequently outraged by the enthusiasm people show for adopting pedigrees over local mutts. Every time there is a post about a pedigree adoption, a number of users express outrage. Are these users considered anti-social?
4. The paper mentions that some NBUs started out deviant but improved over time. If, as this paper proposes, platforms begin banning members based on a couple of posts made soon after they join, wouldn't we lose these users? The paper also suggests that users who believe they have been wrongly singled out (their posts deleted while other posts with similar content were not) tend to become more deviant. If people feel they have been wrongly characterised based on a few posts, wouldn't they come back with a vengeance to create more trouble on the site?
5. Looking back at the discussion in our previous class, how would this anti-social behaviour be managed on largely anonymous websites like 4chan? It isn't really possible to 'ban' a member of that community. However, given the ephemerality of the site, if the community simply ignores a troll, the post may disappear on its own.
6. What about communities where deviant behaviour is welcome? If a visitor to, say, r/watchpeopledie reports a post to the moderators, would the moderators have to delete it, given that such content is the norm on that discussion board?
