Reflection #2 – [08/30] – [Prerna Juneja]

Antisocial Behavior in Online Communities

Summary:

In this paper the authors perform a quantitative, large-scale longitudinal study of antisocial behavior in three online discussion communities, namely CNN, IGN and Breitbart, by analyzing users who were banned from these platforms. They find that such users use obscene language that contains fewer positive words and is harder to understand. Their posts are concentrated in a few threads and are likely to amass more responses from other users. The longitudinal analysis reveals that the quality of their posts degrades over time and that the community becomes less and less tolerant of their posts. The authors also discover that excessive censorship in the early stages might aggravate antisocial behavior later on. They identify features that can be used to predict whether a user is likely to be banned, namely the content of the posts, the community’s response and reaction to them, user activity ranging from posts per day to votes given to other users, and the actions of moderators. Finally, they build a classifier that can make this prediction after observing just 5-10 posts, with roughly 80% AUC.
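To make the prediction setup concrete, here is a minimal sketch of what such a pipeline could look like, assuming per-user features aggregated over a user's first few posts (the feature names and the random-forest model here are hypothetical illustrations, not the authors' actual implementation):

# Minimal sketch of a banned-user classifier; feature names are hypothetical
# stand-ins for the paper's feature groups (post content, community response,
# user activity, moderator actions).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder data: one row per user, aggregated over their first 5-10 posts.
# Columns: [readability_score, fraction_profanity, replies_per_post,
#           fraction_downvoted, posts_per_day, fraction_deleted_by_moderator]
X = rng.random((1000, 6))
y = rng.integers(0, 2, size=1000)  # 1 = future-banned user (FBU), 0 = never banned

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Evaluate with AUC, the same metric the paper reports (~0.80 in their setting).
scores = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))

With real features in place of the random placeholders, the same evaluation by AUC would let one compare how much each feature group (content, community response, activity, moderation) contributes to the prediction.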

Reflections:

Antisocial behavior can manifest in several forms, such as spamming, bullying, trolling, flaming and harassment. Leslie Jones, an actress starring in the movie “Ghostbusters”, became a target of online abuse. She started receiving misogynistic and racist comments on her Twitter feed from several people, including the polemicist Milo Yiannopoulos and his supporters. He was then permanently banned from Twitter. Sarahah was removed from app stores after Katrina Collins started an online petition accusing the app of breeding haters, after her 13-year-old daughter received hateful messages, one even saying “i hope your daughter kills herself”. According to one article, 1.4 million people interacted with Russian spam accounts on Twitter during the 2016 US elections. Detecting such content has become increasingly important.

The authors note that several techniques exist in online social communities to discourage antisocial behavior, ranging from downvoting, reporting posts, muting and blocking users, and comments to manual human moderation. It would be interesting to find out how these features fit into the design of the community. How are these features being used? Are all these “signals” true indicators of antisocial behavior? For example, the authors suggest in the paper that downvoting is sometimes used to express disagreement rather than to flag antisocial behavior, which is quite true in the case of Quora and YouTube; both websites have an option to downvote as well as to report a post. Will undesirable content always have a larger number of downvotes? Do posts of users exhibiting antisocial behavior receive more downvotes, and do their posts get muted by most of their online friends?

All of the authors’ inferences make sense. The FBUs use more profanity and fewer positive words and get more replies, which is expected since they use provocative arguments and attempt to bait users. We saw examples of similar behavior in the last paper we read, “Identity and Deception in the Virtual Community”. I also decided to visit the 4chan website to see if the concept of moderators exists there. Surprisingly, it does. But as answered in one of the FAQs, one hardly gets to see moderated posts, since there are no public records of deletion and deleted content is removed from the page. I wonder if it’s possible to study the moderated content using archives, and whether the archives keep temporal snapshots of the website’s content. Secondly, the website is famous for its hateful and pornographic content. How do you pick the less hateful content out of the hateful? I wondered if hate and sexual content are even considered criteria there. On checking their wiki, I found the answer to “how to get banned on 4chan” (https://encyclopediadramatica.rs/4chan_bans, quite an interesting read). This makes one thing clear: the criteria for moderating content are not universal; they depend a lot on the values and culture of the online community.

Having an automated approach to detect such users will definitely lessen the burden on human moderators. But I wonder about the false positive cases. How will it affect the community if a series of harmless posts gets some users banned? Also, some users might redeem themselves later. In Fig. 6(c) and the corresponding explanation in “Characterizing users who were not banned”, we find that even though some FBUs were improving, they still got banned. Punishing someone for improving makes sure that person never improves, and the community might lose faith in the moderators. Considering these factors, is it wise to ban a user after observing only their initial few posts? Even exposing such users to moderators will make the latter biased against the former. How long should one wait to form a judgment?

Overall I think it was a good paper, thorough in every aspect: from data collection, annotation and analysis to specifying future work. I’ll end by mentioning a special app, “ReThink” [1], that I saw in an episode of Shark Tank (a show where millionaires invest their money in unique ideas). This app detects when a user writes an offensive message and gives them a chance to reconsider sending it by showing an alert. Aimed at adolescents, the app’s page mentions that 93% of users change their minds when alerted. Use of such apps by young people might make them responsible adults and might help reduce the antisocial behavior that we see online.

[1] http://www.rethinkwords.com/whatisrethink

One thought on “Reflection #2 – [08/30] – [Prerna Juneja]”

  1. “Polemicist” is a new word in my dictionary, thank you for that.
    Your point on auto-moderation is a phenomenon I’ve seen on Twitter (a post classified as not following “community guidelines” is automatically shadow-banned or removed, based on individual settings, I think, but the point still stands). We briefly discussed in class how this was done automatically to accounts that changed their profile pictures to match Elon Musk’s. Rather than this being a point of chastisement, people wore it as a badge of honor. I wonder if this also scales to your point.
