- [1] Justin Cheng, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec. “Antisocial Behavior in Online Discussion Communities.” In Proceedings of ICWSM, 2015.
In an age of widespread “information overload,” the commodity everyone is eager to capture is attention. Users who can command attention are sought after by companies touting their next revolutionary product. But there is one group of users with a particular knack for capturing attention whose methods, thankfully, make them undesirable to these establishments. Through vile and provocative tactics, these users can send even the most civil netizens off the rails. Who are these rogue actors, how do they operate, and can their behaviour be profiled at scale and used to nip such bad actors in the bud before they take root in a forum? These are the questions [1] tries to answer.
To start with, the paper divides users from three websites (CNN, IGN, and Breitbart), observed over a period of 18 months, into two categories: Future-Banned Users (FBUs) and Never-Banned Users (NBUs). The FBUs are observed to form two subgroups: those who concentrate their efforts on a few threads or groups, and those who spread their efforts across multiple forums. The authors then measure the readability of posts from these categories of users (sketched below) and observe that FBUs tend to have higher Automated Readability Index (ARI) scores and display more negative emotion than NBUs. They also track users’ behaviour over time to note any shift in their category label. Finally, they use four feature sets, namely post, activity, community, and moderator features, to build a model that predicts whether a user will be banned.
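To make the readability measurement concrete, here is a minimal sketch of the ARI computation. The formula is the standard Automated Readability Index; the naive tokenization (whitespace words, `.!?` sentence boundaries) is my own simplification and not necessarily the preprocessing used in the paper.

```python
import re

def automated_readability_index(text: str) -> float:
    """Standard ARI: 4.71*(chars/words) + 0.5*(words/sentences) - 21.43.

    Tokenization here is deliberately naive -- whitespace-separated words
    and .!? sentence boundaries -- a simplification, not necessarily the
    preprocessing the authors used.
    """
    words = text.split()
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0
    # Character count excludes whitespace (punctuation attached to words remains).
    chars = sum(len(w) for w in words)
    return 4.71 * (chars / len(words)) + 0.5 * (len(words) / len(sentences)) - 21.43

# A higher ARI means the text demands a higher grade level to read.
print(automated_readability_index("This is a post. It is short and simple."))
```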
Turning to the shortcomings: the dataset is annotated by 131 workers on AMT, but given how the workers were selected, nothing is known about their race, educational background, or even political alignment, all of which can change what counts as “anti-social.” The diversity of opinion on what constitutes anti-social behaviour is extremely important, and the authors give it little credence.
Given the reliance on moderator deletions as a signal, I believe the effectiveness of such a model would be extremely low in forums where that feedback mechanism is absent, or where such behaviour is the norm and rampant. What metrics could be adopted in forums like these? This could be an interesting avenue to explore.
Also, could these anti-social elements mount a coordinated attack in order to take control of a platform? Such a group could bench members who have accumulated many reports and rely on members with fewer of them, or even create new accounts to help steer a conversation toward its cause. These are strategies the methods described in the paper would fail to detect. Could profiling these users’ content to link accounts back to their true identities yield a slightly more robust model? This is something one could definitely work on in the future.
Another interesting line of work would be to identify the different methods by which these trolls elicit inflammatory behaviour from their targets, and to see how these mechanisms evolve over time, if they do, as old ones lose their ability to provoke.
Could identifying users’ susceptibility in different forums or networks be used to take preventive steps against anti-social behaviour? If so, what features could predict such susceptibility? A couple of features that come to mind without much thought are the number of replies a user gives to these trolls, the length of time the user has been active in the network, and the length and sentiment of those replies. Done well, this could also identify the trolls who hold the most sway over people; a rough sketch of such features appears below.
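As a sketch only: the `Reply` record layout and the toy `sentiment` scorer below are hypothetical placeholders of my own, not anything from the paper; in practice one would plug in a real sentiment model and derive the troll labels from something like the paper’s FBU list.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reply:
    author: str
    target_is_troll: bool  # hypothetical label, e.g. derived from an FBU list
    text: str
    timestamp: datetime

def sentiment(text: str) -> float:
    """Toy placeholder scorer in [-1, 0]; swap in a real model (e.g. VADER)."""
    negative = {"hate", "stupid", "awful"}
    words = text.lower().split()
    return -sum(w in negative for w in words) / max(len(words), 1)

def susceptibility_features(user: str, replies: list[Reply]) -> dict:
    """The features proposed above: volume, length, and sentiment of a user's
    replies to trolls, plus how long the account has been active."""
    mine = [r for r in replies if r.author == user]
    to_trolls = [r for r in mine if r.target_is_troll]
    tenure_days = (
        (max(r.timestamp for r in mine) - min(r.timestamp for r in mine)).days
        if mine else 0
    )
    return {
        "n_replies_to_trolls": len(to_trolls),
        "frac_replies_to_trolls": len(to_trolls) / max(len(mine), 1),
        "mean_reply_len": sum(len(r.text.split()) for r in to_trolls) / max(len(to_trolls), 1),
        "mean_reply_sentiment": sum(sentiment(r.text) for r in to_trolls) / max(len(to_trolls), 1),
        "tenure_days": tenure_days,
    }
```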
Although the intent and motivation of the paper are excellent, its content leaves much to be desired.