This paper studies anti-social behavior on three different platforms, which I believe is quite relevant given the ever-increasing consumption of social media. First off, in my opinion what the authors have studied is not anti-social behavior, but rather negative, unpopular, and/or inflammatory behavior (which also might not be the case, as I'll highlight a bit later). Nonetheless, the findings are interesting.
Referring to Table 1 in the paper (also shown above), I'm surprised to see so few posts deleted. I was expecting something in the vicinity of 9-10%, but that might just be me; maybe I have a tendency to run into more trolls online. What are other people's experiences? Do these numbers reflect the number of trolls you find online?
Now, a fundamental problem that I have with the paper is the use of moderators' actions of "banning or not banning" as the ground truth. This approach fails to address a few things. First, what of the moderators' biases? One moderator might consider certain comments on a given topic acceptable while another might not, and this varies based on how the person in question feels about the topic at hand. For example, I very rarely talk or care about politics, so most such comments seem innocuous to me, even ones that I see other people react very strongly to. That being the case, if I were a moderator who saw some politically charged comments, I would most probably ignore them. One way to probe this would be to measure how often moderators actually agree with one another; see the sketch below.
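As a rough illustration of what I mean: if a sample of posts were independently labeled by two moderators, Cohen's kappa would quantify how much they agree beyond chance, and a low kappa would suggest the ban/no-ban ground truth is noisier than the paper assumes. This is a minimal sketch with hypothetical verdicts, not anything drawn from the paper's data.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labeled the same.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the two raters labeled independently.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(labels_a) | set(labels_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical verdicts from two moderators on the same ten posts.
mod_1 = ["ban", "ok", "ok", "ban", "ok", "ok", "ban", "ok", "ban", "ok"]
mod_2 = ["ban", "ok", "ban", "ok", "ok", "ok", "ban", "ok", "ok", "ok"]
print(f"kappa = {cohens_kappa(mod_1, mod_2):.2f}")  # ~0.35: weak agreement
```

If real moderator pairs scored this low, a non-trivial share of the "banned" labels would reflect who happened to be moderating rather than what was posted.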
Second, unpopular opinions expressed by people most certainly don't count as anti-social behavior or troll remarks, or even as attempts to derail or inflame the discussion. For example, one such topic that could pop up on IGN is the strict gender binary enforced by most video game studios, which, in my experience, gets downvoted pretty quickly because people are resistant to such changes. This raises a few questions: what is used as a metric to deal with unpopular posts? Do the moderators use downvotes as a signal to remove posts?
Third, varying use of English across demographics would throw off the language similarity among posts for FBUs and NBUs by a fair margin, and the authors don't seem to have accounted for it. The paper relies quite heavily on this metric for making a lot of its observations. So, if we were conducting a follow-up study, how would we go about taking cultural differences in the use of English into account? Do we even have to, i.e., would demographically more diverse platforms automatically have a normalizing effect? One option would be to normalize similarity scores within demographic groups, as sketched below.
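Here is a minimal sketch of that idea: compute each user's post self-similarity and then z-score it within the user's demographic or community group, so that FBUs and NBUs are only compared against peers who write in a similar register. The TF-IDF-plus-cosine similarity, the group labels, and all the data below are my own illustrative assumptions, not the paper's actual pipeline.

```python
from statistics import mean, stdev

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def self_similarity(posts):
    """Average pairwise cosine similarity among one user's posts."""
    if len(posts) < 2:
        return 0.0
    tfidf = TfidfVectorizer().fit_transform(posts)
    sims = cosine_similarity(tfidf)
    n = len(posts)
    return mean(sims[i, j] for i in range(n) for j in range(i + 1, n))

def group_normalized(scores_by_user, group_of_user):
    """Z-score each user's similarity within their demographic group."""
    by_group = {}
    for user, score in scores_by_user.items():
        by_group.setdefault(group_of_user[user], []).append(score)
    stats = {g: (mean(s), stdev(s)) for g, s in by_group.items()}
    return {
        user: (score - stats[group_of_user[user]][0])
        / stats[group_of_user[user]][1]
        for user, score in scores_by_user.items()
    }

# Hypothetical users writing in two different registers of English.
posts_by_user = {
    "u1": ["great game", "really great game"],
    "u2": ["totally disagree with this review", "the combat felt clunky"],
    "u3": ["innit mate proper good game", "proper good innit"],
    "u4": ["mate this game is rubbish", "rubbish controls mate"],
}
group_of_user = {"u1": "US", "u2": "US", "u3": "UK", "u4": "UK"}
scores = {u: self_similarity(p) for u, p in posts_by_user.items()}
print(group_normalized(scores, group_of_user))
```

The point of the z-scoring is that a user who merely writes in a repetitive dialect is no longer flagged as unusually self-similar, because their group baseline is repetitive too; only deviation from their own peers stands out.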
Finally, the idea of detecting problematic people beforehand seems like a good idea at first, but on second thought I think it might not be, though that depends on how the tool is used. The reason I say this is: suppose we had an omnipotent classifier that could predict future offenders with 100% accuracy; what would we do once we have the predictions? Ban the users beforehand? Wouldn't that be a violation of the right to opinion and freedom of speech? Wouldn't the classifier just reflect what people like to see and hear, and end up tailoring content to their points of view? And, in a dystopian scenario, wouldn't it just lead to a snowflake culture?
As a closing note, how would the results look if the study were repeated on Facebook pages? Would the results from this study generalize?