Reflection #2 – [08/30] – [Karim Youssef]

The rise of online social platforms over the last two decades has brought people from around the world closer together in an unprecedented manner. The formation of online communities created opportunities for spreading knowledge, initiating fruitful conversations, and openly exchanging opinions among people from different geographical and social origins. As these online communities grow larger, they face the inherent challenge of controlling undesirable social behavior.

Justin Cheng et al. address this challenge in their paper "Antisocial Behavior in Online Discussion Communities" by conducting a data-driven analysis of antisocial behavior on three different social platforms, then using insights gained from this analysis to develop a prediction model that supports the early detection of antisocial behavior. This work is a substantial contribution towards automating the identification of undesirable online behavior.

Below are some observations that we can infer from this work:

  1. A simple way of defining undesirable behavior is through the actions that cause someone to be expelled from a community. Justin Cheng et al. consider users who are banned from posting or commenting on a social platform to be those who post undesirable or antisocial content. Of course, what counts as undesirable varies from one community to another, but a ban is still one of the strongest indicators of undesirability.
  2. Regarding feature selection for the prediction task, features derived from the actions of community members and moderators are more predictive than features derived from the textual content. This suggests two possible conclusions. It could strengthen the argument that undesirability is community-dependent, but it could also mean that capturing undesirable or antisocial behavior through textual or content features is a harder problem. Given that the prediction model performs relatively well across different platforms, however, we can conclude that antisocial acts can be reliably characterized by the reactions of other community members (a toy comparison of the two feature groups is sketched after this list).
  3. Automating the detection of antisocial behavior in online communities can help moderators better control the content, but it cannot yet completely replace them. A human should still be involved to approve the prediction.
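
To make point 2 concrete, here is a minimal sketch of the kind of comparison involved: training the same classifier once on community-reaction features and once on text features, then comparing predictive performance. The feature names and the synthetic data below are hypothetical placeholders, not the authors' actual features or pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 1000

# Hypothetical community-reaction features, e.g. fraction of a user's posts
# deleted by moderators, mean vote score, number of replies received.
X_community = rng.random((n_users, 3))

# Hypothetical text-content features, e.g. readability, similarity of a post
# to its surrounding thread, proportion of profane words.
X_text = rng.random((n_users, 3))

# Label: 1 if the user was eventually banned, 0 otherwise (synthetic here).
y = rng.integers(0, 2, n_users)

for name, X in [("community reaction", X_community), ("text content", X_text)]:
    auc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
    print(f"{name} features: mean AUC = {auc.mean():.2f}")
```

On real data, the paper's observation would show up as a noticeably higher AUC for the community-reaction feature group than for the text group.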

If I had the chance to build on this work, I would focus on the following points:

  1. Although features derived from the community's reactions are more descriptive of undesirable behavior, they have the drawback of being less effective for posts written long before a user is actually banned, which makes early detection of undesirable or antisocial behavior harder. Hence, improving the prediction through features derived from the content of the post itself could help address this limitation.
  2. As mentioned above, automated detection of antisocial behavior cannot yet completely replace human judgment. We can take advantage of this fact to further enhance the prediction by using the decisions of human moderators to correct prediction errors and build a continuously improving model, as sketched after this list.
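
The following is one possible design for such a feedback loop, sketched with an online learner that is updated from moderator decisions. The `moderator_review` function and the synthetic features are hypothetical stand-ins; this illustrates the idea, it is not the paper's method.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])  # 0 = acceptable, 1 = antisocial

# Bootstrap an online classifier on a small labeled batch (synthetic here).
model = SGDClassifier(loss="log_loss")
X_init, y_init = rng.random((100, 5)), rng.integers(0, 2, 100)
model.partial_fit(X_init, y_init, classes=classes)

def moderator_review(features):
    """Hypothetical placeholder for a human moderator's final decision."""
    return int(features.sum() > 2.5)  # stand-in for real human judgment

# For each new post, the model flags it, a human confirms or corrects the
# flag, and the confirmed label is fed back to keep improving the model.
for features in rng.random((10, 5)):
    flagged = model.predict(features.reshape(1, -1))[0]
    label = moderator_review(features)  # the human has the final say
    print(f"model: {flagged}, moderator: {label}")
    model.partial_fit(features.reshape(1, -1), [label])
```

Because the model only acts through the moderator's confirmed label, prediction errors are corrected at the same step that generates new training data.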

Finally, it is both highly important and challenging to control the spread of undesirable content on online social platforms. Undesirable content can range from simply being off-topic, to using swear words, to discrimination and bigotry, to spreading rumors and misinformation.
