Reflection #2 – [08/30] – Subhash Holla H S

Cheng, J., Danescu-Niculescu-Mizil, C., & Leskovec, J. (2015). Antisocial Behavior in Online Discussion Communities. Proceedings of the Ninth International AAAI Conference on Web and Social Media, 61–70. Retrieved from http://www.aaai.org/ocs/index.php/ICWSM/ICWSM15/paper/view/10469

The focus of the paper is categorizing antisocial behavior in online discussion communities. I appreciate the paper's inferential statistical approach, in which every claim is corroborated with statistics. Still, the approach needs to be picked apart with a fine-tooth comb, both to understand the method followed and to point out a few discrepancies.

The paper claims to have adopted a “retrospective longitudinal analysis”. This kind of long-term observational study in the subjects' naturalistic environment is close to home, as my current research hopes to study the “Evolution of trust”. A few key takeaways here are:

  • The pool of study is limited to online discussion forums and is not extended to general social media platforms. Since the authors neither claim broader applicability nor provide evidence for it, it is safe to say that this model is not completely generalizable. On platforms like Twitter, where the site structure may be similar, the model adopted here might still fail; one possible reason is the option of retweeting.
  • The use of propensity scores to determine causal effects by matching is, to my understanding, both a representational and a reductional technique: representational because a section of the data is taken to represent all of it, and reductional because the data not used for the matching is discarded. I wonder whether this data loss has an impact on the outcome.
  • Mechanical Turk is a good way to complete work that is not yet possible for artificial intelligence. For this Human Intelligence Task, the paper reports 131 workers, with each post labeled by three workers and their judgments averaged. An important question is whether this step would be required when building a model for a platform not covered in the paper. Since human hours are expensive, an alternative could be to accept somewhat noisier label classification and invest instead in a better model, which would also make it more robust.
  • The main question that I hoped the paper would clearly answer, but felt it did not, was “Can antisocial users be effectively identified early on?” Answering it would be a huge boon for any social media platform developer and/or designer: the promise of few or no trolls is like giving customers Charlie's Chocolate Factory.
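The representational/reductional concern about propensity-score matching can be made concrete. Below is a minimal sketch on synthetic data — not the paper's actual procedure — where a toy logistic regression estimates each user's propensity to be "treated" (e.g. eventually banned), and greedy 1:1 nearest-neighbour matching pairs each treated user with one control. The unmatched controls that remain are exactly the discarded data I wondered about.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 users with two made-up features (e.g. posts/day, downvote rate).
# Treatment probability depends on the features, as in observational data.
n = 200
X = rng.normal(size=(n, 2))
true_logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] - 1.0
treated = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# Estimate propensity scores with a small logistic regression (gradient descent).
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - treated) / n)
    b -= 0.5 * np.mean(p - treated)
propensity = 1 / (1 + np.exp(-(X @ w + b)))

# Greedy 1:1 nearest-neighbour matching on the score, without replacement.
controls = list(np.flatnonzero(~treated))
matches = []
for t in np.flatnonzero(treated):
    j = min(controls, key=lambda c: abs(propensity[c] - propensity[t]))
    matches.append((t, j))
    controls.remove(j)

discarded = len(controls)  # controls never matched are dropped from the analysis
```

Running this shows that every treated user keeps one matched control while a sizable block of controls is simply thrown away, which is the "data loss" whose effect on the outcome I questioned above.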

I wonder if this can be achieved by introducing an Actor-Critic reinforcement learning algorithm [1]. A reinforcement learning agent can venture into the dark maze to find an exit: by rewarding the correct classification or flagging of a user, we push the agent to train itself into a good classifier of antisocial behavior. The advantage of this architecture is that the critic keeps the actor, i.e. the agent performing the classification, from learning too quickly and steers it toward learning only the right things, damping any anomalies that occur. If the possibility exists, then I feel this is an area definitely worth pursuing through a course project.
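To make the idea concrete, here is a minimal, self-contained sketch of an actor-critic-style learner for flagging users. Everything here is my own assumption for illustration — the three "user" features, the hidden labeling rule, and the ±1 reward scheme are invented, and the critic is a simple linear baseline rather than the full algorithm of Konda & Tsitsiklis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each "user" is a 3-d feature vector (say post rate,
# deletion rate, upvote rate); a hidden linear rule marks a user antisocial.
def make_user():
    x = rng.normal(size=3)
    label = int(x @ np.array([1.0, 1.5, -0.5]) > 0)  # hidden ground truth
    return x, label

theta = np.zeros(3)  # actor: logistic policy over {keep, flag}
v = np.zeros(3)      # critic: linear estimate of expected reward

alpha_actor, alpha_critic = 0.05, 0.1
for _ in range(5000):
    x, label = make_user()
    p_flag = 1 / (1 + np.exp(-(x @ theta)))
    action = int(rng.random() < p_flag)        # sample from the policy
    reward = 1.0 if action == label else -1.0  # +1 for a correct decision
    advantage = reward - x @ v                 # critic's baseline tempers updates
    # Policy-gradient step scaled by the advantage, not the raw reward:
    theta += alpha_actor * advantage * (action - p_flag) * x
    v += alpha_critic * advantage * x

# Evaluate the greedy policy on fresh users.
correct = sum(int((x @ theta > 0) == label)
              for x, label in (make_user() for _ in range(1000)))
accuracy = correct / 1000
```

The critic's baseline is what plays the moderating role described above: the actor moves only in proportion to how much better or worse a decision was than expected, rather than chasing every raw reward.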

REFERENCES:

[1] Konda, V. R., & Tsitsiklis, J. N. (2003). On Actor-Critic Algorithms. SIAM Journal on Control and Optimization, 42(4), 1143–1166. https://doi.org/10.1137/S0363012901385691
