Reflection #2 – [08/30] – [Lindah Kotut]

  • Justin Cheng, Cristian Danescu-Niculescu-Mizil, Jure Leskovec. “Antisocial Behavior in Online Discussion Communities.”

Brief:

Cheng et al. considered discussion posts from CNN, Breitbart, and IGN to study antisocial behavior (mostly trolling), using users banned from these discussion communities as the ground truth. They applied a retrospective longitudinal analysis to these banned users in order to categorize their behavior. Most of the hypothesized behavioral markers (changes in posting language and frequency, community chastisement, and moderator intervention through warnings and temporary or permanent bans) bear out as useful features for a classifier that can predict a Future Banned User (FBU) within a few posts.
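As a thought experiment, here is a minimal sketch of how such a predictor might be assembled. The features, synthetic data, and model choice are illustrative assumptions loosely inspired by the markers above; this is not the authors' pipeline.

```python
# Illustrative sketch only: features and data are synthetic stand-ins for the
# markers described in the paper (deleted posts, posting rate, community downvotes).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_user(is_fbu):
    """Generate one user's summary features over their first few posts (hypothetical)."""
    # [fraction of posts deleted, posts per day, mean downvotes per post]
    base = rng.normal([0.05, 2.0, 0.5], [0.02, 0.5, 0.3])
    if is_fbu:
        base += [0.25, 1.5, 2.0]  # FBUs: more deletions, more posts, more downvotes
    return np.clip(base, 0, None)

labels = rng.integers(0, 2, size=2000)             # 1 = future banned user
features = np.array([make_user(y) for y in labels])

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("AUC on held-out users:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

Even this toy version makes the paper's point concrete: a handful of early, cheap-to-compute signals can separate eventual bans from ordinary users well before a moderator acts.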

Reflection:

Considering the antisocial markers and other factors surrounding the posters, we can reflect on different facets and their implications for the classifier and/or the discussion community.

The drunk uncle hypothesis: The cultural metaphor of the relative who (deliberately?) makes a nuisance of themselves at formal or serious events maps closely onto the online troll as defined by Cheng et al.: they are given multiple chances and warnings to behave, they cause chaos in discussions, and the community may tolerate them for a time before they are banned. Questions surrounding the drunk uncle serve as an excellent springboard for querying online troll behavior:

  • What triggered it? (What can be learned from the delineating point between innocuous and antisocial posts?)
  • Once the drunk uncle is banned from future formal events, do they cease to be the ‘drunk uncle’? This paper considers some aspect of this with temporary bans. On banning, does the behavior suddenly stop, and is the FBU suitably chastened?

Hijacked profiles and mass chaos: The authors made no assumptions about why posting behavior/language changes (a troll marker); they only observed that such changes could be used to predict an FBU, not that the account could have been compromised. I point to the curious case of the Florida dentist posting markedly different sentiments on Twitter (an intrepid commenter found that the good dentist had been dead for 3 years, and included an obituary conveniently bearing the same picture as the profile). With this lens in mind:

  • When viewing posts classified as written by FBUs, and given the authors' claim that their model generalizes: if we swivel the lens, assume commenters are acting in good faith, and treat a sudden change in behavior as an anomaly, what tweaks would be needed to recognize hijacked accounts? Would other markers have to be considered, such as time differences, mass changes in behavior, or bot-like comments? (A toy sketch of this kind of anomaly check follows this list.)
  • The model relies heavily on moderators to label FBUs, and given the unreliable signal of down-voting, what happens when a troll cannot be stopped? Do other commenters ignore the troll, or abandon the thread entirely?
  • On trolling-as-a-service: learning from the mass manipulation of Yelp and Amazon reviews whenever a controversy is linked to a place or book (and how posters have become more sophisticated at beating the Yelp classifier), (how) does this manifest in commenting?
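The hijacked-account question above could be made concrete with a simple per-user anomaly check that runs before an FBU label is trusted. The sketch below is purely illustrative: the features, the z-score test, and the threshold are my assumptions, not anything proposed in the paper.

```python
# Illustrative sketch: flag a sudden, account-wide shift in posting behavior as a
# possible hijack rather than an FBU signal. Features and threshold are hypothetical.
import numpy as np

def hijack_score(history, recent, eps=1e-6):
    """Compare a user's recent posts to their own history.

    history, recent: arrays of shape (n_posts, n_features), e.g. columns for
    sentiment, posts per day, and mean downvotes. Returns the largest per-feature
    z-score of the recent mean against the historical distribution.
    """
    mu, sigma = history.mean(axis=0), history.std(axis=0) + eps
    z = np.abs(recent.mean(axis=0) - mu) / sigma
    return z.max()

rng = np.random.default_rng(1)
history = rng.normal([0.2, 1.0, 0.5], 0.1, size=(200, 3))  # long, stable posting record
recent = rng.normal([-0.8, 6.0, 3.0], 0.1, size=(10, 3))   # abrupt, account-wide shift

if hijack_score(history, recent) > 5.0:  # threshold chosen for illustration only
    print("Sudden behavior shift: review for possible account compromise before banning")
```

The point of the sketch is the ordering: a compromised account looks like an FBU to a classifier trained only on behavioral change, so some check against the user's own baseline would have to run first.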

The Disqus® Effect: The authors used Disqus data (either partly or wholly) for this work, and proposed looking at other online communities both to challenge the generalizability of their model and to observe differences in more specialized groups. There is another factor to consider in this case: since commenters are registered with Disqus and the platform is used by a multitude of websites…

  • What can be learned about an FBU from one community (assuming CNN was using Disqus), and how does this behavior transfer to other sites (especially since all of a user's comments across different sites are viewable from their account)?

 
