Reflection #2 – [08/30] – [Subil Abraham]

Summary:

This paper examines the behavior of antisocial users – trolls – in the comment sections of three different websites. The authors’ goal was to identify and characterize these users and separate them from normal users, i.e. those who are not intentionally creating problems. The paper analyzed more than a year’s worth of comment section activity on these sites and identified the trolls as the users with a long history of post deletions who were eventually banned (referred to as “Future Banned Users (FBUs)” in the paper). The authors analyzed the posting history and activity of the trolls, looking at post content, posting frequency, and the distribution of their posts across different articles, and compared them with non-problematic users (“Never Banned Users (NBUs)”) who had similar posting activity.

The trolls were found to post more on average than regular users, tended to have more posts under a single article, wrote replies whose text was less similar to earlier posts than an NBU’s, and engaged more users. Trolls also aren’t a homogeneous bunch: one subset had a higher proportion of deleted posts than the rest, was banned faster, and as a result spent less time on the site. The results of this analysis were used to build a model that identifies trolls with reasonable accuracy by examining their first 10 posts and predicting whether they will be banned in the future.
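(To make the model at the end of that summary concrete for myself, here is a minimal sketch of what such a setup might look like. It is purely illustrative and not the authors’ actual pipeline: the feature names are hypothetical stand-ins for the kinds of per-user signals described in the paper, and the data is synthetic just so the snippet runs end to end.)

    # Illustrative sketch only -- not the paper's actual pipeline.
    # Each user is summarized by a few hypothetical features computed from
    # their first 10 posts, with a binary label (1 = banned later, i.e. FBU).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_users = 1000

    # Hypothetical per-user features: posts per article, fraction of posts
    # deleted, similarity of replies to earlier posts, number of users engaged.
    X = rng.random((n_users, 4))
    # Synthetic labels, loosely tied to the features so there is something to learn.
    y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(0, 0.3, n_users) > 1.0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression().fit(X_train, y_train)
    scores = clf.predict_proba(X_test)[:, 1]  # estimated probability of a future ban
    print("AUC:", roc_auc_score(y_test, scores))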

 

Reflection:

This paper seems to me like an interesting follow-up to the section on Donath’s “Identity and Deception” paper on trolls. Where Donath studied and documented troll behavior, Cheng et al. go further and perform a quantitative analysis of trolls and their life in an online community. Their observation that a troll’s behavior gets worse over time as the rest of the community actively stands against them seems to parallel human behavior in the physical world. Children who grow up abused tend not to be the most well-adjusted adults, with studies showing higher rates of crime among adults who were abused or shunned by family and/or community as children compared to those who were treated well. Of course, the difference here is that trolls start off with the intention of making trouble whereas children do not. So an interesting question we could look at is: if an NBU is treated like an FBU in an online community, without a chance for reconciliation, will they take on the characteristics of an FBU over time?

It is interesting that the authors were able to get an AUC of 0.80 for their model, though I feel that is hardly sufficient (my machine learning background is minimal, so I cannot comment on whether 0.80 is a relatively good result from an ML perspective; as I understand it, it roughly means a randomly chosen FBU gets a higher risk score than a randomly chosen NBU about 80% of the time). The authors touched on this as well and recommended having a human moderator on standby to verify the algorithm’s decisions. Considering that 1 in 5 cases are false positives, what other factors could we add to increase the accuracy? Given that memes play a big role in troll activity these days, could they also be factored into the analysis, or is meme usage still too small compared to plain text to make it a useful consideration?
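(To make the false-positive point concrete, here is a small, purely synthetic sketch: whatever score a model like this produces, someone still has to pick a threshold for recommending a ban, and the flagged set will contain NBUs that a human moderator would then have to clear. None of the numbers below come from the paper.)

    # Toy illustration: turning model scores into ban recommendations with a
    # threshold, then counting how many flagged users were actually never banned.
    import numpy as np
    from sklearn.metrics import confusion_matrix, precision_score

    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, 200)                    # 1 = actually banned later (FBU)
    scores = np.clip(0.6 * y_true + 0.6 * rng.random(200), 0, 1)  # made-up model scores

    flagged = (scores >= 0.5).astype(int)               # ban-recommendation threshold
    tn, fp, fn, tp = confusion_matrix(y_true, flagged).ravel()
    print("precision:", precision_score(y_true, flagged))  # fraction of flags that are real FBUs
    print("false positives for a moderator to review:", fp)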

 

Other Questions for consideration:

  1. How would the statistics change if you analyzed cases where multiple trolls work together? Are they banned faster? Or can they cover for and support each other in the community, leading to them being banned more slowly?
  2. What happens when you put a non-antisocial user into a community of antisocial users? Will they adopt the community’s mindset or will they leave? How many will stay, how many will leave, and what factors determine whether they stay or go?

 

2 thoughts on “Reflection #2 – [08/30] – [Subil Abraham]”

  1. Meme usage kind of adds a different level of consideration, doesn’t it? I think it works when considering troll behavior on Twitter and other discussion communities that allow picture posting, but the point would still stand: what would be the criteria for flagging memes that are not on-topic (given that people draw their conclusions from their own perception of the meme), and does this differ greatly from how decisions are made about textual posts?

    1. It is an interesting question, isn’t it? One obvious target (well, an easier one anyway) is violent or sexually explicit images, but I would assume that in most cases trolls would try to be more subtle and not post those, since in most polite (to the extent you can use that word) online discussion communities, posting such images would lead to an instant ban.

      Another problem one would run into when classifying memes is changing trends. Remember LOLcats? That was the height of memeing back in the day, and now they are all but non-existent. Text communication, by contrast, seems relatively unchanged. Even if we built a successful meme classifier today, a new set of memes would arrive tomorrow and render the classifier obsolete. I believe flagging text would be far easier than trying to flag memes, even with a human moderator doing it, because, as you said, people have their own perceptions of what memes mean.

      So what’s the solution? I don’t know.
