Reflection #4 – [09/06] – [Deepika Rama Subramanian]

Kumar, Srijan, et al. "An Army of Me: Sockpuppets in Online Discussion Communities." WWW 2017.

SUMMARY & REFLECTION

This paper deals with identifying sockpuppets and groups of sockpuppets. It defines sockpuppets simply as multiple accounts controlled by a single user; the authors do not assume that this is always done with malicious intent. The study draws on nine online discussion communities with varied interests. It also identifies types of sockpuppetry along two axes: deceptiveness (pretenders vs. non-pretenders) and supportiveness (supporters vs. dissenters).

To identify sockpuppets, the authors used four signals: the posts should come from the same IP address, appear in the same discussion, be similar in length, and be posted close together in time. However, they excluded the top 5% of IP addresses shared by the most users, since these could belong to nationwide proxies. If a puppetmaster were backed by an influential group able to set up such a proxy to propagate false information, those cases would be eliminated up front. Is it possible that the most incriminating evidence is being excluded?
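The four signals could be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the `Post` fields and the thresholds (`max_gap_s`, `max_len_ratio`) are assumptions chosen purely for demonstration.

```python
# Hypothetical sketch of the pairing heuristic: flag two accounts as a
# candidate sockpuppet pair when their posts share an IP address and a
# discussion, are similar in length, and were posted close together in time.
# All field names and thresholds are illustrative assumptions.

from dataclasses import dataclass
from itertools import combinations

@dataclass
class Post:
    user: str
    ip: str
    discussion: str
    length: int       # characters in the post
    timestamp: float  # seconds since epoch

def is_candidate_pair(a: Post, b: Post,
                      max_gap_s: float = 900.0,
                      max_len_ratio: float = 2.0) -> bool:
    """True if two posts by different users match all four signals."""
    if a.user == b.user:
        return False
    same_ip = a.ip == b.ip
    same_discussion = a.discussion == b.discussion
    close_in_time = abs(a.timestamp - b.timestamp) <= max_gap_s
    longer = max(a.length, b.length)
    shorter = max(1, min(a.length, b.length))
    similar_length = longer / shorter <= max_len_ratio
    return same_ip and same_discussion and close_in_time and similar_length

def candidate_pairs(posts):
    """Return the set of user pairs with at least one matching post pair."""
    pairs = set()
    for a, b in combinations(posts, 2):
        if is_candidate_pair(a, b):
            pairs.add(tuple(sorted((a.user, b.user))))
    return pairs
```

A real system would of course aggregate over many post pairs per account rather than flagging on a single match, but the sketch shows how cheap the non-linguistic signals are to compute.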

Further, the linguistic traits considered in this study were largely those used in previous work on antisocial behaviour in online communities. The posting frequency of sockpuppets relative to ordinary users, and the fact that they participate in more discussions than they start, also make these accounts resemble trolls.

In pairs or groups, sockpuppets tend to interact with one another more than with any other user, in terms of replies and upvotes. The paper notes early on that it is harder to find the first sockpuppet account; once one is found, the rest of the pair or group is easily identified. Cheng et al., in 'Antisocial Behavior in Online Discussion Communities', have already described a model that can weed out antisocial users early in their account lifetimes. Once those users are identified, the non-linguistic criteria outlined in this paper could be applied to find the remaining sockpuppets.
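The disproportionate-interaction signal could be quantified as the fraction of one account's replies that go to its suspected partner; a minimal sketch, with a hypothetical `(author, replied_to)` reply log as input:

```python
# Illustrative sketch (not the paper's code): how disproportionately does a
# suspected account reply to its partner compared with everyone else?
from collections import Counter

def pairwise_reply_fraction(replies, user_a, user_b):
    """replies: iterable of (author, replied_to) tuples.
    Returns the fraction of user_a's replies directed at user_b."""
    counts = Counter(target for author, target in replies if author == user_a)
    total = sum(counts.values())
    return counts[user_b] / total if total else 0.0
```

A fraction near 1.0 for both members of a pair, combined with the non-linguistic matching signals, would make a pair of accounts highly suspect.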

A restrictive way to address sockpuppet accounts could be to require users to tie their discussion board accounts not only to an email address but also to a phone number. Obtaining a phone number takes longer and involves submitting documentation that ties the account firmly to the puppetmaster, which would discourage one individual from spawning multiple accounts.

The authors' classification of sockpuppets gives us some insight into the motives of the puppetmasters. While supporters are in the majority, they do not seem to have much credibility, since most supporters are pretenders. But how could puppetmasters use dissent effectively to build consensus on issues they care about? One way would be to have their sockpuppets disagree with one another until the dissenter is 'convinced' by the puppetmaster's opinion. Of course, this would require longer posts, which are uncharacteristic of sockpuppets in general. So why do people jump through such hoops when they are highly likely to be flagged by the community over time? I wonder whether this work on sockpuppets is a kind of precursor to work on spambots, since a human puppetmaster could hardly wreak the same havoc that bots can on online platforms.
