Reading Reflection:
- Kumar, S., Cheng, J., Leskovec, J., and Subrahmanian, V.S. “An Army of Me: Sockpuppets in Online Discussion Communities”
Brief:
The authors argue that anonymity encourages deception via sockpuppets, and so propose a means of identifying, characterizing, and predicting sockpuppetry using user IP data (accounts with at least 3 posts from the same IP) and user session data (posts occurring within 15 minutes of each other). Characteristics attributed to sockpuppets include more first-person pronouns, fewer negations, and fewer English parts-of-speech (i.e., worse writing than the average user). The authors also found that sockpuppets start fewer conversations but participate in more replies within the same discussion than random chance would predict. They were also more likely to be down-voted, reported, and/or deleted by moderators, and tended to have higher PageRank and higher local clustering coefficients.
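The pairing criteria above can be sketched in code. This is a minimal, hypothetical reconstruction of my reading of the heuristic (the data layout and function name are my own, not the authors'): two accounts become a candidate pair if each has at least 3 posts from a shared IP and any of their posts fall within 15 minutes of each other.

```python
# Hypothetical sketch of the paper's pairing heuristic as I understand it;
# the data shapes and thresholds here are illustrative, not the authors' code.
from itertools import combinations

MIN_POSTS_PER_IP = 3      # at least 3 posts from the same IP
SESSION_WINDOW_MIN = 15   # posts within 15 minutes count as one session

def candidate_pairs(posts):
    """posts: list of (user, ip, timestamp_in_minutes) tuples.
    Returns a set of account pairs flagged as candidate sockpuppets."""
    # Group timestamps by IP, then by user.
    by_ip = {}
    for user, ip, ts in posts:
        by_ip.setdefault(ip, {}).setdefault(user, []).append(ts)

    pairs = set()
    for ip, users in by_ip.items():
        # Only users with enough posts from this IP qualify.
        qualified = {u: ts for u, ts in users.items()
                     if len(ts) >= MIN_POSTS_PER_IP}
        for u1, u2 in combinations(sorted(qualified), 2):
            # Flag the pair if any two posts land in the same session window.
            if any(abs(t1 - t2) <= SESSION_WINDOW_MIN
                   for t1 in qualified[u1] for t2 in qualified[u2]):
                pairs.add((u1, u2))
    return pairs
```

Even this toy version shows why the heuristic is conservative: a shared household IP with two heavy posters would also be flagged, which is presumably why the paper layers linguistic and behavioral features on top.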
The authors also note concerns regarding the use of sockpuppets in discussion communities: notably, the potential to create a false impression of consensus or equivalence, and their use as a vehicle for vandalism and/or cheating.
Reflection:
What happens when deceptive sockpuppets are capable of usurping and undermining the will of the majority? I do not have a good example where this is proven on social media (setting aside the battle of bots during the 2016 U.S. election cycle), but there are ample cases where the question could be examined: the FCC's request for comment during the 2017/2018 Net Neutrality debate and the saga of Boaty McBoatface serve as cautionary tales, since there was no do-over to correct for sockpuppets, especially in the FCC case. This is concerning because the phenomenon can erode the very fabric on which trust in the democratic process is built (beyond the fact that some of these events happened over two years ago with no recourse or remedy applied to date). A follow-up open question: what would replace the eroded system? If there is no satisfactory answer, then perhaps we should feel some urgency about shoring up the systems we have. How, then, do we mitigate sockpuppetry apart from using human moderators to moderate and/or flag suspected accounts? A hypothetical solution that uses the characteristics the authors point out to automate the identification and/or suspension of suspected accounts is not sufficient as a measure in itself.
The authors, in giving an example of an exchange between two sockpuppets and a user who identifies the sockpuppet as such, reveal the presence and power of user skepticism. How many users are truly fooled by these sockpuppets, versus merely annoyed by them? A simple way to measure this would be to recruit users to judge whether given discussions were produced by regular users or by sockpuppets. This consideration can lead down the path of measuring for over-corrections:
- does pervasive knowledge of the presence of these sockpuppets lead users to doubt even legitimate discussions (and to what extent is this prevalent)?
This paper’s major contribution is in looking at sockpuppets in discussions/replies (so this point is not meant to detract from that contribution). On the matter of the (mis)use of pseudonyms: the spectrum runs from benign use-cases — Reddit, for example, has the term “throw-away account” for when a regular user wants to discuss a controversial topic they do not want associated with their main account — to the extreme end of a journalist using a pseudonym to “hide” their activities in alt-right community discussions.
- Can these discussions be merged into the authors’ framing, or does the fact that they do not strictly adhere to the authors’ definition disqualify them? (I believe it is worth considering why users resort to sockpuppets beyond faking consensus/discussion and sowing discord.)
A final point regards positive(ish) uses. A shopkeeper with a new shop who wants customers can loudly hawk their wares out front to attract attention: which is to say, could we consider positive use-cases of this behavior, or do we categorize it all as bad? A forum could attract shy contributors and spark debate by using friendly sockpuppetry to get things going. Ethical?