Summary:
“Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations” by Starbird et al. examines strategic information operations such as disinformation, political propaganda, and conspiracy theories. The authors gather valuable insights about how these operations function by studying the online discourse in depth, both qualitatively and quantitatively. They present three case studies: (i) Trolling Operations by the Internet Research Agency Targeting U.S. Political Discourse (2015-2016), (ii) The Disinformation Campaign Targeting the White Helmets, and (iii) The Online Ecosystem Supporting Conspiracy Theorizing about Crisis Events. These case studies highlight the coordinated effort of several organizations to spread disinformation and influence political discourse. Through them, the authors attempt to go beyond understanding online bots and trolls and move toward a more nuanced and descriptive perspective on these coordinated destructive online operations. The work also successfully highlights a challenging problem for “researchers, platform designers, and policy-makers — distinguishing between orchestrated, explicitly coordinated, information operations and the emergent, organic behaviors of an online crowd.”
Reflections:
This is an interesting work on misinformation and the orchestrated effort that goes into spreading it. I found the overall methodology adopted by the researchers particularly interesting. The authors use qualitative, quantitative, and visual techniques to effectively demonstrate the spread of misinformation from the actors (the Twitter accounts and websites that initiate the discussion) to the target audience (the accounts that retweet and are connected to these actors either directly or indirectly). For example, the case study on the Internet Research Agency’s targeting of U.S. political discourse around the 2016 election used network analysis and visualization to highlight the pervasiveness of Russian IRA agents. The authors noted that the “fake” accounts influenced both sides: on the left they criticized Hillary Clinton and tried to demotivate her supporters, while on the right they promoted Donald Trump. Similarly, these fake Russian accounts were active on both sides of the discourse around the #BlackLivesMatter movement. It is commendable that the authors were able to uncover the hidden objectives of these misinformation campaigns and observe how the accounts presented themselves as both individuals and organizations in order to embed themselves in the narrative.

The authors also mention using trace ethnography to track the activities of the fake accounts. I was reminded of another work, “The Work of Sustaining Order in Wikipedia: The Banning of a Vandal”, which also made use of trace ethnography to narrow down a rogue user. It would be interesting to read about a work where trace ethnography was used to track down a “good user”. I would have liked the paper to go into more detail about its quantitative analysis and the exact methodology behind the network analysis. I am also curious whether the accounts shown were cherry-picked for their most destructive influence, or whether the resulting graph in the paper covers all identified accounts. It would also have helped if the authors had discussed the limitations of their work and their own biases, which might have influenced the results.
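As an aside, here is a minimal, purely illustrative sketch of the kind of retweet-network analysis discussed above. It is not the authors’ actual pipeline; the account names and data are hypothetical, and it only shows how one might surface heavily amplified accounts and the clusters (“sides”) of a conversation from retweet edges.

```python
# Illustrative sketch only: not the authors' actual method. The tweet records
# and account names here are hypothetical.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical (retweeting_account, amplified_account) pairs.
retweets = [
    ("user_a", "ira_account_1"),
    ("user_b", "ira_account_1"),
    ("user_c", "ira_account_2"),
    ("user_a", "ira_account_2"),
    ("user_d", "activist_account"),
    ("user_e", "activist_account"),
]

# Directed edge from the retweeter to the account being amplified.
G = nx.DiGraph()
G.add_edges_from(retweets)

# In-degree is a crude proxy for how widely an account is amplified.
most_amplified = sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True)
print("Most amplified accounts:", most_amplified[:3])

# Community detection on the undirected projection hints at the clusters
# ("sides" of the discourse) that the paper's network visualizations show.
for i, community in enumerate(greedy_modularity_communities(G.to_undirected())):
    print(f"Cluster {i}: {sorted(community)}")
```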
Questions:
1. What are your general thoughts on the paper?
2. Do you think machine learning algorithms can help in such a scenario? If so, what role would they play?
3. Have you ever interacted with an online social media bot? What has that been like?
To your second question, “Synthesized Social Signals: Computationally-Derived Social Signals from Account Histories” (http://imjane.net/papers/s3-chi2020.pdf) is an interesting example of how algorithms can contribute by generating visualizations and synthesizing the available information about an account, so that users can decide for themselves whether it is a troll, a bot, or something else. The solution described in that paper may be imperfect, but it opens up possibilities for human-machine collaboration in identifying these forces of disinformation.
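For what it’s worth, here is a rough, hypothetical sketch of that idea: deriving a few explainable signals from an account’s history and leaving the judgment to the human. The field names and signals are my own illustration, not the S3 paper’s actual design.

```python
# Hypothetical sketch of "synthesized social signals": summarizing an account's
# history into a few human-readable cues rather than a single bot/troll verdict.
# The schema below is illustrative, not taken from the S3 paper.
from dataclasses import dataclass


@dataclass
class AccountHistory:
    tweets: int
    retweets: int
    followers: int
    following: int
    account_age_days: int


def synthesize_signals(history: AccountHistory) -> dict:
    """Derive simple, explainable signals for a human reviewer."""
    return {
        "retweet_ratio": history.retweets / max(history.tweets, 1),
        "tweets_per_day": history.tweets / max(history.account_age_days, 1),
        "follower_following_ratio": history.followers / max(history.following, 1),
    }


signals = synthesize_signals(
    AccountHistory(tweets=12000, retweets=11500, followers=80,
                   following=4000, account_age_days=90)
)
print(signals)  # the human, not the algorithm, decides what these cues mean
```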