- Bond, Robert M., et al. “A 61-million-person experiment in social influence and political mobilization.”
- Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.”
Both of the assigned papers, though short, are quite interesting. Both deal with social contagion, showing how people's behavior can propagate outward through a network.
In the first paper, the observation that bugs me is that people who were shown the informational message behaved in a manner unsettlingly similar to people who saw no message at all. The desire to attain social credibility alone cannot be the cause, because the difference in validated voting records between the control group and the informational-message group is practically non-existent. This leads to what I think might be a pretty interesting question: "do people generally lie on social media platforms to fit the norm and gain acceptance from the community? Monkey sees, monkey does?" A slight intuition, though controversial, is that elections in the USA are highly celebritized, which might affect how voters behave on social media. Another important factor that I think was not controlled for by the authors is fake accounts, which may have had a significant impact on the results. We have seen recently in the US presidential election how these bogus accounts can be used to influence elections.
The second paper was the more interesting of the two, and slightly worrying in a sense too. Taking the result at face value: "is it possible to program the sentiment of crowds through targeted, doctored posts? If yes, how much could this impact important events such as presidential elections?"
Nevertheless, moving on to the content of the paper itself, I disagree with the authors' methodology of relying on LIWC alone for the analysis. While it may be a good tool, the results should have been cross-checked against other, similar tools. Another thing to note is the division of posts into binary categories, with the threshold being just a single positive or negative word. I feel this choice of threshold is flawed and will fail to capture sarcasm or jokes. My suggested approach would have been three categories: negative, neutral/ambiguous, and positive. The authors' choice of Poisson regression is also not well motivated: it implicitly assumes that the modeled counts follow a Poisson distribution, for which no evidence is provided, and this leads me to believe the results might be artifacts of the fit rather than actual observations. Finally, a single trial is, in my opinion, insufficient; multiple trials should have been conducted with sufficient gaps in between.
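The three-category scheme suggested above can be sketched as a LIWC-style word-count classifier. Everything here is an illustrative assumption: the word sets are tiny stand-ins for the real LIWC dictionaries, and the `min_margin` parameter is an invented knob requiring one polarity to clearly dominate before a post is labeled, rather than flipping on a single word.

```python
# Hypothetical word lists standing in for the LIWC positive/negative
# emotion dictionaries (the real dictionaries contain thousands of terms).
POSITIVE = {"happy", "great", "love", "wonderful", "good"}
NEGATIVE = {"sad", "awful", "hate", "terrible", "bad"}

def classify(post: str, min_margin: int = 2) -> str:
    """Label a post 'positive' or 'negative' only when one polarity
    outnumbers the other by at least `min_margin` words; otherwise
    fall back to 'neutral/ambiguous'. The margin is an assumed
    parameter, meant to avoid deciding on a single word, where
    sarcasm and jokes tend to slip through."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos - neg >= min_margin:
        return "positive"
    if neg - pos >= min_margin:
        return "negative"
    return "neutral/ambiguous"
```

With this margin, a sarcastic mix like "oh great, another terrible Monday" lands in the neutral/ambiguous bucket instead of being forced into one polarity.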
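The Poisson objection above is also checkable: a Poisson distribution has variance equal to its mean, so a variance-to-mean ratio far above 1 on per-user post counts would suggest the model is misspecified. A minimal diagnostic sketch, on invented counts:

```python
def dispersion_index(counts):
    """Variance-to-mean ratio: roughly 1 for Poisson data, well above 1
    indicates overdispersion (a common failure mode for Poisson models)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean

# Invented, bursty daily post counts for one user -- many quiet days
# punctuated by bursts, which is exactly the non-Poisson pattern to probe.
daily_posts = [0, 0, 1, 0, 7, 0, 0, 8, 1, 0]
```

On these bursty counts the index comes out well above 1, the kind of evidence the paper would have needed to rule out (or account for, e.g. with a negative binomial model) before leaning on Poisson regression.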
Regardless of the approach adopted, building on the paper's result that people who see more positive posts subsequently make more positive posts (and vice versa for negative ones), my question is: "are people who share positive or negative posts after exposure to the stimuli actually feeling more positive or negative, respectively? Or is it again monkey sees, monkey does, i.e., do they share posts similar to their entourage's in order to stay relevant?" I might be going off on a tangent, but it might also be interesting to observe the impact of age: "are younger people more susceptible to being influenced by online content, and does age act as a deterrent against that gullibility?"