Reflection #8 – [02-20] – [Patrick Sullivan]

“A 61-million-person Experiment in Social Influence and Political Mobilization” by Bond et al. reports on the effects that political mobilization messages on social media have on elections and personal expression.

There were 61 million people in the social message group, but only about 600 thousand in each of the other groups? I see that the methods in this research are fairly sound, but this kind of difference makes me consider how easily other company-sponsored research could become biased. This should concern many people, especially when news media have repeatedly reported hastily on published research articles or drawn unrelated conclusions from them. I feel that academic research, news media, and corporations are becoming so interconnected that people are finding it difficult to tell them apart.

I see one issue with the design of the experiment. The informational message still appears on the Facebook website, where nearly all information and actions available to a user are shareable with their Facebook friends. Many people would assume that any message or action presented to them can be shared with their friends. So participants might have wrongly assumed that this was another common social-media sharing ploy and not realized that the self-reported “I Voted” response would be kept confidential. I think this aspect of the design actually made it harder than necessary for the authors to reach their conclusions.

I think that in most instances, people should know when they are being studied. There can be exceptions when disclosure would obviously harm the integrity of the research data. But participants might be more honest and accurate in their self-reporting if they knew it was being researched, and more mindful that their answers could lead to research and social changes that are unfounded and unjustified. This question should be investigated in meta-analyses of research methods, how participants perceive them, and how they change a person’s behavior. I understand that there is a lot of previous work on studies like this, but I think the results and conclusions from such research deserve to be widespread enough that more people outside academia understand them. Given its importance, I am surprised this isn’t quite ‘common knowledge’ yet. Then again, maybe I shouldn’t be, since the scientific method is another incredibly important process that many people brush away.

“Experimental Evidence of Massive-scale Emotional Contagion through Social Networks” by Kramer, Guillory, and Hancock

If “posts were determined to be positive or negative if they contained at least one positive or negative word,” then how can mixed emotions in social media posts be measured? Simplifying emotions into simple, quantifiable categories can be helpful in many cases, but it should be justified. Emotions are much more complex than this, even in infants, who have the most basic desires. Even a one-dimensional scale, rather than binary categorization, would better capture the range of emotion someone can feel.
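
To make this concrete, here is a minimal sketch (my own illustration, not the authors’ code, with tiny made-up word lists standing in for the LIWC dictionaries) contrasting the paper’s “at least one word” flags with a simple one-dimensional score that at least registers mixed posts.

```python
import re

# Tiny, hypothetical word lists standing in for the LIWC dictionaries.
POSITIVE = {"happy", "glad", "love", "great"}
NEGATIVE = {"sad", "angry", "hate", "awful"}

def tokens(post):
    return re.findall(r"[a-z']+", post.lower())

def binary_flags(post):
    """Paper-style rule: a post counts as 'positive'/'negative' if it has >= 1 such word."""
    words = tokens(post)
    return (any(w in POSITIVE for w in words),
            any(w in NEGATIVE for w in words))

def polarity_score(post):
    """One-dimensional score on [-1, 1]: balance of positive vs. negative words."""
    words = tokens(post)
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

post = "I am happy about the news but sad to be leaving"
print(binary_flags(post))    # (True, True) -> counted as both positive and negative
print(polarity_score(post))  # 0.0 -> the mixed feeling at least shows up as balance
```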

The researchers also find that viewers who were exposed to more emotional posts went on to make more posts and be more engaged with social media later. I find this alarming, since Facebook and other social platforms are financially motivated to keep users online and engaged as much as possible. It also contradicts recent claims by Facebook and other social media outlets that they wish to defend against purposefully outrageous and inflammatory posts. I see this as a major issue in current politics and the tech industry.


Reflection #8 – [02/20] – [Vartan Kesiz-Abnousi]

Reviewed Paper

Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences 111.24 (2014): 8788-8790.

Summary

The authors used an online social platform, Facebook, and manipulated the extent to which people were exposed to emotional expressions in their News Feed. Two parallel experiments were conducted for positive and negative emotion: one in which exposure to friends’ positive emotional content in the News Feed was reduced, and one in which exposure to negative emotional content in the News Feed was reduced. Posts were determined to be positive or negative if they contained at least one positive or negative word, as defined by the Linguistic Inquiry and Word Count (LIWC) software. Both experiments had a control condition, in which a proportion of posts in the News Feed was omitted at random (i.e., without respect to emotional content).

The experiments took place over 1 week (January 11–18). In total, over 3 million posts were analyzed. Participants were randomly selected based on their User ID, resulting in a total of ~155,000 participants per condition who posted at least one status update during the research period.

 

Reflections

I am skeptical of whether “emotional contagion” has a scientific basis, and I am even more skeptical about it in an online setting. I am going to assume for the rest of this reflection that it is indeed “well-established,” the words that the authors use.

What about non-verbal posts, i.e., images? For instance, people have a tendency to “post” images that contain text reflecting their emotions or thoughts. What about people who post songs that reflect a specific emotional state? I assume Johnny Cash’s “Hurt” does not invoke the same emotions as the “Macarena.”

There are two dependent variables pertaining to the emotionality expressed in people’s status updates. The authors initially choose Poisson regression, sometimes also referred to as a “log-linear” model. It is used when the dependent variable consists of counts or frequencies, and it assumes a linear relationship between the log of the expected count and the independent variables. The authors argue that a direct examination of the frequency of positive and negative words is not possible because the frequencies would be confounded with the change in the overall number of words produced. Consequently, they switch to a different method, a weighted linear regression with a dummy variable separating control and treated observations. The coefficient of this dummy variable is statistically significant, providing support for the claim that emotions spread through a network.
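
As I understand it, the regression boils down to something like the following sketch (simulated data and made-up variable names, not the paper’s code): a weighted least-squares fit of the percentage of positive words on a treatment dummy, weighted by how much each user posts.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = positivity-reduced condition (assumed coding)
    "n_posts": rng.integers(1, 50, n),  # used as regression weights (assumption)
})
# Simulate the outcome: treated users use slightly fewer positive words.
df["pct_positive"] = 5.2 - 0.1 * df["treated"] + rng.normal(0, 1.5, n)

X = sm.add_constant(df[["treated"]])
fit = sm.WLS(df["pct_positive"], X, weights=df["n_posts"]).fit()
print(fit.params["treated"], fit.pvalues["treated"])  # estimated experimental effect
```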

Questions

  1. What if the posts contained images with text in them? What if they included song tracks?
  2. The network structure is not taken into account. For instance, when does the emotional effect “die out”?


Reflection #8 – [02/20] – [Jamal A. Khan]

  1. Bond, Robert M., et al. “A 61-million-person experiment in social influence and political mobilization.”
  2. Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.”

Both of the assigned papers, though short, are pretty interesting. Both of them have to do with social contagion, showing how the behavior of people can propagate outwards.

In the first paper, the observation that bugs me is that people who were shown the informational message behaved in a manner unsettlingly similar to people who had no message at all. The desire to attain social credibility alone cannot be the cause, because the difference in validated voting records between the control group and the informational message group is, practically speaking, non-existent. This leads to what I think might be a pretty interesting question: “do people generally lie on social media platforms to fit the norm in order to achieve acceptance from the community? Monkey sees, monkey does?” A slight intuition, though controversial, might be that elections in the USA are highly celebritized, which might affect how voters behave on social media. Another important factor that I think was not controlled for by the authors is fake accounts, which may have a significant impact on the results. We’ve seen recently in the US presidential election how these bogus accounts can be used to influence elections.

The second paper was the more interesting of the two, and slightly worrying in a sense too. Taking the result at face value: “is it possible to program the sentiments of crowds through targeted and doctored posts? If yes, how much could this impact important events such as presidential elections?”

Nevertheless, moving on to the content of the paper itself, I disagree with the authors’ methodology of using just LIWC for the analysis. While it may be a good tool, the results should have been cross-checked with other, similar tools. Another thing to note is the division of posts into binary categories with the threshold being just a single positive or negative word. I feel that this choice of threshold is flawed and will fail to capture sarcasm or jokes. My suggested approach would have been to use three categories: negative, neutral/ambiguous, and positive, as sketched below. The authors’ choice of Poisson regression is also not well motivated; it implicitly assumes that post counts follow a Poisson distribution, for which no evidence is provided, which leads me to believe that the results might be artifacts of the fit rather than actual observations. Finally, a single trial is, in my opinion, insufficient; multiple trials should have been conducted with sufficient gaps in between.
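
A rough sketch of what I have in mind (illustrative word sets and margin, not a validated classifier):

```python
import re

POSITIVE = {"great", "happy", "love", "awesome"}
NEGATIVE = {"sad", "terrible", "hate", "awful"}

def ternary_label(post, margin=2):
    """Label a post only when one polarity clearly outweighs the other."""
    words = re.findall(r"[a-z']+", post.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos - neg >= margin:
        return "positive"
    if neg - pos >= margin:
        return "negative"
    return "neutral/ambiguous"  # mixed, sarcastic, or weak signals land here

print(ternary_label("love this, so happy, what a great day"))   # positive
print(ternary_label("oh great, another terrible Monday"))       # neutral/ambiguous
```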

Regardless of the approach adopted, and building on the paper’s result that when people see more positive posts their subsequent posts are more positive (and vice versa for negative posts), my question is: “people who share positive or negative posts after being exposed to the stimuli, are they actually more positive or more negative, respectively? Or is it again monkey sees, monkey does, i.e., do they share posts similar to their entourage’s to stay relevant?” I might be going off on a tangent, but it might also be interesting to observe the impact of age, i.e., “are younger people more susceptible to being influenced by online content, and does age act as a deterrent against that gullibility?”


Reflection #8 – [02/20] – [Hamza Manzoor]

[1]. Bond, Robert M., et al. “A 61-million-person experiment in social influence and political mobilization.” Nature 489.7415 (2012): 295.

[2]. Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences 111.24 (2014): 8788-8790.

Summaries:

Both these papers come from researchers at Facebook and focus on the influence of social networking platforms. In [1], Bond et al. run a large-scale experiment and show how social media can encourage users to vote. They show that social messages not only influence the users who receive them but also the users’ friends, and friends of friends. They perform a randomized controlled trial of political mobilization messages on 61 million Facebook users during the 2010 U.S. congressional elections. They randomly assigned users to a ‘social message’ group (~60M), an ‘informational message’ group (~600k), or a control group (~600k). The ‘social message’ group was shown an “I Voted” button and a counter indicating how many other Facebook users had previously reported voting, along with pictures of 6 of their friends. The ‘informational message’ group was shown only the “I Voted” button and counter (no friends’ pictures), and the control group was not shown anything. The authors discovered that the ‘social message’ group was more likely to click on the “I Voted” button. They also matched user profiles with public voting records and observed that users who received the social message were more likely to vote than the other two groups. They also measured the effect of friendships and found that the likelihood of voting increases if a close friend has voted.

In [2], Kramer et al. analyzed whether emotional states spread through social networks. They present a study showing that emotional states can be transferred to others via emotional contagion. The experiment manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed. They conducted two parallel experiments for positive and negative emotion and tested whether exposure to emotions led people to change their own posting behavior, in particular whether exposure to emotional content led people to post content that was consistent with the exposure. The results show that emotions spread via contagion through a network and that the emotions expressed by friends, via online social networks, influence our own moods. In short, they found that when negativity was reduced, users posted more positive content, and vice versa.

Reflections:

Keeping aside the ethical implications, which we have already discussed in class regarding the first paper, I believe these were very well designed experiments, which makes me wonder: is there a way we can get Facebook data for analysis?

While reading the first paper, the thought I had in mind was that people must have clicked on “I Voted” just for the sake of being socially relevant, so I was glad that the authors validated their findings with public voting records.

Even though I really liked the experimental design, I still have major concerns regarding the imbalance in sample size, 60 million to 600k. Also, was that 600k sample diverse enough? I also wasn’t convinced by their definition of close friends, that “higher levels of interaction indicate that friends are more likely to be physically proximate.” How can they claim this without any analysis? It is highly possible that I interact with someone just because I like his posts even though I haven’t met him in real life. Furthermore, there can be many external factors that make a user go out and vote, and we cannot say for sure that these messages were the reason people voted; but people generally want to be socially relevant, so in general I would agree with the findings. It would have been interesting to see whether conservatives or liberals were more influenced by these messages, but I believe there is no way to validate that. One potential research direction, though, could be to characterize people by different traits and analyze whether certain types of people are more easily influenced than others.

In the second paper, the authors show that users posted more positive content when negativity was reduced. This finding conflicts with many other studies showing that social media causes depression and that seeing other people happy makes people feel their own lives are “worthless” or not very happening. Secondly, Facebook posts are generally much longer than tweets, and characterizing them as positive or negative if they contain at least one positive or negative word is naïve; the sentiment of the entire post should have been analyzed, which Google’s API now does very efficiently (it might not have been available in 2014). Apart from these concerns, I thoroughly enjoyed reading both papers.


Reflection #8 – [02/20] – [Jiameng Pu]

Bond, Robert M., Christopher J. Fariss, Jason J. Jones, Adam DI Kramer, Cameron Marlow, Jaime E. Settle, and James H. Fowler. “A 61-million-person experiment in social influence and political mobilization.” Nature 489, no. 7415 (2012): 295.

Summary & Reflection:

Traditional human behavior spreads mainly through face-to-face social networks, where it is difficult to measure social influence effects. Online social networks, however, offer a possible way to evaluate such effects, and the paper conducts a randomized controlled trial of political mobilization messages to identify them. Since the act of voting in national elections is a typical behavior that spreads through networks, the paper conducted a randomized controlled trial with all Facebook users who accessed the website on the day of the 2010 US congressional elections. All of these users were randomly assigned to three groups — a ‘social message’ group, an ‘informational message’ group, or a control group. The results imply that it is not rigorous to claim, as previous research suggested, that online messages do not work; that suggestion was possibly caused by small conventional sample sizes. Political mobilization messages can directly influence the political self-expression, information seeking, and real-world voting behavior of millions of people. In addition, the experiment measuring indirect effects that spread from person to person in the social network suggests that strong ties are instrumental for people’s behavior in human social networks.

When measuring the direct effects of online mobilization by assigning users to three groups, they measure acts of political self-expression and information seeking. Personally, I don’t think the whole experiment design is rigorous and valid enough, for several reasons. Firstly, there is a huge imbalance in the sample sizes of the social and informational message groups, i.e., 60,055,176 versus 611,044. I don’t think such a massive difference can be ignored; for instance, how do we make sure the 611,044 users in group 2 are sufficient to represent the whole user community? Secondly, I’m not convinced that information seeking is a good indicator of people’s political positivity. If a person clicks the “I Voted” button, it is very likely that he or she will not click the polling-place link, precisely because they have already voted.

Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences 111, no. 24 (2014): 8788-8790.

Summary & Reflection:

Emotional states can be contagious, which leads people to experience the same emotions without their awareness. The paper tests whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed on Facebook. It turns out that emotions expressed by other users on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

For the experimental design, they use the Linguistic Inquiry and Word Count (LIWC) word-counting software to determine whether posts are positive or negative, which I don’t think is sufficient. This is like doing sentiment analysis by only counting positive and negative words. What about positive sentences expressed through double negatives? It would also classify sarcastic sentences as positive posts, but we all know that language is a pretty complex phenomenon. From my perspective, we might address this problem by applying classic polarity-analysis models to the posts. Another concern of mine is whether people who post positive updates are actually happy in real life. There are many examples where the attitude people show on social media does not represent their real mood or life status; sometimes people even pretend to be positive. This raises another question about the definition of emotional contagion.
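
For example, a rule-based polarity model such as VADER (shipped with NLTK) handles negation and intensity far better than raw word counting, though sarcasm would still be hard; this is just a sketch of the idea, not what the authors did:

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
sia = SentimentIntensityAnalyzer()

posts = [
    "I am not happy about this at all.",          # negation flips the positive word
    "What a great day, everything went so well!",
]
for post in posts:
    scores = sia.polarity_scores(post)            # returns neg/neu/pos/compound scores
    print(f"{scores['compound']:+.3f}  {post}")
```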


Reflection #8 – [02/20] – [John Wenskovitch]

This pair of papers explores how the behavior of users can propagate to other users through social media.  In the Bond et al. study, the authors measured the influence of voting information and social sharing across a Facebook friend network.  Users were assigned to one of three groups:  a control group with no additional voting information, an informational message group with links to polling places and a global voting tally, and a social message group who received the same information as the informational message group plus profile pictures of friends who said they had voted.  The researchers found that both the informational message group and the social message group outperformed the control group in voting influence when measured only by clicks of the “I Voted” button. When examining validated voting records, the difference between the social message and control groups persisted, while the difference between the informational message and control groups disappeared.  Further, the social message group greatly outperformed the informational message group under both measures.  In total, this experiment generated thousands of additional votes.  In the Kramer et al. study, the authors manipulated the news feeds of Facebook users to change the amount of positive-leaning or negative-leaning posts that a user sees, and measured whether that user was influenced by the biased mood of their news feed.  The researchers found that emotionally biased news feeds are contagious, and that text-only communication of emotional state is possible; non-verbal cues are not necessary to influence mood.

I was glad to see that the Bond study validated their findings with public voting records, as it’s certainly reasonable to assume that a Facebook user might see many of their friends voting and click the “I voted” button as well for the social credibility.  It was certainly interesting to see the change in results, from a 2% boost between the social message group and control group when measuring button clicks vs. the 0.39% boost through voter record validation.  I also didn’t expect that the informational message would have no influence in the voting-validated data; I would expect at least some increase in voting rate, but that’s not what the researchers found.

I took some issue with the positive/negative measurement of posts in the Kramer study.  The authors noted that a post was determined to be positive or negative if it contained at least one positive or negative LIWC word.  However, this doesn’t seem to take into account things like sarcasm.  For example, “I’m so glad that my friends care about me” contains two words that I expect to be positive (“glad” and “care”), but the post itself could certainly be negative overall if the intent was sarcastic.  I would expect this to affect some posts; obviously not enough of them to change the statistical significance of their results, but the amount of sarcasm and cynicism that I see from friends on Facebook can often be overwhelming.  Could the authors have gotten even stronger results with a better model for gauging whether a post is positive or negative?

I had never heard of Poisson regression before reading the Kramer paper, so I decided to look into it a bit further.  I presume that the authors chose this regression model because they hypothesized (or knew) that Facebook users’ posting rates follow a Poisson distribution.  My understanding of the Poisson distribution is that it assumes the events being recorded are independent and occur at a constant rate; however, I feel that my own Facebook postings violate both of those assumptions.  My posts are often independent, but occasionally I’ll rapidly post several things on the same theme (like calling for gun control following a school shooting).  Further, I’ll occasionally go a week or more without posting because of how busy my schedule is, whereas other times I’ll make multiple posts in a day.  My post distribution seems to be more bimodal than Poisson.  Can anyone fill in the gap in my understanding of why the authors chose Poisson regression?
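
From what I read, Poisson regression models counts (e.g., the number of positive words) whose log-rate is linear in the predictors, rather than the gaps between posts, and a quick way to sanity-check the assumption is to look for overdispersion. Here is a sketch on simulated data (the variable names and setup are my own assumptions, not the study’s):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000
treated = rng.integers(0, 2, n)
total_words = rng.integers(20, 400, n)        # words produced per user (exposure)
rate = np.exp(-3.3 - 0.05 * treated)          # positive words per word written
y = rng.poisson(rate * total_words)           # simulated positive-word counts

X = sm.add_constant(treated)
fit = sm.GLM(y, X, family=sm.families.Poisson(),
             offset=np.log(total_words)).fit()
print(fit.params)                             # [intercept, treatment effect on log rate]
print(fit.pearson_chi2 / fit.df_resid)        # ~1 if the Poisson assumption is plausible
```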


Reflection #8 – 2/20 – Pratik Anand

Paper 1 : A 61-million-person experiment in social influence and political mobilization

Paper 2 : Experimental evidence of massive-scale emotional contagion through social networks

The two papers discuss two aspects of the same phenomenon, one specific to politics and the other general – the influence of online social ties on user behavior.

The first paper shows that social messages can cause a small-scale effect in political mobilization, and that they are more effective than informational messages alone.
The paper performs experiments on influencing people to vote on Election Day. The informational message pops up in a user’s feed stating that it is Election Day and asking whether they have voted. It also shows a link for finding the nearest polling booth as well as the number of people who self-reported voting in that election.
Another variation of this message was a social message, in which the user is shown photos of friends with whom the user has “close ties” who have self-reported voting in that election.
The study finds that the social message has a much higher impact on political mobilization. People are more likely to click on the link to find the nearest polling booth and to self-report themselves as having voted. The authors also mention that this effect is only visible at a macroscopic level and only when the pictures of close friends are shown in the message.

The paper raises many questions. First of all, there is no way to verify whether a self-reporting user has actually voted. Also, there are plenty of external factors other than these Facebook messages that can make a user go out and vote, so it cannot be determined whether these messages were the deciding factor behind a user’s decision to vote.

The second paper takes a more general approach. It tries to identify whether people are positively or negatively influenced by the posts of their online friends on social networks. It found that people post more positive content if they are shown more positive posts from their friends, and the same holds for negative posts. I liked its hypothesis a lot because it opens the door to new kinds of questions, apart from the general questions of generalizability, diversity, validity, etc. Given that the hypothesis is verified by other experiments too and is identified as genuine online human behavior, the new question is: what causes such behavior? Are the people who start posting more positive things under the influence of their social friends really happier, or are they just posting happy content to stay relevant in their social circle?
Another interesting question concerns negative posts. If depressing posts make other people depressed and suicidal, will social platforms like Facebook enforce some kind of negativity censorship? Would that be in alignment with freedom of speech and expression? These are very complex questions with no clear answers.


Reflection #8 – [2/20] – [Aparna Gupta]


  1. Bond, Robert M., et al. “A 61-million-person experiment in social influence and political mobilization.” Nature 489.7415 (2012): 295.
  2. Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences 111.24 (2014): 8788-8790.

Summary:

Both studies come from the Facebook Data Science team. The paper by Bond et al. shows that social messages, like users’ posts, directly influence the political self-expression, information seeking, and real-world voting behaviour of millions of people, and the paper by Kramer et al. tests whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed.

Reflection:

Paper 2: The objective of Kramer et al.’s study on massive Facebook data was to show that emotional states can be transferred to others via emotional contagion, which leads people to experience the same emotions without their awareness. To evaluate this, the authors manipulated the extent to which people were exposed to emotional expressions in their News Feed. They conducted the experiment on people who viewed Facebook in English; I wonder how the results would vary if people viewing Facebook in other languages were also considered. The experiments for positive and negative emotions were conducted in parallel. Experiment 1: exposure to friends’ positive emotional content in the user’s News Feed was reduced. Experiment 2: exposure to friends’ negative emotional content was reduced. The authors considered a status update positive or negative if it contained at least one positive or negative word. However, I am not convinced by this technique of determining positive or negative updates: is it sufficient to classify posts like this without analyzing the sentiment of the entire text? A control condition was also introduced for each experiment, in which a similar proportion of posts in a user’s News Feed was omitted entirely at random. The experiments were conducted for 1 week (is a 1-week study sufficient to analyze the results?), and participants (~155,000 per condition) were randomly selected from among those who posted at least one status update during the experimental period. This makes me wonder whether merely one status update is sufficient to identify the influence. In the end, the authors analyzed 3 million posts containing over 122 million words, 4 million of which were positive (3.6%) and 1.8 million negative (1.6%), and concluded that online messages influence users’ experience of emotions, which may affect a variety of offline behaviors. To the best of my knowledge, Facebook these days is used more as a social show-off platform where users share posts related to travel, food, success, new jobs, etc. How are such updates responsible for affecting offline behaviors, and what kinds of offline behaviors?

Paper 1: In this paper, Bond et al. analyze how the act of voting in national elections spreads through social networks. The authors’ hypothesis is that political behavior can spread through an online social network, and to test it they conducted randomized controlled trials in which each user was assigned to one of three groups. (1) The social message group (n = 60,055,176): users were shown a statement at the top of their News Feed, a link to find local polling places, a clickable button reading ‘I Voted’, a counter indicating how many other Facebook users had previously reported voting, and up to six small, randomly selected profile pictures of the user’s Facebook friends who had already clicked the ‘I Voted’ button. (2) The informational message group (n = 611,044): users were shown the message, poll information, counter, and button, but not any faces of friends. (3) The control group (n = 613,096): users did not receive any message at the top of their News Feed. There is a huge imbalance between the number of users in the social message group and the numbers in the informational message and control groups. The authors report that users who received the social message were 2.08% more likely to click the ‘I Voted’ button than those who received the informational message, and 0.26% more likely to click the polling-place information link. Hence, online political mobilization can have a direct effect on political self-expression, information seeking, and real-world voting behavior. In my opinion, this paper would have been more interesting if the authors had included information about the models and techniques used to derive the results.


Reflection #8 – [02/20] – [Meghendra Singh]

  1. Bond, Robert M., et al. “A 61-million-person experiment in social influence and political mobilization.” Nature 489.7415 (2012): 295.
  2. Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences 111.24 (2014): 8788-8790.

Both papers provide interesting analyses of user-generated data on Facebook. As far as I remember, the key idea behind the first paper was briefly discussed in one of the early lectures. While there might be some ethical concerns regarding the data collection, usage, and human-subject consent in both studies, I find the papers to be very relevant and thought-provoking in today’s world, where social media is more or less an indispensable part of everyone’s life. The first paper by Bond et al. discusses a randomized controlled trial of political mobilization messages on 61 million Facebook users during the 2010 U.S. congressional elections. The experiment showed an ‘informational’ or ‘social’ message at the top of the news feed of Facebook users in the U.S. who were 18 years of age and above.

Approximately 60 million users were shown the social message, 600 K users were shown the informational message, and 600 K users were not shown any message, adding up to the ‘61-million-person’ sample advertised in the title. The key finding of this experiment was that messages on social media directly influenced the political self-expression, information seeking, and real-world voting behavior of people (at least those on Facebook). Additionally, ‘close friends’ in a social network (a.k.a. strong ties) are responsible for the transmission of self-expression, information seeking, and real-world voting behavior. In essence, strong ties play a more significant role than ‘weak ties’ in spreading online and real-world behavior in online social networks. Next, I summarize my thoughts on this article.

The authors find that users who received the social message (instead of the plain informational message) were 2.08% more likely to click on the ‘I Voted’ button. This seems to suggest a causal link between the presence of images of friends who pushed the ‘I Voted’ button and the user’s decision to push the button. I am not convinced by this suggestion because of the huge difference in the sample sizes of the social and informational message groups. I believe online social networks are complex systems, and the spread of behaviors (contagions) in such systems is a non-linear, emergent phenomenon. Ignoring the differences between the two samples (in terms of network size and structure) seems a little unreasonable when making such comparisons at the gross level. This particular result would be more convincing if the two samples were relatively similar in size and the findings were consistent across repeated experiments. Another interesting analysis would be to look at which demographic segments are influenced more by the social messages than by the informational messages. Is the effect reversed for certain segments of the user population? Lastly, approximately 12.8% of the 2.1 billion user accounts on Facebook are either fake or duplicate; it would be interesting to see how these accounts would affect the results published in this article.
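
To make the sample-size concern concrete, here is a small sketch (the click rates below are my own assumptions, roughly consistent with the reported ~2% gap, not the paper’s raw data) of comparing the two groups with a two-proportion z-test; the precision of the comparison is limited by the smaller group, no matter how large the other one is.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

n_social, n_info = 60_055_176, 611_044        # group sizes reported in the paper
p_social, p_info = 0.203, 0.182               # assumed 'I Voted' click rates (~2% gap)
clicks = np.array([round(n_social * p_social), round(n_info * p_info)])
nobs = np.array([n_social, n_info])

z, p = proportions_ztest(clicks, nobs)
print(f"z = {z:.1f}, p = {p:.3g}")
# The standard error of the difference is dominated by the 611k group, so the extra
# 59 million users add little precision to this particular comparison.
```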

The second article, by Kramer et al., suggests that emotions can spread like contagions from one user to another in online social networks. The article presents an experiment wherein the amount of positive or negative posts in the News Feed of Facebook users was artificially reduced by 10%. The key observation was that when positive posts were reduced, the amount of positive words in the affected users’ status updates decreased; similarly, when negative posts were reduced, the amount of negative words in their status updates decreased. I think this result suggests that people innately reciprocate the emotions they experience (even in the absence of non-verbal cues), acting like feedback loops. I feel that the week-long study described in the article is somewhat insufficient to support the results; it would be more convincing if the experiment were repeated and the observations remained consistent each time. Another thing I feel is missing from the article is statistics about the affected users’ status updates, i.e., the mean and standard deviation of the number of status updates posted by users. Additionally, it is important to know whether users posted status updates only after reading their News Feeds, and whether this temporal information is captured in the data at all. Based on my limited observations of Facebook status updates, I feel most of the time they relate to the daily experiences of the user: a visit to a restaurant, a promotion, a successful defense, holidays, or trips. I feel it is very important that we avoid ‘apophenia’ when it comes to this kind of research. Also, it is unclear to me why the authors used Poisson regression here and what the response variable is.


Reflection #8 – [02/20] – [Ashish Baghudana]

Bond, Robert M., et al. “A 61-million-person experiment in social influence and political mobilization.” Nature 489.7415 (2012): 295.
Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences 111.24 (2014): 8788-8790.

Summary – 1

This week’s paper readings focus on influence on online social networking platforms. Both papers come from researchers at Facebook (collaborating with Cornell University). The first paper, by Bond et al., discusses how social messages on Facebook can encourage users to vote. They run large-scale controlled studies, divvying up their population into three segments – a “social message” group, an “informational message” group, and a control group. In the “informational message” group, users were shown a note telling them that “Today is election day,” with a count of Facebook users who had voted and an “I Voted” button. In the “social message” group, users were additionally shown 6 randomly selected friends who had voted. In the control group, no message was shown.
The researchers discovered that users who received the social message were 2.08% more likely to click on the I voted button as compared to the informational message group. Additionally, they matched the user profiles with public voting records to measure real-world voting. They observed that the “social message” group was 0.39% more likely to vote than users who received no message at all or received the informational message. Finally, the authors measured the effect of close friends on influence and discovered that a user was 0.224% more likely to have voted if a close friend had voted.

Reflection – 1

Firstly, it is difficult to imagine how large 61 million really is! I was personally quite awed at the scale of experimentation and data collection through online social media and what they can tell us about human behavior.

This paper dealt with multiple issues and could have been a longer paper. An immediately interesting observation is that the effect of online appeals to vote is very small; in more traditional survey-based experiments, this increase would probably not even have been noticeable. However, I found it odd that the “social message” group consisted of over 60 million people while the “informational message” and control groups had only ~600,000 each. The imbalance seems uncharacteristic for an experiment of this scale, and I am curious what the class thought about the distribution of the dataset.

Finally, I found the definition of close friends arbitrary. The authors define close friends as users who were in the eightieth percentile or higher of interaction frequency, as in the sketch below. This definition seems engineered to retrofit the observation that 98% of users had at least one close friend, with an average of 10 close friends.
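
For reference, the cutoff itself is simple to compute; here is a toy sketch with made-up interaction counts (not the authors’ data or code):

```python
import numpy as np

rng = np.random.default_rng(2)
interactions = rng.poisson(3, size=200)          # interactions with each friend (toy data)
cutoff = np.percentile(interactions, 80)         # eightieth-percentile threshold
close = np.flatnonzero(interactions >= cutoff)   # indices of 'close friends'
print(f"cutoff = {cutoff}, close friends = {close.size} of {interactions.size}")
```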

Summary – 2

The 2014 paper on emotional contagion on social networks is mired in controversy. The researchers ask the question – do emotional states spread through social networks? Quoting verbatim from the paper:

The experiment manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed.

The researchers adapt LIWC within the News Feed algorithm to filter out positive or negative content. They find that when negativity was reduced, users posted more positive content, and conversely, when positivity was reduced, users posted less positive content.

Reflection – 2

While the experiment itself is within the realm of Facebook’s acceptable data use policy, there are several signs in the Editorial Expression of Concern and Correction that this experiment might not have passed an Institutional Review Board. However, if these two experiments are deemed ethical, they are examples of great experimental design.

Neither paper builds a model; both rely on showing correlations between their dependent and independent variables. As with the previous paper, the effects are small, but applied to a large population, even small percentage increases are large enough to take note of.

Questions

  1. I personally find it quite scary that neither study could be done by a university on its own; each explicitly needs access from Facebook (or any other social media platform, for that matter). The consolidation of social media analytics capability in the hands of a few may not bode well for ordinary citizens. How can academic research make this data more available and reachable?
  2. The first paper lays basis for how societies can be influenced online. Can this be used to target only a small section of users? Can this also be used to identify groups that are under-represented and vocalize their opinions?
