Reflection #9 – [02/22] – [Vartan Kesiz-Abnousi]

First Paper Reviewed
[1] Mitra, T., Counts, S., & Pennebaker, J. W. (2016). Understanding Anti-Vaccination Attitudes in Social Media. International AAAI Conference on Web and Social Media (ICWSM). Available at: <https://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/view/13073/12747>. Accessed: 21 Feb. 2018.

Summary

The authors examine the attitudes of people who are against vaccines. They compare them with a pro-vaccine group, and both with people who are just joining the anti-vaccination camp. The data consists of four years of longitudinal Twitter data capturing vaccination discussions. The authors identify three groups: those who are persistently pro-vaccine, those who are persistently anti-vaccine, and users who newly join the anti-vaccination cohort. After fetching each cohort's entire timeline of tweets, totaling more than 3 million tweets, they compare and contrast the cohorts' linguistic styles, topics of interest, social characteristics, and underlying cognitive dimensions. Subsequently, they build a classifier to determine positive and negative attitudes towards vaccination. They find that people holding persistent anti-vaccination attitudes use more direct language and express more anger compared to their pro-vaccine counterparts. New adopters of anti-vaccine attitudes show similar conspiratorial ideation and suspicion towards the government.

Reflections

The article stresses that alternative methods (non-official sources) should be adopted in order to change the opinion of those who belong to the anti-vaccination group. However, this would only work on the targeted groups that already hold anti-vaccination attitudes. If the informational method changes, it might have adverse effects, in the sense that it might turn pro-vaccination people into anti-vaccination ones.

I wonder if they could use unsupervised learning and perform an exploratory analysis in order to find more groups of people; a sketch of what that might look like follows below. In addition, I did not know that population attitudes extracted from tweet sentiments have been shown to correlate with traditional polling data.
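A minimal sketch of such an exploratory step, assuming the tweets have already been collected; the toy corpus, vectorizer settings, and range of cluster counts are all illustrative, not the authors' method:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical stand-in corpus of vaccination-related tweets.
tweets = [
    "mmr causes autism",
    "vaccinate your kids against measles",
    "the government hides vaccine injuries",
    "got my flu shot today and feel fine",
]

# Represent each tweet as a TF-IDF vector.
X = TfidfVectorizer(stop_words="english").fit_transform(tweets)

# Instead of fixing three groups in advance, try several cluster counts
# and compare silhouette scores to see how many groups the data supports.
for k in range(2, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, silhouette_score(X, labels))
```

On the real corpus, inspecting the top TF-IDF terms per cluster would show whether the discovered groups go beyond the pro/anti/newly-joining split.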

For the first phase, the authors use snowball sampling. However, such samples are subject to numerous biases; for instance, people who have many friends are more likely to be recruited into the sample. I also find it interesting that the final set of words was basically a permutation of the words mmr, autism, vaccine, and measles. Is this what anti-vaccination groups mainly focus on? The authors use a qualitative examination and find that trigrams and hashtags were prominent cues of a tweet's stance towards vaccination. Interestingly enough, only "Organic Food" is statistically significant both Between Groups and Within Time.

Questions

  1. What kind of qualitative examination made the authors choose trigrams and hashtags as the prominent cues of a tweet's stance towards vaccination?
  2. I wonder whether the authors could find more than the three groups by using an unsupervised learning method.
  3. The number of Pre-Time Tweets is significantly smaller than the number of Post-Time Tweets. Was that intentional?


Second Paper Reviewed

[2] De Choudhury, M., Counts, S., Gamon, M., & Horvitz, E. (2013). Predicting Depression via Social Media. ICWSM.

Summary

The main goal of the paper is to predict Major Depressive Disorder (henceforth MDD), as the title suggests, through social media. The authors collect their data via crowdsourcing, specifically Amazon Mechanical Turk. They ask workers to complete a standardized depression screening form (CES-D) and compare the answers to another standardized form (BDI) in order to see whether the two are correlated. They quantify the users' behavior through their Twitter posts. They include two groups, those who suffer from depression and those who do not, and compare them. Finally, they build a classifier that predicts MDD with an accuracy of about 70%.

The authors suggest that Twitter posts contain useful signals for characterizing the onset of depression in individuals, as measured through a decrease in social activity, raised negative affect, highly clustered ego-networks, heightened relational and medicinal concerns, and greater expression of religious involvement.
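As a hedged illustration of that kind of supervised setup (the paper reports an SVM classifier; the feature names and data below are fabricated to match the signals listed above, not taken from the paper):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-user features: posting volume, negative affect,
# ego-network clustering, and a religious-language score.
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 1 = depression cohort, 0 = control

# Standardize the features, then fit an RBF-kernel SVM and estimate
# out-of-sample accuracy by cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())
```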

Reflections

It should be noted that the classification is based on the behavioral attributes of people who already had depression. Did they ask how long the participants had suffered from major depressive disorder? I imagine someone who was diagnosed with depression years ago might have different behavioral attributes compared to someone who was diagnosed a few months ago. In addition, being diagnosed with depression is not equivalent to the actual onset of depression.

What if they collected the pre-onset tweets and compared them with the post-onset tweets? That might be an interesting extension. In addition, since the tweets would come from the same individuals, factors that do not change over time could be controlled for.
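A minimal sketch of that within-user comparison, assuming one aggregate feature per user (say, the fraction of negative-affect words) computed before and after onset; all numbers are fabricated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-user negative-affect scores, pre- and post-onset.
pre = rng.normal(loc=0.10, scale=0.03, size=50)
post = pre + rng.normal(loc=0.02, scale=0.02, size=50)  # slight rise

# Paired test: each user serves as their own control, so stable,
# time-invariant traits cancel out of the comparison.
t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4f}")
```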

Something that puzzles me is their seemingly ad hoc choice of onset dates. Specifically, they keep individuals with depression onset dates anytime in the last year, but no later than three months prior to the day the survey was taken. Are they discarding individuals whose depression onset dates go back more than one year? There is an implicit assumption that people who suffer from MDD are homogeneous.

Questions

  1. Why do they keep depression onset dates within the last year? Why not go further back?
  2. There is an implicit assumption by the authors: that people who suffer from MDD are the same (i.e., homogeneous). Is someone who has suffered from MDD for years the same as someone who has suffered for a few months? This lack of distinction might affect the classification model.
  3. An extension would be to study the Twitter posts of people who have MDD through time, specifically pre-MDD vs. post-MDD behavior for the same users. Since they are the same users, this would control for factors that do not change through time.


Reflection 8 [Anika Tabassum]

Experimental evidence of massive-scale emotional contagion through social networks

Summary:

The paper analyzes the influence of people's emotions on their friends and others in social networks. The authors observe user posts from a real-time social network, Facebook, over the course of a week. They identify posts as positive or negative from the words contained in the posts. They observe people's behavior under two different manipulations: first, reducing positive content in users' news feeds; second, reducing negative content. Their observations show that people with more negative content in their news feed post more negative status updates, and vice versa.


Reflection:

Some challenges and questions-

The paper identifies positive/negative content and posts via word counts. What if some positive words are used in the posts in a negative or sarcastic way?

Can it happen that, in reaction to negative content, the status updates are positive? People's perspectives can differ.

How is content identified as positive or negative? The same post or content can be negative to one person while positive to another.

Some ideas:

A better observation design to understand which content changes people's reactions the most: is the effect stronger for posts/texts, or for videos, photos, etc.?



Reflection #8 – [02-20] – [Patrick Sullivan]

“A 61-million-person Experiment in Social Influence and Political Mobilization” by Bond et al. reports on the effects that political mobilization messages on social media have on elections and personal expression.

There were 61 million people in the social message group, but only 600 thousand in each of the other groups? I see that their methods are fairly sound in this research, but this kind of difference makes me consider how other company-sponsored research could become biased very easily. This should be a concern to many people, especially when news media have repeatedly reported hastily or drawn unrelated conclusions from published research articles. I feel the areas of academic research, news media, and corporations are becoming so interconnected that people are finding it difficult to tell them apart.

I see one issue with the design of the experiment. The informational message is still on the Facebook website, where nearly all information and actions available to the user are shareable with their Facebook friends. Many people would assume that any message or action given to the user can be shared with their friends. So participants might have wrongly assumed that it was another common social media sharing ploy, and not realized that the self-reported “I voted” would be kept confidential. I think this aspect of the design actually made it harder than necessary for the authors to reach their conclusions.

I think that in most instances, people should know when they are being studied. There can be exceptions when disclosure would have an obvious negative impact on the integrity of the research data. But participants might be more honest and accurate in their self-reporting if they knew it was being researched, and more mindful that their answers could lead to research and social changes that are unfounded and unjustified. This question should be investigated in meta-analyses of research methods: how participants perceive them, and how they change a person's behavior. I understand that there is a lot of previous work on studies like this, but I think the results and conclusions from such research deserve to be widespread enough that more people outside academia understand them. Its importance makes me surprised that this isn't quite 'common knowledge' yet. Then again, maybe I shouldn't be, since the scientific method is another incredibly important process that many people brush away.

“Experimental Evidence of Massive-scale Emotional Contagion through Social Networks” by Kramer, Guillory, and Hancock

If “Posts were determined to be positive or negative if they contained at least one positive or negative word…”, then how can mixed emotions in social media posts be measured? Simplifying emotions to simple, quantifiable categories can be helpful in many cases, but it should be justified. Emotions are much more complex than this, even in infants, who have the most basic desires. Even a one-dimensional scale, instead of binary categorization, can better capture the degree of emotion someone feels.

The researchers also find that viewers who were exposed to more emotional posts went on to make more posts and to be more engaged with social media later. I think this is alarming, since Facebook and other social platforms are financially motivated to keep users online and engaged as much as possible. It contradicts recent claims by Facebook and other social media outlets that they wish to defend against purposefully outrageous and inflammatory posts. I see this as a major issue in current politics and the tech industry.


Reflection #8 – [02/20] – [Vartan Kesiz-Abnousi]

Reviewed Paper

Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences 111.24 (2014): 8788-8790.

Summary

The authors used an online social platform, Facebook, and manipulated the extent to which people were exposed to emotional expressions in their News Feed. Two parallel experiments were conducted for positive and negative emotion: one in which exposure to friends' positive emotional content in the News Feed was reduced, and one in which exposure to negative emotional content was reduced. Posts were determined to be positive or negative if they contained at least one positive or negative word, as defined by Linguistic Inquiry and Word Count (LIWC). Both experiments had a control condition, in which a similar proportion of posts in the News Feed was omitted at random (i.e., without respect to emotional content).
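A minimal sketch of that at-least-one-word rule; the tiny word lists below are hypothetical stand-ins for the much larger LIWC dictionaries:

```python
# Toy stand-ins for the LIWC positive/negative emotion dictionaries.
POSITIVE = {"happy", "glad", "love", "great"}
NEGATIVE = {"sad", "hurt", "angry", "awful"}

def label_post(text: str) -> set:
    """Labels under the at-least-one-word rule; note that a post can
    end up both positive and negative, or neither."""
    words = set(text.lower().split())
    labels = set()
    if words & POSITIVE:
        labels.add("positive")
    if words & NEGATIVE:
        labels.add("negative")
    return labels

print(label_post("so glad to see you"))       # {'positive'}
print(label_post("glad my friends hurt me"))  # both labels
```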

The experiments took place for one week (January 11–18, 2012). In total, over 3 million posts were analyzed. Participants were randomly selected based on their User ID, resulting in a total of ~155,000 participants per condition who posted at least one status update during the research period.


Reflections

I am skeptical about whether “emotional contagion” has a scientific basis. Therefore, I am even more skeptical about it in an online setting. I am going to assume for the rest of this reflection that it is indeed “well-established”, the words that the authors use.

What about non-verbal posts, i.e., images? For instance, people have a tendency to post images that contain text reflecting their emotions or thoughts. What about people who post songs that reflect a specific emotional state? I assume Johnny Cash's “Hurt” does not invoke the same emotions as “Macarena”.

There are two dependent variables pertaining to the emotionality expressed in people's status updates. The authors initially choose Poisson regression, sometimes also referred to as a “log-linear” model. It is used when the dependent variable consists of counts or frequencies, with a linear relationship between the log of the expected count and the independent variables. The authors argue that a direct examination of the frequency of positive and negative words is not possible because the frequencies would be confounded with the change in the overall number of words produced. Subsequently, they switch to a different method, a weighted linear regression with a dummy variable that separates control and treated observations. The coefficient of this dummy variable is statistically significant, providing support for the claim that emotions spread through a network.
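A hedged sketch of what that second model might look like, assuming a per-user table with the percentage of positive words, a treatment dummy, and the user's post count as the weight; all column names and numbers are fabricated, not the paper's:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000

# Hypothetical per-user data: treated users use slightly fewer positive words.
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),
    "n_posts": rng.integers(1, 50, size=n),
})
df["pct_positive"] = 5.0 - 0.1 * df["treated"] + rng.normal(0, 1, size=n)

# Weighted least squares: weight each user by how much they posted.
X = sm.add_constant(df["treated"])
fit = sm.WLS(df["pct_positive"], X, weights=df["n_posts"]).fit()
print(fit.params["treated"], fit.pvalues["treated"])  # treatment effect
```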

Questions

  1. What if the posts contained images with text in them? What if they contained song tracks?
  2. The network structure is not taken into account. For instance, when does the emotional effect “die out”?


Reflection #8 – [02/20] – [Jamal A. Khan]

  1. Bond, Robert M., et al. “A 61-million-person experiment in social influence and political mobilization.”
  2. Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.”

Both of the assigned papers, though short, are pretty interesting. Both have to do with social contagion, showing how the behavior of people can propagate outwards.

In the first paper, the observation that bugs me is that people who were shown the informational message behaved in a manner unsettlingly similar to people who saw no message at all. The desire to attain social credibility alone cannot be the cause, because the difference in validated voting records between the control group and the informational message group is, practically speaking, non-existent. This leads to what I think might be a pretty interesting question: “do people generally lie on social media platforms to fit the norm and achieve acceptance from the community? Monkey see, monkey do?” A slight intuition, though controversial, might be that elections in the USA are highly celebritized, which might affect how voters behave on social media. Another important factor that I think was not controlled for by the authors is fake accounts, which may have a significant impact on the results. We have seen recently, in the US presidential election, how such bogus accounts can be used to influence elections.

The second paper was the more interesting of the two, and slightly worrying in a sense too. Taking the result just at face value: “is it possible to program the sentiments of crowds through targeted and doctored posts? If yes, how much can this impact important events such as presidential elections?”

Nevertheless, moving on to the content of the paper itself, I disagree with the authors' methodology of using just LIWC for analysis. While it may be a good tool, the results should have been cross-tested with other similar tools. Another thing to note is the division of posts into binary categories with the threshold being just a single positive or negative word. I feel that this choice of threshold is flawed and will fail to capture sarcasm or jokes. My suggested approach would have been to have three categories: negative, neutral/ambiguous, and positive (see the sketch below). The authors' choice of Poisson regression is also not well motivated: it implicitly assumes that per-user post counts follow a Poisson distribution, for which no evidence is provided, which leads me to believe the results might be artifacts of the fit and not actual observations. Finally, a single trial, in my opinion, is insufficient; multiple trials should have been conducted with sufficient gaps in between.
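A minimal sketch of that three-category suggestion, scoring by the net count of emotion words and leaving a neutral/ambiguous band in the middle; the word lists and margin are hypothetical:

```python
POSITIVE = {"happy", "glad", "love", "great"}
NEGATIVE = {"sad", "hurt", "angry", "awful"}

def label_post(text: str, margin: int = 1) -> str:
    """Label by net emotion-word count; posts whose positive and negative
    counts differ by less than `margin` stay neutral/ambiguous."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score >= margin:
        return "positive"
    if score <= -margin:
        return "negative"
    return "neutral/ambiguous"

print(label_post("so glad and happy today"))  # positive
print(label_post("glad my friends hurt me"))  # neutral/ambiguous
```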

Regardless of the approach adopted, building on the paper's result that when people see more positive posts their subsequent posts are more positive, and vice versa for negative posts, my question is: “people who share positive or negative posts after being exposed to the stimuli, are they actually more positive or more negative, respectively? Or is it again monkey see, monkey do, i.e., do they share posts similar to their entourage's in order to stay relevant?” I might be going off on a tangent, but it might also be interesting to observe the impact of age: “are younger people more susceptible to being influenced by online content, and does age act as a deterrent against that gullibility?”


Reflection #8 – [02/20] – [Hamza Manzoor]

[1]. Bond, Robert M., et al. “A 61-million-person experiment in social influence and political mobilization.” Nature 489.7415 (2012): 295.

[2]. Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences 111.24 (2014): 8788-8790.

Summaries:

Both these papers come from researchers at Facebook and focus on the influence of social networking platforms. In [1], Bond et al. run a large-scale experiment and show how social media can encourage users to vote. They show that social messages not only influence the users who receive them but also the users' friends, and friends of friends. They perform a randomized controlled trial of political mobilization messages on 61 million Facebook users during the 2010 U.S. congressional elections. They randomly assigned users to a 'social message' group (~60M), an 'informational message' group (~600k), or a control group (~600k). The 'social message' group was shown an “I Voted” button and a counter indicating how many other Facebook users had previously reported voting, along with pictures of six of their friends who had voted. The 'informational message' group was shown the button and counter without friends' pictures, and the control group was shown nothing. The authors discovered that the 'social message' group was more likely to click on the “I Voted” button. They also matched user profiles with public voting records and observed that users who received the social message were more likely to vote than the other two groups. They also measured the effect of friendship and found that the likelihood of voting increases if a close friend has voted.

In [2], Kramer et al. analyzed whether emotional states spread through social networks. They present a study showing that emotional states can be transferred to others via emotional contagion. The experiment manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed. They conducted two parallel experiments for positive and negative emotion and tested whether exposure to emotions led people to change their own posting behaviors, in particular whether exposure to emotional content led people to post content consistent with the exposure. The results show that emotions spread via contagion through a network and that the emotions expressed by friends on online social networks influence our own moods. In short, they found that when negativity was reduced, users posted more positive content, and vice versa.

Reflections:

Setting aside the ethical implications, which we have already discussed in class regarding the first paper, I believe these were very well-designed experiments, which makes me wonder: is there a way we can get Facebook data for analysis?

While reading the first paper, the thought I had in mind was that people must have clicked “I Voted” just for the sake of being socially relevant, so I was glad that the authors validated their findings with public voting records.

Even though I really liked the experimental design, I still have major concerns regarding the imbalance in sample size: 60 million versus 600k. Also, was that 600k sample diverse enough? I was also not convinced by their definition of close friends, that “higher levels of interaction indicate that friends are more likely to be physically proximate”. How can they claim this without any analysis? It is highly possible that I interact with someone just because I like his posts, even though I have never met him in real life. Furthermore, there can be many external factors that make a user go and vote. In general, though, I would agree with the findings: even though we cannot say for sure that these messages were the reason people went to vote, people generally want to be socially relevant. It would have been interesting to see whether conservatives or liberals were more influenced by these messages, though I believe there is no way to validate it. One potential research direction could be to characterize people on different traits and analyze whether certain types of people are more easily influenced than others.

In the second paper, the authors show that users posted more positive content when negativity was reduced. This finding is in conflict with many other studies showing that social media causes depression, and that seeing other people happy makes people feel their own life is “worthless” or not very happening. Secondly, Facebook posts are generally much longer than tweets, and characterizing them as positive or negative if they contain at least one positive or negative word is naive; the sentiment of the entire post should have been analyzed, which Google's API now does very efficiently (it might not have been available in 2014). Apart from these concerns, I thoroughly enjoyed reading both papers.


Reflection #8 – [02/20] – [Jiameng Pu]

Bond, Robert M., Christopher J. Fariss, Jason J. Jones, Adam DI Kramer, Cameron Marlow, Jaime E. Settle, and James H. Fowler. “A 61-million-person experiment in social influence and political mobilization.” Nature 489, no. 7415 (2012): 295.

Summary & Reflection:

Traditional human behavior spreads mainly through face-to-face social networks, in which it is difficult to measure social influence effects. Online social networks, however, offer a possible way to evaluate social interaction effects, and the paper conducts a randomized controlled trial of political mobilization messages to identify them. Since the act of voting in national elections is a typical behavior that spreads through networks, the paper conducted a randomized controlled trial with all Facebook users who accessed the website on the day of the 2010 US congressional elections. All users were randomly assigned to three groups: a 'social message' group, an 'informational message' group, or a control group. The results imply that it is not rigorous to say, as previous research suggested, that online messages do not work; that conclusion was possibly caused by small conventional sample sizes. Political mobilization messages can directly influence the political self-expression, information seeking, and real-world voting behavior of millions of people. In addition, the experiment measuring indirect effects that spread from person to person in the social network suggests that strong ties are instrumental for people's behavior in human social networks.

When measuring the direct effects of online mobilization by assigning users to three groups, they measure acts of political self-expression and information seeking. Personally, I don't think the whole experimental design is rigorous and valid enough, for several reasons. Firstly, there is a huge imbalance in the sample sizes of the social and informational message groups, i.e., 60,055,176 versus 611,044. I don't think such a massive difference can be ignored; for instance, how do we make sure the 611,044 users in group 2 are sufficient to represent the whole user community? Secondly, I'm not convinced that information seeking is a good indicator of people's political positivity. If a person clicks the “I voted” button, it is very likely that he or she will not click the polling-place link, precisely because they have already voted.

Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences 111, no. 24 (2014): 8788-8790.

Summary & Reflection:

Emotional states can be contagious, which leads people to experience the same emotions without their awareness. The paper tests whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed on Facebook. It turns out that emotions expressed by other users on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

For the experimental design, they use the Linguistic Inquiry and Word Count (LIWC) word-counting software to determine whether posts are positive or negative, which I don't think is sufficient. This amounts to sentiment analysis by only counting positive and negative words. What about positive sentences expressed through double negatives? It would classify sarcastic sentences as positive posts, but we all know that language is a pretty complex phenomenon. From my perspective, we might solve this problem by applying classic polarity-analysis models to the posts. Another concern of mine is whether people who post positive updates are actually happy in real life. There are many examples where the attitude people show on social media does not necessarily represent their real mood or life status; sometimes people even pretend to be positive. This raises another question about the definition of emotional contagion.


Reflection #8 – [02/20] – [John Wenskovitch]

This pair of papers explores how the behavior of users can propagate to other users through social media.  In the Bond et al. study, the authors measured the influence of voting information and social sharing across a Facebook friend network.  Users were assigned to one of three groups: a control group with no additional voting information, an informational message group with links to polling places and a global voting tally, and a social message group who received the same information as the informational message group plus profile pictures of friends who said they had voted.  The researchers found that both the informational message group and the social message group outperformed the control group in voting influence when measured only by clicks of the “I voted” button. When examining validated voting records, the difference between the social message and control groups persisted, while the difference between the informational message and control groups disappeared.  Further, the social message group greatly outperformed the informational message group under both measures.  In total, this experiment generated thousands of additional votes.  In the Kramer et al. study, the authors manipulated the news feeds of Facebook users to change the amount of positive-leaning or negative-leaning posts a user sees, and measured whether that user was influenced by the biased mood of their news feed.  The researchers found that emotionally biased news feeds are contagious, and that text-only communication of emotional state is possible; non-verbal cues are not necessary to influence mood.

I was glad to see that the Bond study validated their findings with public voting records, as it’s certainly reasonable to assume that a Facebook user might see many of their friends voting and click the “I voted” button as well for the social credibility.  It was certainly interesting to see the change in results, from a 2% boost between the social message group and control group when measuring button clicks vs. the 0.39% boost through voter record validation.  I also didn’t expect that the informational message would have no influence in the voting-validated data; I would expect at least some increase in voting rate, but that’s not what the researchers found.

I took some issue with the positive/negative measurement of posts in the Kramer study.  The authors noted that a post was determined to be positive or negative if they contained at least one positive or negative LIWC word.  However, this doesn’t seem to take into account things like sarcasm.  For example, “I’m so glad that my friends care about me” contains two words that I expect to be positive (“glad” and “care”), but the post itself could certainly be negative overall if the intent was sarcastic.  I would expect this to affect some posts; obviously not enough of them to change the statistical significance of their results, but the amount of sarcasm and cynicism that I see from friends on Facebook can often be overwhelming.  Could the authors have gotten even stronger results with a better model to gauge whether a post is positive or negative?

I had never heard of Poisson regression before reading the Kramer paper, so I decided to look into it a bit further.  I presume that the authors chose this regression model because they hypothesized (or knew) that Facebook users' post rates follow a Poisson distribution.  My understanding of the Poisson distribution is that it assumes the events being recorded are independent and occur at a constant rate; however, I feel that my own Facebook postings violate both of those assumptions.  My posts are often independent, but occasionally I'll post several things on the same theme (like calling for gun control following a school shooting) in rapid succession.  Further, I'll sometimes go a week or more without posting because of how busy my schedule is, whereas at other times I'll make multiple posts in a day.  My post distribution seems more bimodal than Poisson.  Can anyone fill the gap in my understanding of why the authors chose Poisson regression?
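One quick way to probe that doubt would be an overdispersion check: a Poisson variable's variance equals its mean, so per-user post counts whose variance is far above their mean would undercut the assumption. A minimal sketch with fabricated counts mimicking the bimodal pattern described above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fabricated weekly post counts: a quiet majority plus a bursty minority.
counts = np.concatenate([rng.poisson(1, 800), rng.poisson(15, 200)])

mean, var = counts.mean(), counts.var(ddof=1)
print(f"mean = {mean:.2f}, variance = {var:.2f}, ratio = {var / mean:.2f}")
# A ratio well above 1 signals overdispersion, i.e., a poor Poisson fit.
```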


Reflection #8 – 2/20 – Pratik Anand

Paper 1: A 61-million-person experiment in social influence and political mobilization

Paper 2: Experimental evidence of massive-scale emotional contagion through social networks

The two papers discuss two aspects of the same phenomenon, one specific to politics and the other general: the influence of online social ties on user behavior.

The first paper shows that social messages can cause a small-scale effect in political mobilization, and that they are more effective than purely informational messages.
The paper performs experiments on influencing people to vote on Election Day. The informational message pops up in a user's feed, stating that it is Election Day and asking whether they have voted. It also shows a link for finding the nearest polling booth, as well as the number of people who self-reported voting in that election.
Another variation of this message was a social message, where the user is shown photos of friends with whom the user has “close ties” who have self-reported voting in that election.
The study finds that the social message has a much higher impact on political mobilization. People are more likely to click on the link to find the nearest polling booth and to self-report as having voted. The authors also mention that this effect is only visible at a macroscopic level, and only when the pictures of close friends are shown in the message.

The paper raises many questions. First of all, self-reports alone cannot verify whether a user has actually voted. Also, there are enough external factors other than these Facebook messages that can make a user go and vote; it cannot be cleanly distinguished whether these messages were the factor behind a user's decision to vote.

The second paper takes a more general approach. It tries to identify whether people are positively or negatively influenced by the posts of their online friends on social networks. It finds that people post more positive posts if they are shown more positive posts from their friends, and likewise for negative posts. I liked its hypothesis a lot because it opens the door to new kinds of questions, beyond the general questions of generalizability, diversity, validity, etc. Given that the hypothesis is verified by other experiments too and is identified as genuine human behavior online, the new question is: what causes such behavior? Are the people who start posting more positive things under the influence of their social circle really happier, or are they just posting happy posts to stay relevant in that circle?
Another interesting question concerns negative posts. If depressing posts make other people depressed and suicidal, will social platforms like Facebook enforce some kind of negativity censorship? Would that be in alignment with freedom of speech and expression? These are very complex questions with no correct answers.


Reflection #8 – [2/20] – [Aparna Gupta]


  1. Bond, Robert M., et al. “A 61-million-person experiment in social influence and political mobilization.” Nature 489.7415 (2012): 295.
  2. Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences 111.24 (2014): 8788-8790.

Summary:

Both studies come from the Facebook Data Science team. The paper by Bond et al. shows that social messages, like users' posts, directly influence the political self-expression, information seeking, and real-world voting behavior of millions of people, and the paper by Kramer et al. tests whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed.

Reflection:

Paper 2: The objective behind Kramer et al.'s study of massive Facebook data was to show that emotional states can be transferred to others via emotional contagion, which leads people to experience the same emotions without their awareness. To evaluate this, the authors manipulated the extent to which people were exposed to emotional expression in their News Feed. They conducted the experiment on people who viewed Facebook in English; I wonder how the results would vary if people viewing Facebook in other languages were also considered. The experiments for positive and negative emotions were conducted in parallel. Experiment 1: exposure to friends' positive emotional content in the user's News Feed was reduced. Experiment 2: exposure to friends' negative emotional content was reduced. The authors considered a status update positive or negative if it contained at least one positive or negative word. However, I am not convinced by this technique of determining positive or negative updates; is it sufficient to classify posts like this without analyzing the sentiment of the entire text? A control condition was also introduced for each experiment, in which a similar proportion of posts in the user's News Feed was omitted entirely at random.

These experiments were conducted for one week (is a one-week study sufficient to analyze the results?), and participants (~155,000 per condition) were randomly selected from among those who posted at least one status update during the experimental period. This makes me wonder whether merely one status update is sufficient to identify the influence. In the end, the authors analyzed 3 million posts containing over 122 million words, 4 million of which were positive (3.6%) and 1.8 million negative (1.6%), and concluded that online messages influence users' experience of emotions, which may affect a variety of offline behaviors. To the best of my knowledge, Facebook these days is used more as a social show-off platform where users share posts about travel, food, success, new jobs, etc. How are such updates responsible for affecting offline behaviors, and what kinds of offline behaviors?

Paper 1: In this paper, Bond et al. have tried to analyze how the act of voting in national elections spreads through social networks. The authors' hypothesis is that political behavior can spread through an online social network, and to test it they conducted randomized controlled trials in which each user was assigned to one of three groups:

  1. Social message group (n = 60,055,176): users were shown a statement at the top of their News Feed, provided a link to find local polling places, shown a clickable 'I Voted' button and a counter indicating how many other Facebook users had previously reported voting, and displayed up to six small randomly selected profile pictures of the user's Facebook friends who had already clicked the 'I Voted' button.
  2. Informational message group (n = 611,044): users were shown the message, poll information, counter, and button, but no faces of friends.
  3. Control group (n = 613,096): users did not receive any message at the top of their News Feed.

There is a huge imbalance between the number of users in the social message group and the numbers in the informational message and control groups. The authors report that users who received the social message were 2.08% more likely to click on the 'I Voted' button than those who received the informational message, and 0.26% more likely to click the polling-place information link. Hence, online political mobilization can have a direct effect on political self-expression, information seeking, and real-world voting behavior. In my opinion, this paper would have been more interesting if the authors had included information about the models and techniques used to derive the results.
