Reflection #14 – [4/17] – Aparna Gupta

1. Lelkes, Y., Sood, G., & Iyengar, S. (2017). The hostile audience: The effect of access to broadband Internet on partisan affect. American Journal of Political Science, 61(1), 5-20.

This article examines the impact of broadband access on polarization by exploiting differences in broadband availability brought about by variation in state right-of-way (ROW) regulations. To measure an increase in partisan hostility, the authors merged state-level regulation data with county-level broadband penetration data and a large-N sample of survey data from 2004 to 2008. The data come from multiple sources: the right-of-way law data come from previous research by Beyer and Kende, the broadband access data come from the Federal Communications Commission (FCC), the partisan affect data come from the National Annenberg Election Studies (2004 and 2008), and the media consumption data come from comScore.

The study was interesting; however, I have a few concerns. Why have the authors considered only broadband penetration as a measure for analyzing partisan affect and polarization, in an era when people always have an internet connection on their mobile phones or tablets? And have they considered geographic locations where getting a high-speed internet connection is still an issue, as described in this article (https://arstechnica.com/information-technology/2017/06/50-million-us-homes-have-only-one-25mbps-internet-provider-or-none-at-all/)? Does this mean people in those areas are less polarized?

The authors also claim that access to broadband Internet boosts partisans' consumption of partisan media. I wonder whether this isn't quite obvious, since a person will tend to consume the news he or she is already inclined towards. There is a plethora of free information available on the Internet, ready to be consumed by people of any age group, political inclination, or region. Could there be a scenario where being exposed to the firehose of information (through broadband) influenced people to change their polarization?


Reflection #12 – 04/05 – [Aparna Gupta]

Felbo, Bjarke, et al. “Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm.” arXiv preprint arXiv:1708.00524 (2017).

Nguyen, Thin, et al. “Using linguistic and topic analysis to classify sub-groups of online depression communities.” Multimedia Tools and Applications 76.8 (2017): 10653-10676.

Reflection:

Felbo et al. used a raw dataset of 56.6 billion tweets, filtered down to 1.2 billion relevant tweets, to build a classifier that accurately classifies the emotional content of texts. In my opinion, this is one of the best-conceived and best-executed papers we have read so far. I was impressed by the size of the dataset the authors used, which obviously helped in building a better-predicting classifier. Although this paper spoke mostly about ML techniques, most of which I was unfamiliar with, I found their ‘chain-thaw’ transfer learning technique quite intriguing. It was also fascinating to read how this approach helped in avoiding possible overfitting. The authors have also built a website, ‘DeepMoji’, to demonstrate their model, which is available for anyone to use. The website gave a good sense of which words were given more weight when converting text to its equivalent emotion. There are certain users who write their messages using only emojis. Can this study be extended to actually interpret the context behind such messages?
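
The ‘chain-thaw’ schedule is easy to describe even without the paper's full training details: fine-tune the new output layer first, then each pretrained layer one at a time, and finally the whole network. Below is a minimal PyTorch-style sketch of that schedule, assuming a hypothetical `layers` list and a `train_one_pass` callback of my own; it is an illustration of the idea, not the authors' code.

```python
from torch import nn

def chain_thaw(model: nn.Module, layers, train_one_pass):
    """Sketch of chain-thaw fine-tuning: one layer at a time, then everything.

    `layers` is an ordered list of the model's sub-modules, with the new
    task-specific output layer last; `train_one_pass(model)` is assumed to
    run one fine-tuning pass over the target-task data.
    """
    # Stage 1: only the new output layer. Stages 2..n: each pretrained layer
    # individually, from input side to output side. Final stage: all layers.
    stages = [[layers[-1]]] + [[layer] for layer in layers[:-1]] + [list(layers)]
    for stage in stages:
        for p in model.parameters():
            p.requires_grad = False        # freeze everything
        for layer in stage:
            for p in layer.parameters():
                p.requires_grad = True     # thaw only this stage's layer(s)
        train_one_pass(model)
```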

Paper 2, by Nguyen et al., explores the textual cues of online communities interested in depression. For the study, the authors randomly selected 5,000 posts from 24 online communities and identified five subgroups: Depression, Bipolar Disorder, Self-Harm, Grief, and Suicide. To characterize these communities, psycholinguistic features and content topics were extracted and analyzed. The paper also uses ML techniques to build a classifier that separates depression from the other subgroups (a sketch of such a pipeline is given below). There are certain aspects I didn't like about this paper. The authors used a small dataset drawn from online forums: how did they handle possible bias, and how did they validate the authenticity of the posts? Do depressed people actually go online to discuss or look for solutions to their issues? Also, the reason for comparing depression with the other subgroups remains unclear. Aren't those subgroups a part of depression? I feel a disconnect between how the authors start by stating a problem and then diverge away from it.
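
Since the depression-vs-other-subgroups classifier is the methodological core here, a minimal sketch of what such a pipeline could look like follows. The feature matrix is random placeholder data standing in for psycholinguistic (LIWC-style) and topic features, and the L1-penalized logistic regression is my own choice for illustration, not necessarily the model the authors used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder per-post features: psycholinguistic category proportions plus
# topic proportions; in the paper these would come from LIWC-style analysis
# and topic modeling of the forum posts.
rng = np.random.default_rng(0)
X = rng.random((1000, 80))
y = rng.integers(0, 2, size=1000)   # 1 = depression community, 0 = other subgroup

# An L1 penalty doubles as a crude feature selector, highlighting which
# linguistic cues separate the depression posts from the other subgroups.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```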

Apart from these points, there are aspects I liked about this paper: Nguyen et al. implemented and compared results from various classifiers. One piece of future work I can imagine is this method being used by psychiatrists to detect the type and severity of depression a person is suffering from by analysing their posts or writing behaviour.


Reflection 11 – [Aparna Gupta]

[1] King, Gary, Jennifer Pan, and Margaret E. Roberts. “Reverse-engineering censorship in China: Randomized experimentation and participant observation.” Science 345.6199 (2014): 1251722.

[2] Hiruncharoenvate, Chaya, Zhiyuan Lin, and Eric Gilbert. “Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions.” ICWSM. 2015.

Reflection #1

Paper 1, by King et al., presents an interesting approach to reverse-engineering censorship in China. The experiment the authors performed looks more like a covert operation to analyze how censorship works in China: King et al. created accounts on various social media websites and submitted posts from them to see whether they got censored or not. The authors even created their own website and conducted interviews. Their approach was unique and interesting. However, I was not convinced about why the authors only considered posts submitted between 8 AM and 8 PM China time. What about content posted before 8 AM and after 8 PM? What I found interesting in the paper is the collective action hypothesis versus the state critique hypothesis. My unfamiliarity with the language was a major drawback in understanding parts of the analysis. The authors report that Chinese social media organizations hire an estimated 50,000–70,000 people to act as human censors, which is quite interesting and seems rather small given the number of internet users in China.

Reflection #2

Paper 2, by Hiruncharoenvate et al., presents a non-deterministic algorithm for generating homophones that create a large number of false positives for censors. They claim that homophone-transformed weibos posted to Sina Weibo remain on the site three times longer than their previously censored counterparts. The authors conducted two experiments: in the first, they posted original posts and homophone-transformed posts and found that although both were eventually deleted, the homophone-transformed posts stayed up three times longer; in the second, they showed that native Chinese speakers on AMT were able to understand the homophone-transformed weibos. I wonder how this homophone-transformation approach would work in other languages. The dataset used consists of 11 million weibos collected from FreeWeibo. Of all the social science papers we have read so far, I found this paper the most interesting and its approach well structured. It would be interesting to implement this approach in other languages as well.
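
The core trick of decoy homophones can be sketched with a toneless-pinyin lookup: group candidate characters by pronunciation and randomly swap each character of a blocked keyword for a same-sounding alternative. The sketch below uses the pypinyin library; the caller-supplied candidate vocabulary and the uniform random choice are my simplifications, not the authors' actual nondeterministic algorithm.

```python
import random
from collections import defaultdict

from pypinyin import lazy_pinyin   # converts Chinese characters to toneless pinyin

def build_homophone_index(vocabulary):
    """Group candidate characters by their toneless pinyin."""
    index = defaultdict(set)
    for ch in vocabulary:
        index[lazy_pinyin(ch)[0]].add(ch)
    return index

def homophone_transform(keyword, index):
    """Replace each character of a (hypothetically) blocked keyword with a
    randomly chosen homophone, when one exists, so repeated calls can yield
    different surface forms for the same underlying term."""
    out = []
    for ch in keyword:
        candidates = index[lazy_pinyin(ch)[0]] - {ch}
        out.append(random.choice(sorted(candidates)) if candidates else ch)
    return "".join(out)
```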


Reflection #9 – [02/22] – [Aparna Gupta]

Mitra, Tanushree, Scott Counts, and James W. Pennebaker. “Understanding Anti-Vaccination Attitudes in Social Media.” ICWSM. 2016.
De Choudhury, Munmun, et al. “Predicting depression via social media.” ICWSM 13 (2013): 1-10.
Reflection:

Paper 1, by Mitra et al., focuses on understanding anti-vaccination attitudes in social media. The authors collected over 3 million tweets from Twitter and compared and contrasted the users' linguistic styles, topics of interest, social characteristics, and underlying social-cognitive dimensions. They categorized users into three groups: anti-vaccine, pro-vaccine, and a joining-anti-vaccine cohort. Their analysis mainly examines individuals' overt expressions towards vaccination on a social media platform. The data collection process involved two main phases: phase 1 involved extracting a tweet sample from the Twitter Firehose stream between January 1 and 5, 2012 and classifying tweets based on 5 phrases; using these phrases, they then fetched more tweets spanning four calendar years. After data collection, the authors built a supervised classifier, using trigrams and hashtags as features, to label the collected posts as pro-vaccine or anti-vaccine, which achieved an accuracy of 84.7%. The authors then segregated users into the three groups: long-term advocates of pro- and anti-vaccination attitudes and new users adopting an anti-vaccination attitude. I really like the method Mitra et al. adopt to analyze the "what" aspect, i.e., the topics people generally talk about. The MEM topic modeling approach they implement looks quite convincing, and I wonder, as the authors suggest, how this study could be extended to other social media platforms, and whether it would produce similar results. I didn't find anything unconvincing in the paper; however, I wonder if the same approach can be applied to other domains apart from public health.
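
Since the tweet classifier is described only at a high level (trigrams and hashtags as features, supervised learning, 84.7% accuracy), here is a minimal sklearn sketch of that kind of pipeline. The example tweets, labels, and the choice of logistic regression are placeholders of mine, not details from the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny placeholder training set; the paper trains on labeled tweets.
tweets = [
    "vaccines cause harm, wake up #truth",
    "so glad I got my flu shot today #vaccineswork",
    "refusing the jab for my kids #freedom",
    "measles outbreak again, please vaccinate #publichealth",
]
labels = ["anti", "pro", "anti", "pro"]

clf = make_pipeline(
    # token_pattern keeps the leading '#', so hashtags survive as features,
    # and ngram_range=(1, 3) adds word bigrams and trigrams.
    CountVectorizer(ngram_range=(1, 3), token_pattern=r"#?\w+"),
    LogisticRegression(max_iter=1000),
)
clf.fit(tweets, labels)
print(clf.predict(["proud parent, kids fully vaccinated #vaccineswork"]))
```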

Paper 2, by De Choudhury et al., addresses depression, a serious challenge in personal and public health. The objective of the paper is to explore the potential of social media for detecting and diagnosing major depressive disorder (MDD) in individuals. The authors collected, via crowdsourcing, the tweets of users who report being diagnosed with clinical depression. I wonder how we can differentiate whether an individual posts depressing content on Twitter only to seek attention or is actually depressed. Their hypothesis is that "changes in language, activity, and social ties may be used jointly to construct statistical models to detect and even predict MDD in a fine-grained manner". Based on individuals' social media behavior, the authors derive measures such as user engagement and emotion, egocentric social graph, linguistic style, depressive language use, and mentions of antidepressant medications to quantify an individual's social media behavior. It was interesting that the authors conducted an auxiliary screening test in addition to the CES-D questionnaire to eliminate noisy responses. Although the authors did not explicitly indicate in the HITs that the two tests were depression screening questionnaires, I believe the CES-D questions are obvious enough for individuals to realize the questionnaire is related to depression. Hence, I am not quite sure this approach would have helped minimize the possible bias. In the prediction framework section, where the authors describe the models they implemented to build the classifier, it would have been helpful if they had reported the number of dimensions retained after dimensionality reduction (PCA).
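
On the PCA point: one reason the retained dimensionality matters is that it is easy to report, as the toy sklearn pipeline below shows. The behavioral feature matrix here is random placeholder data, and the scaler/SVM choices are mine; only the "PCA before a classifier" structure mirrors what the paper describes.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder per-user measures (engagement, emotion, linguistic style, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)      # 1 = depression-positive class

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),           # keep components explaining 95% of variance
    SVC(kernel="rbf"),
)
model.fit(X, y)
print("dimensions retained after PCA:", model.named_steps["pca"].n_components_)
```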

In the end, both papers present some quite interesting results. Reiterating what I mentioned earlier, I didn't find anything unconvincing in either paper and was quite impressed by both studies.


Reflection #8 – [2/20] – [Aparna Gupta]

Reflection #8

  1. Bond, Robert M., et al. “A 61-million-person experiment in social influence and political mobilization.” Nature 489.7415 (2012): 295.
  2. Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences 111.24 (2014): 8788-8790.

Summary:

Both studies come from the Facebook Data Science team. The paper by Bond et al. shows that social messages shown in users' feeds directly influence political self-expression, information seeking, and the real-world voting behaviour of millions of people, while the paper by Kramer et al. tests whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed.

Reflection:

Paper 2: The objective of Kramer et al.'s study on massive Facebook data was to show that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. To evaluate this, the authors tested whether posts with emotional content are more engaging by manipulating the extent to which people were exposed to emotional expressions in their News Feed. They conducted the experiment on people who viewed Facebook in English. I wonder how the results would vary if people viewing Facebook in other languages were also considered. The experiments for positive and negative emotions were conducted in parallel. Experiment 1: exposure to friends' positive emotional content in the user's News Feed was reduced. Experiment 2: exposure to friends' negative emotional content was reduced. The authors considered a status update to be positive or negative if it contained at least one positive or negative word. However, I am not convinced by this technique for determining positive or negative updates: is it sufficient to classify posts like this without analyzing the sentiment of the entire text? A control condition was also introduced for each experiment, in which a similar proportion of posts in the user's News Feed was omitted entirely at random. The experiments ran for one week (is a one-week study sufficient to analyze the results?), and participants (~155,000 per condition) were randomly selected based on whether they posted at least one post during the experimental period. This makes me wonder whether merely one status update is sufficient to identify the influence. In the end, the authors analyzed 3 million posts containing over 122 million words, 4 million of which were positive (3.6%) and 1.8 million negative (1.6%), and concluded that online messages influence users' experience of emotions, which may affect a variety of offline behaviors. To the best of my knowledge, Facebook these days is used more as a social show-off platform where users share posts related to travel, food, success, new jobs, etc. How are such updates responsible for affecting offline behaviors, and what kinds of offline behaviors?
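
To make the coarseness of that rule concrete, here is a toy version of word-presence labeling. The word lists are made up for illustration; the actual study used the much larger LIWC dictionaries.

```python
# Tiny stand-ins for LIWC's positive/negative emotion word lists.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def label_post(text):
    """Label a post positive/negative if it contains at least one word from
    the corresponding list; a post can receive both labels at once."""
    words = set(text.lower().split())
    labels = set()
    if words & POSITIVE:
        labels.add("positive")
    if words & NEGATIVE:
        labels.add("negative")
    return labels or {"neutral"}

print(label_post("I love this city but the traffic is terrible"))
# -> {'positive', 'negative'}: exactly the coarseness questioned above
```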

Paper 1: In this paper, Bond et al. analyze the spread of the act of voting in national elections through social networks. The authors' hypothesis is that political behavior can spread through an online social network, and to test it they conducted a randomized controlled trial in which users were assigned to one of three groups. (1) A social message group (n = 60,055,176): these users were shown a statement at the top of their News Feed, provided a link to find local polling places, shown a clickable 'I Voted' button, shown a counter indicating how many other Facebook users had previously reported voting, and displayed up to six randomly selected profile pictures of the user's Facebook friends who had already clicked the 'I Voted' button. (2) An informational message group (n = 611,044): these users were shown the message, poll information, counter, and button, but not the faces of any friends. (3) A control group (n = 613,096) that did not receive any message at the top of their News Feed. There is a huge imbalance between the number of users in the social message group and the numbers in the informational message and control groups. The authors report that users who received the social message were 2.08% more likely to click the 'I Voted' button than those who received the informational message, and 0.26% more likely to click the polling-place information link. Hence, online political mobilization can have a direct effect on political self-expression, information seeking, and real-world voting behavior. In my opinion, this paper would have been more interesting if the authors had included information about the models and techniques used to derive the results.


Reflection #7 – [02/13] – Aparna Gupta

Niculae, Vlad, et al. “Linguistic harbingers of betrayal: A case study on an online strategy game.” arXiv preprint arXiv:1506.04744 (2015).

This research paper explores linguistic cues in ‘Diplomacy’, a strategy game in which players form alliances and break them through betrayal. The authors try to predict a coming betrayal based on the following attributes: positive sentiment, politeness, and structured discourse. However, in my opinion there can be other factors, like a player's body language and facial expressions, which could also signal a possible betrayal.

The authors collected data from two online platforms; the dataset comprises 145k messages from 249 games. Diplomacy is unique in that all players submit their written orders and these orders are executed simultaneously; there is no randomness. Hence, the outcome depends only on the players' communication, cooperation, and movements.

In Section 3 of the paper, the authors discuss relationships and stability and how interactions within the game define the relationships between players. They use external tools for sentiment analysis and politeness classification, and they build a binary classifier to predict whether a player is going to betray another player. Such computations might give satisfactory results in a game scenario; however, they cannot easily be extended to real-life scenarios.
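
As a concrete picture of what a binary classifier over conversational cues could look like, here is a sketch with made-up per-friendship features. The feature names, the random data, and the choice of logistic regression are all placeholders of mine rather than the authors' actual features or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-friendship features of the kind discussed in the paper:
# sentiment, politeness, and structured-discourse markers in the messages.
features = ["positive_sentiment", "politeness", "planning_markers", "claims"]
rng = np.random.default_rng(7)
X = rng.random((400, len(features)))
y = rng.integers(0, 2, size=400)       # 1 = the alliance ends in betrayal

clf = LogisticRegression().fit(X, y)
for name, coef in zip(features, clf.coef_[0]):
    print(f"{name}: {coef:+.2f}")      # sign hints at the direction of each cue
```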

In the end, the paper explores relationships in a war-based strategy game, which doesn't quite map onto the real world and therefore feels somewhat unrealistic.


Reflection #6 – [02/08] – Aparna Gupta

  1. Danescu-Niculescu-Mizil, C., Sudhof, M., Jurafsky, D., Leskovec, J., & Potts, C. (2013) “A computational approach to politeness with application to social factors”.
  2. Voigt, R., Camp, N. P., Prabhakaran, V., Hamilton, W. L., Hetey, R. C., Griffiths, C. M., … & Eberhardt, J. L. (2017) “Language from police body camera footage shows racial disparities in officer respect”.

Danescu-Niculescu-Mizil et al. propose a computational approach to politeness with application to social factors. They build a computational framework to study the relationship between politeness and social power, showing how people's behaviour changes once they are elevated. The authors built a new corpus with data from two online communities, Wikipedia and Stack Exchange, and used Amazon Mechanical Turk to label over 10,000 utterances. The Wikipedia data was used to train the politeness classifier, whereas the Stack Exchange data was used to test it. The authors constructed a politeness classifier with a wide range of domain-independent lexical, sentiment, and dependency features, and present a comparison between two classifiers: a bag-of-words classifier and a linguistically informed classifier. The classifiers were evaluated in both in-domain and cross-domain settings. Looking at the cross-domain results, I wonder whether the politeness classifier would give the same or better results on a corpus from a different domain. Their results show a significant relationship between politeness and social power: polite Wikipedia editors, once elevated, become less polite, and Stack Exchange users at the top of the reputation scale are less polite than those at the bottom. However, it would be interesting to identify a common, domain-independent feature list that could classify polite and impolite requests, posts, or replies in any corpus.
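
As a point of reference for the bag-of-words baseline, a minimal politeness classifier can be put together in a few lines. The example requests and labels below are invented, and the TF-IDF plus logistic regression setup is a generic stand-in rather than the authors' linguistically informed model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented requests with binary politeness labels (1 = polite, 0 = impolite);
# the real corpus has over 10,000 annotated Wikipedia/Stack Exchange requests.
requests = [
    "Could you please take a look at this edit when you get a chance?",
    "Fix this now, it is obviously wrong.",
    "Would you mind clarifying what you meant here? Thanks!",
    "Why did you revert my change without asking?",
]
labels = [1, 0, 1, 0]

bow_clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
bow_clf.fit(requests, labels)
print(bow_clf.predict(["Please could you double-check this diff?"]))
```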

Voigt et al. also propose computational linguistic methods, in their case to automatically extract levels of respect and politeness from transcripts. The paper examines racial disparities in how police officers speak to black and white community members during traffic stops. The data comes from transcribed body camera footage of vehicle stops of white and black community members conducted by the Oakland Police Department during April 2014. Since the officers were made to wear the cameras and record their own footage, would they still show racial disparity? Could there be other factors behind it? I really like the approach and the three studies the authors conduct: Perceptions of Officer Treatment from Language, Linguistic Correlates of Respect, and Racial Disparities in Respect. However, I wonder whether the results would be the same if a similar study were conducted in different cities (ones that report low or high racial disparities).


Reflection #5 – [02/06] – Aparna Gupta

  1. Garrett, R. Kelly. “Echo chambers online?: Politically motivated selective exposure among Internet news users.” Journal of Computer-Mediated Communication 14.2 (2009): 265-285.
  2. Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. “Exposure to ideologically diverse news and opinion on Facebook.” Science 348.6239 (2015): 1130-1132.

Both papers talk about how exposure to news and civic information increasingly happens through online social networks and personalization. The emphasis is on how this is leading to an era of “echo chambers”, where people read only the news or information that favors their ideology and opinions.

Garrett demonstrates that opinion-reinforcing information promotes news story exposure, while opinion-challenging information makes exposure only marginally less likely. He conducted a controlled study in which participants were presented with news content and a questionnaire. However, I am not convinced by the fact that participants were presented with the kind of news or information they already hold strong opinions about; this could have introduced bias into the conclusions drawn from the study. Although the paper presents some interesting findings about opinion-reinforcing and opinion-challenging content and how readers perceive information when presented with such content, I had trouble connecting the claims with the findings the author reports. Also, the study revolves around three issues, gay marriage, social security reform, and civil liberties, which were current topics in 2004. Does this mean the results won't generalize to other topics? Across all the papers we have read so far, generalizing results to other genres and geographic locations looks like a major roadblock.

Bakshy et al. use de-identified data to examine how 10.1 million Facebook users interact with socially shared news. Their focus is on identifying how heterogeneous friendships could potentially expose individuals to cross-cutting content. Apart from “echo chambers”, the authors also talk about “filter bubbles”, in which content is selected by algorithms according to a viewer's previous behavior. I like the quantitative analysis the authors present to compare and quantify the extent to which individuals encounter more or less diverse content while interacting via Facebook's algorithmically ranked News Feed. Beyond this, in my opinion questions like “how likely is it that an individual will share a cross-cutting post with his or her friends?” and “what if an individual doesn't click on the link containing a cross-cutting post?” should also be considered.

In the end, it makes me wonder what the results would look like if the authors of both papers had conducted their studies on individuals outside the US.


Reflection #4 – [1/30] – Aparna Gupta

Reflection 4:

  1. Garrett, R. Kelly, and Brian E. Weeks. “The promise and peril of real-time corrections to political misperceptions.” Proceedings of the 2013 conference on Computer supported cooperative work. ACM, 2013.
  2. Mitra, Tanushree, Graham P. Wright, and Eric Gilbert. “A parsimonious language model of social media credibility across disparate events.” Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. ACM, 2017.

Summary:

Both papers deal with the credibility of content posted on social media sites like Twitter. Mitra et al. present a parsimonious model that maps language cues to perceived levels of credibility, and their results show that certain linguistic categories and their associated phrases are strong predictors of perceived credibility across disparate social media events; their dataset contains 1,377 real-world events. Garrett et al., meanwhile, present a study comparing the effects of real-time corrections with corrections presented after a short distractor task.

Reflection:

Both papers present interesting findings on how the credibility of information on the world wide web can be interpreted.

In the first paper, Garrett et al. show how political facts and information can be misstated. According to them, real-time corrections are better than corrections made after a delay. I feel this is true to a certain extent, since a user hardly ever revisits an already-read post; if corrections are made in real time, the reader immediately understands that a mistake has been corrected and that credible information has now been posted. However, I feel that the experiment about what users perceive (1. when provided with an inaccurate statement and no correction, 2. when provided a correction after a delay, and 3. when provided with messages in which disputed information is highlighted and accompanied by a correction) can be biased by the user's interest in the content.

An interesting part of this paper was the listing of various tools (Truthy, Videolyzer, etc.) that can be used to identify and highlight inaccurate phrases.

In the second paper, Mitra et al. try to map language cues to perceived levels of credibility. They target a problem which is now quite prevalent: since the world wide web is open to everyone, people have the freedom to post any content without caring about the credibility of the information being posted. For example, there have been times when I have come across the same information (in the exact same words) posted by multiple users, which makes me wonder about the authenticity of the content and raises doubts about its credibility. I really liked the approach the authors adopt to identify expressions which lead to low or high perceived credibility of content. However, the authors focus on perceived credibility in this paper. Can “perceived” credibility be considered the same as the “actual” credibility of the information? And how can any bias be eliminated, if there is any? I feel these are more psychology- and theory-based questions and extremely difficult to quantify.
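
A rough sense of what "mapping language cues to perceived credibility" involves can be conveyed with a small penalized-regression sketch. The cue counts and binary credibility label below are random placeholders, and the model only loosely imitates the paper's approach, which works with ordinal credibility levels and a much richer set of lexicon-based cues.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder lexicon-based cue counts per event (hedges, boosters, emotion
# and anxiety words) with a binary stand-in for perceived credibility.
cues = ["hedges", "boosters", "positive_emotion", "negative_emotion", "anxiety"]
rng = np.random.default_rng(1)
X = rng.poisson(3, size=(500, len(cues)))
y = rng.integers(0, 2, size=500)            # 1 = perceived as credible

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
for cue, coef in zip(cues, model.coef_[0]):
    print(f"{cue}: {coef:+.3f}")            # sign suggests the cue's association
```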

In conclusion, I found both papers very intriguing. They present a compelling combination of human psychology and concrete problems, and show how these can be addressed using statistical models.


Reflection #3 – [1/25] – Aparna Gupta

Paper: Antisocial Behavior in Online Discussion Communities

Summary:

This paper characterizes antisocial behavior, which includes trolling, flaming, and griefing, in online communities. For this study the authors focus on CNN.com, Breitbart.com, and IGN.com. They present a retrospective longitudinal analysis to quantify antisocial behaviour over an individual user's tenure in a community. They divide users into two groups, Future-Banned Users (FBUs) and Never-Banned Users (NBUs), and compare the language and frequency of their posts.

Reflection:

The paper ‘Antisocial Behavior in Online Discussion Communities’ focuses on detecting antisocial users at an early stage by evaluating their posts. The results are based on four classes of features: post content, user activity, community response, and the actions of community moderators. In my opinion, “what leads an ordinary person to exhibit trolling behaviour” should also have been considered as a contributing factor.

For example, in communities or forums where political discussions are held, comments expressing strong opinions are bound to appear. I therefore feel that “what is considered anti-social depends on the particular community and the topic around which the respective community is formed” [1].

What struck me was that there can be scenarios where the discussion context determines an individual's trolling behaviour. The ‘Readability Index’ measure the authors considered, however, looked promising.
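
For readers unfamiliar with readability indices, one common instance is the Automated Readability Index, which scores text from average word and sentence length. The implementation below is my own illustration of such a measure, not necessarily the exact formula or tokenization the authors used.

```python
import re

def automated_readability_index(text):
    """Automated Readability Index: 4.71*(chars/words) + 0.5*(words/sentences) - 21.43."""
    words = re.findall(r"[A-Za-z0-9']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    chars = sum(len(w) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / sentences - 21.43

print(automated_readability_index("This is a short, readable post. It has two sentences."))
```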

In the data preparation stage, to measure “undesired behaviour”, the authors state that “at the user-level, bans are similarly strong indicators of antisocial behaviour”. But how does a user getting banned from an online community determine antisocial behaviour? For example, a user could get banned from Stack Overflow simply because all of the questions they posted were out of scope.

The paper largely revolves around two hypotheses the authors state to explain an increase in the post deletion rate: H1, a decrease in posting quality, and H2, an increase in community bias. To test H1 and H2, the authors conduct two studies. 1. Do writers write worse over time? This is a reasonable question, since one can analyse how a user's writing changes over time. 2. Does community tolerance change over time? According to the results the authors present, this indeed looks true; however, in my opinion it also depends on how opinions and comments are perceived by other members of the community.

On a closing note, the paper presents some interesting findings about how to identify trolls and ban them at a very early stage.

[1] https://www.dailydot.com/debug/algorithm-finds-internet-trolls/
