Reflection #5 – [02/06] – [Ashish Baghudana]

Garrett, R. Kelly. “Echo chambers online?: Politically motivated selective exposure among Internet news users.” Journal of Computer-Mediated Communication 14.2 (2009): 265-285.
Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. “Exposure to ideologically diverse news and opinion on Facebook.” Science 348.6239 (2015): 1130-1132.

Summary

Both papers address echo chambers on the Internet, and specifically on social media websites such as Facebook. The fundamental premise the authors build on is that likes attract and opposites repel. Garrett identifies selective exposure as the main driver behind people's news choices on the Internet. Algorithms on social media websites and in recommender systems suggest similar content to users, driving them toward an extreme: existing opinions are reinforced, while differing opinions are neither read nor recommended.

However, the papers differ in their methods. Garrett conducts a small, specific user behavior-tracking study (N = 727), with users recruited from readers of two news websites – AlterNet (left-leaning) and WorldNetDaily (right-leaning). Since readers already visit these websites, ground truth about their political affiliation is assumed; once these users sign up for the study, their news-reading behavior is tracked. Bakshy et al. perform a similar analysis on a significantly larger scale: 10.1 million active Facebook users who self-report their political affiliations. Their evaluation involves sophisticated data collection covering ~3.8 billion potential exposures, 903 million exposures, and 59 million clicks. The paper by Bakshy et al. is very dense in content, and I referred to [1, 2] for further explanation.

Both papers conclude by confirming our suspicions about content preference among users – they spend more time reading opinion-reinforcing articles than opinion-challenging ones.

Reflections

Garrett's paper, though published in 2009, uses data collected in 2005. This was a time before global social media websites like Facebook and Twitter were prevalent. At the time, recruiting from left-leaning and right-leaning websites was the best available means of establishing ground truth. However, this mechanism of classification feels naïve. It is possible that certain users merely stumbled upon the website or participated in the survey for the reward. Equally importantly, the sample is not truly reflective of the American population, as a large share of people may seek news from less partisan sources.

One of the undercurrents of the “Echo chambers online?” paper is the role of the Internet in making these biases more pronounced. However, the study does not speak to, or attempt to measure, users’ preferences before the advent of the Internet. Would the same citizenry have bought partisan newspapers, or is this behavior specific to news consumption on the Internet?

Bakshy et al.’s paper is considerably more recent (2015). While it evaluates many of the same questions as Garrett’s paper, the methodology and mechanism are fundamentally different, as is the time period, so comparing the two papers feels a little unfair. Facebook and Twitter are social platforms and, in that sense, very different from news websites. These are platforms where you do not fully choose the content you see: what is served to you is an amalgamation of what your friends share, ordered by a ranking algorithm. However, a distinction must be made between a website like Facebook and one like Twitter. The authors themselves highlight an important point:

Facebook ties primarily reflect many different offline social contexts: school, family, social activities, and work, which have been found to be fertile ground for fostering cross-cutting social ties.

Therefore, it is substantially more likely that a user will encounter an opinion-challenging article. However, interaction is sometimes poorly defined, because there is no real way of knowing whether a user merely looked at the article’s summary without clicking on it. Hence, tracking exposure is tricky and remains an avenue for further research.

Questions

  1. Almost 10 years later, especially after the wave of nationalism across the world, is there more polarization of opinion on the Internet?
  2. Is polarization an Internet phenomenon or are we measuring it just because most content is now served digitally? Was this true back in 2005?
  3. Can and should recommendation algorithms have individual settings to allow users to modify their feed and allow more diverse content?

References

[1] https://solomonmessing.wordpress.com/2015/05/24/exposure-to-ideologically-diverse-news-and-opinion-future-research/

[2] https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/LDJ7MS
