Reflection #5 – [09/10] – [Shruti Phadke]

Paper 1: Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. “Exposure to ideologically diverse news and opinion on Facebook.” Science 348.6239 (2015): 1130-1132.

Paper 2: Eslami, Motahhare, et al. “I always assumed that I wasn’t really that close to [her]: Reasoning about Invisible Algorithms in News Feeds.” Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2015.

Algorithmic bias and influence on social networks is a growing research area. Algorithms can play an important role in shifting the tide of online opinions and public policies. Both Bakshy et al.’s and Eslami et al.’s papers discuss the effects of peer and algorithmic influence on social media users. Seeing an ideologically similar feed, as well as a feed curated from past interactions, can push users toward extremist views and associations online. Opinions in such “echo chambers” can spread unchallenged within a network of friends and can range from harmless stereotypes to radical extremism. This exposure bias is not limited to posts; it extends to comments as well. In any popular thread, the default setting shows only the comments that were either made by friends or are already popular, as the sketch below illustrates.
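
To make this exposure bias concrete, here is a minimal, purely illustrative sketch of a “friends or popular” default comment filter. It is not Facebook’s actual ranking; the Comment structure, the popularity_threshold parameter, and the sample data are my own assumptions.

```python
# Illustrative sketch only: a toy "friends or popular" comment filter.
# This is NOT Facebook's algorithm; names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    likes: int
    text: str

def default_visible_comments(comments, viewer_friends, popularity_threshold=50):
    """Return only comments made by friends or that are already popular,
    mimicking a default view that hides everything else."""
    return [
        c for c in comments
        if c.author in viewer_friends or c.likes >= popularity_threshold
    ]

comments = [
    Comment("alice", 3, "I disagree, here is another perspective..."),
    Comment("bob", 120, "Totally agree!"),
    Comment("carol", 5, "Great point."),
]
friends = {"carol"}

# Only bob (popular) and carol (friend) survive the default filter;
# alice's dissenting comment stays hidden unless the viewer expands the thread.
print(default_visible_comments(comments, friends))
```

In this toy example, the dissenting comment from a non-friend with few likes never appears under the default view, which is exactly the kind of silent filtering both papers are concerned with.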

Eslami et al.’s work shows how exposing users to the algorithm can potentially improve the quality of online interaction. Having over 1,000 friends on Facebook, I rarely see stories from most of them. While Eslami et al. present insightful qualitative research on how users perceive the difference between “all stories” and “shown stories,” and on how that awareness shapes their future choices, I believe the study is limited in both the number of participants and the variety of user behaviors it captures. To test how universal this phenomenon is, a larger group of users should be observed, with behaviors varying in frequency of access, posting activity, lurking, and promotional agendas. Such a study could be performed with AMT. Even though that would restrict open coding and detailed accounts, this paper can serve as the basis for a more constrained and precisely defined questionnaire, which in turn can support quantitative analysis.

Bakshy et al.’s work, on the other hand, ties political polarization in online communities to the choices users themselves have made. It is worth examining the limitations of their data labeling process and content selection. For example, they selected only users who volunteer their political affiliation on Facebook. Users who volunteer this information might not represent the average Facebook population. A better classification of such users could have been obtained by applying text classification to their posts rather than relying on their proclaimed political affiliation, as sketched below. A further reason to avoid the self-reported political status is that many users may carry a political label because of peer pressure or the negative stigma attached to their favored ideology within their “friend” circle.
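
As a hedged illustration of that alternative labeling approach, here is a small sketch of a baseline text classifier (TF-IDF features plus logistic regression in scikit-learn). The posts and labels below are invented for demonstration; a real study would need a large, carefully labeled corpus and validation against ground truth.

```python
# Illustrative sketch only: inferring political leaning from post text
# instead of relying on self-declared affiliation. The tiny dataset and
# labels are made up; this is a baseline, not the authors' method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "We need stronger environmental regulation and universal healthcare.",
    "Lower taxes and smaller government will fix the economy.",
    "Climate action now, protect workers' rights.",
    "Protect gun rights and secure the border.",
]
labels = ["liberal", "conservative", "liberal", "conservative"]  # hypothetical labels

# TF-IDF features + logistic regression as a simple baseline classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Predict the leaning of a new, unlabeled post.
print(model.predict(["Healthcare should be a universal right."]))
```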

Finally, even though exposure to ideologically similar or algorithmically curated content may be harmful or misleading, correcting it raises another question: how much invasion of data privacy is acceptable in order to de-bias the feed on your timeline? Consciously building algorithms that surface cross-cutting content can require knowing more about a user than they intend to share. The problem of algorithmic influence should be approached with caution and with better legal policies.
