Reflection #5 – [09/11] – [Prerna Juneja]

Paper 1: Exposure to ideologically diverse news and opinion on Facebook

Summary:

In this paper the authors examine the claim that our tendency to associate with like-minded people traps us in echo chambers; the central premise is that “like attracts like”. The authors conduct a study on a data set of 10.1 million active U.S. users who reported their ideological affiliation and 7 million distinct URLs shared by them. They find that individuals’ own choices about what to click reduce their exposure to cross-cutting content, relative to ideologically consistent content, by 17% for conservatives and 6% for liberals. After algorithmic ranking there is also somewhat less cross-cutting news, since the ranking algorithm takes into account how the user interacts with friends as well as their previous clicks.
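To make the stages concrete, here is a minimal Python sketch of how a percentage reduction in cross-cutting content between two stages (say, what the algorithm showed versus what was clicked) could be computed. This is not the paper’s actual pipeline; the log format, stage names, and toy data are assumptions for illustration only.

```python
# Minimal sketch (not the paper's actual pipeline) of computing the drop in
# cross-cutting share between stages of the exposure process.
# Assumed record format: (user_ideology, story_ideology, stage), where stage is
# "potential" (shared by friends), "exposed" (shown by the ranking algorithm),
# or "clicked".

from collections import Counter

def cross_cutting_share(records, user_side, stage):
    """Fraction of a user group's stories at a given stage that oppose their ideology."""
    counts = Counter()
    for user_ideology, story_ideology, s in records:
        if user_ideology == user_side and s == stage:
            counts["cross" if story_ideology != user_side else "consistent"] += 1
    total = counts["cross"] + counts["consistent"]
    return counts["cross"] / total if total else 0.0

def reduction(records, user_side, before_stage, after_stage):
    """Percent drop in cross-cutting share between two stages (e.g. exposed -> clicked)."""
    before = cross_cutting_share(records, user_side, before_stage)
    after = cross_cutting_share(records, user_side, after_stage)
    return 100 * (before - after) / before if before else 0.0

# Toy example: conservatives' click choices cut the cross-cutting share in half.
log = [
    ("conservative", "liberal", "exposed"), ("conservative", "conservative", "exposed"),
    ("conservative", "liberal", "exposed"), ("conservative", "conservative", "exposed"),
    ("conservative", "conservative", "clicked"), ("conservative", "liberal", "clicked"),
    ("conservative", "conservative", "clicked"), ("conservative", "conservative", "clicked"),
]
print(reduction(log, "conservative", "exposed", "clicked"))  # 50.0 on this toy data
```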

Reflection:

Out of the 7 million URLs, only 7% were found to be hard content (politics, news, etc.). This suggests that Facebook is used more for sharing personal content. Since we don’t know the affiliation of all of a user’s friends, it’s difficult to say whether Facebook friendships are based on shared political ideology. A similar study should be conducted on platforms where people share more hard content, probably Twitter, or perhaps on Google search history. The combined results would give better insight into whether or not people associate with others of similar political ideology on online platforms.

We could conduct a study to find out how adaptive and intelligent Facebook’s news feed algorithm is by having a group of people who have declared their political ideology start liking, clicking, and sharing articles of the opposing ideology, both in support and in disapproval. We would then compare the news feed before and after to see whether the ranking of news articles changes (a rough sketch of such a comparison follows below). Does the algorithm figure out whether the content was shared to show support or to denounce the piece, and modify the feed accordingly?
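As a rough illustration of that before/after comparison, assume we could snapshot a participant’s feed as an ordered list of stories labeled cross-cutting or consistent (both the snapshotting and the labels are hypothetical, not something either paper provides). One simple metric is how far cross-cutting stories move up the feed after the intervention:

```python
# Hypothetical before/after feed comparison. A "feed snapshot" is an ordered
# list of (url, is_cross_cutting) pairs, top of feed first. A positive rank
# shift means cross-cutting items moved up the feed after the intervention,
# suggesting the algorithm adapted to the new clicking/sharing behaviour.

def mean_cross_cutting_rank(feed):
    """Average 1-based position of cross-cutting items in a feed snapshot."""
    ranks = [i + 1 for i, (_, cross) in enumerate(feed) if cross]
    return sum(ranks) / len(ranks) if ranks else float("inf")

def rank_shift(feed_before, feed_after):
    """Positive value = cross-cutting items rank higher after the intervention."""
    return mean_cross_cutting_rank(feed_before) - mean_cross_cutting_rank(feed_after)

# Toy example: cross-cutting stories move from the bottom toward the top.
before = [("a", False), ("b", False), ("c", True), ("d", True)]
after  = [("c", True), ("a", False), ("d", True), ("b", False)]
print(rank_shift(before, after))  # 3.5 - 2.0 = 1.5
```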

I wonder whether users are actually interested in getting access to cross-cutting content. A longitudinal study could be conducted in which users are shown balanced news (half supporting their ideology and half opposing it) and their click patterns are tracked over a few months: do they click on more cross-cutting content, or, in the extreme case, do they change their political ideology? (A sketch of how the click-pattern trend could be tracked follows below.) This kind of study would show whether people really care about being trapped in an echo chamber. If not, then we certainly can’t blame Facebook’s algorithms.
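One way that click-pattern trend could be tracked is sketched below, under the assumption that each click is logged with a month index and a cross-cutting label; the log format and the least-squares trend are illustrative choices, not anything proposed in the paper.

```python
# Hypothetical longitudinal tracking of click behaviour in the balanced-feed study.
# Each click is recorded as (month_index, is_cross_cutting).

from collections import defaultdict

def monthly_cross_cutting_fraction(clicks):
    """Per-month fraction of clicks that went to cross-cutting articles."""
    totals, cross = defaultdict(int), defaultdict(int)
    for month, is_cross in clicks:
        totals[month] += 1
        cross[month] += int(is_cross)
    return {m: cross[m] / totals[m] for m in sorted(totals)}

def trend(fractions):
    """Least-squares slope of the monthly fractions: positive = growing
    appetite for cross-cutting content over the course of the study."""
    xs, ys = list(fractions.keys()), list(fractions.values())
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den if den else 0.0

# Toy data: the cross-cutting click fraction rises over three months.
clicks = [(0, False), (0, False), (0, True), (1, True), (1, False), (2, True), (2, True)]
fractions = monthly_cross_cutting_fraction(clicks)
print(fractions, trend(fractions))
```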

This study is not generalizable. It was conducted on a young population, specifically those who chose to reveal their political ideology. Similar studies should be performed in different countries with users from different demographics. The paper also doesn’t say much about users who are politically neutral: how are political articles ranked in their news feeds?

This kind of study will probably not hold for soft content. People usually don’t hold extreme views about soft content like music, sports, etc.

Paper 2: “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds

Summary:

In this paper the authors study whether users should be made aware of the presence of a news feed curation algorithm and how that insight affects their subsequent experience. They conduct a study in which 40 Facebook users use FeedVis, a system that conveys the difference between the algorithmically altered and unadulterated news feed. More than half of the participants were not aware of the algorithm’s existence and were initially angry. They were upset that they couldn’t see close friends and family members in their feed and had attributed this to those friends’ decisions to either deactivate their accounts or to exclude them. Following up with the participants a few months later revealed that awareness of the algorithm led them to engage more actively with Facebook.

Reflection:

In the paper “Experimental evidence of massive-scale emotional contagion through social networks”, the authors conducted a scientific study of “emotional contagion”. The results showed that displaying fewer positive updates in people’s feeds causes them to post fewer positive and more negative messages of their own. That’s how powerful Facebook’s algorithms can be!

In this paper the authors try to answer two important questions: should users be made aware of the presence of algorithms in their daily digital lives, and how will this insight affect their future experience with the online platform? We find out how ignorance of these algorithms can be dangerous: it can lead people to develop misconceptions about their personal relationships. How to educate users about the presence of these algorithms is still a challenge. Who will take up this role? The online platforms themselves? Or do we need third-party tools like FeedVis?

I found the ‘Manipulating the Manipulation’ section extremely interesting. It’s amazing to see the ways people adopt to manipulate the algorithm. The authors could have included a section describing how successful these users were in this manipulation. Which technique worked best? Were the resulting changes in the news feed evident?

Two favourite lines from the paper: “Because I know now that not everything I post everyone else will see, I feel less snubbed when I make posts that get minimal or no response. It feels less personal.”

“Whenever a software developer in Menlo Park adjusts a parameter, someone somewhere wrongly starts to believe themselves to be unloved.”

It’s probably the best qualitative paper I’ve read so far.
