Reflection #5 – [09/11] – [Eslam Hussein]

  1. “I always assumed that I wasn’t really that close to [her]: Reasoning about Invisible Algorithms in News Feeds.”
  2. “Exposure to ideologically diverse news and opinion on Facebook.”

 

Summary:

The first paper takes an in-depth qualitative approach to studying Facebook users' awareness of the algorithm that curates their news feed, and how people react when they learn that their feed is neither random nor all-inclusive. The authors developed a system – FeedVis – that shows users the stories from their friends' feeds that the algorithm filtered out and lets them adjust their own news feed. They try to answer three questions:

  • How aware are users of their news feed curation algorithm?
    • 62.5% were unaware that such an algorithm existed
    • 37.5% were aware, for different reasons (inductive and deductive)
  • What is their reaction when they learn about it? Do they prefer the algorithmically curated feed or the unfiltered output generated by FeedVis?
  • How did participating in the study affect their subsequent use of Facebook?

 

The second paper analyzes data from more than 10 million Facebook users to study which factors shape the nature and ideology of the news we receive in our Facebook news feed. The authors identify three factors that affect the feed: (1) user interaction (e.g., clicks) with the news shown, (2) the news shared by the friend network and its diversity, and (3) the algorithmic ranking performed by Facebook's news curation algorithm.

They found that what shapes our news feed most is what we ourselves choose to click on and interact with, which might trap us in echo chambers.
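To make the summary above concrete, here is a minimal, purely illustrative Python sketch of how cross-cutting exposure could be tallied at each of the three stages (friends' shares, algorithmic ranking, user clicks). All field names such as `story_ideology` and `shown_in_feed` are hypothetical, not the paper's actual data schema.

```python
# Illustrative sketch (not the paper's pipeline): estimate what fraction of
# stories is cross-cutting at each stage -- shared by friends, ranked into
# the feed, and clicked by the user.

def cross_cutting_fraction(stories, user_ideology):
    """Fraction of stories whose ideology differs from the user's."""
    if not stories:
        return 0.0
    cross = sum(1 for s in stories if s["story_ideology"] != user_ideology)
    return cross / len(stories)

def exposure_funnel(user):
    shared = user["stories_shared_by_friends"]            # stage 1: friends' shares
    ranked = [s for s in shared if s["shown_in_feed"]]    # stage 2: algorithmic ranking
    clicked = [s for s in ranked if s["clicked"]]         # stage 3: user selection
    ideo = user["ideology"]
    return {
        "shared": cross_cutting_fraction(shared, ideo),
        "ranked": cross_cutting_fraction(ranked, ideo),
        "clicked": cross_cutting_fraction(clicked, ideo),
    }
```

Comparing the three fractions shows where cross-cutting content drops out: between what friends share and what the algorithm ranks in, or between what is shown and what the user actually clicks.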

 

Reflection:

  • It is striking how such algorithms can alter people's feelings and ideas. Some participants lost self-confidence simply because nobody reacted to their posts. Once they became aware of the algorithm, they posted and interacted more on Facebook, knowing that the lack of reactions was due to the curation algorithm rather than to their friends ignoring them.
  • The authors could do further analysis of the similarities and differences in backgrounds and beliefs between each participant and the friends whose stories appeared in their news feed. Such analysis might help answer a few questions about Facebook's news feed curation algorithm:
    • Does Facebook really connect people, or does it create closed communities of common interests and backgrounds?
    • How much do these algorithms contribute to increasing polarization, and could new tools be designed to alleviate it?
  • The second paper answers many of the questions raised by the first one: it highlights the factors that drive the algorithm studied in the first paper, namely our own choices and interactions with what is displayed in our news feed. We are the ones who, indirectly, steer the news ranking algorithm; I believe our news feed is largely a reflection of our ideology and interests.
  • I think the Facebook news feed curation algorithm should be altered to reduce the polarization of its users, creating a more diverse, interactive, and healthier environment instead of trapping people in closed-minded, separated communities (or echo chambers, as the authors call them).

 

 


Reflection #5 – [09/11] – [Bipasha Banerjee]

Readings Assigned

  1. Bakshy, Eytan et al. – “Exposure to ideologically diverse news and opinion on Facebook” – Published in Science, 05 Jun 2015: Vol. 348, Issue 6239, pp. 1130-1132. DOI: 10.1126/science.aaa1160
  2. Eslami, Motahhare et al. – “‘I always assumed that I wasn’t really that close to [her]’: Reasoning about invisible algorithms in the news feed” – Proceedings of CHI ’15, pp. 153-162

 

Summary

The article and the paper both discuss how the Facebook news feed is curated by an algorithm. The first article, published in Science, examines how Facebook users are exposed to a diverse range of news and opinion; the authors deduced that social media does in fact expose users to ideologically diverse viewpoints. The authors of the second paper conducted a Facebook news feed case study comprising 40 Facebook users from diverse social and economic backgrounds. They classified the users as “aware” or “unaware” based on their knowledge that the news feed is curated by an algorithm. The theme of this week’s reading was how users are influenced by the news feed, how they react upon learning about the existence of the algorithm, and their satisfaction level after being given a chance to see their unfiltered feed. Overall, the users were ultimately satisfied with the way the news feed was curated by the algorithm, and all of them became more aware of it. The experiment changed the way they used and interacted with posts, since those decisions were now better informed.

Reflection

It is true that computer algorithms expose us to content that sometimes influences our ideology, beliefs, and the way we perceive an issue. The only question that comes to my mind is: are algorithms reliable? We know that algorithms are used in almost everything we do on the internet. I read an article on how United Airlines had overbooked a flight [1], which led to more people holding tickets than there were seats available. When this was discovered, a passenger was forcibly removed from the plane. The reason the airline gave was that an algorithm had sorted through the passenger list using parameters such as ticket price, frequent-flyer status, and check-in time, and had concluded that the removed passenger was the one “least valuable” to them.

Additionally, algorithms are used profusely in every aspect of the internet, and social media news feeds are curated. One thing companies could do to improve awareness of these algorithms’ existence is to inform users about them. I do not mean the endless “terms and conditions”. What I mean is that, just as Facebook reminds us of memories and birthdays, it could remind or notify users about the algorithm. Since social media users vary in education and background, and not all of them come from a computer science background, it is the company’s responsibility to make sure users are “aware”.

Moreover, companies could give users more flexibility to filter the feed. I know such controls already exist, but they are somewhat ambiguous and depend on users indicating a preference on each individual post. Such filtering should be made more user-friendly: similar to how we filter Amazon search results, it could be implemented globally on the homepage, not just per post, chronological by default with customization on demand. Facebook in particular starts showing posts related to whatever we have recently liked or visited, which leads to the feed being monopolized by certain posts; that is one of the main reasons I am repelled by the platform. An advanced filtering setting could expose these parameters as well, letting users customize the feed rather than having the algorithm choose for us.
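As a thought experiment on the global filtering setting proposed above, here is a rough Python sketch of what explicit, user-controlled feed preferences might look like. Every name here (`FeedPreferences`, `apply_preferences`, the post fields) is hypothetical and not an existing Facebook feature.

```python
# A rough sketch of a global, user-facing feed filter: chronological by
# default, with explicit mute/prioritize lists instead of opaque ranking.
from dataclasses import dataclass, field

@dataclass
class FeedPreferences:
    order: str = "chronological"             # default: newest first, no ranking
    muted_sources: set = field(default_factory=set)
    prioritized_friends: set = field(default_factory=set)
    show_reshared_links: bool = True          # e.g. hide news reshared from other sites

def apply_preferences(posts, prefs):
    """Filter and order raw posts according to explicit user preferences."""
    kept = [p for p in posts
            if p["source"] not in prefs.muted_sources
            and (prefs.show_reshared_links or not p["is_reshare"])]
    if prefs.order == "chronological":
        kept.sort(key=lambda p: p["timestamp"], reverse=True)
    # prioritized friends float to the top; stable sort keeps order within groups
    kept.sort(key=lambda p: p["author"] in prefs.prioritized_friends, reverse=True)
    return kept
```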

[1] https://99percentinvisible.org/episode/the-age-of-the-algorithm/


Reflection #5 – [9/11] – [Dhruva Sahasrabudhe]

Papers –

[1] “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in the news feed – Eslami et al.

[2] Exposure to ideologically diverse news and opinion on Facebook – Bakshy et al.

Summary –

[1] is a qualitative paper discussing the awareness, attitudes, and reactions of end users towards “invisible” algorithms. Focusing on the Facebook News Feed curation algorithm, it tries to answer three research questions: whether users are aware of the algorithm, what their opinion of it is, and how their behavior and attitudes changed in the long term after being made aware of the algorithm’s impact. It finds that 62.5% of users were not initially aware that the content on their feed was curated; initial reactions to finding out how much the feed was being altered ranged from surprise to anger; and 83% of users reported that their behavior had changed in subsequent months, although their satisfaction with the algorithm remained roughly the same as before.
[2] is a short, quantitative paper which discusses the interactions between politically heterogeneous groups on Facebook, and the extent of “cross-cutting” content (i.e., exposure to posts belonging to the opposite ideological camp) on either side.
Reflection –
Firstly, it must be noted (as the authors themselves acknowledge) that the sample of interviewees in [1] was very small and quite biased. The results could be made stronger by replicating the study with a larger and more diverse sample.
An interesting point made by [1] is that users build a “mental model” of the software, as if it works by some consistent, universal internal logic that they learn to interpret and abide by, e.g. the inference that if a group’s feed is curated, then there is no reason a user’s feed should not also be curated. Of course, such consistency does not happen automatically; it is up to the developer to enforce it. This highlights for me the importance of understanding which mental models users will form, and of not implementing functionality that might lead them to inaccurate mental models, and thus inaccurate inferences, about the software.
Another interesting observation made by [1] is its likening of “hidden” algorithms, which guide user behavior without users noticing, to the design of urban spaces by architects. This, of course, was discussed in depth in the video The Social Life of Small Urban Spaces by Whyte, which was shown in class earlier this semester.
[1] states that most users, when questioned some months after taking the survey, were just as satisfied with their news feeds, yet it also reports that users on average moved 43% of their friends from one category to another when asked to reassign friends among “Rarely Shown”, “Sometimes Shown”, and “Mostly Shown”. This suggests a sort of paradox: users are satisfied with the status quo, but would still drastically alter the results given the choice. It might imply a resigned acceptance of the algorithm’s whims, on the assumption that the curated feed is better than the unedited mess of all their friends’ posts.
[1] ends with a comment about the tradeoff between usability and control: developers are incentivized to make software usable, at the cost of taking power out of users’ hands. This is observed outside social media platforms too; any software that offers a great deal of control and customizability has a steep learning curve, and vice versa. This also raises the question of how much control users deserve, and who gets to decide that.
[2] focuses on the extent of interaction between users who hold different political beliefs. It finds a roughly 80/20 split between friends of the same ideology and friends of a different ideology. It claims that ideologically diverse discussion is curtailed by homophily, and that users, despite being exposed on average to ideologically diverse material, choose to interact mostly with posts they already align with.
[2] also finds that conservatives share more political articles than liberals. I wonder whether this is because of something inherent in the behavior/mentality of conservative people, or due to a trait of conservative culture.
[2] uses only political beliefs as the separator, treating sport, entertainment, etc. as neutral. However, sport is also subject to partisan behavior. There could be a study along the same lines, but using rival sports teams as the separator.


Reflection #5 – [09/11] – Subhash Holla H S

PAPERS:

  • E. Bakshy, S. Messing, and L. A. Adamic, “Exposure to ideologically diverse news and opinion on Facebook,” Science, vol. 348, no. 6239, pp. 1130–1132, 2015.
  • M. Eslami et al., “‘I always assumed that I wasn’t really that close to [her]’: Reasoning about invisible algorithms in the news feed,” in Proceedings of CHI ’15, 2015.

SUMMARY:

Paper 1:

The central question of the paper was “How do [these] online networks influence exposure to perspectives that cut across ideological lines?”, for which de-identified data from 10.1 million U.S. Facebook users was used to measure ideological homophily in friend networks. Analyzing exposure to ideologically discordant content and its relationship to the heterogeneity of friend networks led the authors to conclude that “individuals’ choices played a stronger role in limiting exposure to cross-cutting content.”

The comparisons and observations were captured in:

  • comparing the ideological diversity of the broad set of news and opinion shared on Facebook with that shared by individuals’ friend networks
  • comparing this with the subset of stories that appear in individuals’ algorithmically ranked News Feeds
  • observing what information individuals choose to consume, given exposure on News Feed.

A point of interest as a result of the study was the suggestion that the power to expose oneself to perspectives from the other side (liberal or conservative) in social media lies first and foremost with individuals.

Paper 2:

The objective of the paper was to find “whether it is useful to give users insight into these [social media] algorithms’ existence or functionality and how such insight might affect their experience”. The development of a Facebook application called FeedVis for this purpose helped them answer three questions:

  • How aware are users of the News Feed curation algorithm and what factors are associated with this awareness?
  • How do users evaluate the curation of their News Feed when shown the algorithm outputs? Given the opportunity to alter the outputs, how do users’ preferred outputs compare to the algorithm’s?
  • How does the knowledge users gain through an algorithm visualization tool transfer to their behavior?

During the study, usability-study tools such as think-alouds, walkthroughs, and questionnaires were employed to extract information from users. Statistical tools such as Welch’s t-test, the chi-square test, and Fisher’s exact test helped corroborate the findings. Both passive and active features were extracted as potential explanations for two questions: while all the participants were exposed to the algorithm outputs, why were the majority not aware of the algorithm’s existence? And were there any differences in Facebook usage associated with being aware or unaware of the News Feed manipulation?
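As a quick illustration of how the statistical tests named above could be applied to an aware/unaware comparison, here is a minimal Python sketch using scipy. The numbers are synthetic examples, not the study’s data.

```python
# Synthetic example: compare aware vs. unaware participants with the tests
# named in the summary (Welch's t-test, chi-square, Fisher's exact).
from scipy import stats

# e.g. weekly Facebook visits reported by aware vs. unaware participants
aware_visits   = [14, 21, 10, 25, 18, 30, 12]
unaware_visits = [7, 9, 15, 5, 11, 8, 13, 6]

# Welch's t-test: compares group means without assuming equal variances
t_stat, p_welch = stats.ttest_ind(aware_visits, unaware_visits, equal_var=False)

# 2x2 contingency table: awareness vs. some binary usage behaviour
#                uses_friend_lists   does_not
table = [[12, 3],    # aware
         [9, 16]]    # unaware
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)   # preferred when cells are small

print(p_welch, p_chi2, p_fisher)
```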

REFLECTIONS:

My reflection on this paper might be biased, as I am under the impression that the authors are also stakeholders in the findings, resulting in a conflict of interest. I would like to support this impression with a few of the claims reported in the paper:

  • The suggestion that individuals’ choices determine the content they consume implies that the algorithm is not controlling what individuals see, but that humans indirectly are, which essentially argues against the second paper we read.
  • The limitations as stated by the authors make it seem as if they are leading us to believe in the findings of a model that is not robust and has the potential to be skewed.

I will acknowledge that the authors have a basis for their claims about cross-cutting content, and if a more robust model that compensates for all the drawbacks mentioned were to reach the same findings, I would be inclined to side with them.

The notion of echo chambers and filter bubbles points us to the argument made by the second paper, which through its study shows the need for explainability and the option to choose. This paper got a lot of my attention, as it hits close to home. I feel the paper is a proponent of explainable AI. It addresses the black-box nature of most ML and AI algorithms, where even industry leaders know only the inputs and outcomes and cannot completely reason about the mechanics of the processing agent or algorithm. As someone who sees explainability as a requirement for building interactive AI, I found the findings of the paper rather obvious at points. The fact that people expressed anger and concern falls in line with a string of previous findings reflected in [1]–[13]. Reading through these papers helps one understand the need of the hour.

The paper also approaches the problem from a Human Factors perspective rather than an HCI one, which I feel is warranted. I would argue that a textbook approach is not what is required here; I would tangentially propose a new approach for a new field. Expecting us to stick to design principles and analysis techniques coined in an era when current algorithms were science fiction seems ludicrous to me. We need to approach the analysis of such human-centered systems partly with Human Factors, partly with psychology, and mostly with HCI.

I would be really interested in working on developing more understandable AI systems for the layperson.

 

REFERENCES:

[1]        J. D. Lee and K. A. See, “Trust in Automation: Designing for Appropriate Reliance,” Hum. Factors, vol. 46, no. 1, pp. 50–80, 2004.

[2]        M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier,” pp. 1135–1144, 2016.

[3]        A. Freedy, E. DeVisser, G. Weltman, and N. Coeyman, “Measurement of trust in human-robot collaboration,” in 2007 International Symposium on Collaborative Technologies and Systems, 2007, pp. 106–114.

[4]        M. Hengstler, E. Enkel, and S. Duelli, “Applied artificial intelligence and trust-The case of autonomous vehicles and medical assistance devices,” Technol. Forecast. Soc. Change, vol. 105, pp. 105–120, 2016.

[5]        K. A. Hoff and M. Bashir, “Trust in automation: Integrating empirical evidence on factors that influence trust,” Hum. Factors, vol. 57, no. 3, pp. 407–434, 2015.

[6]        E. J. de Visser et al., “Almost human: Anthropomorphism increases trust resilience in cognitive agents,” J. Exp. Psychol. Appl., vol. 22, no. 3, pp. 331–349, 2016.

[7]        M. T. Dzindolet, S. A. Peterson, R. A. Pomranky, L. G. Pierce, and H. P. Beck, “The role of trust in automation reliance,” Int. J. Hum. Comput. Stud., vol. 58, no. 6, pp. 697–718, 2003.

[8]        L. J. Molnar, L. H. Ryan, A. K. Pradhan, D. W. Eby, R. M. St. Louis, and J. S. Zakrajsek, “Understanding trust and acceptance of automated vehicles: An exploratory simulator study of transfer of control between automated and manual driving,” Transp. Res. Part F Traffic Psychol. Behav., vol. 58, pp. 319–328, Oct. 2018.

[9]        A. Freedy, E. DeVisser, G. Weltman, and N. Coeyman, “Measurement of trust in human-robot collaboration,” in 2007 International Symposium on Collaborative Technologies and Systems, 2007, pp. 106–114.

[10]      T. T. Kessler, C. Larios, T. Walker, V. Yerdon, and P. A. Hancock, “A Comparison of Trust Measures in Human–Robot Interaction Scenarios.”

[11]      M. Lewis, K. Sycara, and P. Walker, “The Role of Trust in Human-Robot Interaction.”

[12]      D. B. Quinn, “Exploring the Efficacy of Social Trust Repair in Human-Automation Interactions.”

[13]      M. Lewis et al., “The Effect of Culture on Trust in Automation: Reliability and Workload,” ACM Trans. Interact. Intell. Syst., 2016.


Reflection #5 – [09/11] – [Prerna Juneja]

Paper 1: Exposure to ideologically diverse news and opinion on Facebook

Summary:

In this paper the authors claim that our tendency to associate with like-minded people traps us in echo chambers; the central premise is that “like attracts like”. The authors conduct a study on a data set that includes 10.1 million active U.S. users who have reported their ideological affiliation and 7 million distinct URLs shared by them. They find that the likelihood of an individual clicking on cross-cutting content relative to consistent content is 17% for conservatives and 6% for liberals. After algorithmic ranking, there is less cross-cutting news in the feed, since the ranking algorithm takes into account how the user interacts with friends as well as previous clicks.
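For illustration, here is a small Python sketch of how click rates on cross-cutting versus ideologically consistent content could be tabulated per affiliation. The field names and the binary definition of “cross-cutting” are assumptions for the sketch, not the paper’s exact methodology.

```python
# Illustrative tabulation of click-through rates on cross-cutting vs.
# consistent content, split by the user's reported affiliation.
from collections import defaultdict

def click_rates(exposures):
    """exposures: list of dicts with user_ideology, story_ideology, clicked (0/1)."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for e in exposures:
        kind = ("cross-cutting" if e["story_ideology"] != e["user_ideology"]
                else "consistent")
        key = (e["user_ideology"], kind)
        shown[key] += 1
        clicked[key] += e["clicked"]
    return {k: clicked[k] / shown[k] for k in shown}

# e.g. click_rates(exposures)[("conservative", "cross-cutting")]
```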

Reflection:

Out of the 7 million URLs, only 7% were found to be hard content (politics, news, etc.), which suggests that Facebook is used more for sharing personal material. Since we don’t know the affiliation of all of a user’s friends, it’s difficult to say whether Facebook friendships are based on shared political ideology. A similar study should be conducted on platforms where people share more hard content, probably Twitter, or on Google search history. The combined results would give better insight into whether people associate with others of similar political ideology on online platforms.

We could conduct a study to find out how adaptive and intelligent Facebook’s news feed algorithm is by having a group of people who have declared their political ideology start liking, clicking, and sharing articles of opposing ideologies (both in support and in disapproval). We could then compare the news feed before and after to see whether the ranking of news articles changes. Does the algorithm figure out whether the content was shared to show support or to denounce the news piece, and modify the feed accordingly?

I wonder whether users are actually interested in getting access to cross-cutting content. A longitudinal study could be conducted in which users are shown balanced news (half supporting their ideology, half opposing it) to see whether their click patterns change after a few months: do they click on more cross-cutting material, or, in the extreme case, do they change their political ideology? Such a study would show whether people really care about being trapped in an echo chamber. If not, then we certainly can’t blame Facebook’s algorithms.

This study is not generalizable: it was conducted on a young population, specifically those who chose to reveal their political ideology. Similar studies should be performed in different countries with users from different demographics. Also, the paper doesn’t say much about users who are neutral. How are political articles ranked in their news feeds?

This kind of study would probably not hold for soft content. People usually don’t hold extreme views about soft content like music, sports, etc.

Paper 2: “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in the news feed

Summary:

In this paper the authors study whether users should be made aware of the presence of the news feed curation algorithm and how this insight affects their subsequent experience. They conduct a study in which 40 Facebook users use FeedVis, a system that conveys the difference between the algorithmically altered and the unadulterated news feed. More than half were not aware of the algorithm’s presence and were initially angry. They were upset that they couldn’t see close friends and family members in their feed, and had attributed this to those friends’ decisions to either deactivate their accounts or exclude them. Following up with the participants a few months later revealed that awareness of the algorithm made them engage more actively with Facebook.

Reflection:

In the paper “Experimental evidence of massive-scale emotional contagion through social networks”, the authors conducted a scientific study of “emotional contagion”. The results showed that displaying fewer positive updates in people’s feeds causes them to post fewer positive and more negative messages of their own. That’s how powerful Facebook’s algorithms can be!

In this paper the authors try to answer two important questions: should users be made aware of the presence of algorithms in their daily digital lives, and how will this insight affect their future experience with the online platform? We find out how ignorance of these algorithms can be dangerous: it can lead people to develop misconceptions about their personal relationships. How to educate users about the presence of these algorithms is still a challenge. Who will take up this role? Online platforms? Or do we need third-party tools like FeedVis?

I found the ‘Manipulating the Manipulation’ section extremely interesting; it’s amazing to see the ways people adopt to manipulate the algorithm. The authors could have included a section describing how successful these users were in their manipulation. Which technique worked best? Were the changes in the news feed evident?

My two favourite lines from the paper:

“Because I know now that not everything I post everyone else will see, I feel less snubbed when I make posts that get minimal or no response. It feels less personal.”

“whenever a software developer in Menlo Park adjusts a parameter, someone somewhere wrongly starts to believe themselves to be unloved.”

It’s probably the best qualitative paper I’ve read so far.


Reflection #5 – [09/10] – [Deepika Rama Subramanian]

Eslami, Motahhare, et al. “I always assumed that I wasn’t really that close to [her]: Reasoning about Invisible Algorithms in News Feeds.”

Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. “Exposure to ideologically diverse news and opinion on Facebook.”

SUMMARY

Both papers deal with Facebook’s algorithms and how they influence people in their everyday lives.

The first paper deals with the ‘hidden’ news feed curation algorithm employed by Facebook. Through a series of interviews, the authors perform a qualitative analysis of:

  • Algorithm Awareness – Whether users are aware that an algorithm is behind what they see on their news feed and how they found out about this
  • Evaluation (user) of the algorithm – The study tested if the users thought that the algorithm was providing them with what they needed/wanted to see
  • Algorithm Awareness to Future Behaviour – The study also asked users whether, after discovering the algorithm and its possible parameters, they tried to manipulate it in order to personalise their own view or to boost their posts on the platform

The second paper deals with how bias in Facebook’s news feed algorithm leads to the platform becoming an echo chamber, i.e., a place where your ideas are reinforced without challenge because you tend to engage with posts you already believe in.

REFLECTION

Eslami et al.’s work shows that a majority of users are unaware that an algorithm controls what they see in their news feed. As a result, they come to believe that either Facebook or their friends are blocking them out. It is possible to personalize the Facebook feed extensively under News Feed Preferences, prioritizing what we see first and choosing to unfollow people and groups. The issue is that the ‘unaware participants’, who form a large chunk of the population, don’t know that they can tailor their experience. If Facebook made it known, through more than a small header under settings, that an algorithm is tailoring the news feed, it would be more helpful and less likely to cause outrage among users. Placing News Feed Preferences alongside the news feed itself would be a good option.

There was a recent rumour in January that led users to believe Facebook was limiting their feed to 25 friends; many users were asked to copy-paste a message protesting this so that Facebook would take notice and alter its algorithm. Twitter keeps its feed in reverse-chronological order from followed accounts, with occasional suggested tweets liked by someone else you follow. Reddit has two feeds of sorts, ‘best’ and ‘hot’: ‘best’ contains posts tailored to your tastes based on how you have engaged with posts, while ‘hot’ shows posts trending worldwide. This gives an eclectic and transparent mix, helping to ensure it doesn’t become an echo chamber.

Most recently, Zuckerberg announced that Facebook’s goal was no longer ‘helping you find relevant content’ but ‘having more meaningful interactions’. Facebook tried a Reddit-style two-feed model in an experiment: it removed posts from reputable media houses and placed them in an Explore feed. This was meant to ensure that the site promoted interactions, i.e., increased organic content (not just content shared from other sites), and to keep the platform from acting as an echo chamber. The experiment was run in six small countries: Sri Lanka, Guatemala, Bolivia, Cambodia, Serbia, and Slovakia. Following this, major news sites in these countries (especially in Bolivia and Guatemala) showed a sharp decrease in traffic. Unfortunately, this shows that Facebook has become one of the biggest sources of news, making it a ripe platform for spreading fake news (for which, currently, it has limited or no checks).

However, I wonder to what extent Facebook is now responsible for providing complete news, with views from both sides. It began purely to support interactions between individuals and has evolved into its current form; its role as a news provider is not entirely clear yet. As far as echo chambers go, though, this isn’t new: print media, TV, and talk show hosts all let their ideologies influence the content they provide, and people in general tend to watch and enjoy shows that agree with them.


Reflection #5 – [09/10] – [Vibhav Nanda]

Readings:

[1] “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in the news feed

Summary:

This paper covered a range of topics regarding our digital lives and the ubiquitous curation algorithms. The authors discuss the varying awareness levels of different users, pre-study and post-study conceptions of the Facebook news feed, participants’ reactions to finding out about a hidden curation algorithm, and how this changed participants’ perceptions. To show the difference between raw and curated feeds, the authors created a tool called FeedVis that displays users’ unfiltered feed from friends and pages. By asking open- and close-ended questions, the authors were able to gauge users’ levels of understanding of the curation algorithm. They tried to answer three different research questions within one paper, and succeeded in delivering adequate answers and directions for future work.

Reflection/Questions:

It was interesting to read that various users had started actively trying to manipulate the algorithm, especially because I am aware of it myself and it doesn’t bother me at all. In the initial part of the paper the authors discuss disclosing the mechanisms of the curation algorithm in order to create a bond of trust between users and the platform; however, I would argue that if the working mechanism of the curation algorithm were made public, trolls, fake news agencies, and other malicious actors could use that information to further increase the reach of their posts and propaganda. The authors also describe their participants as “typical Facebook users”, which I disagree with, because the meaning of a “typical” Facebook user is fluid — it meant something different a few years ago (millennials) and means something different now (baby boomers and Generation X). In my view, Facebook should show users unfiltered results on some days and curated results on others, then track their activity (for instance likes, comments, and shares); from that data it could decide whether each user would prefer curated or unfiltered results. Facebook should also give users the option to tell the algorithm which friends and pages they are most interested in — this might also help the algorithm learn more about the user.
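Here is a rough Python sketch of the alternating curated/unfiltered idea suggested above, purely hypothetical and not an existing Facebook feature: assign each user a feed condition per day, log engagement, and see which condition they engage with more.

```python
# Hypothetical sketch: alternate feed conditions per user per day and decide,
# from logged engagement, which condition the user appears to prefer.
import random
from statistics import mean

def assign_condition(user_id, day):
    """Deterministically pick a condition for a given user and day."""
    random.seed(f"{user_id}-{day}")
    return "unfiltered" if random.random() < 0.5 else "curated"

def preferred_condition(engagement_log):
    """engagement_log: list of (condition, likes + comments + shares) per day."""
    by_condition = {"unfiltered": [], "curated": []}
    for condition, activity in engagement_log:
        by_condition[condition].append(activity)
    return max(by_condition,
               key=lambda c: mean(by_condition[c]) if by_condition[c] else 0)
```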

[2] Exposure to ideologically diverse news and opinion on Facebook

Summary:

The authors of this paper focused on understanding how Facebook users interact with news on social media, the diversity of news spread on Facebook in general, and the diversity of news spread within friend networks. They also studied the kind of information that the curation algorithm decides to display to a user and how selective consumption of news affects the user. The authors explain that selective consumption is a combination of two factors: people tend to have more friends with the same ideology, so they see reinforcing news, and the curation algorithm tends to display what it thinks the user will like most, which is news reinforcing the user’s ideology (I would argue that this is the reason fake news will never die).

Reflection/Questions:

In my view, people with a certain ideological standpoint will never be able to fathom the other side and hence, for the most part, will never put effort into reading or watching news from a different ideological point of view. Historically we can see this in cable television: conservative people tend to watch Fox more often, while moderates and liberals tend to watch CNN. Each of these channels understood its user base and delivered content bespoke to it. Now, instead of companies determining news content, a curation algorithm does it for us. I don’t think this is something that needs to be fixed or a problem that needs to be tackled (unless, of course, it is fake news). It is basic human psychology to find comfort in the familiar, and if users are forced to digest news content they are unfamiliar with, it will, on a very basic level, make them uncomfortable. I also think it would be crossing the line for developers to manipulate a user’s news feed in a way that is not consistent with their usage of Facebook, their friend circle, and the pages they follow.


Reflection #5 – [09/10] – [Shruti Phadke]

Paper 1: Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. “Exposure to ideologically diverse news and opinion on Facebook.” Science 348.6239 (2015): 1130-1132.

Paper 2: Eslami, Motahhare, et al. “I always assumed that I wasn’t really that close to [her]: Reasoning about Invisible Algorithms in News Feeds.” Proceedings of the 33rd annual ACM conference on human factors in computing systems. ACM, 2015

Algorithmic bias and influence on social networks is a growing research area. Algorithms can play an important role in shifting the tide of online opinion and public policy. Both Bakshy et al.’s and Eslami et al.’s papers discuss the effect of peer and algorithmic influence on social media users. Seeing an ideologically similar feed, as well as a feed based on past interactions, can lead to extremist views and associations online. “Echo chambers” of opinion can go viral unchallenged within a network of friends, ranging from harmless stereotypes to radical extremism. This exposure bias is not limited to posts; it extends to comments as well. In any popular thread, the default setting shows only comments that are either made by friends or popular.

Eslami et al.’s work shows how exposing users to the algorithm can potentially improve the quality of online interaction. Having over 1000 friends on Facebook, I barely see stories from most of them. While Eslami et al. do insightful qualitative research on how users perceive the difference between “all stories” and “shown stories”, along with their subsequent choices, I believe the study is limited in its number of users as well as in the range of user behaviors. To assess the universality of this phenomenon, a bigger group of users should be observed, with behaviors varying in frequency of access, posting behavior, lurking, and promotional agendas. Such a study could be performed with AMT. Even though it would restrict the open-coding options and detailed accounts, this paper can serve as the basis for a more constrained and precisely defined questionnaire that could lead to quantitative analysis.

Bakshy et al.’s work, on the other hand, ties political polarity in online communities to the choices the user has made. It is interesting to consider the limitations of their data labeling process and content. For example, they selected only users who volunteer their political affiliation on Facebook, and users who volunteer this information might not represent the average Facebook population. A better classification of such users could have been obtained through text classification of their posts, without relying on their proclaimed political affiliation. One more reason to avoid the self-declared political status is that many users may carry a political label due to peer pressure or the negative stigma attached to their favored ideology within their “friend” circle.
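A minimal sketch of the text-classification alternative suggested above, using a toy scikit-learn pipeline. The training examples are invented; a real study would need a large labelled corpus and proper validation.

```python
# Toy example: infer ideological leaning from post text instead of relying on
# self-reported affiliation. Labels and posts here are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = ["cut taxes and shrink government",
               "expand healthcare access for everyone",
               "secure the border now",
               "climate action cannot wait"]
train_labels = ["conservative", "liberal", "conservative", "liberal"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_posts, train_labels)

print(clf.predict(["we need universal healthcare"]))  # inferred leaning
```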

Finally, even though exposure to similar or algorithmically influenced content may be harmful or misleading, it also raises the question of how much privacy invasion is acceptable in order to de-bias the feed on your timeline. Consciously building algorithms that show cross-cutting content can end up requiring more knowledge about a user than they intend to share. The problem of algorithmic influence should be approached with caution and with better legal policies.


Reflection #5 – [09/10] – [Lindah Kotut]

  • Motahhare Eslami et al. “‘I always assumed that I wasn’t really that close to [her]’: Reasoning about invisible algorithms in the news feed”.
  • Eytan Bakshy, Solomon Messing and Lada A. Adamic. “Exposure to ideologically diverse news and opinion on Facebook”

Reflection: 
1. Ethics
A Facebook study is apt, given the recent New Yorker spotlight on Mark Zuckerberg. The piece, while focusing on Zuckerberg rather than the company, gives good insight into the company ethos, which also gives context to Bakshy’s claims. Considering the question of the invisible algorithm: Eslami’s paper addresses it directly by outlining the benefits of making public the consequences of the algorithm’s changes, not the algorithm itself. Given the anecdotes of users who changed their minds about which friends they’d like to hear more from, this is a good decision, allowing for a sense of control and trust in the algorithmic curation process. Eslami’s paper also raises concern about the effect such unknowns have on decision making. Considering the (in)famous Nature paper on the large-scale experiment comparing social versus informational messaging in affecting election turnout, and the other infamous paper experimenting on information contagion, both of which used millions of users’ data, the issue of ethics arises. Under GDPR, for instance, Facebook is obligated to let users know when and how their data is collected and used. What about when that information is manipulated? This question is explicitly considered by Eslami’s paper, where users were found to feel angered (from the anecdotes, I thought it was betrayal more than anger) after finding out about design decisions that had a real-life impact — explicitly: “it may be that whenever a software developer in Menlo Park adjusts a parameter, someone somewhere wrongly starts to believe themselves to be unloved.”

2. Irony
Bakshy et al. position their work as a neutral party in the debate about whether (over)exposure to politics is key to a healthy democracy or leads to decreased participation in democratic processes. They then conclude that the power to expose oneself to differing viewpoints lies with the individual. Yet Facebook curates what a user sees in their news feed, and their own research showed that contentious issues promote engagement, and that engagement raises the prominence of the same content, raising the chances of a typical user viewing it. They attempt to temper this by arguing that the nature of the news feed depends on the user’s logging and activity behavior, but this again places the onus on the user … to behave in a certain manner so that the algorithm can succeed and obtain consistent data?

3. Access, Scale and Subjectivity
I found it interesting how the two papers sourced their data. Eslami et al., though they had access to respondents’ data, still had to deal with the throttling imposed by the Facebook API. Bakshy et al., on the other hand, had millions of anonymized data points. This disparity does not threaten the validity of either study; it is just a glaring contrast. It would be interesting if Eslami’s work could be scaled to a larger audience — the interview process is not very scalable, but elements such as users’ knowledge of the algorithm’s effects are especially important for knowing how well the findings scale.

The issue of subjectivity manifested differently in the two works: Eslami et al. were able to probe users on personal reasons for their actions on Facebook, giving interesting insights into their decisions. Bakshy et al.’s work treated the sharing of content as a marker of ideology. What about sharing for criticism, irony, or reference? (From what I understood, alignment was measured from the source and from clicks on shared links, rather than also including the accompanying commentary.) The reasons posts are shared range between the two extremes of support and criticism, and the motivation behind the sharing makes a consequential difference in what we can conclude from engagement. The authors note this both in the source of the data (self-reported ideological affiliation) and in their vague distinction between exposure and consumption.


Reflection #5 – [09/10] – [Neelma Bhatti]

  • Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130-1132.
  • Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., … & Sandvig, C. (2015, April). I always assumed that I wasn’t really that close to [her]: Reasoning about Invisible Algorithms in News Feeds. In Proceedings of the 33rd annual ACM conference on human factors in computing systems (pp. 153-162). ACM.

Reading reflections:

Most of us have wondered at some point whether a friend who no longer shows up in our Facebook news feed has blocked or restricted us. At times, we forget about them until they react to some post on our timeline, bringing their existence back to our notice.

People are becoming more aware that some mechanism is used to populate their news feed with stories from their friends, the groups they have joined, and the pages they have liked. However, not all of them know whether the displayed content is randomly selected, or whether there is a more sophisticated way of not only arranging and prioritizing what is displayed but also filtering out what Facebook “deems” unnecessary or uninteresting for us, namely a curation algorithm.

  • There needs to be some randomization in what is displayed to us, to break the echo chambers and filter bubbles created around us. This applies both to the news we want to read and to the stories displayed in the news feed. It is like going to Target for a water bottle and finding an oddly placed but awesome pair of headphones in the aisle: one might not end up buying them, but they will certainly catch one’s attention and might even lead one to the electronics section to explore.
  • As regards political news, not all people choose to read only what aligns with their ideology. Some prefer reading the opposing party’s agenda, if only to pick points to use against an opponent in an argument, or simply to be in the know. Personalizing the news displayed to them based on what they “like” may not be exactly what they are looking for, whatever their intention in reading that news may be.
  • Eslami et al. talk about differences in the acceptance of this new knowledge, with some users demanding to know the back story, while more than half (n=21) ultimately appreciated the algorithm. While some users felt betrayed by the invisible curation algorithm, knowing that an algorithm controls what is displayed on their news feed overwhelmed some participants. This sounds plausible for some elderly people who have not been social media users for long, or users who are not very educated. The authors also discuss future work on determining the optimal amount of information to display to users “to satisfy the needs of trustworthy interaction” and “protection of proprietary interest”. An editable log of the changes made to news feed content (hiding a story due to lack of interaction with a friend’s, page’s, or group’s stories, etc.), accessible to the user only if they choose to see it, seems a reasonable solution to this issue (see the sketch after this list).
  • I liked the clear and interesting narrative from participant selection through data analysis in the second paper, especially after reading the follow-up paper [1]. I do think there should have been more information about how participants reacted to missing stories from the groups they follow or pages they’ve liked, and about the extent to which they preferred keeping those as displayed. It would have given useful insight into their thought process (or “folk theories”) about what they think goes on with the news feed curation algorithm.
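As a sketch of the editable curation log idea mentioned in the list above (hypothetical; no such Facebook feature exists), each hidden or demoted story could get an inspectable, user-overridable entry:

```python
# Hypothetical data structure for a user-visible, editable curation log.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CurationLogEntry:
    story_id: str
    author: str
    action: str           # e.g. "hidden", "demoted"
    reason: str           # e.g. "low past interaction with this friend"
    timestamp: datetime
    user_override: bool = False   # user can flip this to restore the story

def restore(log, story_id):
    """Mark a curation decision as overridden by the user."""
    for entry in log:
        if entry.story_id == story_id:
            entry.user_override = True
    return log
```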

 

[1] Eslami, M., Karahalios, K., Sandvig, C., Vaccaro, K., Rickman, A., Hamilton, K., & Kirlik, A. (2016, May). First I like it, then I hide it: Folk theories of social feeds. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 2371-2382). ACM.

 

 
