Video Reflection #9 – [09/27] – [Bipasha Banerjee]

The video we were assigned was about partisanship and the search for engaging news, by Dr. Natalie Stroud. We were exposed to the filter bubble effect in the previous readings, and this video gave another perspective on selective exposure, moral foundations, and the stereotype exposure model. News organizations these days are trying to engage with their audience, and they do succeed in affecting how that audience thinks. In India, at least, political election results are often determined by which party the dominant news channel of the region supports. This support has led to stable parties being toppled by newly emerging political parties. I am focusing on the influence of media and other outlets on the audience.

My major concern is that a newcomer to any community should have all the information before deciding whether to believe or oppose a particular ideology. This would make sure the person does not unknowingly start following and believing that ideology. There is something known as the Media Bias Chart [1], which places all major media outlets on a chart based on their liberal or conservative stance. Similarly, a "bias index" could be created that determines, from past history, the tendency of a particular media outlet to be biased, and not only in terms of political belief. It is a way to ensure that people are informed about the stance of the news outlet. Other affiliations should also be made transparent. Social media platforms like YouTube, Twitter, and Facebook have incorporated a sort of #Ad feature. This helps the user of the platform form an informed judgement.

Journalism should be unbiased; however, as the speaker pointed out, the factor of "What sells?" plays an important role. It determines which articles are promoted, what becomes breaking news, and so on. It is true that human nature demands controversy: negative publicity is what is popular and is received well by a certain sector. So, to analyze how information is perceived by the audience, a live voting mechanism could be incorporated to measure people's perception, something similar to the feedback model of Facebook Live, but for news outlets. A simple upvote and downvote system, where users text in their "Yes" or "No" opinion and the vote count changes accordingly, would help turn the audience rating into an index. Let's call this the "perception index". It would give a comprehensive idea of what the audience feels about the media outlet. This would apply to live television broadcasts or digital media, not print media. Together, the perception index and the bias index would be a good gauge of the partisanship of a news outlet.

The bias index would be computed from prior data (considering certain parameters for determining bias). The bias parameters should be known and available to the audience, to avoid prejudice in determining the index and to promote awareness. The perception index, on the other hand, is an indicator of what is perceived at the moment rather than in the past. This keeps things transparent, since it has no predefined parameters. Considering both indexes, a combined index can be calculated. I believe that this sort of model would encourage people to get out of the filter bubble and be more aware.
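As a rough illustration of the idea (the weighting, scales, and field names below are my own assumptions, not something proposed in the talk), the combined index could be computed roughly like this:

```python
from dataclasses import dataclass

@dataclass
class OutletSnapshot:
    """Hypothetical per-outlet data; field names are illustrative only."""
    historical_bias: float   # bias index in [0, 1]: 0 = neutral, 1 = heavily biased
    upvotes: int             # live "Yes" votes from the audience
    downvotes: int           # live "No" votes from the audience

def perception_index(snap: OutletSnapshot) -> float:
    """Share of positive votes; 0.5 if no one has voted yet."""
    total = snap.upvotes + snap.downvotes
    return 0.5 if total == 0 else snap.upvotes / total

def combined_index(snap: OutletSnapshot, bias_weight: float = 0.5) -> float:
    """Blend historical bias (inverted, so higher = better) with live perception."""
    return bias_weight * (1 - snap.historical_bias) + (1 - bias_weight) * perception_index(snap)

print(combined_index(OutletSnapshot(historical_bias=0.7, upvotes=320, downvotes=180)))
```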

[1] https://www.adfontesmedia.com/media-bias-chart-3-1-minor-updates-based-constructive-feedback/


Reflection #9 – [09/27] – [Deepika Rama Subramanian]

Dr. Talia Stroud spoke at length about partisanship in the media.  I’ve jotted down the following things that occurred to me as I watched the lecture:

  1. To keep a check on the degree of partisanship, I would want to study whether it has increased after the 2016 elections. The American public has been more divided than ever since 2016. If we study the degree of incivility in discussion forums before and after the 2016 elections, we may be able to tell whether the rising political temperature has led to a wider divide in public opinion. For a fixed set of active users, we can check whether they have flipped their stance (in public, at least) on any major issue – gay marriage, abortion, etc. – between the period before the elections and the period after.
  2. One could imagine that there would be a lot of trolls in the mix just to cause confusion and draw more people into (sometimes) pointless conversations. If we are able to efficiently and automatically identify trolls and sockpuppets in the comments, we may be able to control, to a degree, the off-topic conversations in the forums. Does this reduce the amount of polarization on the forums? If it does, it may imply that many people want to put their points forth civilly but are incited into wars in the comments section. We must note that not all news houses have the resources to have 13 people weed out unwanted comments.
  3. While this may not be possible for larger news organizations, small organizations with a left- or right-leaning partisan following could feature stories from an opposing news organization of similar scale. We could have the partisans visit their preferred websites and check whether this improves tolerance between the two sides. I would like to mention a study by the Duke University Polarization Lab (https://www.washingtonpost.com/science/2018/09/07/bursting-peoples-political-bubbles-could-make-them-even-more-partisan/) suggesting that attempting to burst people's political bubbles could make them even more partisan. This means that pushing people to acknowledge that they are living in a bubble could be counter-productive.
  4. There was something Dr. Stroud mentioned that caught my attention: that we might have to feed information to the general populace without them being aware of it. The Emmy-nominated sitcom 'Black-ish', which currently streams on Hulu, deals with an African American family and their take on America's social and political fabric. While the show received a lot of flak for its presentation of these issues, it is a way to inject information into the public without it being explicitly called 'news'. After I binge-watched the series, I realised that they tried their best to give the most balanced information they could, and it was quite effective.
  5. Design-wise, we can have a 'The Balancer'-style widget that displays the bias in the comment section as users post. By gently asking users whether they really want to post a heavily polarizing comment, we may be able to guilt some of them into not posting it (a minimal sketch of such a prompt follows this list).
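As a toy illustration of item 5 (the term list and threshold are crude placeholders, not The Balancer's actual method or any real classifier), the prompt could work roughly like this:

```python
# Hypothetical sketch: flag a comment as heavily polarizing before it is posted.
# The keyword list and threshold are illustrative stand-ins for a real model.
PARTISAN_TERMS = {"traitor", "sheeple", "fascist", "communist", "snowflake"}

def polarity_score(comment: str) -> float:
    """Fraction of words matching a (placeholder) list of charged partisan terms."""
    words = comment.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in PARTISAN_TERMS for w in words) / len(words)

def confirm_before_posting(comment: str, threshold: float = 0.05) -> bool:
    """Ask the user to confirm if the comment looks heavily polarizing."""
    if polarity_score(comment) > threshold:
        answer = input("This comment looks strongly partisan. Post anyway? [y/N] ")
        return answer.strip().lower() == "y"
    return True
```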


Reflection #9 – [09/27] – [Eslam Hussein]

The talk given by Talia Stroud inspired me to design a tool that would facilitate a longitudinal study. The tool, briefly, is a political news aggregator website (explained in more detail below). The study aims to measure and monitor the reading habits of the site's users and to test whether techniques that expose users to news articles opposing their political preferences can affect those reading habits and preferences. Such a tool could help mitigate the level of polarization in the community of online news readers.

The site will allow users to build their profiles and personalize their news feed by providing details about their social identity, political preferences, and their opinions on a set of the most common controversial topics that are actively discussed and that would affect their political/voting preferences (such as gay marriage, abortion, immigration, animal testing, gun control, etc.).

The site would also give each user a profile color ranging from red to blue (more colors could be added if the community is multi-polar rather than bipolar), representing the two main poles of the current political environment (for example, liberals and conservatives; the set of poles could change from community to community and over time). The same badge or color would be given to each article. There would also be profiles for news sources (newspapers, blogs, TV shows, etc.) indicating their political leaning. This coloring metric would be used to filter the news feed according to the user's profile/color, and would be displayed beside each news article and news source.

The experiment would proceed in a few phases (left to the experiment designer to craft), and after each phase a survey would be given to users to measure how that phase affected their perception of the opposite pole and of their own pole. Each phase would build a news feed filter based on different tactics (the level of exposure, the content displayed, the number of phases, etc. are left to the experiment designer). Gradually, the filters would be modified to pass more news that promotes or exposes the other pole's opinions.

The first phase's filter would filter out news that is most likely to be controversial to the user and would mainly pass mainstream news and news matching their preferences.

The second phase's filter would be changed to include some news from the other pole that is neither controversial nor likely to trigger the user to accept or reject the article outright. Suppose the two competing poles are A and B, and the user politically belongs to pole A. The aggregator filter would pass news about, say, humanitarian work done by pole B public figures. That would affect the people in group A emotionally (the "like" dimension); I would expose them to more of what the groups have in common rather than to the most controversial topics. After some period of exposure, we would measure how that affected their perception of the other pole(s).

In the next few phases we would expose users to more news that discusses the other pole's opinions (by increasing the amount displayed in their news feed), then run similar surveys to measure their movement from one pole to another.
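A minimal sketch of the phased filter, assuming each article carries a pre-labelled leaning score in [-1, 1] (negative = pole A, positive = pole B) and a controversy score in [0, 1]; the thresholds and field names are placeholders of my own, not part of the proposed design:

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    leaning: float      # -1.0 (pole A) .. +1.0 (pole B); assumed pre-labelled
    controversy: float  # 0.0 (benign) .. 1.0 (highly controversial)

def phase_filter(articles, user_leaning: float, phase: int):
    """Gradually admit more opposing, and eventually more controversial, content."""
    # How far from the user's own pole an article may lean, per phase.
    max_opposing = {1: 0.2, 2: 0.6, 3: 1.0}.get(phase, 1.0)
    # How controversial admitted content may be, per phase.
    max_controversy = {1: 0.3, 2: 0.3, 3: 0.7}.get(phase, 1.0)
    feed = []
    for a in articles:
        opposing_distance = abs(a.leaning - user_leaning) / 2  # 0 = same pole, 1 = opposite
        if opposing_distance <= max_opposing and a.controversy <= max_controversy:
            feed.append(a)
    return feed
```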


Reflection #9 – [09/27] – [Prerna Juneja]

Video: Partisanship and the search for engaging news

In the video, Natalie Stroud discusses two studies. In the first, she examines how different buttons affect people's responses to comments in a comment section. She introduces buttons with three different labels: like, respect, and recommend. The "like" button, I believe, pushes people toward extreme reactions, especially for hard content like politics: liking something means you strongly agree with it. So it makes sense to have labels that help people support an opposing viewpoint, basically a tool to signal that although I don't agree with the content, I think the argument presented is a strong one. These tools would ensure that a person doesn't outright disregard a comment just by looking at the number of "likes". And what I really like is that these buttons serve two purposes: business (the button received a significant number of clicks) and democracy (people started interacting with information that ran counter to their beliefs). The business aspect is very important and is the one that is hardly ever considered. Almost all online platforms are actually businesses, so unless the suggested research increases user engagement or benefits the platform in some way, why would a platform implement it in the first place?

I can think of another button label that might have effects similar to "respect": news platforms could promote a "share" button if they don't already. If a person sees an article with an opposing view being shared many times, curiosity might make him click the article to check what's so special about it.

So the conclusion from this study is that a design component should not push people toward extreme reactions (agree/like or dislike/disagree). Rather, it should make people want to listen to others, especially the ones singing a different tune. There is one aspect of this research that we should extend further. While people can be more receptive to opposing comments on topics like "gay rights" or "favorite music genre and pop artists", are they equally receptive to political content, or do they end up "respecting" only the politically like-minded comments? The study should include more topics to quantify the effects of changing button labels across topics. Another line of research could be to study the after-effects of "respecting" an opposing viewpoint: after reading and reacting to a strong opposing comment/post, will the user click on or search for more opposing viewpoints, or will he return to reading like-minded news?

In the second study, Natalie examines the effects of punishment (flagging a comment) and incentives (recommendations and top news picks) on partisanship. Swearing/profanity increases the chances of a comment being rejected or flagged and decreases its chances of being selected as an NYT Pick. On the other hand, partisanship and incivility also increase recommendations. Natalie suspects that this leads newsroom moderators to treat partisan incivility differently: while a comment containing profanity and swear words might get rejected outright, content containing partisan incivility might get accepted. Does that mean one should start moderating uncivil comments extensively? Won't that make the comment section bland and highly uninteresting, and probably decrease user engagement? It would be interesting to see how user behavior varies with varying strictness of moderation.

Can constructive comments always promote user engagement? Will a platform where everyone is nice and right attract diverse opinions in the first place? I believe a little incivility can add essential flavor to discussions. Perhaps we can measure the extent of incivility that still promotes healthy discussion and debate, and develop automated tools that detect and predict when uncivil comments will deteriorate the quality of a discussion and provoke heated exchanges among readers. Detecting uncivil comments that do not contain swear words is even more challenging.
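As a very rough sketch of that last point (the markers below are crude heuristics of my own; a real tool would need a trained classifier), non-profane incivility could be approximated from stylistic cues rather than a swear-word list:

```python
import re

PROFANITY = {"damn", "hell"}  # placeholder list; a real system would use a curated lexicon

def incivility_signals(comment: str) -> dict:
    """Crude stylistic cues for incivility that do not rely on profanity."""
    words = comment.split()
    return {
        "has_profanity": any(w.lower().strip(".,!?") in PROFANITY for w in words),
        "shouting": sum(w.isupper() and len(w) > 2 for w in words) / max(len(words), 1),
        "accusatory_you": len(re.findall(r"\byou(?:'re| are)?\s+(?:an?\s+)?\w+", comment.lower())),
        "exclamations": comment.count("!"),
    }

print(incivility_signals("YOU ARE a clueless hack!!! Go read something for once."))
```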


Reflection #9 – [09/27] – [Viral Pasad]

Natalie Jomini Stroud. “Partisanship and the Search for Engaging News”

Dr. Stroud's work motivates me to think about the following (no pun intended) research questions and solutions.

Selective exposure and selective judgement can be hacked and are very susceptible to attack by sock-puppets or bots. Inadvertent selective exposure by humans was something social media platforms exploited to get more traction and engagement, but people also try to understand or reverse-engineer the feed-ranking algorithm and hack the system. If everyone knows that a generic social media or online news site mostly shows people only what they find reasonable or agree with, then this becomes a very powerful tool for marketers and sock-puppets (created for ulterior motives) to blindly put out content they would like their (ideological) 'followers' to be 'immersed' in. This is a Black Mirror episode waiting to happen, bound to create echo chambers and incompletely informed opinions. And not only incompletely informed opinions: it also breeds misinformation, since users who already agree with an ideology are unlikely to pick a post apart in search of a shady clause or outright incorrect information.

This is what was employed on Facebook via dark posts in 2016, where a 'follower' would see certain posts sent out by their influencers, but if those posts were forwarded to 'non-followers', they would simply be unable to open the links at all (because it is expected that a non-follower would scrutinize that very post).


Thus, I would like to consider a design project/study in two parts, hoping to disrupt Selective Exposure and Selective Judgement. The two parts are as follows:


I] Algorithmic Design/Audit – How the posts are selected

This deals not with how users see their posts visually, but with how user feeds are curated to show certain kinds of posts more than others. With a three-phase design approach, we can attempt to understand the algorithmic exploitation of the selectivity process, and user bias toward or against feeds that do not follow, or that over-exploit, the selectivity process users inherently employ.

The users can be exposed to three kinds of feeds,

  • one, heavily exploiting the selectivity bias (almost creating an echo chamber)
  • two, a neutral feed, equivocating and displaying opposite opinions with equal weightage.
  • three, a hybrid custom feed, which shows the user agreeable, opinionated posts, but also a warning/disclaimer that "this is an echo chamber" and a way to reach other opinions as well, such as tabs or sliders saying "this is what others are saying".

With the third feed, we can also hope to learn behavioural tendencies of users when they learn that they are only seeing one side of the coin.
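A minimal sketch of the three feed conditions (the agreement scores, feed size, and mixing rules are assumptions for illustration, not part of the proposed study):

```python
import random
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    agreement: float  # assumed pre-computed: how much the post agrees with this user, in [0, 1]

def build_feed(posts, condition: str, size: int = 20):
    """Return a feed under one of the three experimental conditions."""
    ranked = sorted(posts, key=lambda p: p.agreement, reverse=True)
    if condition == "echo_chamber":            # feed 1: heavily exploit selectivity
        return ranked[:size]
    if condition == "neutral":                 # feed 2: equal weight to both sides
        agree = [p for p in ranked if p.agreement >= 0.5]
        oppose = [p for p in ranked if p.agreement < 0.5]
        feed = agree[: size // 2] + oppose[: size // 2]
        random.shuffle(feed)
        return feed
    if condition == "hybrid_with_disclaimer":  # feed 3: agreeable feed plus a warning
        print("Disclaimer: this is an echo chamber. See 'what others are saying' for more.")
        return ranked[:size]
    raise ValueError(f"unknown condition: {condition}")
```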


II] Feed Design – How the posts are shown (visually)

This deals with how posts are visually displayed in user feeds: an approach that tries to create an equivocating feed, one that puts the user in charge of his/her own opinion by showing just the facts.

Often, news conforming to the majority opinion has far more posts than news conforming to the minority opinion, and thus an inadvertent echo chamber is created. A news aggregator could be employed to group the majority and minority posts in the feed. Selective exposure will drive the user to peek at the agreeable opinion, but selective judgement will drive the user to scrutinize and pick apart the less agreeable opinion. This, I believe, can help disrupt selective exposure and selective judgement to a certain extent, (hopefully) creating a community of well-informed users.


Reflection #9 – [09/27] – [Dhruva Sahasrabudhe]

Video-

Partisanship and the search for engaging news – Natalie Stroud.

Reflection-

I found this video particularly interesting, since just last week my project proposal submission was related to selective exposure in online spaces. The idea I had to tackle this problem, especially on platforms with automatically recommended content, was to create an option for a sort of anti-recommender system, which clusters users into groups based on their likes and dislikes, and then serves up recommendations that users in the completely opposite cluster would prefer. This would serve to make people aware of the motivations, arguments, and sources of the "opposite" side. It could be used not just for politics, but also for a book platform like Goodreads, or even a music platform, to help people be exposed to different types of music.
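A minimal sketch of the anti-recommender idea, assuming a simple user-item rating matrix and k-means clustering (the cluster count, distance measure, and "opposite cluster" rule are my assumptions, not a worked-out design):

```python
import numpy as np
from sklearn.cluster import KMeans

def anti_recommend(ratings: np.ndarray, user: int, n_clusters: int = 2, top_k: int = 5):
    """Recommend items preferred by the cluster farthest from the user's own cluster.

    ratings: users x items matrix of likes (+1), dislikes (-1), unseen (0).
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(ratings)
    own = km.labels_[user]
    # The "opposite" cluster: the centroid farthest from the user's own centroid.
    dists = np.linalg.norm(km.cluster_centers_ - km.cluster_centers_[own], axis=1)
    opposite = int(np.argmax(dists))
    # Average preference of the opposite cluster, restricted to items the user hasn't seen.
    opposite_pref = ratings[km.labels_ == opposite].mean(axis=0)
    unseen = ratings[user] == 0
    candidates = np.where(unseen, opposite_pref, -np.inf)
    return np.argsort(candidates)[::-1][:top_k]
```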

It would also be interesting to explore in more detail the effects of such a system on users: does it incite empathy, anger, curiosity, or indifference? Does it actually change opinions, or does it make people think of counterarguments that support their existing beliefs? (This was dealt with in last week's papers on selective exposure.)

Besides analyzing the partisan influences on how people write and interact with comments, it would also be interesting to break the two "sides" down further into their constituents and examine how the subcategories of these two groups engage with the comments section. For example, how does the interaction vary on both sides across minorities, men, women, young, old, etc.?

In my opinion, the two keys to understanding selective exposure, and improving how users engage with other users with opposite beliefs are as follows:

  1. Understanding the cases where users are exposed to counterattitudinal information, when and why they actively seek it out, and how they respond to it.
  2. Designing systems which encourage users to (i) be more accepting of different viewpoints, and (ii) critically examine their own viewpoints.

Both of these are, of course, addressed in depth in the video. I find that these two areas have huge scope for interesting research ideas: more data-analysis driven for point 1, and more design driven for point 2.

For example, a system could be designed which takes data from extensions like the Balancer (referred to in the "bursting your filter bubble" paper from last week), or any similar browser extension that categorizes the political content a person views, and analyzes that data to see if a "red" person ever binges on "blue" content for an extended period of time, or vice versa, identifying any triggers that may have caused this to happen. Historical data could also be collected to find out how these users "used" the data they gathered from this binge of counterattitudinal content. That is, did they use it as ammunition for their next comments supporting their own side? Were they convinced by it, and did they slowly start browsing more counterattitudinal content?
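A toy sketch of the binge-detection step, assuming each page view is already labelled red or blue by an extension like the Balancer (the window length and threshold are arbitrary placeholders):

```python
def find_counterattitudinal_binges(views, own_side: str, window: int = 20, threshold: float = 0.7):
    """Return start indices where a user's recent viewing is dominated by the other side.

    views: chronological list of 'red'/'blue' labels for pages the user visited.
    """
    other = "blue" if own_side == "red" else "red"
    binges = []
    for i in range(len(views) - window + 1):
        chunk = views[i : i + window]
        if chunk.count(other) / window >= threshold:
            binges.append(i)
    return binges

history = ["red"] * 30 + ["blue"] * 25 + ["red"] * 10
print(find_counterattitudinal_binges(history, own_side="red"))
```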

Similarly, systems could be designed which transform a webpage to "nice-ify" it. This could be a browser extension that displays little messages at the top of a web page reminding users to be nice or respectful. It could also detect uncivil language and display a message asking the user to reconsider. This ties into the discussion about the effectiveness of priming users to behave in certain ways.

Systems could also be designed to humanize people on chat forums by adding some (user-decided) facts about them to emphasize their personhood, without revealing their identity. It is a lot harder to insult Sarah, who has a 6 month old kitten named Snowball and likes roller blading, than it is to insult the user sarah_1996. This would also bridge partisan gaps by emphasizing that the other side consists of humans with identities beyond their political beliefs.


Video Reflection #9 – [09/27] – [Lindah Kotut]

Natalie Jomini Stroud. “Partisanship and the Search for Engaging News”

We can take two lessons from Stroud's work and approach:

  • Using a sociological bent to study how people make decisions, how these decisions are reinforced, and therefore how they can be changed.
  • The impact of tone, opposing viewpoints, engagement by journalists, and interventions by moderators (a carrot-and-stick approach) on the quality of discourse online.

And use them to consider a "news diet" which, in conjunction with the previous reading on Resnick's approach to showing the leanings of one's news, leads me to propose a design featuring a nutritional label.

The design considerations are, or should be, in line with the hypotheses and concerns laid out in Stroud's talk, that is:

  1. Something that does not pander to people's predilections. If you confirm that I am a conservative, I am proud to wear that label regardless of whether that is a good thing or not.
  2. The design should not try to change a person's opinion:
    a) It is dangerous and may backfire
    b) First Amendment – everyone has a right to an opinion. Civility != opinions we agree with
    c) The entire moderation structure is subjective
  3. It should nudge towards the willingness to "listen" to the other team
  4. Nudge the opposing side to contribute in a "healthy", constructive way
  5. Points 3 and 4 form a necessary and mutually supportive loop.

We can encapsulate these ideas in a nutritional label – a mechanism whose function a user knows and understands at a glance. This familiarity has been appreciated and used in previous work to classify online documents, articulate rankings, and reveal privacy considerations.

Since we do not need to explain the function of the label, we can concentrate on providing pertinent information that the user can appreciate at a glance, and the label can also feature buttons (as recommended by Stroud) to nudge users towards a certain behavior.

The design is included below (the written notes on it are transcribed in the PS):


We can use different measurements to "nudge" the user towards civil behavior and a tendency to view more diverse news sources. An additional function would be to add a thumbs up/down at the end of each bar, denoting at a glance how good the user's "diet" is.

PS: A transcription of the written notes

Ingredient Facts

  • Your diet consists of mostly right-leaning news sources, but also a number of mainstream ones. This is good, as it provides you a balanced view of the news
  • Your language: While it contains little profanity, it contains language that is considered uncivil. This impacts:
    • The likelihood of your comment being featured
    • The respect other readers accord you
    • The likelihood of readers with opposing viewpoints reading your comments.

Notes

  • Data is collected from user’s comment archive e.g. Disqus/NYT
  • “Balanced diet” depends on the bent of the news source (right/left/mainstream), together with the variability of sources
  • “Respect” is a factor of “flagged comments” and “recommended”
  • Commenter’s audience: How do they lean?
  • Civility and Profanity are based on textual features
  • “Featured likelihood” can be considered a reward, something to cement user’s respect i.e. the carrot.
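A small sketch of how such a label might be assembled from the metrics listed in the notes (the metric names, scales, and thresholds here are placeholders of my own, not the design in the figure):

```python
def ingredient_facts(diet_balance: float, profanity: float, incivility: float) -> str:
    """Render a text-only "nutritional label" from three scores in [0, 1].

    diet_balance: 1.0 = perfectly balanced news diet, 0.0 = single-leaning diet.
    profanity / incivility: fraction of the user's comments flagged for each.
    """
    def bar(score: float, width: int = 20) -> str:
        filled = int(round(score * width))
        verdict = "+" if score >= 0.5 else "-"   # stand-in for the thumbs up/down
        return "#" * filled + "." * (width - filled) + f" [{verdict}]"

    lines = [
        "Ingredient Facts",
        f"  Diet balance : {bar(diet_balance)}",
        f"  Low profanity: {bar(1 - profanity)}",
        f"  Civility     : {bar(1 - incivility)}",
    ]
    if incivility > 0.3:
        lines.append("  Note: uncivil language lowers the chance of your comment being featured.")
    return "\n".join(lines)

print(ingredient_facts(diet_balance=0.7, profanity=0.05, incivility=0.4))
```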


Video Reflection #9 – [09/27] – [Shruti Phadke]


The effect of cognitive biases on information selection and processing is a well-established phenomenon. According to Eli Pariser, the person who coined the term “filter bubble”:

“A world constructed from the familiar is a world in which there’s nothing to learn … (since there is) invisible auto propaganda, indoctrinating us with our own ideas.”

Some previous research has looked at how to nudge readers toward more cross-cutting content. For example, ConsiderIt exposes users to the choices made by others to provoke perspective re-evaluation. Similarly, OpinionSpace encourages users to seek out diversity through a graphical reflection of their own content. Beyond this, the formation of diverse views mainly depends on "serendipitous discovery", the conscience of mainstream media, or the user's willingness to accept opposing content. Dr. Stroud mentions that selectivity can be influenced by forced exposure, prompting for accuracy, or informing users of their filter bubble. But is mere "exposure" to anti-attitudinal content enough to promote real diversity? Does it matter how the information is framed? Jonathan Haidt's moral foundations theory interests me most in this regard. Haidt and colleagues found that liberals are more sensitive to care and fairness, while conservatives place more emphasis on loyalty, authority, and sanctity. This raises the question of whether a conservative reader can be encouraged to read a left-leaning news story by using words associated with conservative moral foundations. Similarly, will a liberal entertain conservative arguments simply if they highlight fairness and empathy? Stroud's findings in the second study she presents strengthen the argument: she reports that newsrooms encourage controversy, and that partisan comments attract more incivility. This might make newsrooms a discouraging place to get exposed to cross-cutting content, especially if it is associated with unfavorable framing.

This can form the basis for an experiment that studies how the framing of information affects the acceptance of cross-cutting content. The research could be done in collaboration with linguistics experts who can attest that the various framings of the same information are consistent with the moral foundations of each user group. Participants can be self-identified liberals and conservatives who are not otherwise exposed to differently polarized news. A control group can consist of users who are exposed to cross-cutting content without the news/information being reframed. There can be two treatment groups with the following treatments:
1. Exposure to cross-cutting content with conservative moral framing
2. Exposure to cross-cutting content with liberal moral framing

Finally, the effect can be observed in terms of how likely conservatives/liberals are to select cross-cutting information that is wrapped in language corresponding to a specific moral foundation. Further, instead of limiting the study to conservative/liberal moral foundations, the experiment could explore the effect of all moral foundation dimensions (care, fairness, loyalty, authority, sanctity).
This type of study can inform what makes cross-cutting news more appealing to specific users and how it can promote diverse ideologies.
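A minimal sketch of the assignment and outcome measure for such a study (the condition names and the selection metric are placeholders; the actual design would need power analysis and validated framings):

```python
import random
from collections import defaultdict

CONDITIONS = ["control", "conservative_framing", "liberal_framing"]

def assign_participants(participant_ids, seed: int = 42):
    """Randomly assign each participant to one of the three conditions."""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

def selection_rate_by_condition(cross_cutting_rates, assignment):
    """Average cross-cutting selection rate (items clicked / items shown) per condition.

    cross_cutting_rates: dict mapping participant id -> observed selection rate in [0, 1].
    """
    buckets = defaultdict(list)
    for pid, rate in cross_cutting_rates.items():
        buckets[assignment[pid]].append(rate)
    return {cond: sum(rates) / len(rates) for cond, rates in buckets.items() if rates}
```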



Video Reflection #9 – [09/27] – [Subil Abraham]

Dr. Stroud's talk on her research on partisanship and its effects in the comments was an enlightening look at how news organizations need to consider their incentives and the design of their comment systems. One thing Dr. Stroud talked about during the NY Times study was the business incentives of the news organizations themselves, something we as a class have not discussed in detail (besides a few mentions here and there). We have focused mainly on the user side of things, and I think it is important to consider how one could incentivize organizations to take part in solving these problems, because right now they see the partisanship in their comment sections as good for business. More engagement means you can serve more ads to more people and bring in more money. You could spin conspiracy theories that engagement and the revenue it generates are why Reddit doesn't ban controversial subreddits unless they attract a lot of negative media attention, but that is a rabbit hole we don't want to dive into.

I would agree with Dr. Stroud that severe partisanship is an obviously bad thing, but I don’t think that enforcing civility in every comment conversation is the right way to go. Humans are passionate, emotional creatures, prone to wild gesticulation to try and get their point across. People will blow their tops when talking about a topic they feel strongly about, especially when arguing with someone who has an opposing view. And like Dr. Stroud said, the idea of civility is subjective. What is stopping an organization from morphing this idea of civility over time into something that means “anything that opposes the organization’s views”? Remember that no great change has ever been brought about by people being civil. Even Gandhi, the icon of peace, wasn’t civil. His movements were peaceful, yes. But they were disruptive (i.e. most certainly not civil) which is why they were so effective and popular. The goal should be to incentivize people to listen to each other and help them find common ground, not to try and enforce civility which will at best create a facade of good vibes while not actually producing any understanding between the two groups.

Let's speculate about a discussion system that gives users the ability to listen to and understand the opposing side, while still allowing for passionate discussion. The first thing we would like is for users to declare their allegiances by setting where they stand politically (on the left or the right) on a sliding scale that would be visible when they comment. This lets other users know where a commenter stands and keep that in mind while engaging with them. For now, let us assume that we don't have to deal with trolls and that everyone sets their position on the scale honestly. Now, when a comment (or reply) is posted, other users can vote on how well articulated and well argued the post is (we are not using 'like' and 'recommend' here because, as Dr. Stroud said, the choice of wording is important and leads to different results). If someone on the right writes a well-argued reply that refutes a comment written by someone on the left, and this is acknowledged by other people leaving votes on how well written and articulated the reply is (with more weight given to votes from people on the other side of the scale), it could serve as a point for people on the left to think about, even if they are ideologically opposed to it.

If the comments just devolve into name-calling and general rudeness, then nobody gets votes for being well written and articulated. But this system could allow passionate discussions that do not necessarily fall into the bucket of "civility", yet are still found to be valuable, to be voted up and brought to the notice of the people who oppose them. Seeing votes from people on their own side will provide a strong incentive to try to understand a point that they might otherwise oppose and not think too deeply about.
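A small sketch of the cross-side weighting (the weighting function and the -1..+1 scale are my own guesses at how such a score might work, not a specified design):

```python
def articulation_score(comment_stance: float, votes) -> float:
    """Weight 'well argued' votes more heavily when they come from across the aisle.

    comment_stance: author's position on a -1 (left) .. +1 (right) scale.
    votes: list of (voter_stance, value) pairs, value being 1 for 'well argued'.
    """
    score = 0.0
    for voter_stance, value in votes:
        distance = abs(voter_stance - comment_stance) / 2  # 0 = same side, 1 = opposite end
        weight = 1.0 + distance                             # cross-side votes count up to 2x
        score += weight * value
    return score

# A right-leaning comment (+0.8) voted up by two left-leaning readers and one right-leaning one.
print(articulation_score(0.8, [(-0.9, 1), (-0.5, 1), (0.7, 1)]))
```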


Reflection #8 – [09/25] – [Karim Youssef]

Nowadays, thanks to the abundance and accessibility of online information sources, people have access to an overwhelmingly wide range of information. Since the early days of this online information explosion, many researchers have been concerned about how the internet would affect and shape individuals' exposure to information; in other words, how the selective exposure theory would manifest in online news and information sources.

One of the solid studies in this field was presented by R. Kelly Garrett in his work "Echo chambers online?: Politically motivated selective exposure among Internet news users". This work analyzes the factors that affect a user's selection of an online news source, as well as the time the user spends reading the selected source. The results of this study tend to reinforce the author's initial hypotheses, which can be summarized as follows: the motivation for a user to favor a news item that matches his opinion over one that challenges it is to seek opinion reinforcement rather than to avoid an opinion challenge.

Although the author notes that these results are somewhat reassuring with respect to worries that the internet contributes to creating "echo chambers", there is an important missing piece. This paper studies the effect of the internet as a resource that gives users abundant choices and control over what they read; the fear was that users' selectivity might directly create the echo chamber effect. The missing piece is the contribution of the technology itself to this effect through personalization techniques. Although the study shows that users are not especially likely to avoid an opinion-challenging information item in itself, the continued tendency to favor opinion-reinforcing information in the presence of these personalization techniques could lead to a misperceived dominance of their own opinions and a gradual isolation from opposing ideas.

The effect of selective exposure together with online recommendation and personalization technologies was the concern of Paul Resnick et al. in their work "Bursting Your (Filter) Bubble: Strategies for Promoting Diverse Exposure". In this work, they survey existing solutions that aim to encourage exposure to diverse and cross-cutting content. The surveyed solutions include user interface designs that encourage a user to read opposing opinions or that show a user how balanced his reading is.

Despite the attractiveness and creativity of the solutions proposed to promote exposure to diversity, it is necessary to keep moving toward a comprehensive understanding of why these "filter bubbles" exist. R. Kelly Garrett's study, as well as Eytan Bakshy et al.'s work "Exposure to ideologically diverse news and opinion on Facebook", suggests that the choices of individuals play the most significant role in shaping their online exposure. If we accept this, an important question remains: do hidden personalization algorithms by themselves further limit diversified exposure, or are they only a reflection of the individual's behavior? To answer these questions, and to achieve a complete understanding and enhancement of online exposure, we need to connect the dots between research on selective exposure as human nature, auditing of online personalization algorithms, and techniques to promote more diversified online exposure.

Understanding individuals' motivations, and studying their role as well as that of other effects in driving online recommendation algorithms, could lead to the best strategy for developing a more diversity-promoting online world.

