Video Reflection #9 – [09/27] – [Lindah Kotut]

Natalie Jomini Stroud. “Partisanship and the Search for Engaging News”

We can take two lessons from Stroud’s work and approach:

  • Using a sociological bent in studying how people make decisions, how these decisions are reinforced, and therefore how they can be changed.
  • The impact of tone, opposing viewpoints, engagement by journalists, and interventions by moderators (carrot + stick approach) on the quality of discourse online.

And use them to consider a “news diet” that, in conjunction with the previous reading on Resnick’s approach to showing news bent, informs a design featuring a nutritional label.

The design considerations are/should be in line with the hypotheses/concerns laid out in Stroud’s talk, that is:

  1. Something that does not pander to people’s predilections. If you confirm that I am a conservative, I am proud to wear that label regardless of whether that is a good thing or not.
  2. The design should not try to change a person’s opinion:
    a) It is dangerous and may backfire
    b) The First Amendment prescribes that everyone has a right to an opinion. Civility != opinions we agree with
    c) The entire moderation structure is subjective
  3. It should nudge towards the willingness to “listen” to the other team
  4. Nudge the opposing side to contribute in a “healthy”, constructive way
  5. Points 3 and 4 form a necessary, mutually supportive loop.

We can encapsulate these ideas in a nutritional label – a mechanism whose function a user knows/understands at a glance. This familiarity is appreciated, and has been used in previous work to classify online documents, articulate rankings, and reveal privacy considerations.

As we do not need to explain the function of the labels, we are able to concentrate on providing pertinent information to the user that can be appreciated at a glance, and that can also feature buttons (as recommended by Stroud) to nudge users towards a certain behavior.

The design is included below:

 

We can use different measurements to “nudge” the user towards civil behavior and a tendency to view more diverse news sources. An additional function would be to add thumbs up/down at the end of each bar denoting at a glance how good the user’s “diet” is.

PS: A transcription of the written notes

Ingredient Facts

  • Your diet consists of mostly right-leaning news sources, but also a number of mainstream ones. This is good, as it provides you with a balanced view of the news
  • Your language: While it contains little profanity, it contains language that is considered uncivil. This impacts:
    • The likelihood of your comment being featured
    • The respect other readers accord you
    • The likelihood of readers with opposing viewpoints reading your comments.

Notes

  • Data is collected from user’s comment archive e.g. Disqus/NYT
  • “Balanced diet” depends on the bent of the news source: right/left/mainstream, together with the variability of sources
  • “Respect” is a factor of “flagged comments” and “recommended”
  • Commenter’s audience: How do they lean?
  • Civility and Profanity are based on textual features
  • “Featured likelihood” can be considered a reward, something to cement user’s respect i.e. the carrot.
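As a minimal sketch of how the label’s metrics might be computed, the following toy functions combine the notes above. The source-leaning categories, the scoring formulas, and the flagged/recommended counts are my own illustrative assumptions, not part of Stroud’s work:

```python
from collections import Counter

def diet_balance(source_leanings):
    """Score in [0, 1]: how evenly reads spread across left/mainstream/right.

    1.0 = perfectly balanced diet, 0.0 = everything from one bent.
    """
    counts = Counter(source_leanings)
    total = sum(counts.values())
    shares = [counts.get(bent, 0) / total for bent in ("left", "mainstream", "right")]
    # Deviation from a uniform 1/3 split, scaled so one-sided diets score 0.
    return 1 - sum(abs(s - 1 / 3) for s in shares) / (4 / 3)

def respect_score(flagged, recommended):
    """'Respect' as a factor of flagged vs. recommended comments."""
    total = flagged + recommended
    return recommended / total if total else 0.5  # neutral default

reads = ["right", "right", "mainstream", "right", "mainstream"]
print(f"Diet balance: {diet_balance(reads):.2f}")
print(f"Respect: {respect_score(flagged=2, recommended=18):.2f}")
```

Scores like these could then drive the bars and the thumbs up/down at a glance, with thresholds chosen to nudge rather than shame.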

Read More

Video Reflection #9 – [09/27] – [Shruti Phadke]

The effect of cognitive biases on information selection and processing is a well-established phenomenon. According to Eli Pariser, the person who coined the term “filter bubble”:

“A world constructed from the familiar is a world in which there’s nothing to learn … (since there is) invisible auto propaganda, indoctrinating us with our own ideas.”

Some previous research has gone into how to nudge the reader towards more cross-cutting content. For example, ConsiderIt exposes users to the choices made by others to provoke perspective re-evaluation. Similarly, OpinionSpace encourages users to seek out diversity through a graphical reflection of their own content. Beyond this, the formation of diverse views depends mainly on “serendipitous discovery”, the conscience of mainstream media, or the user’s willingness to accept opposing content. Dr. Stroud mentions that selectivity can be influenced by forced exposure, prompting for accuracy, or informing users of their filter bubble.

But is mere “exposure” to anti-attitudinal content enough to promote real diversity? Does it matter how the information is framed? Jonathan Haidt’s moral foundations theory interests me the most in this regard. Haidt and colleagues found that liberals are more sensitive to care and fairness, while conservatives place more emphasis on loyalty, authority, and sanctity. This raises the question of whether a conservative reader can be encouraged to read left-leaning news by using words associated with conservative moral foundations. Similarly, will a liberal entertain conservative thoughts simply if they highlight fairness and empathy? Stroud’s findings in the second study she presents strengthen the argument. She reports observing that newsrooms encourage controversy, and that partisan comments attract more incivility. This might make newsrooms a discouraging place to get exposed to cross-cutting content, especially if it is associated with unfavorable framing.

This can form the basis for an experiment that studies how the framing of information affects the acceptance of cross-cutting content. The research could be done in collaboration with linguistics experts who can attest to various framings of the same information, each consistent with the moral foundations of a user group. Participants can be self-identified liberals and conservatives who are not already exposed to differently polarized news. A control group can consist of users who are exposed to cross-cutting content without any reframing of the news/information. There can be two treatment groups with the following treatments:
1. Exposure to cross-cutting content with conservative moral framing
2. Exposure to cross-cutting content with liberal moral framing

Finally, the effect can be observed in terms of how likely conservatives/liberals are to select cross-cutting information wrapped in language corresponding to a specific moral foundation. Further, instead of limiting the study to conservative/liberal moral foundations, the experiment can also explore the effect of all moral foundation dimensions (care, fairness, loyalty, authority, sanctity).
This type of study can inform what makes cross-cutting news more appealing to specific users and how it can promote diverse ideologies.
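The assignment and outcome-measurement steps of the proposed design could be sketched as follows. The condition names, the reproducible random assignment, and the click log are illustrative assumptions, not a specified protocol:

```python
import random

# Control plus the two moral-framing treatments described above.
CONDITIONS = ["control", "conservative_framing", "liberal_framing"]

def assign(participants, seed=42):
    """Randomly (but reproducibly, via the seed) assign participants to conditions."""
    rng = random.Random(seed)
    return {p: rng.choice(CONDITIONS) for p in participants}

def selection_rates(assignments, selected):
    """Per-condition fraction of participants who clicked the cross-cutting article.

    `selected` maps participant -> bool (did they select the article?).
    """
    rates = {}
    for cond in CONDITIONS:
        group = [p for p, c in assignments.items() if c == cond]
        clicks = sum(1 for p in group if selected.get(p))
        rates[cond] = clicks / len(group) if group else 0.0
    return rates
```

Comparing `selection_rates` across conditions (with a proper significance test on real data) would show whether moral framing shifts the acceptance of cross-cutting content.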

 

Read More

Video Reflection #9 – [09/27] – [Subil Abraham]

Dr. Stroud’s talk on her research on partisanship and its effects in the comments was an enlightening look at how news organizations need to consider their incentives and the design of their comment systems. One thing that Dr. Stroud talked about during the NY Times study was considering the business incentives of the news organizations themselves, something we as a class have not discussed in detail (besides a few mentions here and there). We have been focused mainly on the user side of things, and I think it is important to consider how one could incentivize the organization to take part in solving the problems, because right now they see the partisanship in their comment section as good for business. More engagement means that you can serve more ads to more people and bring in more money. One could theorize that engagement and the revenue it generates are why Reddit doesn’t ban controversial subreddits unless they attract a lot of negative media attention, but that is a rabbit hole we don’t want to dive into.

I would agree with Dr. Stroud that severe partisanship is an obviously bad thing, but I don’t think that enforcing civility in every comment conversation is the right way to go. Humans are passionate, emotional creatures, prone to wild gesticulation to try and get their point across. People will blow their tops when talking about a topic they feel strongly about, especially when arguing with someone who has an opposing view. And like Dr. Stroud said, the idea of civility is subjective. What is stopping an organization from morphing this idea of civility over time into something that means “anything that opposes the organization’s views”? Remember that no great change has ever been brought about by people being civil. Even Gandhi, the icon of peace, wasn’t civil. His movements were peaceful, yes. But they were disruptive (i.e. most certainly not civil) which is why they were so effective and popular. The goal should be to incentivize people to listen to each other and help them find common ground, not to try and enforce civility which will at best create a facade of good vibes while not actually producing any understanding between the two groups.

Let’s try to speculate what a discussion system that gives users the ability to listen to and understand the opposing side (while allowing for passionate discussions) might look like. The first thing we would like is for users to declare their allegiances by setting where they stand politically (on the left or the right) on a sliding scale that would be visible when they comment. This lets other users know where this user stands and keep that in mind while engaging them. For now, let us assume that we don’t have to deal with trolls and that everyone sets their position on the scale honestly. Now, when a comment (or reply) is posted, other users could vote on how well articulated and well argued the post is (we are not using ‘like’ and ‘recommend’ here because, as Dr. Stroud said, the choice of wording is important and leads to different results). If someone on the right made a well-written reply that refutes a comment written by someone on the left, and this is acknowledged by other people leaving votes on how well written and articulated the reply is (giving more weight to votes from people on the other side of the scale), it could serve as a point for people on the left to think about, even if they are ideologically opposed to it.

If the comments just devolve into name-calling and general rudeness, then nobody is getting votes on how well written and articulated their posts are. But this system could allow passionate discussions that do not necessarily fall into the bucket of “civility”, but are still found to be valuable, to be voted up and brought to the notice of the people who oppose them. Seeing votes from people on their own side will provide a strong incentive to try to understand a point that they might otherwise be opposed to and not think too deeply about.
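The cross-aisle weighting idea above can be sketched in a few lines. The -1 (far left) to +1 (far right) scale and the specific weight formula are assumptions I am making for illustration, not a tested design:

```python
def vote_weight(commenter_pos, voter_pos):
    """Weight in [1, 2]: 1 for a like-minded voter, 2 for the opposite extreme.

    Positions are on a -1 (far left) to +1 (far right) sliding scale.
    """
    return 1 + abs(commenter_pos - voter_pos) / 2

def comment_score(commenter_pos, voter_positions):
    """Total 'well-argued' score for a comment, given its voters' positions."""
    return sum(vote_weight(commenter_pos, v) for v in voter_positions)

# A right-leaning comment (+0.8) upvoted by two left-leaning readers can
# outscore the same comment upvoted by three like-minded readers:
cross_aisle = comment_score(0.8, [-0.9, -0.7])
like_minded = comment_score(0.8, [0.8, 0.7, 0.9])
```

The design choice here is that persuading the other side is worth more than rallying your own, which is exactly the incentive the paragraph above is after.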

Read More

Reflection #8 – [09/25] – [Karim Youssef]

Nowadays, thanks to the abundance and accessibility of online information sources, people have access to an overwhelmingly wide range of information. Since the early days of this online information explosion, many researchers have been concerned about how the internet would affect and shape the individual’s exposure to information. In other words, how selective exposure theory would manifest in online news and information sources.

One of the notable studies in this field was presented by R. Kelly Garrett in his work Echo chambers online?: Politically motivated selective exposure among Internet news users. This work analyses factors that affect a user’s selection of an online news source, as well as the time the user will spend reading the selected source. The results of this study tend to reinforce the author’s initial hypotheses, which could be summarized as follows: the motivation for a user to favor a news item that matches his opinion over one that challenges it is to seek opinion reinforcement rather than to avoid opinion challenge.

Although the author mentions that these results are somewhat reassuring in terms of the worry that the internet contributes to creating “Echo Chambers”, there is an important missing piece. This paper studies the effect of the internet as a resource that gives users abundant choices and control over what they read. The fear here was that users’ selectivity may directly create the Echo Chamber effect. The missing piece is the contribution of the technology itself to creating this effect through personalization techniques. Although the study shows that users are not especially likely to avoid opinion-challenging information in itself, the continuous tendency to favor opinion-reinforcing information in the presence of these personalization techniques could lead to a misperceived dominance of their own opinions and a gradual isolation from opposing ideas.

The effect of selective exposure combined with online recommendation and personalization technologies was of concern to Paul Resnick et al., as presented in their work Bursting Your (Filter) Bubble: Strategies for Promoting Diverse Exposure. In this work, they survey existing solutions that aim to encourage exposure to diverse and cross-cutting content. The surveyed solutions include user interface designs that encourage a user to read opposing opinions, or that show a user how balanced his reading is.

Despite the attractiveness and creativity of the solutions proposed to promote exposure to diversity, it is necessary to keep moving towards a comprehensive understanding of why these “Filter Bubbles” exist. R. Kelly Garrett’s study, as well as Eytan Bakshy et al.’s work Exposure to ideologically diverse news and opinion on Facebook, suggests that the choices of individuals play the most significant role in shaping their online exposure. Supposing we accept this finding, an important question is: do hidden personalization algorithms by themselves further limit diverse exposure, or are they only a reflection of the individual’s behavior? To answer these questions, and to reach a complete understanding and enhancement of online exposure, we need to connect some dots from the research on selective exposure as human nature, the auditing of online personalization algorithms, and techniques to promote more diversified online exposure.

Understanding an individual’s motivations, studying their role, as well as that of other effects in driving the online recommendation algorithms, could lead to the best strategy to develop a more diversity-promoting online world.

 

Read More

Reflection #8 – [09/25] – [Nitin Nair]

  1. Garrett, R. Kelly (2009) – “Echo chambers online? Politically motivated selective exposure among Internet news users”
  2. Resnick, Paul (2013) – “Bursting Your (Filter) Bubble: Strategies for Promoting Diverse Exposure” – Proceedings of CSCW ’13 Companion, Feb. 2013.

One of the essential elements of a democracy is the presence of a free press. This right to unrestricted information, albeit with some exceptions, has given us, the people, the ability to make informed decisions. But in recent years, such news has been delivered through channels which aren’t fair, through the use of personalized recommendation systems. The papers discussed below try to answer pressing questions from this domain.

In [1], the author looks into selective exposure among news readers and tries to see whether their choices are colored by their political opinions, through the use of a web-administered behavior-tracking study. In order to gain better insights, the author lays out the five hypotheses listed below.

  1. The more opinion-reinforcing information an individual expects a news story to contain, the more likely he or she is to look at it.
  2. The more opinion-reinforcing information a news story contains, the more time an individual will spend viewing it.
  3. The more opinion-challenging information the reader expects a news story to contain, the less likely he or she is to look at it.
  4. The influence of opinion-challenging information on the decision to look at a news story will be smaller than the influence of opinion-reinforcing information.
  5. The more opinion-challenging information a news story contains, the more time the individual will spend viewing it.

Given how dated the publication is, I wonder if the conclusions of the paper are still relevant. The major channel through which news is delivered has shifted to social media. Here the options are limited, given that content is prefiltered and delivered only if the chance of one clicking it is high. Also, the content you are exposed to depends on your “network”. Given these features of the mode of delivery, the author’s conclusions, I believe, would definitely be challenged.

Also, one might even question the validity of the author’s claims due to the lack of diversity of the sample group and how it was selected. Given how exposure to the survey varied across different news sources, the group of people who were willing to participate may not have been truly representative of their groups.

It would have been an interesting experiment if the author had chosen a wider variety of groups from diverse political backgrounds, analyzed the behaviour of each group, and compared them with each other.

Another experiment that would have been interesting to conduct would be to see how the behaviour of the user group changes when reading about a particular topic after exposure. Do they stick with the opinions of the first article, or do they venture out to challenge it? Given that we are exposed to topics of interest every day, I believe a long-term exposure study is needed to track the echo chamber effect, which is missing in paper [1].

Paper [2] gives reasons for the need to develop products which promote diversity and expose users to many opinions, fostering deliberative discussion. The paper then goes on to discuss a few examples of such products.

Can user groups be nudged towards good behaviour? I believe that is definitely possible. But how can one achieve that in an equitable manner? Could it be that certain users are more vulnerable to nudging than others? And would the data obtained to do so by private entities like social media companies be put to use in the right manner?

I believe some third-party oversight of the above is necessary to ensure this.

Read More

Reflection #8 – [09/25] – [Eslam Hussein]

1. Garrett, R. K., “Echo chambers online?: Politically motivated selective exposure among Internet news users”, (2009)

2. Resnick, Paul, “Bursting Your (Filter) Bubble: Strategies for Promoting Diverse Exposure”, (2013)

Summary:

The first paper is about selective exposure to online news: whether or not a user’s consumption of online news is based on his political background. The author conducted an experiment on 727 online users from two news websites (AlterNet and WorldNetDaily) with different political leanings (left and right, respectively). The author tracked their usage and browsing behavior. Each user was given a set of news articles about different politically controversial topics. The results of this study suggest that opinion-reinforcing stories get more exposure, while opinion-challenging articles get less. He also found that users do not avoid opinion-challenging news entirely, and spend some time reading it.

The second paper mentions different strategies developed to diminish selective exposure and promote diverse exposure of information among online users.

Reflection:

– I would have preferred if the author of the first paper had conducted a longitudinal study and later asked those users how the exposure to opposing points of view challenged their beliefs and how far it might have changed them.

– I would also like to design a method that presents counter-attitudinal opinions/news in a way acceptable to users, without triggering their beliefs’ self-defense and making them reject the information from the opposite side. Maybe this could be achieved by merging different strategies mentioned in the second paper.

– Those papers inspire me to build a database of profiles of news media (broadcast and online). I would collect data such as their political leaning, their stance towards popular and controversial topics, and their credibility. I would also record their connectivity to each other and to real-world entities (such as countries, governments, parties, businessmen … etc). I would also give each of them different metrics representing how much misinformation (rumors and fake news) they broadcast. I believe such a dataset would be very beneficial.

– I would like to measure how far selective exposure to news articles on online news websites differs from the news that appears in Facebook and Twitter news feeds. That is, would the personalization of news feeds on Facebook and Twitter be similar to our own selections on online news websites?

– I also want to study whether people of similar political backgrounds are clustered together on Facebook and Twitter (from a network-analysis point of view). Do my online friends (on Facebook) and I share similar beliefs and political preferences, and how often do we appear in each other’s news feeds (our posts and comments)?

– In my opinion, the reading-time metric is irrelevant and misleading, since the reading time of each article depends on different factors, such as the article’s length (longer articles need longer to read), the vocabulary and language difficulty (which might affect the user’s reading speed), and the education level of the users (which will clearly affect their reading speed and information digestion).
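One way to make the reading-time metric less misleading, as a rough sketch: normalize the observed time by the time a typical reader would need for an article of that length. The 230 words-per-minute figure is a common adult-reading estimate used here as an assumption, and vocabulary difficulty could be folded in the same way:

```python
def normalized_engagement(seconds_spent, word_count, wpm=230):
    """Ratio of time spent to expected reading time; ~1.0 means read fully.

    Dividing out article length removes the confound that longer articles
    simply take longer, so ratios are comparable across articles.
    """
    expected_seconds = word_count / wpm * 60
    return seconds_spent / expected_seconds

# 90 seconds on an 800-word article is skimming, not deep engagement:
ratio = normalized_engagement(90, 800)
```

A per-user baseline (each reader’s own typical speed) would control for the education/reading-speed factor as well.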

Read More

Reflection #8 – [09/25] – [Dhruva Sahasrabudhe]

Papers-

[1] Echo chambers online?: Politically motivated selective exposure among Internet news users – Garrett, R. K.

[2] Bursting your (filter) bubble: strategies for promoting diverse exposure – Resnick et al.

Summaries-

[1] discusses how users’ political leanings affect how they interact with news articles. It collects data from hundreds of users of news sites and conducts a behavioral tracking experiment to see whether users prefer to interact with content they agree with or content they disagree with. It finds that users are less likely to interact with information they disagree with, but that they do not actively avoid it. It constructs five hypotheses, considering whether users look at information which supports or detracts from their own viewpoints, and how long they spend looking at these articles.

[2] is a very short survey-type paper which, after quickly establishing the need to design tools that provide diverse exposure and discourse on the Internet, goes on to discuss some implementations that try to address this problem by helping users understand the biases of the content they consume, consider/explore alternate perspectives, or engage in discourse with a wide variety of viewpoints.

Reflection-

[1] is an interesting read, and makes some fascinating claims, but it has a few flaws. Firstly, it was published around 2009, which was right at the dawn of the age of machine learning for recommender systems. This meant that most websites did not have user specific curated content at that time. The hypothesis discussed by the article, which suggests that the internet may not create echo chambers, since users are not particularly averse to looking at views which go against their own, is not as valid in today’s world. Due to automatic recommender systems, users do not have a choice in this matter anymore, and may be continually exposed to partisan information simply because of their prior information usage patterns.

Secondly, the paper admits that the selection of candidates for the study was not exactly a good representation of the entire nation. The users who signed up for this study already had strong political views, since they were active on either a left leaning or a right leaning website from before. Moreover, more than half of them had a college degree, and the ethnicities of the participants were heavily skewed.

Interestingly, [1] mentions that while only 20,000 people saw the recruitment statement on the left-leaning website (AlterNet), 100,000 people saw the statement on the right-leaning website (WorldNetDaily). However, both sites were almost equally represented in the final selection of candidates, despite the recruitment statement being seen by 5 times as many people on WorldNetDaily. This could hint at an inherent “willingness to participate” of left-leaning people, or might simply be because the readers of the left-leaning site had a lower income on average (as claimed by the paper), and thus desired the participation prize more.

[1] also makes the claim that opinion-challenging posts lead to an increase in the duration for which the user engages with the content, which is later backed by the data. However, users would probably be less inclined to immediately close articles they disagreed with while interacting with a new, unfamiliar software interface and knowing they were taking part in a monitored survey, than they would be when browsing privately.

It is interesting to see that fears of rising political polarization catalyzed by Internet technologies were prevalent not just in 2009, but also as early as 2001, as indicated by the citations made by [1]. It is almost eerie to see these insights become relevant again, more than a decade later.

Many of the systems discussed in [2] would also have a tendency to become biased, depending on the beliefs of the majority share of the users of the systems. For example, if more liberals used Reflect or Opinion Space, then those comments would be more prevalent, and would receive more positive reviews from other liberals.

Opinion Space in [2] reminded me of the abstract interfaces mentioned in the Chat Circles paper, as it creates a space for users to navigate within, where they interact with different types of comments. It also changes the physical characteristics of the comments based on how users interact with them.

Read More

Reflection #8 – [09/25] – [Viral Pasad]

Papers : 

[1] Garrett, R. Kelly. “Echo chambers online?: Politically motivated selective exposure among Internet news users.” Journal of Computer-Mediated Communication 14.2 (2009): 265-285.

[2] Resnick, Paul, et al. “Bursting your (filter) bubble: strategies for promoting diverse exposure.” Proceedings of the 2013 conference on Computer supported cooperative work companion. ACM, 2013.

 

Summary : 

In the first paper, Garrett addresses the presence of echo chambers in our social media feeds. He studies exposure among online news readers motivated by political opinions. The paper describes the effect of opinion reinforcement and opinion challenge on exposure to online news, as well as the read time for each article depending on its content.

In the second paper, Resnick et al. deal with strategies to curb the effects of these echo chambers in social media feeds by introducing the concept of news aggregators and subtle nudges to users. It describes approaches such as ‘ConsiderIt’, ‘Reflect’, and ‘OpinionSpace’ as means to do so.

 

Reflection :

The question which arises concerns the safety of the user data obtained, which contains the opinions of participants and how favourable they are to reinforcement or challenge on a particular topic.

The topics of both papers take me back to my idea for the project proposal. News aggregators seem like really useful, harmless, and subtle ways of curbing the echo chamber effect on online platforms. Opinion grouping could be performed to group articles and posts with similar interests and opinions into concise blocks (which can be expanded to the normal view on demand). Thus, the concise view clubs together multiple posts and articles of the same majority opinion held by the user, thereby leaving space for contrary minority opinions that cause opinion challenge. This way, average users get balanced opinions about the subject at hand and yet are able to scrutinize any opinion that they agree or disagree with. A FeedViz-like interface can be developed to solve the problem and to compare which approach leads to a more informed user.
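The opinion-grouping idea can be sketched as follows. The stance labels are assumed to come from some upstream classifier, and the tuple-based display items are purely illustrative:

```python
from collections import Counter

def group_feed(articles):
    """articles: list of (title, opinion_label) pairs.

    Collapses articles sharing the feed's majority opinion into one concise
    block (expandable on demand), leaving minority (opinion-challenging)
    items individually visible.
    """
    majority, _ = Counter(label for _, label in articles).most_common(1)[0]
    block, feed = [], []
    for title, label in articles:
        if label == majority:
            block.append(title)
        else:
            feed.append(("item", title))
    if block:
        # One concise entry for the majority opinion, placed at the top.
        feed.insert(0, ("block", f"{len(block)} {majority} articles", block))
    return feed
```

For a feed dominated by one stance, the majority items collapse into a single block, so the minority items get proportionally more screen space, which is the nudge the paragraph above describes.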

One would think that users would spend more time on opinion reinforcement, but opinion-challenging articles also get more read time once the user clicks on the article at all. This may be because opinion-challenging articles make a user scrutinize more and dig deeper to find flaws.

Read More

Reflection #8 – [09/25] – Subhash Holla H S

[1] R. K. Garrett, “Echo chambers online?: Politically motivated selective exposure among Internet news users,” J. Comput. Commun., vol. 14, no. 2, pp. 265–285, 2009.

[2] P. Resnick, R. Garrett, and T. Kriplean, “Bursting your (filter) bubble: strategies for promoting diverse exposure,” Proc. …, pp. 95–100, 2013.

The first paper talks about selective exposure in great detail. The novelty presented in this paper concerns opinion-challenge avoidance, with the explanation “that people’s desire for opinion reinforcement is stronger than their aversion to opinion challenges”. The best way to capture the entirety of the argument is to present the different hypotheses of the paper, as the author presents them as the goal and sets out to defend them.

H1: The more opinion-reinforcing information an individual expects a news story to contain, the more likely he or she is to look at it.

H2: The more opinion-reinforcing information a news story contains, the more time an individual will spend viewing it.

H3: The more opinion-challenging information the reader expects a news story to contain, the less likely he or she is to look at it.

H3a: The influence of opinion-challenging information on the decision to look at a news story will be smaller than the influence of opinion-reinforcing information.

H4: The more opinion-challenging information a news story contains, the more time the individual will spend viewing it.

This is reminiscent of a set of design principles that every practitioner is asked to follow, the Gestalt Principles. They are:

  • Similarity
  • Continuation
  • Closure
  • Proximity
  • Figure and ground

The above principles can be interpreted to fit the current context. Humans generally try to find similarity in information in order to perceive it. They also form mental models about most subjects and tasks, which they try to associate with the real world. In the current context, this can be related to the first hypothesis, which is reaffirmed by cognitive dissonance theory as well. Humans have the tendency to see continuity in information even when it might not inherently exist. This is along the same lines as hypothesis 3a, where the influence of opinion-reinforcing information is assumed to be greater than that of opinion-challenging information. The fact that humans always try to find closure, which I would link to trying to read between the lines, is reflected in the fourth hypothesis, as people generally want to know the whole story just so that they can twist it to their own narrative when necessary. The second hypothesis can be directly linked to proximity, and figure and ground could be said to map to the third hypothesis, as we always see what we want to as the figure and dissociate the rest into the ground.

In general, when I try to dissect the paper, there are a few queries that I am left with.

  • Why were the subjects not allowed to go back once their answers were submitted on a page? Would it not reveal that a participant had lost interest, given that the paper treats reading as a dichotomous variable?
  • What was the rationale behind the 15 minutes? Since the entire study was carefully planned out, was a test done to determine the time allotment for the participants?
  • With the demographics of the audience being a definite skewing factor on the user data, why was no matching conducted to normalize the data and make it more representative?

A few supporting theories come to mind. One is Groupthink, presented in a book by Irving Janis in 1982, where the idea of an individual’s thinking being overridden by that of the group is explained. It is relevant here because it explains how some people might be so invested in thinking a particular way that, even if they genuinely believe an opinion-challenging article, they just might not go with it. Another is the paper by Z. Kunda on motivated reasoning, where the author talks about how people search for things that confirm and reaffirm what they already believe rather than searching for the actual truth.

This is a good transition to the second reading, the conference panel paper on diverse exposure. It is essentially a proponent of ways to ensure that we do not have selective exposure. Though most of the panel paper’s background has already been discussed above, the suggestion of using engagement tools like ConsiderIt, Reflect, and OpinionSpace is very interesting. At the end of this reading, I have a couple of questions on which I am hoping to get people’s opinions.

  1. Should social and news media nudge users to have diverse exposure? If yes, how much?
  2. Does educating people about selective exposure solve this problem?

DISCLAIMER: I ask these under the assumption that diverse exposure is good.

Read More

Reflection #8 – [09/25] – [Deepika Rama Subramanian]

R.Kelly Garrett. “Echo chambers online?: Politically motivated selective exposure among Internet news users.”

Paul Resnick et al. “Bursting your (filter) bubble: strategies for promoting diverse exposure.”

The assigned readings for this week speak about the filter bubble that we have previously discussed in this class. Garrett’s paper talks about the likelihood of an individual picking a news article, and the amount of time they would spend on it, depending on their ideological standpoint. He hypothesised and concluded that individuals were more likely to look at opinion-reinforcing news and would spend more time reading it if it agreed strongly with their viewpoint. He also concluded that the more opinion-challenging information the reader anticipates in a story, the less likely they are to read it. However, he also found that opinion-challenging information had less effect than opinion-reinforcing information. Resnick’s work talks about the various ways to get around the filter bubble: to be aware of it and to overcome its effects.

Many of Resnick’s proposed methods involve keeping the individual informed of the kind of news they are reading, whether it leans left or right. In other cases, where motivated information processing is at work, his methods encourage us to identify and understand the arguments posed by another individual with opposing views. This still does not give us a way to successfully pass on all the information that is available to us. I wonder if the most effective way to deliver such news is to present it through mediums that don’t know partisanship yet; social and political commentary is often offered by popular sitcoms, for instance.

A dent in our hopes of eliminating partisanship through more exposure comes from a recent study at Duke University’s Polarization Lab [1]. They designed an experiment to disrupt people’s echo chambers on Twitter by having Republicans and Democrats follow (automated) accounts that retweeted messages from the opposition. After a month, they found that the Republicans exposed to the Democratic account had become substantially more conservative, while the Democrats exposed to the Republican tweets had become only slightly more liberal.

 

[1] https://www.washingtonpost.com/science/2018/09/07/bursting-peoples-political-bubbles-could-make-them-even-more-partisan/

Read More