Reflection 10 – [10/02] – [Subil Abraham]

Starbird’s paper is an interesting examination of the connections between the many “alternative news” domains, with mass shooting conspiracy theories being the theme that connects them all.

The paper mentions that the author is a politically left-leaning individual and points out that this may bias their perceptions when writing the paper. The author acknowledging their bias made me think about my own possible left-leaning bias when consuming the news. When I see some news from an “alt-left” site that someone on the right would call a conspiracy theory, am I taking that news as the truth because it agrees with my perceptions; perceptions which may have been sculpted by years of consuming left-leaning news? How would I, as an individual on the left, be able to conduct a study on left-leaning alternative narratives without my bias skewing the conclusions? Scientists are humans, and you cannot eliminate bias from a human entirely. You could bring on people from the right as equal partners when conducting the study, with each side keeping the other in check to cancel out each other’s biases. How well they would be able to work together and produce good, solid research, considering that this is essentially an adversarial relationship, I do not know.

It’s interesting that Infowars has such a large share of the tweets but only one edge connecting to it. Given how prominent Infowars is, one would think it would have many more edges, i.e., users who tweet out other alt-right websites would tweet out Infowars too. But it seems like the bulk of the users just tweet out Infowars and nothing else. This means that the audience of Infowars, for the most part, does not overlap with the audience of other alt-right news sites. Now, why would that be? Is it because Infowars’ audience is satisfied with the news they get there and don’t go anywhere else? Or is it because the audience of other alt-right sites think Infowars is unreliable, or maybe too nutty? Who knows. A larger examination of the reading habits of Infowars’ audience would be interesting. Since this study focuses only on mass shooter conspiracies, it would be interesting to know if and how widely Infowars’ audience reads when it comes to the wider field of topics the alt-right websites talk about.

The conclusions from this paper tie really well into the theme of Selective Exposure we talked about in the last two reflections. People see different sources all spouting the same thing in different ways, and repeated exposure reinforces their opinions. You only need to plant a seed that something might be true and then barrage people with sources that seemingly confirm that truth; Confirmation Bias will take care of the rest. It is especially problematic when it comes to exposure to alternative narratives, because the extreme opinions that form will be particularly damaging. This is how the anti-vaxxer movement grew, and now we have the problem that diseases like measles are coming back because of the loss of herd immunity, thanks to anti-vaxxers not vaccinating their children [1]. Trying to suppress alternative narratives is a lost cause, as banning one account or website will just lead to the creation of two others. How can we delegitimize false alternative narratives for people who are deep in the throes of their selective exposure? Simply pointing to the truth clearly doesn’t work; otherwise it would be easy and we wouldn’t be having this problem. People need to be deprogrammed by replacing their information diet and enforcing this change for a long time. This is basically how propaganda works, and it is fairly effective. Conducting a study of how long it takes to change someone’s entrenched mindset from one position to the opposite through information consumption alone (controlling for things like personal life events) would be a good first step towards understanding how we can change people (of course, we run into ethical questions of whether we should be changing people en masse at all, but that is a can of worms I don’t want to open just yet).


Video Reflection #9 – [09/27] – [Nitin Nair]

How we humans select information depends on a few known factors which make the information selection process biased. This is a well-established phenomenon. Given that such cognitive biases exist, and that we live in a democratic system in an age of information overload, how do they impact social conversation and debate?
As mentioned in the talk, this selective exposure or judgment can be used for good, for example to increase voter turnout. But this gets me thinking: is this nudging sustainable? Relating to the discussions after reading reflection #1 about different kinds of signals, is this nudge an assessment signal or a conventional one? One could certainly imagine users exposed to a barrage of news items that bolster their positions becoming desensitized, and ultimately neglecting these cues.
The portion of the talk where the speaker discusses the behaviour of people exposed to counter-attitudinal positions is an interesting one. This portion, coupled with one of the project ideas I proposed, got me thinking about a particular news feed design.

Assuming we solve the issue of mapping the positions of different news sources on the political spectrum, we could, in order to expose users to sources from outside their own part of the spectrum, design a slider bar whose position decides which news sources populate the news feed, as shown above. The lightning symbol next to each article lets one switch to a feed populated by articles on the same topic. Topic tags, found through keyword extraction (Rose et al. (2010)) combined with the publication time of the article, could help us suggest news articles about the same issue from a source with a different political leaning.
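To make the matching step concrete, here is a minimal sketch under some assumptions: each article already carries a leaning score and a publication timestamp, and the keyword step below is a crude frequency-based stand-in for a proper RAKE implementation (Rose et al. (2010)), e.g. rake-nltk.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timedelta

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "on", "for", "is", "are", "was", "that"}

@dataclass
class Article:
    title: str
    text: str
    leaning: float        # assumed known: -1.0 (left) .. +1.0 (right)
    published: datetime

def topic_tags(article, top_k=10):
    """Crude keyword extraction; a real system would use RAKE."""
    words = [w.strip(".,!?\"'").lower() for w in article.text.split()]
    words = [w for w in words if w and w not in STOPWORDS]
    return {w for w, _ in Counter(words).most_common(top_k)}

def cross_leaning_suggestions(current, pool, max_age=timedelta(days=2), min_overlap=3):
    """Same-topic, recent articles from sources on the other side of the spectrum."""
    base = topic_tags(current)
    suggestions = []
    for other in pool:
        same_topic = len(base & topic_tags(other)) >= min_overlap
        close_in_time = abs(current.published - other.published) <= max_age
        other_side = current.leaning * other.leaning < 0   # opposite signs of leaning
        if same_topic and close_in_time and other_side:
            suggestions.append(other)
    return suggestions
```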
Given such a design, we could identify trends in how and when users enter the sphere of counter-attitudinal positions, which is an idea the speaker mentions in the video.
Do people linger more on comments which go against their beliefs or on comments which suit their beliefs? One could run experiments on consenting users to see which comments they spend more time reading, then pick and analyze the posts which top the list, accounting for the length of each post. My hypothesis is that comments which go against one’s beliefs would warrant more time, as one would first take time to comprehend the position, compare and contrast it with one’s own belief system, and then take action, which could be replying or reacting to the comment. If this temporal information proves useful, it could pave the way to a method for finding “top comments”, uncivil comments (more time taken), and explicit content (less time taken). During the extraction of top comments, one has to keep a human in the loop and account for each reader’s own position on the political spectrum.
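A minimal sketch of the comparison itself, assuming we already log per-comment dwell times for consenting users and have a (human- or model-assigned) flag for whether a comment opposes the reader’s position; both the logging and the flag are assumptions here.

```python
from statistics import mean

def seconds_per_word(dwell_seconds, comment_text):
    """Normalize reading time by comment length so long comments are not over-counted."""
    return dwell_seconds / max(len(comment_text.split()), 1)

def compare_dwell(records):
    """records: dicts with 'dwell' (seconds), 'text', and 'counter_attitudinal' (bool)."""
    con = [seconds_per_word(r["dwell"], r["text"]) for r in records if r["counter_attitudinal"]]
    pro = [seconds_per_word(r["dwell"], r["text"]) for r in records if not r["counter_attitudinal"]]
    return {
        "counter_attitudinal_sec_per_word": mean(con) if con else None,
        "pro_attitudinal_sec_per_word": mean(pro) if pro else None,
    }
```

If the counter-attitudinal number is consistently larger, that would support the hypothesis above.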
The discussion by the speaker on “priming” of users using the stereotype content model is extremely fascinating (Fiske et al. (2018)). Given that priming has a significant impact on the way users react to certain information, could it be possible to identify “priming” in news articles or news videos?
One could build an automated tool to detect and identify the kind of priming, be it “like”, “respect”, or another orthogonal priming dimension. The orthogonal prime could be “recommend”, the one the speaker points out in her research (Stroud et al. (2017)). Given such an automated tool, it would be interesting to run it on a large number of sources to identify these nudges.
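One way such a tool could be bootstrapped is as a plain text classifier over snippets hand-labeled with the priming dimension they invoke. The toy snippets, labels, and model choice below are illustrative assumptions, not the speaker’s method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy hand-labeled snippets; a real study would need a properly annotated corpus.
train_texts = [
    "readers love this candidate's bold new plan",
    "a leader you can respect even if you disagree",
    "editors recommend this explainer to every voter",
    "the committee released its quarterly budget report",
]
train_labels = ["like", "respect", "recommend", "none"]

prime_clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
prime_clf.fit(train_texts, train_labels)

def detect_priming(text):
    """Predict which priming dimension (if any) a news snippet leans on."""
    return prime_clf.predict([text])[0]
```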


References

Susan T Fiske, Amy JC Cuddy, Peter Glick, and Jun Xu. 2018. A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition (2002). In Social Cognition. Routledge, 171–222.

Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. Text Mining: Applications and Theory (2010), 1–20.

Natalie Jomini Stroud, Ashley Muddiman, and Joshua M Scacco. 2017. Like, recommend, or respect? Altering political behavior in news comment sections. New Media & Society 19, 11 (2017), 1727–1743.


Reflection #9 – [09/27] – [Vibhav Nanda]

Video: Partisanship and the search for engaging news

Summary: In this blog I am proposing a system which will nudge readers towards the other side, based on their current emotional and mental state.

Introduction: Natalie Stroud’s video inspired me to come up with a system which can encourage bipartisanship and burst the echo chamber effect. From the video and the previous papers I have read, I have gathered that we need to work on, and worry about, people with extreme political standpoints (extreme left-leaning and extreme right-leaning); people with a more balanced standpoint already read news from disparate sources, and their balance is what makes them supporters of centrist politics. Extreme political takes can usually be traced back to belief systems, and nudging people out of their belief system is risky, sometimes leading to resentment towards others’ belief systems. However, depending on an individual’s mental and emotional state, they are more or less likely to try to understand the other side of the story. I am proposing a system which will track users’ behavior online, understand how they usually behave in a given circumstance, and, if their behavior deviates from the usual, nudge them towards the other side.

Assumptions: For the sake of simplicity and brevity, I am going to make the following assumptions:

  1. The system only tracks behavior in the comment section (inspiration drawn from the video)
  2. The user is a male liberal who is aggressive towards anyone opposing his opinion (on an online platform)

Explanation of the system through an example: Now let’s say our hypothetical user gets his daily dose of news from articles shared on Facebook (an extremely realistic situation), and because of all the filters, the news he gets is usually published by CNN and MSNBC. He reads the news, scrolls through the comments section, and responds aggressively to users whose comments oppose either the article or the topic it is about (let’s say gay rights). Aggression is the user’s usual response to the top 5 opposing comments; this is our user’s online persona, which has been recognized and modeled by our system. Now one day our user reads an article about gay rights and either doesn’t respond as aggressively towards opposing comments or doesn’t respond at all, an aberration that would be detected by our system and flagged as “open”, meaning this user is open today to opposing ideas. Taking advantage of this open-mindedness, our system will subtly nudge the user towards a gay rights article written by Fox News.
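A minimal sketch of the detection step, assuming an upstream model already assigns each of the user’s replies an aggression score in [0, 1]; that model, the threshold k, and the treatment of silence are all assumptions.

```python
from statistics import mean, pstdev

def is_open_today(historical_scores, todays_scores, k=1.5, min_history=20):
    """Flag the user as 'open' when today's aggression falls well below their baseline.

    historical_scores: aggression scores of past replies to opposing comments.
    todays_scores: aggression scores of today's replies (empty list = no replies at all).
    """
    if len(historical_scores) < min_history:
        return False                                    # not enough history to define a persona
    baseline = mean(historical_scores)
    spread = pstdev(historical_scores) or 1e-6          # avoid division by zero
    today = mean(todays_scores) if todays_scores else 0.0   # silence counts as non-aggressive
    return (baseline - today) / spread >= k

# Hypothetical use: if is_open_today(history, today), surface a Fox News article on the same topic.
```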


Novelty: The system leverages changes in moods and emotions to nudge readers towards the other side, instead of applying a constant nudge. A constant nudge can lead to the user ignoring the nudge’s presence, frustrate the user into switching off the feature or, if that is not possible, push the user to a different platform. The timeliness of the nudge is important if it is to succeed in prompting the user to be empathetic towards the other side and to engage in more civil and logical conversation.


Video Reflection #9 – [09/27] – [Bipasha Banerjee]

The video we had been assigned was about partisanship and the search for engaging news, by Dr. Natalie Stroud. We were exposed to the filter bubble effect in previous readings, and this video gave another perspective through selective exposure, moral foundations, and the stereotype content model. News organizations these days are trying to engage with the audience, and yes, they do succeed in affecting the thought process of the audience. At least in India, political election results are often determined by the party the dominant news channel of the region supports. This support has led to stable parties being toppled by newly emerging political parties. Here I am focusing on the influence of media and other outlets on the audience.

My major concern is that, as a newbie or new joiner to any community, one must have all the information before deciding whether to believe or oppose a particular ideology. This would make sure that the person does not unknowingly start following and believing the ideology. There is something known as a Media Bias Chart [1], which places all major media outlets on a chart based on their liberal or conservative stance. Similar to that, a “bias index” could be created which would determine, from past history, the tendency of a particular media outlet to be biased, and not only in terms of political belief. It is a way to ensure that people are informed about the stance of the news outlet. Other affiliations should also be made transparent. Social media platforms like YouTube, Twitter, and Facebook have incorporated a sort of #Ad feature; this helps the user of the platform form an informed judgement.

Journalism should be unbiased; however, as the speaker pointed out, the factor of “What sells?” plays an important role. It determines which articles are promoted, what becomes breaking news, etc. It is true that human nature demands controversy; negative publicity is popular and is received well by a certain sector. So, to analyze how information is perceived by the audience, a live voting mechanism could be incorporated to measure people’s perception, something similar to the feedback model of Facebook Live, but for news outlets: a simple upvote and downvote system, where users can text in their “Yes” or “No” opinion and the vote counts change accordingly. This would help formulate the user rating into an index; let’s call it the “perception index”. It would give a comprehensive idea of what the audience feels about the media outlet. This would be applicable to live television broadcasts or digital media, not print media. Together, the perception index and the bias index would be a good judge of the partisanship of a news outlet.

The bias index would be computed from prior data (considering certain parameters for determining bias). The bias parameters should be known and available to the audience, to avoid prejudice in determining the index and to promote awareness. The perception index, on the other hand, is an indicator of what is perceived at the moment (rather than in the past); this keeps things transparent, as it has no predefined parameters. Considering both indexes, a total index can be calculated, as sketched below. I believe that this sort of model would encourage people to get out of the filter bubble and be more aware.
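A minimal sketch of how the two indexes might be combined, assuming both live on a [0, 1] scale and an equal weighting; the scales and the weight are assumptions, and the point is only that the combination stays simple and transparent.

```python
def perception_index(upvotes, downvotes, smoothing=1.0):
    """Live audience perception in [0, 1]; 0.5 means the audience is evenly split."""
    return (upvotes + smoothing) / (upvotes + downvotes + 2 * smoothing)

def total_index(bias_index, perception, w_bias=0.5):
    """Combine the historical bias index (0 = unbiased, 1 = heavily biased) with live perception."""
    trust_from_bias = 1.0 - bias_index           # invert so that higher always means better
    return w_bias * trust_from_bias + (1 - w_bias) * perception

# Example: an outlet with bias_index 0.7 and a 600-up / 400-down live vote
# -> total_index(0.7, perception_index(600, 400)) is roughly 0.45
```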

[1] https://www.adfontesmedia.com/media-bias-chart-3-1-minor-updates-based-constructive-feedback/


Video Reflection #9 – [09/27] – [Karim Youssef]

Partisanship is an inherently social phenomenon in which people tend to form groups around different ideologies and their representatives. However, if partisans become isolated inside their groups without constructive exposure to the ideas and opinions of other groups, society may start to deviate from being healthy and democratic. Talia Stroud works towards promoting constructive engagement between partisans of different groups and mitigating the negative effects of partisanship in online news media.

The first study presented by Stroud leverages the stereotype content model to promote the idea of distinguishing liking from respect. The results of this study show a significant effect of changing the names of the reaction buttons on comments. Some questions arise here, such as: what is the long-term effect of such a solution in terms of actually refining negative partisan behavior? The results partially answer this question by showing that people actually “respect” opposing ideas. But of all the people who pass by an opposing comment, how many are actually willing to positively engage with it? How do we encourage people to engage with and respect an opposing comment that deserves this respect?

From my perspective, I would suggest answering these questions as follows:

  1. extending the study within the context of selective exposure and selective judgment, by studying the percentage of people who stop by opposing comments, read them, and give them the respect they deserve.
  2. extending the design to include feedback to the user, for example a healthy engagement score that increases when a user reads and respects an opposing opinion (a minimal sketch follows this list).
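A minimal sketch of what that feedback could look like, assuming the platform can already tell when a comment opposes the reader’s stated leaning; that detection and the point values are assumptions.

```python
class HealthyEngagementScore:
    """Tracks how often a user reads and 'respects' comments from the other side."""

    def __init__(self):
        self.score = 0

    def record(self, read_opposing=False, respected_opposing=False):
        if read_opposing:
            self.score += 1        # small credit for simply reading the other side
        if respected_opposing:
            self.score += 3        # larger credit for explicitly respecting it
        return self.score

# tracker = HealthyEngagementScore()
# tracker.record(read_opposing=True, respected_opposing=True)   # -> 4
```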

The second study presented in the video analyzes the effect of incivility in online news media comments by analyzing the triggers of reward and punishment for comments. In this regard, the study compares three behavioral acts: profanity, incivility, and partisanship. It is no surprise that profanity is the one act rejected by both commenters and moderators. However, it is a fact that conversations with incivility sometimes attract views and even engagement. Many in my generation grew up watching TV shows with political opponents fighting on air. These types of media always claim the good cause of promoting fruitful discussions between opposing mindsets; however, as she mentioned, there are business incentives behind promoting some controversial discussions.

In a perfect world, we might wish that fruitful interactions between partisans of different groups became as engaging as those situations where partisans go into fighter mode to defend their ideologies. The question is how to encourage news organizations to define clear thresholds for the amount of acceptable incivility in discussions about hot issues. From another perspective, is it even feasible to do so? Or should researchers focus on promoting desirable engagement among users rather than moving towards stricter moderation of online comments?

From my perspective, the current model for news organizations is the best we can do in terms of having a set of rules and (human and/or automated) moderators enforcing these rules to some extent. The changes that we apply could be to the user interface design of online news organizations to promote healthier engagement (e.g., the first study with my suggestions added to it), integrated with some of the ideas surveyed in “Bursting Your (Filter) Bubble: Strategies for Promoting Diverse Exposure”. Another important step could be auditing (and maybe redesigning) the recommendation algorithms to ensure that they do not contribute to this so-called filter bubble effect.



Reflection #9 – [09/27] – [Deepika Rama Subramanian]

Dr. Talia Stroud spoke at length about partisanship in the media.  I’ve jotted down the following things that occurred to me as I watched the lecture:

  1. To keep a check on the degree of partisanship, I would want to study whether it has increased after the 2016 elections. The public in America has been more divided than ever since 2016. If we study the degree of incivility in discussion forums before and after the 2016 elections, we may be able to tell conclusively whether the political climate has led to more division in public opinion. For a fixed set of active users, we could check whether they have flipped their stance (in public at least) on any major issues – gay marriage, abortion, etc. – between before and after the elections.
  2. One could imagine that there are a lot of trolls in the mix just to cause confusion and get more people to engage in (sometimes) pointless conversations. If we are able to efficiently and automatically identify trolls and sockpuppets in the comments, we may be able, to a degree, to control the off-topic conversations in the forums. Does this reduce the amount of polarization on the forums? If it does, it may imply that many people want to put their points forth civilly but are incited into wars in the comments section. We must note that not all news houses have the resources to have 13 people weed out unwanted comments.
  3. While this may not be possible for larger news organizations, small organizations with a left- or right-leaning partisan following could feature stories from an opposition news organization of similar scale. We could have the partisans visit their preferred websites and check whether this improves tolerance between the two sides. I would like to mention a study by the Duke University Polarization Lab (https://www.washingtonpost.com/science/2018/09/07/bursting-peoples-political-bubbles-could-make-them-even-more-partisan/) which suggests that attempting to burst people’s political bubbles could make them even more partisan. This means that pushing people to acknowledge that they’re living in a bubble could be counter-productive.
  4. There was something Dr. Stroud mentioned that caught my attention: that we might have to feed information to the general populace without them being aware of it. The Emmy-nominated sitcom ‘Black-ish’, which currently streams on Hulu, deals with an African American family and their take on America’s social and political fabric. While the show received a lot of flak for its presentation of these issues, it is a way to inject information into the public without it being explicitly called ‘news’. After I binge-watched the series, I realised that they tried their best to give the most balanced information they could, and it was quite effective.
  5. Design-wise, we could have a ‘The Balancer’-style widget that displays the bias of a comment as the user writes it. By gently asking users whether they really want to post a heavily polarizing comment, we may be able to guilt some of them into not posting it (a rough sketch of such a prompt follows this list).
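A rough sketch of such a prompt, with a toy word-list polarization score standing in for whatever model a real Balancer-style widget would use; the word list and threshold are assumptions.

```python
# Toy stand-in for a real polarization/civility model.
CHARGED_TERMS = {"traitor", "corrupt", "disgrace", "idiot", "evil", "un-american"}

def polarization_score(comment):
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return len(words & CHARGED_TERMS) / max(len(words), 1)

def prompt_before_posting(comment, threshold=0.05):
    """Return a gentle prompt instead of posting when the comment looks heavily polarizing."""
    if polarization_score(comment) >= threshold:
        return "This comment may come across as heavily polarizing. Post it anyway?"
    return None    # no prompt; post normally
```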


Reflection #9 – [09/27] – [Eslam Hussein]

The talk given by Talia Stroud inspired me to design a tool that would facilitate a longitudinal study. The tool, briefly, is a political news aggregator website (I will explain it in more detail below). The study aims to measure and monitor the reading habits of the site’s users, to test techniques that expose users to news articles opposing their political preferences, and to see whether those techniques affect their reading habits and political preferences. This tool could help mitigate the level of polarization within the community of online news readers.

The site will allow users to build their profiles and personalize their news feed by providing details about their social identity, political preferences, and their opinions regarding a set of the most common controversial topics that are actively discussed and would affect their political/voting preferences (such as gay marriage, abortion, immigration, animal testing, gun control, etc.).

The site would also give each user a profile color ranging from red to blue (we could add more colors if the community is represented by multiple parties instead of being bipolar), representing the two main poles of the current political environment (for example, liberals and conservatives; this set of poles could change from one community to another and from time to time). The same badge or color would be given to each article. There would also be profiles for news sources (newspapers, blogs, TV shows, etc.) indicating their political leaning. This coloring metric would be used to filter the news feed according to the user’s profile/color, and would be displayed beside each news article and news source.

The experiment will proceed in a few phases (left to be crafted by the experiment designer), and after each phase a survey will be given to users in order to measure how that phase affected their perception of the opposite pole and of their own. Each phase will build a news feed filter based on different tactics (the level of exposure, the content displayed, the number of phases, etc. are left to the experiment designer). Gradually, the filters will be modified to pass more news that promotes or exposes the other pole’s opinions.

The first phase’s filter will filter out news that is most likely to be controversial to the user, passing mainly mainstream news and news that matches their preferences.

The second phase’s filter will be changed to include some news from the other pole, but nothing controversial or likely to trigger the user into accepting or rejecting the article outright. Suppose the two main competing poles are A and B, and the user politically belongs to pole A. The aggregator filter will pass news about, for example, humanitarian work done by pole B public figures. That would emotionally (the “like” dimension) affect the people in group A; I would expose them to more of what the groups have in common rather than to the most controversial topics. After some period of exposure, we would measure how that affected their perception of the other pole(s).

In the next few phases, we would expose users to more news discussing the other pole’s opinions (increasing the amount displayed in their news feed), and then run similar surveys to measure their movement from one pole to another.
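A minimal sketch of the phase-dependent filter, assuming each article carries a leaning value and a ‘controversial’ flag, and that the share of other-pole items per phase follows an illustrative schedule chosen by the experiment designer.

```python
def build_feed(articles, user_pole, phase, feed_size=20):
    """Phase 0: same-pole/mainstream only; later phases admit more of the other pole.

    articles: dicts with 'leaning' (-1 = pole A .. +1 = pole B) and 'controversial' (bool).
    user_pole: -1 or +1.
    """
    exposure_schedule = {0: 0.0, 1: 0.2, 2: 0.4, 3: 0.6}   # fraction drawn from the other pole
    other_share = exposure_schedule.get(phase, 0.6)

    same_pole = [a for a in articles if a["leaning"] * user_pole >= 0]
    other_pole = [a for a in articles
                  if a["leaning"] * user_pole < 0
                  and (phase >= 3 or not a["controversial"])]   # hold back controversy early on

    n_other = int(feed_size * other_share)
    return other_pole[:n_other] + same_pole[:feed_size - n_other]
```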


Reflection #9 – [09/27] – [Prerna Juneja]

Video: Partisanship and the search for engaging news

In the video, Natalie Stroud discusses two studies. In the first, she examines how different buttons affect people’s responses to comments in a comment section. She introduces buttons with three different labels: like, respect, and recommend. The “like” button, I believe, forces people to have extreme views, especially when it comes to hard content like politics: you like it means you strongly agree with it. So it makes sense to have labels that help people support an opposing viewpoint, basically a tool to signal that although I don’t agree with the content, I think the argument presented is actually a strong one. These tools will ensure that a person doesn’t outright disregard a comment just by looking at the number of “likes”. And what I really like is that these buttons serve two purposes: business (they drew a significant number of clicks) and democracy (people started interacting with information that was counter to their beliefs). The business aspect is very important and is the one that is hardly ever considered. Almost all online platforms are actually businesses, so unless the suggested research increases user engagement or benefits the platform in some way, why would a platform implement it in the first place?

I can think of another button label which might have effects similar to respect. News platforms could promote a “share” button if they don’t already. If a person sees an article with an opposing view being shared many times, curiosity might make him click the article and check what’s so special about it.

So the conclusion from this study is that a design component should not push people towards extreme views (agree/like or dislike/disagree). Rather, it should make people want to listen to others, especially the ones singing a different tune. There is one aspect of this research that we should extend further. While people can be more receptive to opposing comments when it comes to topics like “gay rights” or “favorite music genre and pop artists”, are they equally receptive to political content, or do they end up “respecting” only the politically like-minded comments? The study should include more topics to quantify the effects of changing button labels across topics. Another line of research could be to study the after-effects of “respecting” an opposing viewpoint. After reading and reacting to a strong opposing comment/post, will the user click on or search for more opposing viewpoints? Or will he return to reading like-minded news?

In the second study, Natalie studies the effects of punishment (flagging a comment) and incentives (recommendations and top news picks) on partisanship. Swearing/profanity increases the chances of a comment being rejected and flagged, and decreases its chances of being selected as an NYT Pick. On the other hand, partisanship and incivility also increase recommendations. Natalie suspects that this makes newsroom moderators treat partisan incivility differently: while a comment containing profane language and swear words might get rejected outright, content containing partisan incivility might get accepted. Does that mean one should start extensively moderating uncivil comments? Won’t that make the comment section bland and highly uninteresting, and probably decrease user engagement? It would be interesting to see how user behavior varies with varying strictness of moderation.

Can constructive comments always promote user engagement? Will a platform where everyone is nice and right attract diverse opinions in the first place? I believe a little incivility can add essential flavor to discussions. Perhaps we could measure the extent of incivility in comments that still promotes healthy discussion and debate, and develop automated tools that detect and predict when uncivil comments will deteriorate the quality of a discussion and promote heated exchanges among readers. Detecting uncivil comments that do not contain swear words is even more challenging.


Reflection #9 – [09/27] – [Viral Pasad]

Natalie Jomini Stroud. “Partisanship and the Search for Engaging News”

Dr. Stroud’s work motivates me to think about the following (no pun intended) research directions and solutions.

Selective Exposure and Selective Judgement can be hacked and are very susceptible to attack by sock-puppets or bots. Inadvertent selective exposure by humans was something that social media platforms exploited to get more traction and engagement on their sites, but people also try to understand or reverse-engineer the algorithm (for the display of posts on their feed) and hack the system. If everyone knows that a generic social media or online news site mostly shows people only what they find reasonable or agree with, then this is a great tool for marketers and sock-puppets (created for ulterior motives) to blindly put out content that they would like their (ideological) ‘followers’ to be ‘immersed’ in. This is a Black Mirror episode waiting to happen, bound to create echo chambers and incompletely informed opinions. And not only incompletely informed opinions: it also causes misinformation, as users who already agree with a certain ideology are unlikely to pick the content apart in search of a shady clause or outright incorrect information!

This is what was employed via Facebook’s dark posts in 2016, where a ‘follower’ would see certain posts sent out by their influencers, but if those posts were forwarded to ‘non-followers’, they would simply be unable to open the links at all (because it is common knowledge that a non-follower would scrutinize that very post).


Thus, I would like to consider a design project/study in two parts, hoping to disrupt Selective Exposure and Selective Judgement. The two parts are as follows:


I] Algorithmic Design/Audit – How the posts are selected

This deals not with how users see their posts visually, but with how users’ feeds are curated to show them certain kinds of posts more than others. With a three-phase design approach, we can attempt to understand the algorithmic exploitation of the selectivity process, and user bias towards or against feeds which do not follow, or which over-exploit, the selectivity process inherent in users.

The users can be exposed to three kinds of feeds:

  • one, heavily exploiting the selectivity bias (almost creating an echo chamber)
  • two, a neutral feed, equivocating and displaying opposite opinions with equal weightage.
  • three, a hybrid custom feed, which shows the user agreeable, opinionated posts, but also a warning/disclaimer that “this is an echo chamber” and a way to get to other opinions as well, such as tabs or sliders saying “this is what others are saying”.

With the third feed, we can also hope to learn behavioural tendencies of users when they learn that they are only seeing one side of the coin.


II] Feed Design – How the posts are shown (visually)

This deals with how the posts are visually displayed on users’ feeds. The approach is to create an equivocating feed which puts the user in charge of his/her opinion by showing just the facts.

Often, news conforming to the majority opinion has far more posts than news conforming to the minority opinion, and thus an inadvertent echo chamber is created. A news aggregator could be employed to group the majority and minority posts in the feed. Selective Exposure will drive the user to peek at the agreeable opinion, but Selective Judgement will drive the user to scrutinize and pick apart the less agreeable opinion. This, I believe, can help disrupt Selective Exposure and Selective Judgement to a certain extent, (hopefully) creating a community of well-informed users.
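A minimal sketch of the grouping step, assuming each post already carries a stance label from some upstream classifier; the label values and the ordering rule are assumptions.

```python
from collections import Counter

def group_feed(posts):
    """Split a feed into majority-opinion and minority-opinion groups by stance label.

    posts: dicts with at least a 'stance' key (e.g. "pro", "anti").
    Returns (majority_posts, minority_posts) so the UI can render them side by side.
    """
    if not posts:
        return [], []
    majority_stance, _ = Counter(p["stance"] for p in posts).most_common(1)[0]
    majority = [p for p in posts if p["stance"] == majority_stance]
    minority = [p for p in posts if p["stance"] != majority_stance]
    return majority, minority
```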


Reflection #9 – [09/27] – [Dhruva Sahasrabudhe]

Video-

Partisanship and the search for engaging news – Natalie Stroud.

Reflection-

I found this video particularly interesting, since just last week my project proposal submission was related to selective exposure in online spaces. The idea I had to tackle this problem, especially on platforms which automatically recommend content, was to create an option for a sort of anti-recommender system, which clusters users into groups based on their likes and dislikes, and then serves up recommendations which users in the completely opposite cluster would prefer. This would serve to make people aware of the motivations, arguments, and sources of the “opposite” side. It could be used not just in politics, but also on a book platform like Goodreads, or even a music platform, to help people be exposed to different types of music.
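A minimal sketch of such an anti-recommender, assuming users are represented as like/dislike vectors over items; the clustering choice (k-means) and the rating encoding are assumptions, not a worked-out design.

```python
import numpy as np
from sklearn.cluster import KMeans

def anti_recommend(ratings, user_index, n_clusters=2, top_k=5):
    """Recommend items favored by the cluster farthest from the user's own cluster.

    ratings: (n_users, n_items) array with +1 = like, -1 = dislike, 0 = unrated.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(ratings)
    user_center = km.cluster_centers_[km.labels_[user_index]]

    # The "opposite" cluster is the one whose centroid is most distant from the user's.
    distances = np.linalg.norm(km.cluster_centers_ - user_center, axis=1)
    opposite = int(np.argmax(distances))

    # Items best liked, on average, by users in the opposite cluster.
    opposite_mean = ratings[km.labels_ == opposite].mean(axis=0)
    return np.argsort(opposite_mean)[::-1][:top_k].tolist()
```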

It would also be interesting to explore in more detail the effects of such a system on users: does it incite empathy, anger, curiosity, or indifference? Does it actually change opinions, or does it make people think of counterarguments which support their existing beliefs? (This was dealt with in last week’s papers on selective exposure.)

Besides analyzing the partisan influences on how people write and interact with comments, it would also be interesting to break the two “sides” down further into their constituents and examine the differences in how the subcategories of these two categories engage with the comments section; for example, how does the interaction vary on both sides across minorities, men, women, young, old, etc.?

In my opinion, the two keys to understanding selective exposure and improving how users engage with users holding opposite beliefs are as follows:

  1. Understanding the cases where users are exposed to counterattitudinal information, when and why they actively seek it out, and how they respond to it.
  2. Designing systems which encourage users to: (i) be more accepting of different viewpoints, and (ii) critically examine their own viewpoints.

Both of these are, of course, addressed in depth in the video. I find that these two areas have huge scope for interesting research ideas: more data-analysis-driven for point 1, and more design-driven for point 2.

For example, a system could be designed which takes data from extensions like the Balancer (referred to in the “Bursting Your (Filter) Bubble” paper from last week), or any similar browser extension which categorizes the political content a person views, and analyzes that data to see whether a “red” person ever binges on “blue” content for an extended period of time, or vice versa, identifying any triggers which may have caused this to happen. Historical data could also be collected to find out how these users “used” the information they gathered from such a binge of counterattitudinal content. That is, did they use it as ammunition for their next comments supporting their own side? Were they convinced by it, and did they slowly start browsing more counterattitudinal content?
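A sketch of the binge detection itself, assuming the extension logs a timestamp and a red/blue label per page view; both are assumptions about what a Balancer-like extension would expose.

```python
from datetime import timedelta

def counterattitudinal_binges(history, user_side, min_views=5, max_gap=timedelta(minutes=30)):
    """Find runs of opposite-side page views long enough to count as a 'binge'.

    history: (timestamp, side) tuples sorted by time, with side in {"red", "blue"}.
    user_side: the user's usual side.
    """
    binges, run = [], []

    def close_run():
        if len(run) >= min_views:
            binges.append((run[0], run[-1]))    # (start, end) of the binge
        run.clear()

    for ts, side in history:
        if side == user_side or (run and ts - run[-1] > max_gap):
            close_run()
        if side != user_side:
            run.append(ts)
    close_run()
    return binges
```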

Similarly, systems could be designed which transform a webpage to “nice-ify” it. This could be a browser extension which provides little messages at the top of a web page, reminding users to be nice or respectful. It could also detect uncivil language and display a message asking the user to reconsider. This ties into the discussion about the effectiveness of priming users to behave in certain ways.

Systems could also be designed to humanize people on chat forums by adding some (user-decided) facts about them to emphasize their personhood, without revealing their identity. It is a lot harder to insult Sarah, who has a 6-month-old kitten named Snowball and likes rollerblading, than it is to insult the user sarah_1996. This would also bridge partisan gaps by emphasizing that the other side consists of humans with identities beyond their political beliefs.
