Reflection #9 – [09/27] – [Dhruva Sahasrabudhe]

Video-

Partisanship and the search for engaging news – Natalie Stroud.

Reflection-

I found this video particularly interesting, since just last week my project proposal dealt with selective exposure in online spaces. The idea I had for tackling this problem, especially on platforms with automatically recommended content, was to offer a sort of anti-recommender system: one which clusters users into groups based on their likes and dislikes, and then serves up recommendations which users in the completely opposite cluster would prefer. This would serve to make people aware of the motivations, arguments, and sources of the “opposite” side. It could be used not just in politics, but also on a book platform like Goodreads, or even a music platform, to expose people to different types of music.
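A minimal sketch of what such an anti-recommender could look like, assuming a toy ratings matrix (+1 like, −1 dislike, 0 unseen); the clustering, initialization, and data here are all hypothetical stand-ins, not a real recommender implementation:

```python
import numpy as np

def anti_recommend(ratings, top_k=2, iters=10):
    """ratings: users x items matrix, +1 like / -1 dislike / 0 unseen.
    Clusters users into two groups, then recommends to each user the
    items most liked by the *opposite* cluster."""
    R = np.asarray(ratings, dtype=float)
    # Deterministic init: seed the two clusters with the two most
    # mutually distant users (a simplification of k-means init).
    d = ((R[:, None] - R[None]) ** 2).sum(-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    centers = np.stack([R[i], R[j]])
    for _ in range(iters):
        labels = np.argmin(((R[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in (0, 1):
            if (labels == c).any():
                centers[c] = R[labels == c].mean(0)
    recs = {}
    for u, lab in enumerate(labels):
        # Average rating of each item within the opposite cluster.
        scores = R[labels != lab].mean(0)
        scores[R[u] != 0] = -np.inf  # skip items the user already rated
        recs[u] = [int(k) for k in np.argsort(scores)[::-1][:top_k]
                   if scores[k] > 0]  # keep only clearly liked items
    return recs
```

With two obviously opposed taste groups, each user ends up being shown the items the other group likes, which is exactly the inversion of a normal collaborative filter.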

It would also be interesting to explore in more detail the effects of such a system on users. Does it incite empathy, anger, curiosity, or indifference? Does it actually change opinions, or does it just prompt people to think of counterarguments which support their existing beliefs? (This was dealt with in last week’s papers on selective exposure.)

Besides analyzing the partisan influences on how people write and interact with comments, it would also be interesting to break the two “sides” down into their constituent subcategories, and examine how those subcategories differ in their engagement with the comments section. For example, how does interaction vary within each side across minorities, men, women, the young, the old, and so on?

In my opinion, the two keys to understanding selective exposure and improving how users engage with others holding opposite beliefs are as follows:

  1. Understanding the cases where users are exposed to counterattitudinal information, when and why they actively seek it out, and how they respond to it.
  2. Designing systems which encourage users to: (i) be more accepting of different viewpoints, and (ii) critically examine their own viewpoints.

Both of these are, of course, addressed in depth in the video. I find that these two areas have huge scope for interesting research ideas: more data-analysis driven for point 1, and more design driven for point 2.

For example, a system could be designed which takes data from extensions like Balancer (referred to in last week’s paper on bursting your filter bubble), or any similar browser extension which categorizes the political content a person views, and analyzes that data to see whether a “red” person ever binges on “blue” content for an extended period of time, or vice versa, identifying any triggers which may have caused this to happen. Historical data could also be collected to find out how these users “used” what they gathered from such a binge of counterattitudinal content. That is, did they use it as ammunition for their next comments supporting their own side? Were they convinced by it, and did they slowly start browsing more counterattitudinal content?
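The binge-detection step could be sketched like this, assuming we already have an ordered, labeled browsing log of the kind a tool like Balancer might export (the log format, labels, and threshold here are all hypothetical):

```python
from itertools import groupby

def find_binges(history, leaning, min_len=3):
    """history: ordered list of (timestamp, label) page visits,
    with label in {"red", "blue"}. Returns (start_ts, end_ts, count)
    for each unbroken run of at least min_len visits to content
    opposite the user's own leaning."""
    opposite = "blue" if leaning == "red" else "red"
    binges = []
    # groupby collapses the history into consecutive same-label runs.
    for label, run in groupby(history, key=lambda visit: visit[1]):
        run = list(run)
        if label == opposite and len(run) >= min_len:
            binges.append((run[0][0], run[-1][0], len(run)))
    return binges
```

Flagged runs could then be joined against the user’s subsequent comments to ask the “ammunition vs. persuasion” question raised above.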

Similarly, systems could be designed which transform a webpage to “nice-ify” it. This could be a browser extension which displays little messages at the top of a web page, reminding users to be nice or respectful. It could also detect uncivil language and display a message asking the commenter to reconsider. This ties into the discussion about the effectiveness of priming users to behave in certain ways.
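The reconsideration prompt could be sketched as below. The word list is a hypothetical placeholder; a real extension would use a trained toxicity classifier rather than hand-written keywords:

```python
import re

# Hypothetical incivility lexicon, purely for illustration.
UNCIVIL = {"idiot", "stupid", "moron", "shut up"}

def civility_prompt(comment):
    """Return a gentle reconsideration message if the draft comment
    contains uncivil language, else None (meaning: post as-is)."""
    text = comment.lower()
    hits = [w for w in UNCIVIL
            if re.search(r"\b" + re.escape(w) + r"\b", text)]
    if hits:
        return ("Your comment contains language that may come across as "
                "uncivil (" + ", ".join(sorted(hits)) + "). "
                "Would you like to rephrase before posting?")
    return None
```

Because the prompt only asks rather than blocks, it acts as a priming nudge of the kind discussed in the video rather than as moderation.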

Systems could also be designed to humanize people on chat forums, by adding some (user-decided) facts about them to emphasize their personhood, without revealing their identity. It is a lot harder to insult Sarah, who has a six-month-old kitten named Snowball and likes rollerblading, than it is to insult the user sarah_1996. This would also bridge partisan gaps by emphasizing that the other side consists of humans with identities beyond their political beliefs.
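The display side of this idea is simple; a sketch, assuming users have opted in to a short list of facts to show (names and facts below are illustrative):

```python
def humanized_handle(username, facts, max_facts=2):
    """Render a pseudonymous handle with a few user-chosen personal
    facts appended, emphasizing personhood without revealing identity."""
    shown = facts[:max_facts]  # cap how much is displayed per comment
    if not shown:
        return username
    return f"{username} ({'; '.join(shown)})"
```

The interesting design questions are upstream of this: which facts users choose to share, and whether showing them measurably reduces incivility.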
