Reflection #9 – [09/27] – [Viral Pasad]

Natalie Jomini Stroud. “Partisanship and the Search for Engaging News”

Dr. Stroud’s work motivates me to think about the following (no pun intended) research directions and solutions.

Selective Exposure and Selective Judgement can be hacked and are very susceptible to attack by sock puppets or bots. Social media platforms exploited people’s inadvertent Selective Exposure to gain more traction and engagement on their sites, but people also try to understand or reverse engineer the feed-ranking algorithm and game the system. If everyone knows that a generic social media or online news site mostly shows each user only what they find reasonable or agree with, that becomes a powerful tool for marketers and sock puppets (created with ulterior motives) to push out content they want their ideological ‘followers’ to be ‘immersed’ in. This is a Black Mirror episode waiting to happen, bound to create Echo Chambers and incompletely informed opinions. Beyond incompletely informed opinions, it also breeds misinformation, since users who already agree with a certain ideology are unlikely to pick a post apart in search of a shady clause or outright incorrect information!

This is what Facebook’s Dark Posts enabled in 2016: a ‘follower’ would see certain posts sent out by the influencers they follow, but if those posts were forwarded to non-followers, the recipients would simply be unable to open the links at all (presumably because a non-follower would scrutinize that very post).

 

Thus, I would like to consider a design project/study in two parts, hoping to disrupt Selective Exposure and Selective Judgement. The two parts are as follows:

 

I] Algorithmic Design/Audit – How are the posts shown (selected)

This deals not with how users see their posts visually, but with how users’ feeds are curated to show them certain kinds of posts more than others. With a three-phase design approach, we can attempt to understand algorithmic exploitation of the selectivity process, as well as user bias towards or against feeds that do not follow, or that over-exploit, the selectivity process users inherently employ.

The users can be exposed to three kinds of feeds:

  • one, a feed heavily exploiting the selectivity bias (almost creating an echo chamber);
  • two, a neutral feed, equivocating and displaying opposing opinions with equal weightage;
  • three, a hybrid custom feed, which shows the user agreeable, opinionated posts, but also a warning/disclaimer that “this is an echo chamber” and a way to get to other opinions as well, such as tabs or sliders saying “this is what others are saying”.

With the third feed, we can also hope to learn the behavioural tendencies of users once they realize that they are only seeing one side of the coin.
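
To make the three conditions concrete, below is a minimal sketch of how the study feeds could be generated. It is my own illustration, not anything from Dr. Stroud’s work or an actual platform: `Post`, `stance`, and `user_lean` are hypothetical names, assuming each post carries a simple ideological lean score in [-1, 1].

```python
# A minimal sketch (illustrative only, not an actual platform algorithm): the
# three hypothetical study conditions. `stance` and `user_lean` are assumed
# scores in [-1, 1], where -1 is strongly opposing and +1 is strongly agreeing.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Post:
    text: str
    stance: float  # assumed ideological lean of the post


def echo_feed(posts: List[Post], user_lean: float) -> List[Post]:
    """Feed 1: rank purely by agreement with the user (near echo chamber)."""
    return sorted(posts, key=lambda p: abs(p.stance - user_lean))


def neutral_feed(posts: List[Post], user_lean: float) -> List[Post]:
    """Feed 2: alternate agreeable and opposing posts with equal weightage."""
    agree = [p for p in posts if p.stance * user_lean >= 0]
    oppose = [p for p in posts if p.stance * user_lean < 0]
    feed: List[Post] = []
    for pair in zip(agree, oppose):
        feed.extend(pair)
    return feed


def labelled_feed(posts: List[Post], user_lean: float) -> Dict[str, object]:
    """Feed 3: agreeable posts plus an echo-chamber disclaimer and a
    'what others are saying' tab pointing to opposing posts."""
    return {
        "disclaimer": "Heads up: this feed mostly matches your views (echo chamber).",
        "main_feed": echo_feed(posts, user_lean),
        "what_others_are_saying": [p for p in posts if p.stance * user_lean < 0],
    }
```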

 

II] Feed Design – How are the posts shown (visually)

This deals with how the posts are visually displayed on user feeds. The approach is to create an equivocating feed that puts the user in charge of his/her opinion by presenting just the facts.

Often, news conforming to the majority opinion has far more posts than news conforming to the minority opinion, and thus an inadvertent echo chamber is created. A News Aggregator could be employed to group the majority and minority posts in the feed. Selective Exposure will drive the user to peek at the agreeable opinion, while Selective Judgement will drive the user to scrutinize and pick apart the less agreeable opinion. This, I believe, can help disrupt Selective Exposure and Selective Judgement to a certain extent, (hopefully) creating a community of well-informed users.
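
As a rough illustration of the aggregator idea, the toy sketch below groups posts about the same story into a majority-opinion cluster and a minority-opinion cluster, so the smaller side is surfaced next to the larger one rather than being buried. The `story_id` and `stance` fields are assumptions introduced only for illustration, not part of any real platform API.

```python
# A toy sketch of the proposed news aggregator (illustrative only): posts on the
# same story are grouped into majority- and minority-opinion clusters so both
# sides appear together in the feed. `story_id` and `stance` are assumed fields.
from collections import defaultdict
from typing import Dict, List


def aggregate_feed(posts: List[dict]) -> List[Dict[str, object]]:
    by_story: Dict[str, List[dict]] = defaultdict(list)
    for post in posts:
        by_story[post["story_id"]].append(post)

    grouped = []
    for story_id, items in by_story.items():
        positive = [p for p in items if p["stance"] > 0]
        negative = [p for p in items if p["stance"] <= 0]
        majority, minority = (
            (positive, negative) if len(positive) >= len(negative) else (negative, positive)
        )
        grouped.append({
            "story_id": story_id,
            "majority_view": majority,  # the side with more posts
            "minority_view": minority,  # surfaced alongside instead of drowned out
        })
    return grouped
```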
