Partisanship is an inherently social phenomenon in which people form groups around different ideologies and their representatives. However, if partisans become isolated inside their groups without constructive exposure to the ideas and opinions of other groups, society may start to drift away from being healthy and democratic. Talia Stroud works toward promoting constructive engagement between partisans of different groups and mitigating the negative effects of partisanship in online news media.
The first study presented by Stroud leverages the Stereotype Content Model to promote the idea of distinguishing liking from respect. The results show a significant effect from changing the names of the reaction buttons on comments. Some questions arise here, such as: what is the long-term effect of such a solution in terms of actually reducing negative partisan behavior? The results partially answer this question by showing that people do in fact "respect" opposing ideas. But of all the people who encounter an opposing comment, how many are actually willing to engage with it positively? And how can people be encouraged to engage with, and respect, an opposing comment that deserves that respect?
From my perspective, I would suggest addressing these questions as follows:
- extending the study within the context of selective exposure and selective judgment by measuring the percentage of people who stop at opposing comments, read them, and give them the respect they deserve.
- extending the design to include feedback to the user. For example, including a healthy-engagement score that increases when a user reads and respects an opposing opinion.
The second study presented in the video analyzes the effect of incivility in online news media comments by examining what triggers reward and punishment for comments. In this regard, the study compares three behaviors: profanity, incivility, and partisanship. It is no surprise that profanity is the one behavior rejected by both commenters and moderators. However, uncivil conversations sometimes attract views and even engagement. Many in my generation grew up watching TV shows with political opponents fighting on air. Such programs ostensibly serve the good cause of promoting fruitful discussion between opposing mindsets; however, as Stroud mentioned, there are business incentives behind promoting some controversial discussions.
In a perfect world, we might wish that fruitful interactions between partisans of different groups became as engaging as those situations where partisans go into fighter mode to defend their ideologies. The question is how to encourage news organizations to define clear thresholds for the amount of acceptable incivility in discussions about hot-button issues. From another perspective, is it even feasible to do so? Or should researchers focus on promoting desirable engagement among users rather than moving toward stricter moderation of online comments?
From my perspective, the current model for news organizations is the best we can do in terms of having a set of rules and (human and/or automated) moderators enforcing those rules to some extent. The changes we apply could instead be made in the user interface design of online news sites to promote healthier engagement (e.g., the first study with my suggestions added to it), integrated with some of the ideas surveyed in the work Bursting Your (Filter) Bubble: Strategies for Promoting Diverse Exposure. Another important step could be auditing (and perhaps redesigning) recommendation algorithms to ensure that they do not contribute to the so-called filter-bubble effect.