Reflection #4 – [1/29] – Ashish Baghudana

Garrett, R. Kelly, and Brian E. Weeks. “The promise and peril of real-time corrections to political misperceptions.” Proceedings of the 2013 conference on Computer supported cooperative work. ACM, 2013.

Reflection

In this paper, the authors pose two interesting research questions – can political misperceptions and inaccuracies be corrected by a fact-checker, and if so, does real-time correction work better on users than delayed correction? The authors ran an experiment in which participants first read an accurate post about electronic health records (EHRs), followed by an inaccurate post from a political blog. The participants were divided into three groups, which received corrections to the inaccurate post:

  • immediately after reading a false post
  • after a distraction activity
  • never

Garrett et al. report that corrections can be effective, even on politically charged topics. Based on a questionnaire administered at the end of the experiment, the authors concluded that users who were presented with corrections held more accurate beliefs about EHRs in general. Specifically, immediate-correction users were more accurate than delayed-correction users. However, immediate correction also accentuated attitudinal bias: participants who already viewed the issue negatively became more resistant to correction.

This paper is unlike any of the papers we have read in this class so far. In many senses, I feel it deals entirely with psychology. While it is applicable to computer scientists designing fact-checking tools, its implications are more far-reaching. The authors created separate material for each group in their experiment and physically administered the experiment to each of their participants. This research paper is a demonstration of meticulous planning and execution.

An immediate question arising from the paper is – would this experiment be possible using Amazon Mechanical Turk (MTurk)? That would have made it easier to collect more data. It would also have enabled the authors to run multiple experiments with different cases – i.e., more contentious issues than EHRs. The authors mention that the second (factually incorrect) article was attributed to a popular political blog. If the blog was right-leaning or left-leaning and this was known to the participants, did it affect their responses to the questionnaire? The authors could have included an intermediate survey (after stage 1) to capture participants' prior biases.

A limitation the authors mention is the lack of reinforcement of corrections. Unfortunately, running experiments involving humans is a massive exercise, and it would be difficult to repeat them several times. Another issue with such experiments is that participants are likely to treat the questionnaire as a memory test and answer accordingly, rather than reporting their true beliefs. I also take issue with the racial diversity of the sample: the population is predominantly white (~86%).

This study could be extended to examine the correlation between party affiliation and political views on one hand, and users' willingness to accept corrections on the other. Are certain groups of people more prone to holding incorrect beliefs?