In this paper the questions posed by the authors are quite thought-provoking, and they are ones which, I believe, most of us in computer science would simply overlook, perhaps due to a lack of background in psychology. The main question is: what sort of corrective approach leads people to let go of political misconceptions? Is it an immediate correction? A correction provided after some delay? Or is providing no correction at all the way to go? Given that this is a computational social science course, I found this paper, though interesting, a bit out of place. The methodology, the questions, and the way the results and discussion are built up make it read very much like a psychology paper.
Nevertheless, coming back to the paper, I felt that two aspects were completely ignored in the analysis: gender and age.
- How do men and women react to the same corrective stimuli?
- Are younger people more open to differing opinions, and are they more objective when judging evidence that counters their misperceptions?
It would’ve been great to see how the results extrapolate across different racial groups, though I guess that’s a bit unreasonable to expect given that ~87% of the sample consists of a single racial group. This highlighted snippet from the paper made me chuckle:
There’s nothing wrong with making claims, but one should have the numbers to back them up, which doesn’t seem to be the case here.
The second question, which comes from personal experience, is whether the results of this study would hold in, say, Pakistan or India. The reason I ask is that politics there is driven by different factors, such as religion, so people’s behavior and their tendency to stick to certain views regardless of evidence contradicting them would likely be different.
The third point is the relationship between the aforementioned concerns and the subject’s level of education:
- Are more educated people better able to set aside their preconceived notions and points of view when presented with corrective evidence?
- How is level of education correlated with the perception that a media/news outlet is biased, and with the ability to set that perception aside?
Before moving on to the results and discussion, I have a few concerns about how some of the data was collected from the participants. In particular:
- Some questions use a 1-7 scale, 1 being the worst case and 7 the best. How do people even place themselves on a scale that has no reference point? Given that there was no reference point, or at least none the authors mention, any answer to such questions will be ambiguous and heavily biased by what each participant considers a 1 or a 7 on the scale. Results drawn from such questions would be misleading at best.
- The second concern has to do with the time allotted for reading: why 1 minute, or even 2? Why was it not mandatory for the participants to read the entire document/piece of text? What motivated this method, and what merits does it have, if any?
- MTurk was launched publicly on November 2, 2005, and the paper was published in 2013. Was it not possible to gather more data using remote participants?
Now, the results section pulled all sorts of triggers for me, so rather than go into detail I’ll just pose three questions:
- Graphs with unlabelled y-axes? Though I don’t doubt the authenticity or intentions of the authors, this makes the results much less credible for me.
- Supposing the y-axes are on a 0-1 range, why are all the thresholds at 0.5?
- Why linear regression? Won’t that force the results to be artifacts of the fit rather than actual trends? Logistic regression or regression trees, I believe, would have been a better choice without sacrificing interpretability (see the sketch after this list).
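To make that last point concrete, here is a minimal sketch in Python using synthetic data and a made-up binary outcome (“accepted the correction”); none of the data or variable names come from the paper. It shows the basic issue: a linear fit on a binary outcome can produce predictions outside [0, 1], while a logistic fit yields proper probabilities and stays interpretable via log-odds.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical predictor: strength of prior attitude on a 1-7 scale.
attitude = rng.uniform(1, 7, size=500)

# Hypothetical binary outcome: 1 = accepted the correction, 0 = rejected it,
# generated so acceptance drops as attitude strength grows (an assumption).
p_accept = 1 / (1 + np.exp(attitude - 4))
accepted = rng.binomial(1, p_accept)

X = attitude.reshape(-1, 1)

# Linear regression: fitted values can fall outside [0, 1], so treating
# them as probabilities of acceptance is not guaranteed to be valid.
lin = LinearRegression().fit(X, accepted)
print("linear fit at attitude=7:", lin.predict([[7.0]])[0])

# Logistic regression: predictions are proper probabilities in [0, 1],
# and coefficients remain interpretable as changes in log-odds.
log = LogisticRegression().fit(X, accepted)
print("logistic P(accept) at attitude=7:", log.predict_proba([[7.0]])[0, 1])
```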
Now, the results drawn are quite interesting. One finding I didn’t expect was that real-time corrections don’t actually provoke heightened counterarguing; instead, the problem comes into play via biases stemming from prior attitudes when participants compare the credibility of the claims. So the question arises: how do we correct people when they’re wrong about ideas they feel strongly about, and when the strength of their belief might dominate their ability to reason? In this regard, I like the authors’ first recommendation of presenting news from sources the users trust, which these days can easily be extracted from their browsing history or even input by the users themselves. Given that extracting such information from users is already commonplace (e.g., Google and Facebook use it to place ads), I think we needn’t worry about privacy. What we do need to worry about is, as the authors mention, its tendency to backfire and reinforce the misperceptions. The question then becomes: how do we keep this customization tool from becoming a double-edged sword? One idea is to show users a scale of how left- or right-leaning each source is when presenting the information, and to tailor the list of sources to include more neutral ones, or tailor the ranking to favor neutral sources while occasionally sprinkling in opposite-leaning ones to spice things up (a rough sketch of such a re-ranking follows).
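As a rough sketch of that idea (every outlet name, score, and weight below is hypothetical, not from the paper), the re-ranking could penalize partisanship so neutral sources float upward, and periodically interleave a source leaning opposite to the user:

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    relevance: float  # 0..1: how well the source matches the user's interests
    leaning: float    # -1 (far left) .. 0 (neutral) .. +1 (far right)

def rerank(sources, neutrality_weight=0.5):
    # Penalize partisanship so neutral sources rise; the weight is a
    # tunable assumption, not a value taken from the paper.
    return sorted(sources,
                  key=lambda s: s.relevance - neutrality_weight * abs(s.leaning),
                  reverse=True)

def with_opposite_sprinkled(ranked, user_leaning, every=3):
    # Interleave one opposite-leaning source after every `every` results.
    opposite = [s for s in ranked if s.leaning * user_leaning < -0.1]
    rest = [s for s in ranked if s not in opposite]
    out = []
    for i, s in enumerate(rest, start=1):
        out.append(s)
        if opposite and i % every == 0:
            out.append(opposite.pop(0))
    return out + opposite

feed = [Source("Outlet A", 0.9, -0.8),
        Source("Outlet B", 0.8, 0.1),
        Source("Outlet C", 0.7, 0.7),
        Source("Outlet D", 0.6, -0.1)]

for s in with_opposite_sprinkled(rerank(feed), user_leaning=-0.6):
    print(f"{s.name:10s} leaning={s.leaning:+.1f}")
```

The neutrality weight and the interleaving interval would obviously need tuning, and, given the backfire effect the authors warn about, any such scheme should be tested against actual belief change rather than engagement.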
I would like to close by saying that we as computer scientists have the ability, far above many, to build tools that shape society, and it falls upon us to understand the populace, human behavior, and people’s psychology much more deeply than we already do. If we don’t, we run the danger of our tools producing results contrary to what they were designed for. As Uncle Ben put it, “With great power comes great responsibility.”