Reflection #4 – [1/30] – Hamza Manzoor

[1]. Garrett, R. Kelly, and Brian E. Weeks. “The promise and peril of real-time corrections to political misperceptions.”

[2]. Mitra, Tanushree, Graham P. Wright, and Eric Gilbert. “A parsimonious language model of social media credibility across disparate events.”


These papers are highly relevant in a digital age where everyone has a voice and, as a result, there is a plethora of misinformation around the web. In [1], the authors compare the effects of real-time corrections to corrections that are presented after a distraction. To study the implications of correcting inaccurate information, they conducted a between-participants experiment built around the topic of electronic health records (EHRs), examining how effective real-time corrections are compared to corrections presented later. Their experiment used a demographically diverse sample of 574 participants. In [2], Mitra et al. present a study assessing the credibility of social media events. They build a model that captures the language used in the Twitter messages of 1,377 real-world events (66M messages) drawn from the CREDBANK corpus. CREDBANK used Mechanical Turk workers to obtain credibility annotations; the authors then trained a penalized logistic regression on 15 linguistic and other control features to predict the credibility level (Low, Medium, or High) of each event stream.
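
To make the modeling setup in [2] concrete, below is a minimal sketch of a penalized logistic regression that maps event-level linguistic features to a credibility class. The feature names, data, and penalty strength are placeholders of my own, not the actual CREDBANK measures or the exact model specification from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Toy event-level feature matrix: one row per Twitter event stream,
# one column per (hypothetical) linguistic measure.
n_events = 300
X = rng.normal(size=(n_events, 4))
y = rng.choice(["Low", "Medium", "High"], size=n_events)  # Turker-style credibility labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# An L1 penalty shrinks redundant (collinear) features toward zero,
# which is what keeps the resulting model parsimonious.
model = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
model.fit(scaler.transform(X_train), y_train)

print(model.coef_)                                    # surviving features keep nonzero weights
print(model.score(scaler.transform(X_test), y_test))  # chance-level here, since the toy labels are random
```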

Garrett et al. claim that although real-time correction is more effective than delayed correction, it can have unintended consequences, especially for people who are predisposed to a certain ideology. First of all, their sample was US-based, which makes me question whether these results would hold in other societies. Is the sample diverse enough to generalize from? Can we even generalize it for the US alone? The sample was 86% white, whereas non-resident immigrants alone make up over 14% of the US population.

The experiment also does not explain what factors contribute to people sticking to their preconceived notions. Is it education or age? Are educated people more open to corrections? Are older people less likely to change their opinions?

Also, a single experiment on EHRs is inconclusive. Can results from one topic be generalized? Could these experiments be repeated with more controversial topics using Mechanical Turk?

Finally, throughout the paper I felt that delayed correction was not thoroughly discussed. The paper focused so much on the psychological aspects of preconceived notions that it neglected (or forgot) to examine delayed correction itself. How much delay is suitable? How and when should the delayed correction be shown? What if the reader closes the article right after reading it? These are a few key questions that should have been answered regarding delayed corrections.

In the second paper, Mitra et al. present a study assessing the credibility of social media events. They use penalized logistic regression, which in my opinion was the right choice, because linguistic features would introduce multicollinearity and penalizing features seems to be the correct approach. But since they use the CREDBANK corpus, which was annotated via Mechanical Turk, it raises the same questions we discuss in every lecture: did the Turkers thoroughly go through every tweet? Can we neglect Turker bias? Secondly, can we generalize that a PCA-based credibility classification technique will always be better than data-driven classification approaches?
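
To illustrate why the penalty matters, the toy example below constructs two nearly identical features (standing in for highly correlated linguistic measures) and compares a weakly penalized fit against an L1-penalized one. The data are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
hedges = rng.normal(size=n)
hedges_copy = hedges + rng.normal(scale=0.01, size=n)    # a nearly duplicate (collinear) feature
noise = rng.normal(size=n)
X = np.column_stack([hedges, hedges_copy, noise])
y = (hedges + 0.1 * rng.normal(size=n) > 0).astype(int)  # label driven by the duplicated signal

weak_penalty = LogisticRegression(C=100, max_iter=5000).fit(X, y)                 # weak L2 penalty
l1_penalty = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

print("weak penalty:", weak_penalty.coef_.round(2))  # weight is shared across the near-duplicates
print("L1 penalty:  ", l1_penalty.coef_.round(2))    # typically only one duplicate keeps a nonzero weight
```

Running this, the weakly penalized model spreads weight across the duplicated columns, while the L1 model typically keeps only one of them, which is essentially the parsimony argument above.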

The creation of features, though, raises a few questions. The authors make a lot of assumptions in their linguistic features; for example, they hypothesize that a coherent narrative can be associated with a higher level of credibility, which does make sense, but can we hypothesize something and not verify it later? This makes me question the feature space: were these the right features? Finally, can we extend this study to other social media? Will a corpus generated from Twitter events work for other social media platforms?
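
As a rough sketch of what event-level linguistic features could look like in practice, the snippet below computes simple lexicon-based rates over a handful of tweets. The lexicons and feature definitions are stand-ins I chose for illustration, not the measures used in [2].

```python
import re

# Tiny illustrative lexicons; the real measures in [2] are far richer.
HEDGES = {"maybe", "perhaps", "reportedly", "allegedly", "possibly"}
NEGATIONS = {"no", "not", "never", "none"}

def linguistic_features(tweets):
    """Return simple event-level rates of hedging, negation, and questioning."""
    tokens = [tok for tweet in tweets for tok in re.findall(r"[a-z']+", tweet.lower())]
    total_tokens = max(len(tokens), 1)
    return {
        "hedge_rate": sum(tok in HEDGES for tok in tokens) / total_tokens,
        "negation_rate": sum(tok in NEGATIONS for tok in tokens) / total_tokens,
        "questions_per_tweet": sum(t.count("?") for t in tweets) / max(len(tweets), 1),
    }

event_tweets = [
    "Reportedly a major outage downtown, not confirmed yet",
    "Is this real? Maybe the reports are wrong",
]
print(linguistic_features(event_tweets))
```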

