Reflection #4 – [1/30] – Aparna Gupta

Reflection 4:

  1. Garrett, R. Kelly, and Brian E. Weeks. “The promise and peril of real-time corrections to political misperceptions.” Proceedings of the 2013 Conference on Computer Supported Cooperative Work. ACM, 2013.
  2. Mitra, Tanushree, Graham P. Wright, and Eric Gilbert. “A parsimonious language model of social media credibility across disparate events.” Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. ACM, 2017.

Summary:

Both papers discuss the credibility of content posted on social media websites like Twitter. Mitra et al. present a parsimonious model that maps language cues to perceived levels of credibility, and their results show that certain linguistic categories and their associated phrases are strong predictors of credibility across disparate social media events; their dataset contains 1,377 real-world events. Garrett et al., on the other hand, present a study comparing the effects of real-time corrections with corrections presented after a short distractor task.

Reflection:

Both papers present interesting findings on how information credibility across the World Wide Web can be interpreted.

In the first paper, Garrett et al. show how political facts and information can be misstated. According to them, real-time corrections are more effective than corrections made after a delay. I feel this is true to a certain extent, since a user rarely revisits a post they have already read; if corrections are made in real time, the reader can see that a mistake has been corrected and that credible information has now been posted. However, I feel that the experiment about what users perceive (1. when provided with an inaccurate statement and no correction, 2. when provided with a correction after a delay, and 3. when provided with messages in which disputed information is highlighted and accompanied by a correction) can be biased by the user’s interest in the content.

An interesting part of this paper was the listing of various tools (Truthy, Videolyzer, etc.) that can be used to identify and highlight inaccurate phrases.

In the second paper, Mitra et al. try to map language cues to perceived levels of credibility. They target a problem that is now quite prevalent: since the World Wide Web is open to everyone, people have the freedom to post any content without caring about the credibility of the information being posted. For example, there are times when I have come across the same information (with the exact same words) being posted by multiple users. This makes me wonder about the authenticity of the content and raises doubts about its credibility. I really liked the approach the authors adopted to identify expressions that lead to low or high credibility of the content. However, the authors focus on perceived credibility in this paper. Can “perceived” credibility be considered the same as the “actual” credibility of the information? How can the bias, if there is any, be eliminated? I feel these are more psychology- and theory-based questions and extremely difficult to quantify.
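To make the mapping concrete for myself, here is a minimal sketch (in Python, using scikit-learn) of what “language cues to perceived credibility” could look like: count a couple of hedging and boosting words in each post and fit an L1-penalized logistic regression, so that only a few cues keep non-zero weights, which is the “parsimonious” idea. The lexicons, posts, and labels below are invented for illustration; this is not the authors’ actual model, which uses far richer linguistic categories and an ordinal formulation.

    # Toy illustration, not the authors' model: map simple language cues
    # (counts of hedge and booster words) to crowd-rated credibility labels
    # with an L1-penalized ("parsimonious") logistic regression.
    from sklearn.linear_model import LogisticRegression

    HEDGES = {"maybe", "perhaps", "reportedly", "allegedly"}
    BOOSTERS = {"confirmed", "definitely", "officially"}

    def cue_features(posts):
        # Two features per post: hedge count and booster count
        # (a stand-in for the richer linguistic categories in the paper).
        feats = []
        for text in posts:
            tokens = text.lower().split()
            feats.append([sum(t in HEDGES for t in tokens),
                          sum(t in BOOSTERS for t in tokens)])
        return feats

    # Invented posts with invented perceived-credibility labels (0 = low, 1 = high)
    posts = [
        "shooting reportedly near downtown, maybe several injured",
        "police have officially confirmed the road closure",
        "allegedly the outage is city-wide, perhaps worse than reported",
        "the mayor confirmed relief shelters are open",
    ]
    labels = [0, 1, 0, 1]

    model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    model.fit(cue_features(posts), labels)
    print(model.coef_)  # sparse weights: which cues push perceived credibility up or down

Even in this toy version, the sparsity of the weights is what makes the learned cues easy to interpret, which is the property that makes the authors’ approach appealing to me.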

In conclusion, I found both papers very intriguing. I felt that they present a great amalgamation of human psychology and real-world problems, and show how those problems can be addressed using statistical models.
