“The Promise and Peril of Real-Time Corrections to Political Misperceptions”
Garrett and Weeks study how real-time corrections to inaccurate political claims online affect readers' beliefs.
“A Parsimonious Language Model of Social Media Credibility Across Disparate Events”
Mitra, Wright, and Gilbert use language analysis to predict the perceived credibility of Twitter posts about events.
Garrett and Weeks rightly identify a longer-term study as a priority for future work. People are naturally inclined to defend their worldview, so they resist changing their opinions over a short span of time, but repeated corrections over a longer period might carry more influence. Participants may need time to build trust in the corrections before accepting them. The added insight from corrections might also lead them to recognize that many of their other views hold more nuance than they assumed and are worth revisiting. There are many psychological factors to weigh here: persuasion, trust, participants' backgrounds, and the dynamics of social media.
I suspect the truth aligns more closely with Garrett and Weeks's hypotheses than the results indicate. Self-reporting likely keeps some participants from acknowledging a genuine change of opinion. The study notes that participants are defensive of their original positions and resist change. If a correction does shift a participant's view, that participant may feel embarrassed at having been manipulated by misinformation and at not being as open-minded or unbiased as they believed. This is a form of the well-known psychological phenomenon of cognitive dissonance. People usually resolve cognitive dissonance gradually, adjusting their opinions over time until those opinions match their experiences. Again, a longer-term study of the corrections could investigate this.
Mitra, Wright, and Gilbert treat credibility as directly connected to language and vocabulary. I am not sure such a model can correctly account for context and complexities such as sarcasm. The CREDBANK corpus may be quite useful for training on labeled social media posts about events, but real-world data will still present these complications. Perhaps other studies offer ways of measuring the intent or underlying message of social media posts. Otherwise, humor or sarcasm could introduce error, since the linguistic variables do not measure them as such.
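To make the concern concrete, here is a minimal sketch of a language-based credibility predictor in the spirit of their approach; this is not the authors' actual model, and the example posts, labels, and feature choices are all hypothetical:

    # A hypothetical sketch of a lexical credibility classifier.
    # Word/bigram TF-IDF stands in for the curated lexicons
    # (hedges, evidentials, etc.) a real model would use.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training data: posts labeled perceived-credible (1) or not (0).
    posts = [
        "Officials confirm the bridge has been closed after the crash.",
        "I heard maybe something happened downtown, not sure though??",
        "Reuters reports two injured; police have issued a statement.",
        "lol sure, and pigs can fly. totally believable story folks",
    ]
    labels = [1, 0, 1, 0]

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(),
    )
    model.fit(posts, labels)

    # Sarcasm is the failure mode discussed above: the surface words
    # look confident, so a purely lexical model can be fooled.
    print(model.predict_proba(["Oh yeah, the mayor DEFINITELY said that."]))

The last line illustrates the problem: a sarcastic post borrows confident vocabulary, so a model that sees only the words may score it as credible.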
Between these two papers, we can both identify dubious claims made online and present corrections to users in a way that avoids harm. But I believe computers are likely not adept at crafting the corrections themselves. This is an opportune place for human-computer collaboration: the computer routes detected claims to an expert user, who verifies each claim and crafts a correction, which the computer then distributes widely to others making the same claim. Such a system both adapts to newly reported misinformation and can be tuned to each expert's area of expertise.
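To show the division of labor I have in mind, here is a minimal sketch of that human-in-the-loop pipeline; the class names, flagging threshold, and example claim are my own illustration, not a system from either paper:

    # Hypothetical human-in-the-loop correction pipeline: the machine
    # flags and distributes, the human expert verifies and writes.
    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        text: str
        credibility: float            # e.g., a score from a language model
        correction: str | None = None

    @dataclass
    class CorrectionPipeline:
        threshold: float = 0.4        # flag low-credibility posts for review
        review_queue: list[Claim] = field(default_factory=list)
        corrections: dict[str, str] = field(default_factory=dict)

        def ingest(self, claim: Claim) -> str | None:
            """Computer side: reuse a known correction or queue for an expert."""
            if claim.text in self.corrections:
                return self.corrections[claim.text]  # distribute automatically
            if claim.credibility < self.threshold:
                self.review_queue.append(claim)      # escalate to a human
            return None

        def expert_review(self, claim: Claim, correction: str) -> None:
            """Human side: an expert verifies the claim and writes the fix."""
            claim.correction = correction
            self.corrections[claim.text] = correction

    pipeline = CorrectionPipeline()
    pipeline.ingest(Claim("Vaccines contain microchips.", credibility=0.1))
    flagged = pipeline.review_queue.pop()
    pipeline.expert_review(flagged, "No evidence supports this claim.")
    # The same claim seen again is now corrected with no expert effort.
    print(pipeline.ingest(Claim("Vaccines contain microchips.", credibility=0.1)))

The design point is that expert effort is spent once per claim: after one review, the computer handles every repeat of that claim, which is what lets the system scale and adapt as new misinformation is reported.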