Reflection #3 – [09/04] – [Prerna Juneja]

A Parsimonious Language Model of Social Media Credibility Across Disparate Events

Summary:

In this paper, the authors uncover that the language used in tweets can indicate whether an event will be perceived as highly credible or less credible. After examining millions of tweets corresponding to thousands of Twitter events, they identify 15 linguistic features that can act as predictors of credibility and present a parsimonious model that maps linguistic constructs to different levels of credibility (“Low” < “Medium” < “High” < “Perfect”). The events were annotated based on the proportion of annotators rating the reportage as ‘Certainly Accurate’. The authors use penalized logistic regression for modeling and find that subjectivity is the most explanatory feature, followed by positive and negative emotion. Overall, the results show that certain words and phrases are strong predictors of perceived credibility.
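To make the modeling step more concrete, here is a minimal sketch, not the authors' code, of fitting an L1-penalized logistic regression that maps 15 per-event linguistic features to credibility classes; the feature matrix, labels, and penalty strength below are invented for illustration.

```python
# A minimal sketch (not the authors' implementation) of a penalized logistic
# regression over per-event linguistic features; all data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_events, n_features = 200, 15            # 15 linguistic features per event
X = rng.normal(size=(n_events, n_features))
y = rng.choice(["Low", "Medium", "High", "Perfect"], size=n_events)

# The L1 penalty encourages parsimony by driving some coefficients to zero.
model = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
model.fit(X, y)
print((model.coef_ != 0).sum(), "non-zero coefficients retained")
```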

Reflection:

As defined by Hiemstra et al. in their paper “Parsimonious Language Models for Information Retrieval”, a parsimonious model optimizes its ability to predict language use while minimizing the total number of parameters needed to model the data.

All the papers we read for the last two reflections stressed how linguistic constructs define the identity of a group or individual in online social communities. While the use of racist, homophobic and sexist language was part of the identity of 4chan's /b/ board, users of one Usenet group used the “Geek Code” to proclaim their geek identity. We also learned that users banned from CNN's comment sections used more negative emotion words and less conciliatory language.

I liked how the authors validated their method of annotating the events against an HAC-based clustering approach to grouping them. They use the Rand similarity coefficient to measure the similarity between the two groupings, and the high R value indicates agreement between the two. I agree with the authors' choice of the annotation technique, since it is more generalizable.
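For readers unfamiliar with the coefficient, here is a minimal sketch, my own illustration rather than the authors' code, of computing the Rand index between two groupings of the same events; the labels are made up.

```python
# Rand similarity coefficient between two groupings of the same events:
# the fraction of event pairs that are treated consistently (grouped together
# in both groupings, or kept apart in both).
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of event pairs on which the two groupings agree."""
    assert len(labels_a) == len(labels_b)
    agreements, total = 0, 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]   # together under grouping A?
        same_b = labels_b[i] == labels_b[j]   # together under grouping B?
        agreements += (same_a == same_b)
        total += 1
    return agreements / total

# Hypothetical labels: annotation-based credibility classes vs. HAC clusters
annotation_groups = ["High", "High", "Medium", "Low", "Perfect"]
hac_clusters      = [0, 0, 1, 2, 3]
print(rand_index(annotation_groups, hac_clusters))  # 1.0 here => full agreement
```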

Each Mechanical Turk worker needs to be aware of an event before annotating it; otherwise, they need to search for it online. How can we ensure that the online news coverage is not biasing the turker? Are turkers reading all the tweets in the pop-up window before selecting a category, or do they base their decision on the first few tweets? I believe how an event is reported can vary greatly, so making a judgment by reading only the first few tweets might not give a clear picture. Also, was the order of tweets in the pop-up window the same for all the turkers? I expect I'll find the answers to these questions after reading the CREDBANK paper.

The unit of analysis in this paper is an event rather than a tweet, and an event is perceived as highly credible if a large proportion of annotators rate the reportage as ‘Certainly Accurate’. But is news perceived as credible actually credible? It would be interesting to see whether events perceived as credible turn out to be accurate, especially given how much work is going on in fake news detection and rumor propagation on social media platforms. Also, could people or organizations use this research to phrase rumors in such a way that they are perceived as credible? Would this reverse approach work too?

I believe a better way of calculating the “Questions” feature would be to compute the proportion of tweets containing a question mark rather than the total number of question marks present in the tweets corresponding to an event, as in the sketch below.
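Here is a minimal sketch of the alternative feature I have in mind; the function name and the example tweets are hypothetical.

```python
# Proposed alternative "Questions" feature: the proportion of an event's
# tweets that contain at least one question mark, rather than the raw count
# of question marks across all tweets.
def question_proportion(tweets):
    """Fraction of tweets containing a '?' (hypothetical feature definition)."""
    if not tweets:
        return 0.0
    return sum("?" in t for t in tweets) / len(tweets)

# Hypothetical event with three tweets, one of which asks a question
event_tweets = [
    "Explosion reported downtown?",
    "Police confirm the road is closed.",
    "Stay away from Main St.",
]
print(question_proportion(event_tweets))  # 0.333...
```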

Another feature that could help determine credibility is the presence of URLs in tweets, especially URLs of trusted news agencies like CNN; a rough sketch of such a feature follows.
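This is my own illustration, not a feature from the paper; the trusted-domain list, URL, and tweets are hypothetical.

```python
# Proposed URL-based feature: the share of an event's tweets that link to a
# trusted news domain. Domain list and example tweets are illustrative only.
import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"cnn.com", "bbc.com", "reuters.com"}   # illustrative list
URL_PATTERN = re.compile(r"https?://\S+")

def trusted_url_proportion(tweets):
    """Fraction of tweets containing at least one URL from a trusted domain."""
    def has_trusted_url(tweet):
        for url in URL_PATTERN.findall(tweet):
            domain = urlparse(url).netloc.lower()
            if domain.startswith("www."):
                domain = domain[len("www."):]
            if domain in TRUSTED_DOMAINS:
                return True
        return False
    if not tweets:
        return 0.0
    return sum(has_trusted_url(t) for t in tweets) / len(tweets)

print(trusted_url_proportion([
    "Breaking: flooding downtown http://www.cnn.com/2015/09/storm-update",
    "can't believe this is happening",
]))  # 0.5
```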

In the end, I'll reiterate the authors' point: linguistic features combined with other domain-specific features could act as the foundation for an automated system to detect fake news.
