Reflection #3 – [09/04] – [Neelma Bhatti]

  1. Garrett, R.K. and Weeks, B.E., 2013, February. The promise and peril of real-time corrections to political misperceptions. In Proceedings of the 2013 conference on Computer supported cooperative work (pp. 1047-1058). ACM.
  2. Mitra, T., Wright, G.P. and Gilbert, E., 2017, February. A parsimonious language model of social media credibility across disparate events. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (pp. 126-145). ACM.

Summary and Reflection for paper 1

Article one talks about how people tend to be picky and choosy when it comes to rumors and their correction. They find news hard to believe if it doesn’t align with their preconceived notions about an idea, and find it even harder to make amends for spreading false news if it does align with their agenda or beliefs. It presents plausible recommendations about tailoring the correction to the user’s views so that it is more easily digestible and acceptable. I personally related to recommendation 2, about letting users know the risks associated with holding on to the rumor, or their moral obligation to correct their views. However, do the same user profiling and preference-guessing algorithms work across sources of news other than the traditional ones, e.g. Twitter, CNN, etc.?

Since delayed correction seemed to work better in most cases, could a system estimate how likely a user is to pass the news on, based on his/her profile, and present real-time corrections to users who tend to proliferate fake news faster than others, using a mix of all three recommendations presented in this paper? A rough sketch of such a system follows below.
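Purely as a thought experiment, here is a minimal sketch of how correction timing could be tied to an estimated spread propensity. The profile features, weights, and the 0.7 threshold are my own illustrative assumptions, not anything proposed in the paper.

```python
# Hypothetical sketch: choose correction timing from an estimated spread propensity.
# Feature names, weights, and the 0.7 threshold are illustrative assumptions,
# not values from Garrett & Weeks (2013).

def spread_propensity(profile: dict) -> float:
    """Crude score in [0, 1] for how quickly a user tends to pass rumors on."""
    score = 0.0
    score += 0.4 * min(profile.get("shares_per_day", 0) / 50, 1.0)
    score += 0.3 * min(profile.get("followers", 0) / 10_000, 1.0)
    score += 0.3 * profile.get("past_misinformation_reports", 0) / max(profile.get("total_reports", 1), 1)
    return min(score, 1.0)

def correction_strategy(profile: dict) -> str:
    """Fast spreaders get real-time corrections; everyone else gets delayed ones."""
    return "real-time correction" if spread_propensity(profile) > 0.7 else "delayed correction"

print(correction_strategy({"shares_per_day": 40, "followers": 8_000,
                           "past_misinformation_reports": 3, "total_reports": 5}))
```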


Summary for paper 2

As long as there is a market for juicy gossip and the misinterpretation of events, rumors will keep spreading in one form or another. People have a tendency to anticipate, and readily believe, things that are either consistent with their existing beliefs or that give an adrenaline rush without posing any apparent harm to them. Article 2 talks about using language markers and cues to assess the credibility of a news item or its source, which, when combined with other approaches to classifying credibility, can work as an early detector of false news.
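To make the idea of language markers concrete, here is a minimal sketch of a cue-based credibility classifier. The hedging/booster lexicons, the feature set, and the use of logistic regression are my own illustrative assumptions, not the actual features or model from Mitra et al. (2017).

```python
# Illustrative sketch of a cue-based credibility classifier.
# Lexicons, features, and the logistic regression setup are assumptions for
# demonstration, not the parsimonious model from the paper.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

HEDGES = {"reportedly", "allegedly", "apparently", "unconfirmed", "rumor"}
BOOSTERS = {"confirmed", "official", "verified", "definitely"}

def cue_features(text: str) -> list[float]:
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return [
        sum(w in HEDGES for w in words) / n,     # hedging rate
        sum(w in BOOSTERS for w in words) / n,   # certainty rate
        text.count("?") + text.count("!"),       # punctuation cues
        len(words),                              # message length
    ]

# Tiny toy training set: 1 = credible, 0 = not credible.
posts = ["Officials confirmed the evacuation order this morning.",
         "Unconfirmed reports allegedly say the bridge collapsed!!",
         "Verified: the road closure ends at 6 pm.",
         "Rumor: apparently the whole district is underwater?!"]
labels = [1, 0, 1, 0]

clf = LogisticRegression().fit(np.array([cue_features(p) for p in posts]), labels)
print(clf.predict([cue_features("Allegedly the airport is shut down!!")]))
```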

Reflection and Questions

  • A credibility score could be maintained and publicly displayed for each user, starting from 0 and decreasing every time the user is reported for posting or spreading misleading news (a rough sketch of this idea appears after this list). Can such a credibility score be used to determine how factual someone’s tweets/posts are?
  • Can such a score be maintained for news too?
  • Can a more inclusive language model be developed, one that also takes multilingual postings into account?
  • How can the number of words used in a tweet, its retweets, and replies be an indicator of the authenticity of a news item?
  • Sometimes users place emoticons/emojis at the end of a tweet to indicate satire or mockery of otherwise seriously portrayed news. Does the model account for their effect on the authenticity of the news?
  • What about rumors posted via images?
  • So much propaganda is spread via videos or edited images on social media. Sometimes, all the textual news that follows is the outcome of a viral video or picture circulating around the internet. What mechanism can be developed to stop such false news from being rapidly spread and shared?
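Regarding the first question above, here is a rough sketch of what such a per-user credibility score might look like. The starting value, penalty size, and report handling are my own assumptions rather than anything proposed in either paper.

```python
# Hypothetical per-user credibility ledger for the first bullet above.
# Starting score and penalty size are illustrative choices.
class CredibilityLedger:
    def __init__(self, start: float = 0.0, penalty: float = 1.0):
        self.scores: dict[str, float] = {}
        self.start = start
        self.penalty = penalty

    def report_misleading_post(self, user: str) -> float:
        """Decrease a user's public score each time a post is reported as misleading."""
        self.scores[user] = self.scores.get(user, self.start) - self.penalty
        return self.scores[user]

    def score(self, user: str) -> float:
        return self.scores.get(user, self.start)

ledger = CredibilityLedger()
ledger.report_misleading_post("user_a")
ledger.report_misleading_post("user_a")
print(ledger.score("user_a"))   # -2.0
print(ledger.score("user_b"))   #  0.0
```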
