Reading Reflection #3 - [2/5/19] - [Kibur Girum]

Title: Early Public Responses to the Zika-Virus on YouTube: Prevalence of and Differences Between Conspiracy Theory and Informational Videos

Summary:

The purpose of the study was to determine differences in user response (views, replies, and likes) between informative and conspiracy-based YouTube videos. The research was conducted on a data set of the most popular YouTube videos during the Zika virus outbreak in 2016. The results showed that 12 of the 35 videos in the data set focused on conspiracy theories, but no statistically significant differences in user response were found between the two types. The results of the research can be used to improve future online health promotion campaigns and to combat the spread of false information. Based on multiple findings, the study provided the following conclusions:

  • Results on user activity showed no statistically significant differences across the video types
  • YouTube users respond in similar ways, in terms of views, shares, and likes, to videos containing informational and conspiracy theory content.
  • Understanding the various types of contestation present in user responses to YouTube videos on the Zika virus is important for future online health promotion campaigns
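To make the first conclusion concrete, here is a minimal sketch of one way to test whether two groups of videos differ in user activity: a two-sided permutation test on mean view counts. The view counts below are made-up numbers and the choice of test is my assumption for illustration, not the paper's actual data or method.

```python
# Sketch only: hypothetical view counts, NOT the study's data.
import random

informational = [1200, 950, 4300, 800, 2100]   # hypothetical view counts
conspiracy = [1100, 3000, 700, 2500]           # hypothetical view counts

def perm_test(a, b, iters=10_000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(iters):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / iters  # p-value: fraction of shuffles at least as extreme

p = perm_test(informational, conspiracy)
print(f"p-value ~ {p:.2f}")  # a large p means no significant difference
```

A large p-value, as with these toy numbers, is what "no statistically significant difference" means: random regroupings of the pooled videos produce differences at least as big as the observed one most of the time.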

Reflection

YouTube has changed the way we acquire and spread information in our society. Everyone now has easy access to start a podcast or channel to spread information. This also brings many challenges, one of which is the spread of conspiracy theories. We need look no further than the Ebola outbreak to see the threat they pose to our society (see the New York Times article titled “Ebola Conspiracy Theories” for more information). I believe that this study provided a step forward in tackling this problem. Although I am impressed by their findings and conclusions, the study lacked concrete arguments and a broader data set to back up its findings. Moreover, many assumptions were made, which affects the credibility of the study. Nevertheless, their research offers great insight for future studies. Considering their findings and summary, we can reflect on different aspects and their implications.

Part 1: The finding that amazes me the most is that user activity showed no statistically significant differences across the video types.

 Questions and further research 

  1. One question we can ask is whether user responses differ across other types of videos. I believe that conducting research on users across different videos would give insight into why conspiracy videos spread so easily.
  2. Can we judge the quality of a video based on users’ activity or account information? This would help us identify conspiracy videos more reliably.

Part 2: What stuck out to me after reading the study is the question of whether conspiracy videos differ in terms of their approach. Do they change their approach from time to time or stay consistent? I believe that doing more research on multiple conspiracy videos on YouTube will help us answer this question.

Title: Automated Hate Speech Detection and the Problem of Offensive Language

Summary:

The purpose of the study was to improve methods for distinguishing hate speech from other instances of offensive language. The authors used a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords and trained a multi-class classifier to distinguish between the different categories. Based on multiple findings, the study provided the following conclusions:

  • Racist and homophobic tweets are more likely to be classified as hate speech, while sexist tweets are generally classified as offensive
  • Tweets with the highest predicted probabilities of being hate speech tend to contain multiple racial or homophobic slurs 
  • Tweets without explicit hate keywords are also more difficult to classify 
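To illustrate the pipeline the summary describes, the sketch below is a toy, purely lexicon-based labeler for the three classes the paper distinguishes. The study itself trains a multi-class classifier on human-labeled tweets; the two word lists here are placeholder terms I made up, not the crowd-sourced lexicon the authors used.

```python
# Toy sketch (NOT the paper's model): label a tweet as "hate",
# "offensive", or "neither" based on hypothetical keyword lexicons.

HATE_LEXICON = {"slur1", "slur2"}          # placeholders, not real slurs
OFFENSIVE_LEXICON = {"idiot", "stupid"}    # hypothetical offensive terms

def label_tweet(text: str) -> str:
    """Return a coarse class label based on lexicon hits."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    if tokens & HATE_LEXICON:
        return "hate"
    if tokens & OFFENSIVE_LEXICON:
        return "offensive"
    return "neither"

print(label_tweet("You are so stupid!"))   # offensive
print(label_tweet("Have a nice day"))      # neither
```

The third conclusion above also falls out of this sketch: a tweet with no lexicon hit defaults to "neither", which is exactly why tweets without explicit hate keywords are harder to classify and why the paper moves beyond keywords to a trained classifier.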

Reflection: 

With Twitter and Facebook becoming the most powerful mediums for reaching the public, it is essential that we combat the spread of hate speech on those platforms. The research did a great job in terms of reliably separating hate speech from other offensive language, but I believe more work still has to be done to improve the classifier. I am not convinced that human classification is the ideal way to label tweets; perhaps smarter algorithms could improve the results.

 Questions and further research 

  • Does a difference in culture affect or influence hate speech? Conducting research on different groups of people would provide some meaningful findings.
  • What kind of content do hateful users consume? We could identify the root cause of hate speech by studying users who consume or spread it.
  • Is there any significant difference in word usage between hate speech and offensive speech? We might be able to determine the type of a speech based on its usage of stop words, nouns, proper nouns, and verb phrases.
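As a starting point for the last question, here is a minimal sketch of two simple lexical features one might compare across speech types: stop-word ratio and capitalized-token count. The tiny stop-word list and feature names are my assumptions for illustration; a real study would use a full stop-word list and a part-of-speech tagger for the noun and verb-phrase counts.

```python
# Sketch of simple lexical features; the stop-word list is a
# hypothetical stand-in for a real one.

STOP_WORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in"}

def lexical_features(text: str) -> dict:
    """Compute toy lexical features for one text."""
    tokens = text.split()
    n = len(tokens) or 1  # avoid division by zero on empty text
    stop = sum(1 for t in tokens if t.lower() in STOP_WORDS)
    caps = sum(1 for t in tokens if t[:1].isupper())
    return {"stop_ratio": stop / n, "capitalized": caps}

print(lexical_features("The weather in Paris is nice"))
```

Averaging such features over hate-labeled versus offensive-labeled tweets would show whether the word-usage difference the question asks about actually exists.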
