Early Public Responses to the Zika-Virus on YouTube: Prevalence of and Differences Between Conspiracy Theory and Informational Videos
Summary:
The study analyzes 35 YouTube videos posted during the first phase of the Zika-virus outbreak. The videos were posted between December 2015 and July 2016, and each has at least 40,000 views. Of the 35 videos, 12 focus on conspiracy theories, and the remaining 23 are informational. User responses to the videos are analyzed for differences in sentiment, and the implications for future online health promotion campaigns are discussed.
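The study's actual sentiment-analysis pipeline is not described here, but a minimal sketch of the kind of comparison it performs could look like the following, assuming an off-the-shelf scorer (NLTK's VADER) and hypothetical comment text standing in for the real YouTube data:

```python
# Minimal sketch: score user comments from each video group and compare
# average sentiment. All comments below are hypothetical placeholders,
# not data from the study.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

informational_comments = [
    "Thanks for explaining how the virus spreads.",
    "Very helpful, I shared this with my family.",
]
conspiracy_comments = [
    "They are hiding the truth about where this came from!",
    "I knew the government was behind this all along.",
]

def mean_compound(comments):
    """Average VADER compound score, from -1 (negative) to +1 (positive)."""
    scores = [sia.polarity_scores(c)["compound"] for c in comments]
    return sum(scores) / len(scores)

print("informational:", mean_compound(informational_comments))
print("conspiracy:  ", mean_compound(conspiracy_comments))
```

If the study's finding holds, the two averages come out similar rather than diverging by video type.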
Reflection:
The study provided the following conclusions:
- Users respond similarly to both informational and conspiracy videos.
- The results contradict Vosoughi, Roy, and Aral, who found that false news triggered more negative sentiment than real news.
With the prevalence of the internet, news travels much more quickly than it did in the pre-internet era. While this is generally a good thing, it unfortunately means that fake news travels more quickly as well. It is concerning that user responses to conspiracy videos are similar to user responses to legitimate, informational videos. This raises the following questions:
How can we best determine whether an article or video is promoting a conspiracy theory? Comments on articles and videos can influence viewers and readers; for some, comments are the main factor in deciding whether a piece of news is legitimate or fake. Since this study found no significant difference in responses between the two video types, how should we go about teaching the public to spot conspiracy theories?
Why do these results contradict Vosoughi, Roy, and Aral?
Since these results contradict Vosoughi, Roy, and Aral, there may be another variable to consider besides video type (informational or conspiracy).
It is important to keep this study in mind when publishing information on health: at least for the Zika virus, user responses to conspiracy videos and informational videos were hard to tell apart.
Automated Hate Speech Detection and the Problem of Offensive Language
Summary:
It is hard for amateur coders to automate the process of detecting hate speech. If hate speech is equated with the use of offensive words, then many people who merely use profane language are inaccurately classified as hate speakers. Hate speech is difficult to identify simply by searching for keywords.
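A toy illustration of this failure mode (not the paper's method) is a lexicon-based filter that flags any text containing a word from an offensive-word list. The word list and example sentences below are invented placeholders:

```python
# Toy lexicon-based "hate speech" detector, showing why keyword matching
# fails: it flags mere profanity (false positive) and misses hateful
# statements that avoid lexicon words (false negative).
OFFENSIVE_LEXICON = {"trash", "idiot", "scum"}  # hypothetical stand-ins

def naive_hate_flag(text: str) -> bool:
    """Flag text as 'hate speech' if it contains any lexicon word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & OFFENSIVE_LEXICON)

# Profanity aimed at no group is flagged anyway (false positive).
print(naive_hate_flag("That referee is an idiot!"))  # True
# Hateful intent without any lexicon word is missed (false negative).
print(naive_hate_flag("People like them should be driven out of this country"))  # False
```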
Reflection:
The study reached the following conclusions:
- Hate speech is a difficult phenomenon, and it cannot easily be categorized.
- What we consider hate speech tends to reflect our own subjective biases.
It turns out that identifying hate speech is not as easy as we may think. People perform well at recognizing certain types of hate speech but miscategorize other types. For instance, people see racist and homophobic slurs as hate speech, yet tend to regard sexist language as offensive rather than hateful. The article raises the following question:
What really differentiates hate speech from offensive language?
As humans, we can often intuitively tell when something is hate speech rather than merely offensive. The question, however, is what exactly makes us recognize hate speech as hate speech. Hate speech is not monolithic; it comes in many forms. Is there a way to quantify how hateful a particular statement is, and what separates it from speech that is simply offensive? One possible approach is sketched below.
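One way to quantify hatefulness, broadly in the spirit of the paper's approach (TF-IDF features with logistic regression), is a classifier that outputs a graded probability for hate speech versus merely offensive versus neither. The handful of labeled examples below are hypothetical; a real model would need thousands of annotated posts:

```python
# Sketch: a three-class text classifier whose predicted probabilities act
# as a graded "hatefulness" score instead of a hard yes/no. The training
# data here is a hypothetical toy set, far too small for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "those people are vermin and deserve nothing",   # hate
    "you absolute idiot, learn to drive",            # offensive
    "lovely weather for a walk today",               # neither
    "that whole group should be wiped out",          # hate
    "this movie is garbage and so is the director",  # offensive
    "the meeting moved to three o'clock",            # neither
]
labels = ["hate", "offensive", "neither"] * 2

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# predict_proba yields a score per class rather than a binary flag.
probs = model.predict_proba(["those people are garbage"])[0]
print(dict(zip(model.classes_, probs.round(2))))
```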
It is important to be able to recognize hate speech online. Because of the ambiguity in distinguishing hate speech from offensive speech, it can be hard to tell whether groups are being attacked or praised, especially given that sarcasm can be used both to praise and to attack various groups of people.