Reflection #3 – 02/04 – Tucker Crull

Early Public Responses to the Zika-Virus on YouTube: 

Summary: 

In this study, the authors analyzed the content of the most popular videos posted on YouTube during the first phase of the 2016 Zika-virus outbreak, and how users responded to those videos. They examined how informational and conspiracy theory videos differ in the number of comments, shares, likes, and dislikes, and in the sentiment and content of the user responses. The study shows no statistically significant differences in user activity or sentiment between the two types of videos. The authors also found that the content of user responses differed between the two video types, but that users of both types did not engage in further conversation.
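To make that comparison concrete, here is a minimal sketch of how one might test for a difference in an engagement metric between the two video types. The comment counts and the choice of a Mann-Whitney U test are my own assumptions for illustration; the paper's actual data and statistical procedure are not reproduced here.

```python
# Hypothetical sketch: comparing an engagement metric between
# informational and conspiracy videos. The values below are made up,
# and the test choice is an assumption, not the paper's stated method.
from scipy.stats import mannwhitneyu

# Hypothetical comment counts per video for each category
informational_comments = [120, 85, 240, 60, 310, 95]
conspiracy_comments = [150, 70, 200, 90, 280, 110]

stat, p = mannwhitneyu(informational_comments, conspiracy_comments,
                       alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
# A large p-value here would be consistent with the paper's finding of
# no significant difference in user activity between the video types.
```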

Reflection: 

The low engagement of YouTube users viewing Zika-virus related content is an important finding, showing that these users express their opinions in their responses without further participating in conversations: Getting users to engage and interact with social media content is probably one of the hardest and most valuable things a content creator can do. So, it’s not surprising that users only post their opinion in the comment section. It would be interesting to see whether users engage more with this topic on other social media platforms. It would also be fascinating to see how engagement would change if the creator of the video asked a question at the end of the video.

To counter the spread of misinformation, the monitoring of the content posted on YouTube deserves more attention by health organizations: I feel like monitoring content on YouTube would be a very costly endeavor for health organizations, because there is no good way of automatically finding the misinformation in a video. I think a better way to fight misinformation is the paper’s suggestion that “online health interventions can be targeted on the most active social media users.” However, this solution has a potential problem: the interventions could give the misinformation a bigger platform.
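As a rough illustration of what “targeting the most active users” could mean in practice, here is a small sketch that ranks commenters by activity. The data structure and the cutoff are hypothetical; the paper does not specify an implementation.

```python
# Minimal sketch of one way to identify the "most active social media
# users" an intervention might target. The comment data and the top-2
# cutoff are illustrative assumptions only.
from collections import Counter

# Hypothetical (user_id, comment_text) pairs collected from videos
comments = [
    ("user_a", "..."), ("user_b", "..."), ("user_a", "..."),
    ("user_c", "..."), ("user_a", "..."), ("user_b", "..."),
]

activity = Counter(user for user, _ in comments)
top_users = [user for user, _ in activity.most_common(2)]
print(top_users)  # e.g. ['user_a', 'user_b']
```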

Automated Hate Speech Detection and the Problem of Offensive Language:

Summary: 

This study set out to solve a key challenge for automatic hate speech detection on social media: how to separate hate speech from other forms of offensive language. The challenge comes from the fact that lexical detection methods tend to have low precision because they overclassify messages as hate speech. The results show that racist and homophobic tweets are more likely to be classified as hate speech, that sexist tweets are generally classified as merely offensive, and that tweets without explicit hate keywords are harder to classify.
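To see why purely lexical methods overclassify, here is a toy sketch of a keyword-based detector. The keyword list and example messages are sanitized placeholders of my own, not the lexicon or data from the paper.

```python
# Illustrative sketch of the low-precision problem with purely lexical
# detection: any message containing a flagged keyword is labeled hate
# speech, regardless of context. Keywords here are stand-ins for slurs.
HATE_KEYWORDS = {"slur1", "slur2"}  # hypothetical sanitized lexicon

def lexical_detector(message: str) -> bool:
    """Flag a message if it contains any keyword from the lexicon."""
    words = set(message.lower().split())
    return bool(words & HATE_KEYWORDS)

# A quoted song lyric containing slur1 gets flagged even though the
# user is quoting, not attacking anyone -- a false positive that
# drives precision down.
print(lexical_detector("just quoting the song: slur1 ain't loyal"))  # True
# Hateful messages that avoid the keywords slip through entirely.
print(lexical_detector("you people should leave this country"))      # False
```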

Reflection:

I found it very helpful that this paper shows examples of how hard it is to distinguish hate speech from offensive language. I thought the paper did a great job reflecting on and finding reasons why some tweets were misclassified, such as discovering that a few recurring phrases like “these h*es ain’t loyal” were actually lyrics from rap songs that users were quoting. This is really interesting because I believe our society is trying to eliminate hate speech while also listening to popular rap music that promotes hate speech.

We also found a small number of cases where the coders appear to have missed hate speech that was correctly identified by our model:

I thought it was surprising that the coders missed hate speech, because their classifications are what the model is trained on. This could have skewed the study’s results, or it could have improved them.
