Reading Reflection #3 – 2/04/2019 – Taber, Fisher

Early Public Responses to the Zika-Virus on YouTube: Prevalence of and Differences Between Conspiracy Theory and Informational Videos

Summary:

This paper analyzed the most popular YouTube videos about the 2016 Zika outbreak. The authors wanted to examine how user responses varied between informational and conspiracy videos. The research questions they wanted to answer were:

1. What types of Zika-related videos (informational vs. conspiracy) were most often viewed on YouTube?
2. How did the number of comments, replies, likes and shares differ across the two video types?
3. How did the sentiment of the user responses differ between the two video types?
4. How did the content of the user responses differ between the video types?

The team found that “results on user activity showed no statistically significant difference across the video types.”

Reflection:

My first impression of this paper was that it did not really accomplish anything. The authors stated roughly seven times that no differences across the video types were statistically significant. I understand that null results should still be published so that others know what to look for in future work, but I think this paper was based on too small a sample. I don’t think 35 videos was enough data points, and I would have been more comfortable with the null findings if a larger set of videos had been analyzed.

Future Work:

Expand this work to include many more conspiracy and informational videos across a variety of different sites.

This would address my main concern with the paper: if videos covering a range of conspiracy and informational topics were analyzed, instead of one very niche topic, the study would most likely produce more insightful results.

Can you determine the validity of a video based on the interactions between users?

If you studied a large data set of videos and comments, would it be possible to tell which videos are conspiracy versus informational based on the community comments alone?
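To make this future-work idea concrete, here is a minimal sketch of how such a study might start, assuming scikit-learn and pandas are available. The file “comments.csv” and its columns (“video_id”, “comment”, “video_type”) are hypothetical placeholders, not data from the paper.

```python
# A rough sketch of the proposed idea: predict a video's type
# (conspiracy vs. informational) from its user comments alone.
# "comments.csv" and its columns are hypothetical placeholders;
# comments are assumed to be plain strings.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

df = pd.read_csv("comments.csv")

# Concatenate all comments for each video into one document
videos = df.groupby("video_id").agg(
    text=("comment", " ".join),
    label=("video_type", "first"),
).reset_index()

# Bag-of-words over comment text feeding a simple baseline classifier
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    MultinomialNB(),
)

# Cross-validated accuracy; with only ~35 videos the estimates would
# be noisy -- the sample-size concern raised above applies here too
scores = cross_val_score(model, videos["text"], videos["label"], cv=5)
print(scores.mean())
```

This is only a starting point; a real study would need far more videos than the original paper analyzed for the cross-validation scores to mean anything.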

Automated Hate Speech Detection and the Problem of Offensive Language

Summary:

In this paper, the authors wanted to find out whether there is a way to separate offensive language from hate speech, since offensive language often gets classified incorrectly as hate speech by algorithms because of the many lexical similarities between the two.

Reflection:

I thought this paper did a nice job explaining the methods behind the machine learning that the authors performed, but the models did not seem very accurate. I think computers might still be a little way off from being able to classify different types of language correctly. It also would have been interesting to see how algorithms from other Python libraries, such as TensorFlow or Keras, stacked up, to see how ‘deep learning’ would do on this data set.
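For context, here is a minimal sketch of the kind of text-classification pipeline the paper discusses, built with scikit-learn. The file “tweets.csv” and its column names are hypothetical placeholders, and this illustrates the general approach rather than the authors’ exact feature set.

```python
# A minimal sketch of a three-class text classifier (hate speech /
# offensive / neither) in the spirit of the paper's approach.
# "tweets.csv" with columns "text" and "label" is a hypothetical
# stand-in for a labeled data set.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("tweets.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42
)

# TF-IDF features over word unigrams, bigrams, and trigrams
vectorizer = TfidfVectorizer(ngram_range=(1, 3), max_features=10000)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# A simple logistic regression baseline
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_vec, y_train)

# Per-class precision/recall makes it easy to see where "offensive"
# gets confused with "hate speech"
print(classification_report(y_test, clf.predict(X_test_vec)))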

Future Work:

  • Finding algorithms that can reliably distinguish offensive posts from hate speech, and feed posts in a gray area to human moderators (a rough sketch follows below).
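As a rough illustration of that gray-area routing, continuing the hypothetical classifier sketch above:

```python
# Continuing the classifier sketch above: instead of acting on every
# prediction automatically, route low-confidence posts to human
# moderators. The 0.7 threshold is an arbitrary illustration, not a
# value from the paper.
probs = clf.predict_proba(X_test_vec)
confidence = probs.max(axis=1)

for_review = confidence < 0.7  # gray-area posts for human moderators
print(f"{for_review.mean():.0%} of posts would go to human review")
```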
