Reading Reflection #3 – 02/04 – Kyle Czech

Title:

Early Public Responses to the Zika-Virus on YouTube: Prevalence of and Differences Between Conspiracy Theory and Informational Videos

Brief Summary:

This paper analyzed the content of the most popular videos posted on YouTube during the first phase of the 2016 Zika-virus outbreak, along with the user responses to those videos. The results show that 12 of the 35 videos in the data set focused on conspiracy theories, but no statistically significant differences in user activity or sentiment were found between the two types of videos. The content of the user responses shows that users respond differently to different sub-topics related to the Zika virus.

Reflection:

There were a few quotes in this paper that stood out to me:

Quote 1: “YouTube videos have been discussed as a source of misinformation on various health related issues such as….”

Reflection 1: I found this interesting because YouTube is notorious for taking down videos over copyright issues; however, is it also YouTube’s responsibility to maintain the integrity of the videos it hosts? This debate parallels Facebook’s current issue with fake news and whether the platform should be held accountable for the fake accounts that are allowed to post misleading content.

Quote 2: “… in an analysis of videos related to rheumatoid arthritis, 30% was qualified as misleading, …”

Reflection 2: I found this line interesting because the article doesn’t clarify what qualifies a video as “misleading”. For example, if only one line in the entire video is inaccurate, does that make the whole video “misleading”, or is a certain amount of false content required before the video earns that label?

Future Research:

The following ideas could expand on the research in this paper:

  • Future research into sarcasm detection – Given that the article examined conspiracy theories alongside real news, some comments on the conspiracy videos may have been marked as “positive” even though the viewer was being sarcastic (a more negative intent). Some conspiracy videos are very “out there”, and some viewers might simply be teasing that aspect of the video in their comments.
  • Exploring the relationship between news coverage of a topic and the number of conspiracy videos – It could be interesting to analyze the trend in conspiracy videos against how heavily (either in airtime or in number of stories) news outlets cover an outbreak, and whether media over-reporting is connected to a rise in more “creative” interpretations of that outbreak.

Title:

Automated Hate Speech Detection and the Problem of Offensive Language

Brief Summary:

The authors used crowd-sourcing to label a sample of tweets into three categories: those containing hate speech, those with only offensive language, and those with neither, and then trained a multi-class classifier to distinguish between these categories. Close analysis of the predictions and the errors shows when hate speech can be reliably separated from other offensive language and when this differentiation is more difficult.
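As a rough illustration of the pipeline the summary describes, here is a minimal sketch of a three-class classifier. This is my own assumption of how such a model could be trained, not the authors’ actual implementation: the file name labeled_tweets.csv, its columns, and the TF-IDF-plus-logistic-regression setup are hypothetical stand-ins.

# Minimal sketch of a three-class (hate / offensive / neither) tweet classifier.
# Hypothetical input: labeled_tweets.csv with columns "tweet" and "class"
# (0 = hate speech, 1 = offensive language, 2 = neither), standing in for the crowd-sourced labels.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("labeled_tweets.csv")

# Hold out a stratified test set so all three classes appear in the evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    df["tweet"], df["class"], test_size=0.2, stratify=df["class"], random_state=42
)

# Word uni- and bigram TF-IDF features feeding a multinomial logistic regression.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=5)
clf = LogisticRegression(max_iter=1000)

clf.fit(vectorizer.fit_transform(X_train), y_train)
preds = clf.predict(vectorizer.transform(X_test))
print(classification_report(y_test, preds, target_names=["hate", "offensive", "neither"]))

The per-class report is also where the paper’s central difficulty would show up: precision and recall for the “hate” class would indicate how often genuinely hateful tweets are confused with merely offensive ones.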

Reflection:

There were a number of quotes that stood out to me:

Quote 1: “Both Facebook and Twitter have responded to criticism for not doing enough to prevent hate speech on their sites by instituting policies to prohibit the use of their platforms for attacks on people based on characteristics like race, ethnicity, gender, and sexual orientation, or threats of violence towards others”

Reflection 1: After reading this quote, I wondered whether these new policies would extend to known fake-news sites that have their own accounts on these social media platforms. If the fake news they spread on these platforms leads to attacks on people based on the characteristics above, would that be a violation of the policy? Although it’s not directly covered by the policy as written, the argument could be made that such content certainly has the ability to cause harm, depending on what the fake news contains.

Quote 2: “… people use terms like h*e and b*tch when quoting rap lyrics…”

Reflection 2: I wonder to what extent the program would identify lyrics when detecting hate speech. For example, if I were just quoting a song on Twitter and how it relates to my life, that’s not hate speech, even though I might use those words. However, if I directed the same words at someone I had recently argued with, using the “@” symbol, would the classifier identify that as possible hate speech based on the context of the lyrics and who my intended audience is?

Future Research: 

Future work building on this research could look into the specific targets of the slang used within certain cultural groups, such as “n*gga”, and whether it should be classified as hate speech when the person using it isn’t from the background associated with it. Although that kind of language is used and deemed “socially acceptable” within some cultures, it becomes more subjective when used by members of other cultures, depending on the intended target of the message.
