Reading Reflection #3 – 2/05/2019 – Matthew Fishman

Automated Hate Speech Detection and the Problem of Offensive Language

Quick Summary

Why is this study important? It tackles the problem of classifying hate speech vs. offensive language. Hate speech targets and potentially harms disadvantaged social groups (it promotes violence or social disorder). Without differentiating the two, we erroneously label many people as hate speakers.

How did they do it? The research team took a lexicon of “hate words” from hatebase.org and used it to find over 30k Twitter users who had used those words. They extracted each user’s timeline and took a random sample of 25k tweets containing terms from the lexicon. They used CrowdFlower crowdsourcing to manually label each tweet as hate speech, offensive, or neither. They then extracted many features from these tweets and used them to train a classifier: a logistic regression model with L2 regularization, with a separate classifier for each class, built in scikit-learn. Each tweet was assigned the label of the most confident classifier.
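
To make sure I understand their modeling setup, here is a minimal sketch of a one-vs-rest, L2-regularized logistic regression in scikit-learn. The TF-IDF features and the placeholder tweets/labels are my own simplifications, not the paper's actual feature set or hyperparameters.

```python
# Minimal sketch of the one-vs-rest setup described above (my simplification:
# TF-IDF features only, toy data; the paper used a much richer feature set).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline

# Placeholder tweets and labels: 0 = hate speech, 1 = offensive, 2 = neither.
tweets = ["placeholder hateful tweet", "placeholder offensive tweet", "placeholder harmless tweet"]
labels = [0, 1, 2]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),
    # One L2-penalized logistic regression per class; predict() picks the
    # class whose binary classifier is most confident.
    ("clf", OneVsRestClassifier(LogisticRegression(penalty="l2"))),
])
model.fit(tweets, labels)
print(model.predict(["another placeholder tweet"]))
```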

What were the results? They found that racist and homophobic tweets are more likely to be classified as hate speech, but that sexist tweets are generally classified as offensive. Human coders appear to consider sexist/derogatory words toward women to be merely offensive rather than hateful. While the classifier was great at predicting non-offensive or merely offensive tweets, it struggled to distinguish true hate speech from offensive language.

Reflection

Ways to Improve the Study:

  • Only a small percentage of the tweets flagged by the hatebase lexicon were considered hate speech by human coders, which means the dictionary used to identify “hateful” words was extremely broad. Using a more precise lexicon would likely improve the classifier’s accuracy.
  • I think studying the syntax of hate speech could be particularly interesting. It would be worth trying to train a classifier that does not rely on specific keywords at all (a rough sketch of this idea follows this list).
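
Here is a rough sketch of what I mean by a keyword-free classifier: reduce each tweet to its part-of-speech tag sequence so the model only ever sees syntax. This is my own experiment idea, not something the paper does; the tweets and labels below are made up.

```python
# Rough sketch of a "no keywords" classifier: each tweet is reduced to its
# part-of-speech tag sequence before vectorization, so the model never sees
# the actual words. Toy data; my own experiment idea, not the paper's method.
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Note: on newer NLTK versions these resources may instead be named
# "punkt_tab" and "averaged_perceptron_tagger_eng".
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_sequence(tweet):
    """Turn 'you are awful' into something like 'PRP VBP JJ'."""
    return " ".join(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(tweet)))

tweets = ["I cannot stand people like you", "have a wonderful day everyone",
          "you are all completely useless", "thanks for sharing this article"]
labels = [1, 0, 1, 0]  # hypothetical: 1 = offensive, 0 = not offensive

model = Pipeline([
    # n-grams over POS tags instead of over words
    ("pos_ngrams", CountVectorizer(preprocessor=pos_sequence, ngram_range=(1, 3))),
    ("clf", LogisticRegression()),
])
model.fit(tweets, labels)
print(model.predict(["they are all so annoying"]))
```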

What I liked About the Study:

  • The use of CrowdFlower was a very interesting approach to labeling the tweets. Although there were clearly some user errors in the classifications, the idea of crowdsourcing to get a human perspective is intriguing, and I plan on looking into this for my future project.
  • They used a TON of features for classification and tested many models. I think a big reason the classifiers were so accurate is the care the team took in building them (a quick sketch of that kind of model comparison follows this list).
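
For my own reference, this is roughly how that kind of many-models comparison can look in scikit-learn. The toy data and the particular model choices are mine, not a reproduction of the paper's experiments.

```python
# Sketch of comparing several model families with cross-validation
# (toy data; not the paper's features or exact model lineup).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

tweets = ["tweet one", "tweet two", "tweet three",
          "tweet four", "tweet five", "tweet six"]
labels = [0, 1, 2, 0, 1, 2]  # hypothetical class labels

X = TfidfVectorizer().fit_transform(tweets)
for name, model in [("logistic regression", LogisticRegression()),
                    ("naive bayes", MultinomialNB()),
                    ("linear svm", LinearSVC())]:
    scores = cross_val_score(model, X, labels, cv=2)
    print(name, scores.mean())
```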

Early Public Responses to the Zika-Virus on YouTube: Prevalence of and Differences Between Conspiracy Theory and Informational Videos

Quick Summary

Why is this Study Important? When alarming news breaks, many internet users see it as a chance to spread conspiracy theories and garner attention. It is important that we learn to distinguish the truth from this kind of fake news and conspiracy theory.

How did they do it? The team collected the user reactions (comments, shares, likes, dislikes, and the content/sentiment of user responses) to the 35 most popular videos posted on YouTube when the Zika virus outbreak began in 2016.
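
The paper does not say exactly how the data were pulled, but here is a hedged sketch of how I might collect similar engagement data with the YouTube Data API v3. The API key and video ID are placeholders I would have to supply myself.

```python
# Sketch of pulling video statistics and top-level comments with the
# YouTube Data API v3 (not necessarily how the authors collected their data).
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"    # placeholder
VIDEO_ID = "SOME_VIDEO_ID"  # placeholder

youtube = build("youtube", "v3", developerKey=API_KEY)

# View/like/comment counts for the video.
stats = youtube.videos().list(part="statistics", id=VIDEO_ID).execute()
print(stats["items"][0]["statistics"])

# One page of up to 100 top-level comments, for later sentiment/content coding.
comments = youtube.commentThreads().list(
    part="snippet", videoId=VIDEO_ID, maxResults=100, textFormat="plainText"
).execute()
for item in comments.get("items", []):
    snippet = item["snippet"]["topLevelComment"]["snippet"]
    print(snippet["authorDisplayName"], ":", snippet["textDisplay"])
```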

What were the Results? The results were not surprising. Twelve of the 35 videos in the data set focused on conspiracy theories, but there were no statistically significant differences between the two types of videos. Both informational and conspiracy theory videos drew similar numbers of responses, unique users per view, and additional responses per unique user. Both types of videos received similarly negative responses, but comments on informational videos were concerned with the outcome of the virus, while comments on conspiracy theory videos were concerned with where the virus came from.

Reflection

What I would do to Improve the Study:

  • Study user interactions/responses more closely. User demographics might tell a much bigger story about how reactions to these two types of videos compare. For example, older people might be less susceptible to conspiracy theories and respond less than younger people.
  • Study different aspects of the videos altogether. Clearly, user responses to and interaction with informational videos and conspiracy theory videos are similar. However, looking at differences in content, titles, and publisher credibility would do a lot more to distinguish the two.

What I liked About the Study:

  • The semantic map of user comments was highly interesting, and I wish I had seen more studies use a similar way of presenting data. The informational videos actually used more offensive words and were more clustered than the conspiracy theory videos. A lot of the information in this graphic seemed obvious (conspiracy theory comments were more concerned with foreign entities), but much of the data we could pull from it was useful. I will definitely be looking into making cluster graphs like this part of my project (a small sketch follows this list).
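
As a note to myself, here is one way I might start on a comment cluster graph: TF-IDF vectors, k-means clusters, and a 2-D PCA projection for plotting. This is not the paper's semantic-map method, and the comments below are invented.

```python
# Toy cluster graph of comment text: TF-IDF -> k-means -> 2-D PCA scatter.
# My own sketch, not the paper's semantic-map technique; comments are made up.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "where did this virus actually come from",
    "the government is hiding the real source",
    "how dangerous is it for pregnant women",
    "what are the symptoms and outcomes",
    "this was made in a lab somewhere",
    "hope researchers find a vaccine soon",
]

X = TfidfVectorizer(stop_words="english").fit_transform(comments)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
coords = PCA(n_components=2).fit_transform(X.toarray())

plt.scatter(coords[:, 0], coords[:, 1], c=clusters)
for (x, y), text in zip(coords, comments):
    plt.annotate(text[:20], (x, y), fontsize=7)
plt.title("Toy cluster map of comment text")
plt.show()
```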
