Summary:
This study focused on right-wing YouTube videos that promote hate, violence, and discriminatory acts. It compared video content with the comments posted on those videos, and right-wing channels with a set of baseline channels, across three layers of analysis: lexicon, topics, and implicit biases. The research questions were:
- Is the presence of hateful vocabulary, violent content and discriminatory biases more, less or equally accentuated in right-wing channels?
- Are, in general, commentators more, less or equally exacerbated than video hosts in an effort to express hate and discrimination?
The results showed that right-wing channels tended to contain a higher proportion of words from negative semantic fields, covered more topics related to war and terrorism, and carried more discriminatory bias against Muslims in the videos and against LGBT people in the comments. One of the main contributions of this study is a better understanding of right-wing speech and of how video content relates to the reactions it provokes.
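To make the lexicon layer concrete, below is a minimal, hypothetical sketch of how one might compare the rate of negative-semantic-field words between two groups of channels. The word list and the document collections are placeholders, not the paper's actual lexicon or pipeline.

```python
# Illustrative sketch (not the paper's exact pipeline): comparing how often
# words from a negative semantic field appear in two sets of transcripts.
# NEGATIVE_FIELD and the document lists are hypothetical placeholders.
import re

NEGATIVE_FIELD = {"hate", "kill", "war", "attack", "enemy", "destroy"}

def negative_rate(text):
    # Fraction of tokens that fall in the chosen semantic field.
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in NEGATIVE_FIELD for t in tokens) / len(tokens)

def average_rate(documents):
    # Mean rate over a collection of captions or comment threads.
    return sum(negative_rate(d) for d in documents) / len(documents)

# e.g. compare right-wing captions/comments against the baseline set:
# print(average_rate(right_wing_docs), average_rate(baseline_docs))
```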
Reflection:
Even though this is a careful analysis, how would this research be used to better understand hate speech? Right-wing users post videos, and people who subscribe to them, or who hold similar views and ideals, watch those videos and respond with their own similar viewpoints; the same dynamic holds for non-right-wing users. In that sense, the research doesn't add much to the discussion of how alt-right extremism rises in relation to the internet. However, the methodology used in the study was interesting and genuinely useful, particularly the WEAT (a brief sketch of how the WEAT score works follows below). Other studies could also benefit from this kind of multi-layered investigation to better understand context.
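For reference, here is a minimal sketch of the WEAT effect size as defined by Caliskan et al. (2017), the test behind the implicit-bias layer. The `embed` function and the word sets are assumed placeholders, not the paper's exact setup.

```python
# Minimal sketch of the WEAT effect size (Caliskan et al., 2017), assuming
# `embed` maps a word to a NumPy vector; word lists X, Y, A, B are supplied
# by the caller (e.g. target groups vs. pleasant/unpleasant attributes).
import numpy as np

def cos(u, v):
    # Cosine similarity between two embedding vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B, embed):
    # s(w, A, B): how much closer word w is to attribute set A than to B.
    return (np.mean([cos(embed(w), embed(a)) for a in A])
            - np.mean([cos(embed(w), embed(b)) for b in B]))

def weat_effect_size(X, Y, A, B, embed):
    # Cohen's-d-style effect size over the two target word sets.
    s_x = [assoc(x, A, B, embed) for x in X]
    s_y = [assoc(y, A, B, embed) for y in Y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y, ddof=1)
```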
YouTube’s recommendations often lead users to channels that feature highly partisan viewpoints
I’ve read about how YouTube’s algorithm does this in order to keep users online. One example occurred after the Las Vegas shooting, when YouTube recommended videos claiming the event was a government conspiracy [1]. YouTube has since changed its algorithm, but it still surfaces some highly partisan and conspiracy content. A future study could measure how many videos it takes before YouTube recommends a highly partisan viewpoint or conspiracy theory. This could help the engineers who work on the recommendation algorithm understand how and why this happens and help mitigate it.
It is important to notice that this selection of baseline channels does not intend to represent, by any means, a “neutral” users’ behavior (if it even exists at all)
I thought this idea was interesting. There’s bias all around us, but some people, intentionally or not, turn a blind eye to it. A future study could examine the implicit biases of popular sites, such as social media platforms and news sites, alongside the different biases their viewers bring. This could open many people’s eyes and encourage them to account for their own implicit biases before acting. It may not create “neutral” behavior in users, but it might help users become more neutral.
[1] https://www.techspot.com/news/73178-youtube-recommended-videos-algorithm-keeps-surfacing-controversial-content.html