Reading Reflection #4 – [02/07] – [Alon Bendelac]

Summary:

This paper studies the presence of hateful, violent, and discriminatory content in YouTube channels of two categories: right-wing and baseline. The study analyzes the lexicon, topics, and implicit biases in the text of the videos. More specifically, the text in video titles, captions, and comments was studied. The dataset used consists of over 7,000 videos and 17 million comments. It was found that right-wing channels contain more “negative” words, raise more topics about war and terrorism, and show more discriminatory bias against minorities such as Muslims and the LGBT community.

Reflection:

Compare against left-wing channels: Instead of comparing right-wing YouTube channels to the rest of YouTube, I think the study should have narrowed its baseline to a set of left-wing channels. This would make the comparison more symmetric and would sharpen the contrast in the results.

Video Transcript: A significant aspect of a YouTube video is the transcript (i.e., what is being said). I think if the study had included transcripts, it would have had much more data to analyze.

I think the introduction could be improved. Although the idea of the study is clear, the introduction did not specify a motivation for studying this topic or state what real-world applications the study could support. Also, related work should have been at the beginning of the paper, not at the end.

In addition to looking at specific words, I think the study should look at the presence of n-grams.
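
A minimal sketch of what I mean, counting n-grams over caption text (the tokenization and the example captions here are made up for illustration):

```python
from collections import Counter

def ngram_counts(texts, n=2):
    """Count n-grams across a list of documents (simple whitespace tokenization)."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return counts

# Made-up caption snippets, just to show the idea
captions = ["the war on terror continues", "the war on drugs is over"]
print(ngram_counts(captions, n=3).most_common(3))
```

Phrases like “war on terror” carry meaning that individual word counts miss, so even simple bigram or trigram frequencies could add to the lexical layer.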

Data over time: Is there a pattern to the findings when looked over an extended time period? I think it would be interesting to do a similar study that looks at how the contents of right-wing channels change over time, and to see how the findings relate to real-world events.

In order to study the content of the video itself, and not just the text surrounding it, I think crowdsourcing could be used to label videos with tags that describe their contents. This dataset could be used to study any other classifications of YouTube videos.

Blocked words: Channels can create a list of words that they want to review before allowing them in the comments section. This might limit what users are able to say. Do the right-wing channels block more or fewer comments than the baseline channels? How might this impact the results of the study?

Verified channels: Is the percentage of channels that are verified different in the two categories (right-wing and baseline)?


Reflection #4 – [02/07/2019] – [Numan Khan]

Analyzing Right-wing YouTube Channels: Hate, Violence and Discrimination

Summary

This study investigated whether hateful vocabulary, violent content, and discriminatory biases are present in right-wing channels, and whether commenters are further exacerbated by these videos to express hate. These research questions were answered by analyzing similarities and differences between users’ comments and video content in a selection of right-wing channels, compared against a baseline set, using a three-layered approach: analysis of the lexicon, topics, and implicit biases present in the texts. The researchers collected right-wing videos from Alex Jones’ channel and 12 other channels he endorses, and collected baseline videos from the ten most popular channels in the “news and politics” category.

Reflection

Overall, I found it interesting that this paper chose to analyze right-wing YouTube videos instead of articles. There are countless right-wing articles being published by media outlets like Breitbart, and plenty of people will always prefer reading text content such as news articles, magazines, and newspapers over watching videos. Personally, I view YouTube videos as a form of content that individuals have been consuming dramatically more of in the past few years. Therefore, I believe it was a wise choice for the researchers of this paper to analyze videos rather than articles.

After reading this paper, I view this research as being very valuable to society because YouTube as a platform has let right-wing voices be heard by bigger and bigger audiences. This is supported by the paper’s citation of the “…findings of a 2018 newspaper investigation [32] which shows that YouTube’s recommendations often lead users to channels that feature highly partisan viewpoints – even for users that have not shown interest in such content”. This is especially a problem if behaviors associated with hate, violence, and discriminatory bias are being encouraged by these videos, which became the focus of this paper’s first research question.

This paper does a great job of utilizing its three-layered approach by thoroughly explaining the methodology and providing thoughtful reflections on the lexical, topical, and implicit bias analyses. While it seemed somewhat obvious that right-wing videos would display more hate than the baseline videos, it was interesting that the paper was able to show that rage and violence were more prominent in the captions, while swear words were dominant in the comments. Another finding that made sense to me was that right-wing videos were more specific than the baseline videos: right-wing YouTubers want to target specific topics that their audiences would be interested in, rather than the broad topics covered by the baseline videos. The implicit bias analysis also interested me. While I am not surprised that there was a greater bias concerning Muslims in right-wing videos compared to the baseline videos, I am surprised that this bias was statistically higher in the captions than in the comments, whereas the comments held the higher discriminatory bias against LGBT people.

Further Questions

  • One of the future works proposed in this paper was the addition of a temporal component to their analysis. Would the temporal component in this paper’s research show correlation to recent big political events such as significant events that have occurred during the current presidency?
  • What results would we find if the three-layered approach used in this paper was conducted using left-wing YouTube videos?
  • How different would the results be if the same analysis were applied to content from other platforms, such as Twitter or Facebook posts of right-wing outlets?


Reflection #4 – [02/07/2019] – [Phillip Ngo]

Summary

This paper focuses on the hate and violence activity of far-right YouTube channels and viewers. The authors used YouTube channels and videos related to Alex Jones and InfoWars, who are categorized as far-right, along with a baseline of the top 10 news channels on YouTube. Their analysis covered the lexicon, topics, and biases of the videos and comments. They concluded that right-wing channels have more niche and targeted content that is both aggressive and violent (particularly toward the Muslim community).

Reflection

I think this paper is one of those candidates for “Everything is Obvious” where many probably wouldn’t be surprised by the results. But after seeing the data and methodology, it’s clear that any reader could learn something new from many of the results or reflections within the paper. Seeing the sheer number of statistics gives us a good idea of just how much these channels’ behavior gets exacerbated through their interactions. On the flip side, one thing that bothered me was that the authors seemed to choose data and methodologies that would complement the results they were looking for.

I found their choice of baseline to be a little odd. For the right-wing channels they carefully reviewed every single channel in their category, but they simply chose the top 10 “news” channels for the baseline. The most popular, YouTube Spotlight, really doesn’t have much news (or anything politically related); a quick glance at its recent uploads shows videos like YouTube Rewind, fashion videos, and music videos. Even though the baseline channels aren’t intended to represent “neutral” users, I’m not sure what they are supposed to represent.

Another thing I noted is that this paper, like the first paper we reflected on, does a lot of data dumping. The authors seemed to over-explain the details and methods they used to an extent that went over my head. That said, the amount of analysis that went into the research is astounding. They were able to pull apart and compare the captions and comments to a greater extent than I could have imagined. One example is the breakdown of negative and positive semantic fields, showing the percentage differences in aggression, anger, and disgust between the two categories.

Regardless of some of the flaws, there definitely could be a ton of future work in this domain:

  • Instead of this baseline, use different categories such as far-left, left, right, and neutrally perceived channels, and see how they differ from each other.
  • They also suggested a temporal look at the channels; to add to it, we could look at the extent to which hate and violence propagates or grows as a channel gains more viewers and subscribers.
  • Can we also gauge the reactions to these videos and comments? For this, it might be useful to look at the number of dislikes or reports on these channels, as well as banned users.


Reading Reflection #4 – 02/07/19 – Kayla Moore

Summary:

In this paper, the authors explored hate, violence, and discriminatory bias in right-wing YouTube channels. They analyzed the lexicon, topics, and implicit biases in the videos as well as the comments [1]. They found that right-wing channels contain a higher percentage of negative words and a higher bias against immigrants, Muslims, and LGBT people [2].

Reflection:

The paper was well-written and well-explained; however, the discussion could have been stronger. The authors could have talked more about causation rather than just correlation, and about why this study is significant and what the implications of these findings are. The results themselves were not all that surprising. Some questions I had related to future work are:

  • What is, if it exists, the correlation between these videos and hate-based or discrimination-based violence?
  • How fast do these videos spread in comparison to other videos?
  • Could the results from this study help better classify political messages?
  • What similarities, if any, are found between right-wing channels and left-wing channels?

In the introduction, the researchers briefly cover why they explored this topic, but their reasoning is not that strong and does not explain why this study is important. The “far-right” has a reputation for expressing and encouraging hate, violence, and discrimination, so I don’t see what the purpose of exploring this further would be, other than to confirm that reputation. I think a better angle on this topic would be to ask how the hate, violence, and discrimination represented in right-wing content affect the general population. They could have also looked at the ‘timeline’ of a developing right-wing supporter and how right-wing content encourages or discourages them from continuing to support it.

Overall, I think the paper explored an unsurprising topic and got an unsurprising result. Looking at the same topic with an emphasis on social context or social implications would have made for a better study.


[Reading Reflection #4] – [Dat Bui] – [2/7/19]

Analyzing Right-wing YouTube Channels: Hate, Violence and Discrimination

Summary:

The researchers analyze various right-wing YouTube channels to see if they contain more hateful vocabulary, violent content, and/or more discriminatory biases than regular channels. Users who comment on the videos are also analyzed to see if their comments are more likely to contain hate or discrimination. Channels were selected using InfoWars as a seed, meaning the authors used channels endorsed by InfoWars and its creator, Alex Jones. The study did find that right-wing channels are more specific in their content, discussing topics like terrorism and war, and also use more words seen as negative than baseline YouTube channels. Right-wing channels were also found to have a negative bias against Muslim communities.

Reflection:

What stood out to me the most was that the study does not handle negations. This, to me, raises a lot of red flags about how the study was conducted: is it possible that many of the negative or hateful words and phrases they detected were accompanied by negations? The fact that the researchers did not consider negations should be concerning, as there is a massive difference between saying “This is bad” and “This is not bad”.
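
A minimal sketch of the kind of negation handling I have in mind (the word lists below are toy examples, not the lexicon the authors used):

```python
NEGATORS = {"not", "no", "never", "hardly"}
NEGATIVE_WORDS = {"bad", "hate", "violent", "kill"}  # toy lexicon for illustration

def count_negative(tokens, window=3):
    """Count negative-word hits, skipping any preceded by a negator within `window` tokens."""
    hits = 0
    for i, tok in enumerate(tokens):
        preceding = tokens[max(0, i - window):i]
        if tok in NEGATIVE_WORDS and not any(p in NEGATORS for p in preceding):
            hits += 1
    return hits

print(count_negative("this is bad".split()))      # 1
print(count_negative("this is not bad".split()))  # 0
```

Even a rule this crude distinguishes the two example sentences, so it seems like a low-cost addition to a word-counting pipeline.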

There is also a question of what they used for their “baseline” sources, as those sources seem to, at the very least, lean slightly left. VICE News is known to be fairly left-leaning, as is The Young Turks. It is questionable whether the “baseline” sources are neutral, but nevertheless, negative words are negative words.

While the study did not reveal anything new, it was interesting to see a quantification of what made right-wing news so obviously right-wing. With this information, we could better inform the public of what right-wing news looks like.


[Reading Reflection 4] – [02/07] – [Raghu, Srinivasan]

Summary:

This paper was primarily about comparing the similarities and differences between comments and video content in a selection of right-wing and baseline Youtube accounts. Through analyzing the lexicon, topics, and implicit biases present in the texts, comparisons were made between these groups based on a set of features. Some of the key conclusions drawn are listed below.

  • Right-wing accounts are typically more specific in their content, and also contain a higher percentage of negative words, such as “aggression”, “kill”, and “violence”. On the other hand, baseline channels typically contain a higher percentage of positive words, such as “joy” and “optimism”.
  • Comments generally use more words such as “disgust” and “hate”, whereas captions typically use more words such as “aggression”, “rage”, and “violence”. YouTube commentators are generally more exacerbated than video hosts on hate and discrimination topics.
  • There was not a significant difference in immigrant or LGBT bias between the two types of accounts. However, right-wing accounts tend to have a negative bias towards Muslim communities.
  • Right-wing accounts tend to have a higher bias against immigrants and Muslim communities in captions compared to a higher bias against LGBT groups in comments. They also typically raise more topics related to war and terrorism.

Reflection:

I have listed below a line that interested me in the paper.

“Among the top ranked topics for the right-wing captions, we observe a relevant frequency of words related to war and terrorism, including nato, torture and bombing, and a relevant frequency of words related to espionage and information war, like assange, wikileaks”

This finding did not come as any surprise to me, since most right-wing people heavily support increased defense spending to help end the war on terrorism. WikiLeaks would also be a hot topic amongst right-wing people, since Hillary Clinton’s emails from her private email server were published on WikiLeaks. I’m also left wondering what the similarities and differences are between comments and video content on left-wing videos, and how they compare to those of the videos studied here. Would there be just as much negative sentiment on those videos, or would they employ more positive sentiment? A potential glimpse of the answer can be inferred from the fact that the study used The Young Turks as a baseline source, which is interesting since their views are fairly liberal. This source ended up having a more positive sentiment when referring to immigrant, LGBT, and Muslim groups. However, that’s not to say that there couldn’t exist other groups that left-wing videos are more hostile towards.

Other Thoughts:

Overall, I wasn’t too surprised with the results of the paper, given the nature of right-wing politics. I’m very interested in seeing how these results compare with left-wing videos, as I believe that hate is likely to be found on the far right and far left ends of the political spectrum.


Reading Reflection #4

Analyzing Right-wing YouTube Channels: Hate, Violence and Discrimination

In this paper, YouTube videos and comments were analyzed using a multi-layered approach in order to observe trends related to hate, violence, and discrimination. YouTube, a video sharing website, has raised concerns regarding its potential to be used as a platform for extremist actors to spread partisan and divisive content, which is sometimes untrue. The researchers explored whether popular right-wing YouTube channels exhibited different patterns of hateful, violent, or discriminatory content compared to baseline channels. They collected 3,731 right-wing videos and 5,072,728 corresponding comments, as well as 3,942 baseline videos and 12,519,590 corresponding comments. The following research questions were asked:

“Is the presence of hateful vocabulary, violent content and discriminatory biases more, less or equally accentuated in right-wing channels?”

“Are, in general, commentators more, less or equally exacerbated than video hosts in an effort to express hate and discrimination?”

A unique aspect of this paper was the three-layered approach. Lexical analysis was used to carry out comparisons of semantic fields of words. Topic analysis was used to gather the prevalent topics in each group. Implicit bias analysis was used to measure implicit biases in the text. The researchers found that the right-wing channels included a higher percentage of negative words such as “aggression, kill, rage, and violence”. The baseline channels were found to include a higher percentage of words from fields like joy and optimism. Strong evidence of hate was not found for right-wing or baseline videos. Both categories were also found to exhibit a discriminatory bias towards Muslims. Comments were found to have more words related to aggression, rage, and violence. Seventy-five percent of right-wing videos were shown to have more Muslim bias in the captions than in the comments.
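
For the implicit bias layer, my understanding is that it builds on word-embedding association tests. Below is a rough sketch of a WEAT-style score; the random vectors are stand-ins for embeddings that would actually be trained on captions or comments, so this is only an illustration of the idea, not the authors’ pipeline:

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, attrs_a, attrs_b):
    """Mean cosine similarity of word vector w to attribute set A minus attribute set B."""
    return np.mean([cos(w, a) for a in attrs_a]) - np.mean([cos(w, b) for b in attrs_b])

def weat_score(targets_x, targets_y, attrs_a, attrs_b):
    """Differential association of two target word sets (e.g. Muslim vs. Christian terms)
    with two attribute sets (e.g. pleasant vs. unpleasant words)."""
    return (sum(association(x, attrs_a, attrs_b) for x in targets_x)
            - sum(association(y, attrs_a, attrs_b) for y in targets_y))

# Random vectors stand in for real word embeddings here
rng = np.random.default_rng(0)
def random_vectors(k, dim=50):
    return [rng.normal(size=dim) for _ in range(k)]

print(weat_score(random_vectors(3), random_vectors(3), random_vectors(4), random_vectors(4)))
```

A score further from zero indicates a stronger differential association between the target groups and the attribute words.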

I liked the paper, although not much of the results surprised me. I struggled to understand what the researchers intended the impact to be. In the conclusion they mentioned that these findings contribute to a better understanding of the behavior of general and right-wing YouTube users. They mentioned in the introduction that there are concerns about YouTube being used as an easy platform to spread hateful, violent, and discriminatory content, but did not elaborate in the conclusion how their work impacts this concern. I think that by knowing the trends that are present, more informed content sharing can be done and steps can be taken by others to avoid encouraging hateful or harmful content sharing.

I was surprised by the method of data collection. The researchers used InfoWars as a seed and selected other channels favored by its founder as the remaining right-wing channels. I thought that this process could have been done more methodically. In addition, they did not specify why they chose k=300 for the topic analysis; they may have gotten different results for different values of k, and they did not explain why they selected this as the best option.
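
For instance, a quick (and admittedly toy) comparison of candidate values of k with scikit-learn could have motivated the choice; the documents here are made-up stand-ins for the captions or comments:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Stand-in documents; in the real study these would be captions or comments
docs = [
    "war terrorism nato bombing torture",
    "election senate debate policy vote",
    "wikileaks assange leak emails server",
    "immigration border wall security",
]

X = CountVectorizer().fit_transform(docs)
for k in (2, 5, 10):  # on the real corpus the sweep would extend toward values like 300
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    # Lower perplexity is roughly better; topic coherence on held-out data
    # would be an even more principled criterion.
    print(k, round(lda.perplexity(X), 1))
```

Reporting a sweep like this, even in an appendix, would have made the k=300 choice easier to trust.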

I like the three-layered approach and think it could be useful for other studies that involve text. Future work could include applying these techniques to Twitter or Reddit data, or doing a similar study involving other topics.


Reading Reflection #4 – 02/07/19 – Alec Helyar

Summary

These researchers chose to analyze the videos and comments of right-wing YouTube channels. To do this, they looked at the language, topics, and biases, analyzing over 7,000 videos and 17 million comments in the process. Using these, they observed similarities and differences between the videos and compared them to a baseline dataset using a layered approach. The researchers sought to answer (1) whether hateful content is more frequent in right-wing channels and (2) how the comments compare to the video content in terms of hate and discrimination. They found a general negativity in right-wing videos, anti-Muslim rhetoric in right-wing communities, and negative words in both right-wing videos and comments.

Reflection

I found the research article fairly interesting because it covered a timely political topic, but it didn’t capture any surprising results. While I understand that it isn’t always fair to claim that research results are unsurprising, a major point of this paper is that it finds that YouTube comments are negative in right-wing communities. This should come as no surprise to anyone, since YouTube comments are already very negative and mixing in political sentiment will only worsen the issue.

I found it odd that researchers decided that a future work item would be handling negations in text. This is something that has become fairly commonplace in natural language processing today. Any NLP library you come across should be able to propagate negations without issue. This might be an oversimplification of what they are talking about, but I don’t think that this change would be a difficult extension of their work.

Another easy extension of their work would be comparing their results to a dataset on a far left subcommunity. There are plenty to choose from, though likely not ones as ferocious as the far right in terms of language. Even still, observing the degree of difference would be very interesting. I bet that both groups cover similar topics and sentiment, but would have a stark difference in the type of language used.


Reflection #4 – [02/05/2019] – [Kibur Girum]

Title: Analyzing Right-wing YouTube Channels: Hate, Violence, and Discrimination

Summary:

The purpose of the study was to determine whether hateful vocabulary, violent content, and discriminatory biases are more, less, or equally prevalent in right-wing channels, and whether video commenters are more, less, or equally aggravated than video hosts. The research was conducted over a dataset of right-wing video channels and baseline videos. The authors investigated the presence of hateful speech by using a three-layered approach (lexicon, topics, and implicit bias) over users’ comments and video content. Based on multiple findings, the study provided the following conclusions about right-wing channels:

  •  Right-wing channels tend to contain a higher degree of words from “negative” semantic fields
  •  raise more topics related to war and terrorism
  •  demonstrate more discriminatory bias against Muslims (in videos) and towards LGBT people (in comments)

Reflection

YouTube has changed the way we acquire and spread information in our society. Everyone now has easy access to start a podcast or channel to spread information. This also brings a lot of challenges, and one of them is the spread of hate speech, especially from right-wing channels; the 2016 election is a good example of this. I believe that this study provides a step forward in tackling this problem. Even though I am impressed by their findings and conclusions, I am not surprised. Nevertheless, their research gives great insight for future studies. Considering their findings and summarization, we can reflect on different aspects and their implications.

Part 1: The finding that impressed me the most is that right-wing channels tended to contain more words from “negative” semantic fields. It is important to understand the motive behind this.

 Questions and further research 

  1. Can we assess a video based on users’ activity or account information? This would help us identify hate-based videos more reliably.
  2. What would the results be if we did the same research on other social media platforms like Twitter and Facebook? I believe that conducting research across different platforms would give insight into how common this type of video is.

Part 2: Another question that stuck out to me after reading the study is whether right-wing channels differ in their approach. Do they change their approach from time to time or stay consistent? I believe that more research on multiple right-wing channels on YouTube would help answer this question.


Reading Reflection #4 – [2/07/2019] – [Sourav Panth]

Analyzing Right-wing YouTube Channels: Hate, Violence and Discrimination

Summary:

The basis of this paper was to observe issues related to hate, violence, and discriminatory bias in a dataset containing more than 7,000 videos and 17 million comments. The authors compared right-wing channels to a baseline set using a three-layered approach: analyzing the lexicon, topics, and implicit biases present in the text to find differences between users’ comments and video content. Their results show that right-wing channels tend to contain a higher degree of words from “negative” semantic fields, raise more topics related to war and terrorism, and demonstrate more discriminatory bias against Muslims in videos and towards LGBT people in comments.

Reflection:

Something that I found very interesting was that the paper talks about YouTube’s recommendation algorithm not being neutral during the 2016 presidential election in the United States. This is actually very surprising to me, because I thought YouTube would be very neutral since its content is mostly user-submitted. I figured the recommendation algorithm would rely entirely on the videos you had watched in the past, without YouTube having a say in what shows up in the sidebar. Something interesting to look at would be the reach of videos recommended by YouTube versus similar videos that were not promoted by the algorithm.

One possible source of inaccuracy is that they only used 15 categories to associate comments with negative feelings and 5 categories to associate comments with positive feelings. While I think this is probably fine to start, it seems like the results would be skewed towards the negative category with the current category set. Something that would concern me is that comments may be directed at the YouTube video itself rather than spewing hate in general. The author also notes that the right wing raises more topics related to war and terrorism, which isn’t necessarily a bad thing: discussions about war and terrorism don’t automatically mean hate or discrimination, and I think with a smaller list of categories some comments may be misidentified. In the future it may help to add more words for association and even out the number of categories in the negative and positive sections.
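
A toy version of that category counting (the field word lists below are made up, not the actual categories used in the paper) shows how an imbalanced category set can tilt the tallies:

```python
# Illustrative semantic fields; the paper used 15 negative and 5 positive categories
FIELDS = {
    "aggression": {"attack", "destroy", "fight"},
    "violence":   {"kill", "gun", "war"},
    "hate":       {"hate", "disgust"},
    "joy":        {"happy", "fun", "love"},
}
NEGATIVE_FIELDS = {"aggression", "violence", "hate"}

def field_shares(tokens):
    """Fraction of tokens that fall in each semantic field."""
    total = len(tokens) or 1
    return {name: sum(tok in words for tok in tokens) / total
            for name, words in FIELDS.items()}

comment = "i hate how this war will destroy everything we love".split()
shares = field_shares(comment)
negative = sum(v for name, v in shares.items() if name in NEGATIVE_FIELDS)
positive = sum(v for name, v in shares.items() if name not in NEGATIVE_FIELDS)
print(shares, negative, positive)
```

With three negative fields to one positive field, a mixed comment like this one registers as far more negative than positive, which is the kind of skew I am worried about.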

Future Work:

There are a couple of ways the research could be improved upon. First, I think it would be interesting to see a similar analysis done on left-wing YouTube channels, comparing the hateful topics of the left wing to those of the right wing. Second, I would check the number of right-wing videos recommended by YouTube versus the number of left-wing and neutral videos recommended by YouTube. This would help show YouTube’s bias, and by looking at views you could see what kind of influence YouTube has on its audience. Finally, one of the last things I would improve to get better results is to increase the number of categories used to identify comments associated with negative or positive reactions. In the paper they use 15 categories related to negative feelings and 5 categories related to positive feelings, and looking through the words they used, the associations seemed very narrow. I think using a larger number of categories to capture these feelings could result in more accurate sentiment analysis. Another thing to consider is whether the comments that exhibit anger are directed at a certain group of people or at the YouTube video and creator.
