Reading Reflection 4 – 02/07 – Yoseph Minasie

Summary:

This study focused on right-wing YouTube videos that promote hate, violence, and discrimination. It compared comments with video content and right-wing channels with baseline channels, analyzing lexicon, topics, and implicit biases. The research questions were:

  • Is the presence of hateful vocabulary, violent content and discriminatory biases more, less or equally accentuated in right-wing channels? 
  • Are, in general, commentators more, less or equally exacerbated than video hosts in an effort to express hate and discrimination? 

The results showed that right-wing channels usually contained a higher proportion of words from negative semantic fields, included more topics relating to war and terrorism, and exhibited more discriminatory bias against Muslims in the videos and against LGBT people in the comments. One of the main contributions of this study is a better understanding of right-wing speech and how content relates to the reactions it draws. 

Reflection:

Even though this is an analysis, how would this research be used to better understand hate speech? Right-wing users post videos, and their subscribers, or people with similar views and ideals, watch those videos and respond with their own similar viewpoints; the same holds for non-right-wing users. In that sense, this research doesn't add much to the discussion of how alt-right extremism has risen through the internet. However, the methodology used in the study was interesting and very useful, particularly the WEAT (Word Embedding Association Test). Other studies could also benefit from this kind of multi-layered investigation to better understand context. 
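The WEAT the study relies on can be sketched in a few lines: it measures how differently two target word sets associate with two attribute sets, using cosine similarity over word embeddings. The toy 2-d vectors below are invented purely for illustration; a real analysis would use embeddings trained on the channels' captions and comments.

```python
from math import sqrt
from statistics import mean, stdev

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def assoc(w, A, B, emb):
    """s(w, A, B): mean similarity of w to attribute set A minus to set B."""
    return (mean(cosine(emb[w], emb[a]) for a in A)
            - mean(cosine(emb[w], emb[b]) for b in B))

def weat_effect_size(X, Y, A, B, emb):
    """Cohen's-d-style effect size between target sets X, Y and
    attribute sets A, B (the WEAT statistic)."""
    sx = [assoc(x, A, B, emb) for x in X]
    sy = [assoc(y, A, B, emb) for y in Y]
    return (mean(sx) - mean(sy)) / stdev(sx + sy)

# Toy embeddings, invented for this sketch (not from the paper):
emb = {
    "flower": (1.0, 0.1), "insect": (0.1, 1.0),
    "pleasant": (1.0, 0.0), "unpleasant": (0.0, 1.0),
}
d = weat_effect_size(["flower"], ["insect"], ["pleasant"], ["unpleasant"], emb)
# d > 0 means the X targets ("flower") sit closer to the A attributes
# ("pleasant") than the Y targets ("insect") do.
```

A positive effect size indicates the first target set is more associated with the first attribute set, which is how the study quantifies implicit bias toward particular groups.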

YouTube’s recommendations often lead users to channels that feature highly partisan viewpoints 

I’ve read about how YouTube’s algorithm does this in order to keep users online. One example occurred after the Las Vegas shooting, when YouTube recommended videos claiming the event was a government conspiracy [1]. YouTube has since changed its algorithm, but it still surfaces some highly partisan and conspiracy content. A future study could measure how many videos it takes before YouTube recommends a highly partisan viewpoint or conspiracy theory. This could help the engineers who work on the recommendation algorithm better understand how and why this occurs and help mitigate it.

It is important to notice that this selection of baseline channels does not intend to represent, by any means, a “neutral” users’ behavior (if it even exists at all) 

I thought this idea was interesting. There’s bias all around us, but some people unintentionally or intentionally turn a blind eye to it. A future study could examine the implicit biases of popular sites, such as social media and news sites, and the different biases of their viewers. This could open people’s eyes and help them take their own implicit biases into account before acting. It may not create “neutral” behavior in users, but it might help users become more neutral. 

[1] https://www.techspot.com/news/73178-youtube-recommended-videos-algorithm-keeps-surfacing-controversial-content.html

Reading Reflection 3 – 2/04 – Yoseph Minasie

Early Public Responses to the Zika-Virus on YouTube: 

Summary: 

YouTube is a popular social media platform for sharing videos, many of which cover a wide range of health topics. Some of these videos are informative and helpful, while others, as on any other social media site, spread false information or conspiracy theories. This study examined YouTube videos about the Zika virus in 2016, during the first phase of its outbreak. Videos were grouped by whether they were informational or conspiratorial. The empirical research questions compared the views, replies, likes, and shares of each group, as well as differences in sentiment and user-response content between the two. The results showed almost no significant difference in user response or views; however, the content of the user responses did differ, and users of both video types rarely engage in actual conversation. 

Reflection: 

Users express their opinion in their responses without further participating in conversations:

One way to overcome this could be to ask questions toward the end of the video. This would promote conversation and more commenting as well. On Twitter, there’s a poll feature that lets you ask a question for up to 7 days with up to 4 answer choices; after people respond, they see the percentage of respondents who chose each answer. This could be another effective way to engage people on health issues.

Online health interventions can be targeted on the most active social media users

This could be a great way of combating misinformation spread by popular accounts. However, there might be unforeseen obstacles. If those users are given too much attention, they might gain a larger spotlight and more followers. Many people today have spread misinformation and sparked widespread outrage; that popularity and reaction then carries their message to even more people. Even if the majority won’t agree with the message, it is likely to reach a few people who feel well-aligned with it and boost their platform. 

Some conspiracy theories are entertaining to think about, and a few have turned out to be true in the past, so it would be wrong to dismiss every theory outright. Facebook has recently been trying more forcefully to stop fake news on its site. It has partnered with fact-checkers, and when a post is found to contain false information, it isn’t taken down; its reach is simply reduced, which decreases its impact on users. This approach could be applied to many other social media sites, including YouTube. However, fact-checking a video might take longer and might introduce new obstacles.  
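That “reduce reach instead of removing” idea can be sketched as a ranking penalty. Everything below is hypothetical (the function names, the penalty factor, and the sample posts are mine, not Facebook’s actual system); it only illustrates how a flagged post stays visible but drops in the feed.

```python
def ranking_score(engagement, flagged_false, penalty=0.25):
    """Demote fact-checker-flagged posts by scaling their base score
    down instead of deleting them (hypothetical sketch)."""
    base = float(engagement)  # stand-in for a real feed-ranking score
    return base * penalty if flagged_false else base

# (title, engagement, flagged as false by fact-checkers)
posts = [("debunked cure", 900, True), ("health advisory", 500, False)]
ranked = sorted(posts, key=lambda p: ranking_score(p[1], p[2]), reverse=True)
# The flagged post (900 * 0.25 = 225) now ranks below the accurate one (500),
# so it reaches fewer users without being censored outright.
```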

Automated Hate Speech Detection and the Problem of Offensive Language:

Summary: 

Hate speech is a large issue on social media. Many forms of detection, such as lexical detection, over-classify posts as hate speech, leading to low accuracy. The goal of this study was to better understand how to automatically distinguish between hate speech, offensive language, and neither. After the best model was built and tested, the results indicated that future work must better account for context and for the heterogeneity of hate speech. 

Reflection: 

We also found a small number of cases where the coders appear to have missed hate speech that was correctly identified by our model

I was surprised the workers misclassified the tweet mentioned, considering it takes three people to check each tweet. Since these human labels are used to train the final model, the model might group tweets into the wrong categories, leading to skewed results. Better human classifiers are needed to produce a more accurate model. 

We see that almost 40% of hate speech is misclassified 

31% of the hate speech in the data was categorized as merely offensive language. The report itself concludes that context is important in social situations. Another study could focus specifically on AI that reads context. Natural language processing has been applied to this problem, along with numerous other approaches. In 2017, MIT scientists developed an AI that could reportedly detect sarcasm better than humans [1]. They, too, were trying to find an algorithm to detect hate speech on Twitter, but concluded that the meaning of many tweets could not be properly understood without taking sarcasm into account. Future work could use such a sarcasm detector to help with the recognition of hate speech. 

Lexical methods are effective ways to identify potentially offensive terms but are inaccurate at identifying hate speech 

Again, context is very important and could provide extensive support for this study. Not only context within the sentence, but context within the conversation is also beneficial. A reply to a tweet doesn’t always tell the whole story: if there is a long chain of replies to a comment, the context of each reply might be needed to understand the last one. Also, some tweets reference current events without doing so explicitly, so that has to be taken into account as well.
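The weakness of purely lexical methods is easy to demonstrate: a keyword match fires on any occurrence of a term, with no sense of whether it is used hatefully, quoted, or condemned. The tiny lexicon and tweets below are invented for illustration, not taken from the paper’s data.

```python
# Toy offensive-word lexicon (illustrative only):
OFFENSIVE_LEXICON = {"trash", "scum"}

def lexical_flag(tweet):
    """Flag a tweet if any token matches the lexicon -- no context used."""
    tokens = {t.strip(".,!?").lower() for t in tweet.split()}
    return bool(tokens & OFFENSIVE_LEXICON)

# Both tweets are flagged, though only the first is plausibly hateful;
# the second is counter-speech, a classic lexical false positive:
flagged_hateful = lexical_flag("those people are scum")
flagged_counter = lexical_flag("calling anyone scum is hateful")
```

This is exactly the over-classification the study describes: the method identifies potentially offensive terms well, but cannot tell hate speech from offensive or even anti-hate language without context.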

[1] https://www.breitbart.com/tech/2017/08/07/new-mit-algorithm-used-emoji-to-learn-about-sarcasm/

Reading Reflection 2 – 1/30 – Yoseph Minasie

Summary

Fake news gained a lot of attention during the 2016 presidential election, and emphasis on combating it has increased since then. People have become more critical of the authenticity of widely shared news articles, yet fake news still seems to be everywhere. Many websites currently check whether news articles are real or fake, but their accuracy could be better. This study was intended to help fact-checkers detect fake news early on. The main question it addressed was whether there are systematic stylistic and other content differences between fake and real news. The main conclusions were: 

  • The content of fake and real news articles are substantially different. 
  • Titles are a strong differentiating factor between fake and real news
  • Fake content is more closely related to satire than to real news. 
  • Real news persuades through arguments, while fake news persuades through heuristics. 

Reflection 

Their assumption that real news comes only from real news sources and fake news only from fake news sources is flawed; for some articles the opposite could have been true. If there were a good number of such cases, this could have skewed their results.

The sharing of information conforms to one’s beliefs:

This concept makes sense, and I’ve noticed it happening around me. I remember one person I follow on social media shared an article that aligned with her views but not with mine. It was about an attack on some politician. When I opened the article, the arguments did not appear valid and there wasn’t much hard evidence; I didn’t think she had read it carefully. In psychology, confirmation bias is the tendency to seek out information that aligns with one’s beliefs and to remember that information more readily. This ties into the notion of an echo chamber and how fake news persuades through heuristics. Another interesting study would measure how much of a news article, real or fake, people read before they share it. This could be done by tracking how long people stay on the link, though that would only be an estimate and wouldn’t account for different reading speeds, idle users, multitasking while reading, and other factors. 

Titles are a strong differentiating factor between fake and real news:

The example given in the paper is an obvious case of fake vs. real news titles. Since not every fake article will have such clear signs, another example with fewer indicators should have been given to highlight the subtler differences between the two.

Fake content is more closely related to satire than to real:

The point of satire is to make a point using humor, irony, and exaggeration, so satire should indeed be more closely related to fake content. Satirical news sources are very popular and do highlight key points. I follow some of them on social media and sometimes wonder whether it’s apparent to everyone that these articles are satire. I say this because I’ve seen several comments from people who take the news literally, fail to separate fact from satire, and respond with negative criticism. Some people find that funny and reply with a sarcastic remark to get a reaction.

Since this paper and future work on the topic will be public, one obstacle could be the use of this research by people with malicious intent. If they create fake news and adjust their style based on this research, it could become harder for fact-checkers to detect fake articles, lowering their accuracy. More in-depth methods would need to be applied in order to combat this issue. 

Reading Reflection #1 – 1/29 – Yoseph Minasie

Summary:

Over recent years, Twitter has grown into a primary platform for breaking news. To understand the prominence and influence of journalists, organizations, and consumers on the site, Mossab Bagdouri conducted a quantitative study comparing the Twitter usage of these three user types, drawing on 18 features. He was led to the following conclusions: 

  • Organizations tend to broadcast their information, while journalists tend to be more personal.
  • Arab journalists tend to broadcast more than their English counterparts.
  • Arab journalists are more distinguished than English journalists. 
  • Print and radio journalists are dissimilar, while television journalists stand somewhere in the middle. 
  • British and Irish journalists share similar characteristics. 

Reflection:

My main critique of this paper is the lack of discussion Bagdouri provides for his conclusions. He explains the topic and the data derived from the study, but never fully dives into discussing the conclusions. Around half of them are fairly self-explanatory; the rest, however, are interesting, for example, that Arab journalists tend to broadcast more and are also more distinguished. 

One possible point of discussion, based on the conclusion that Arab journalists tend to broadcast more, is whether this relates to the laws or culture of the journalist’s home country. Does it mean English journalists are more open to sharing their views and opinions, or that Arab journalists hold a stricter definition of journalism and simply want to convey their message? Another study based on this conclusion could compare the number of journalists who publish their own opinions.

As for the conclusion that Arab journalists are more distinguished, this could be explained by the number of Twitter users in English-speaking countries versus Arab countries. Another explanation could be that because English journalists are more personal, their interaction with users creates “citizen journalists,” which in turn creates more verified users. There may also simply be more of these citizen journalists in English-speaking countries, increasing the number of verified accounts. 

Further Research: 

There’s been a lot of fake news in the past few years on many platforms, including Twitter. Not long after such posts, several comments might question the validity of the information, but by then it may be too late: many people could already have seen and believed it. Another interesting study could compare the amount of verifiable news published across the same categories as before (e.g., journalists vs. organizations vs. consumers, English-speaking countries vs. Arab countries). Some questions that study could answer: 

  • Are the standards of reporting actual news in journalism upheld in English speaking countries?
  • Are there similar standards of validity in different regions of the world?
  • Do journalists publish as much verifiable information as organizations? 
  • Is there current news that most people believe in but isn’t true?
  • Are consumers more likely to verify news before sharing it?
