Early Public Responses to the Zika-Virus on YouTube:
Summary:
YouTube is a popular social media platform for sharing videos, and many of those videos cover health topics. Some are informative and helpful, while others, as on any other social media site, spread false information or conspiracy theories. This study examined YouTube videos about the Zika virus posted in 2016, during the early phase of the outbreak. Videos were grouped according to whether they were informational or conspiracy-oriented. The empirical research questions compared the views, replies, likes, and shares for each group, as well as differences in sentiment and in the content of user responses between the two. The results showed almost no significant difference in user responses or views between the groups. However, the content of user responses did differ, and commenters on both types of videos rarely engaged in conversation with one another.
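To make that kind of comparison concrete, here is a minimal sketch (not the study's actual analysis) of how an engagement metric for the two video groups could be compared with a rank-based test; all of the numbers are invented for illustration.

```python
# Minimal sketch, not the study's analysis: compare engagement metrics
# between informational and conspiracy videos with a Mann-Whitney U test.
# All numbers below are invented for illustration.
from scipy.stats import mannwhitneyu

informational = {"views": [1200, 860, 15000, 430], "likes": [40, 12, 300, 9]}
conspiracy    = {"views": [980, 20000, 510, 770],  "likes": [55, 410, 20, 14]}

for metric in ("views", "likes"):
    stat, p = mannwhitneyu(informational[metric], conspiracy[metric],
                           alternative="two-sided")
    # A large p-value would be consistent with "no significant difference".
    print(f"{metric}: U={stat:.1f}, p={p:.3f}")
```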
Reflection:
Users express their opinion in their responses without further participating in conversations:
One way to overcome this could be to ask questions toward the end of the video, which would encourage viewers to comment and start conversations. Twitter, for example, has a poll feature that lets you pose a question for up to 7 days with up to 4 answer options; after responding, users see the percentage of respondents who chose each option. This could be another effective way to engage people on health issues.
Online health interventions can be targeted on the most active social media users
This could be a great way of combating misinformation spread by popular accounts. However, there may be unforeseen obstacles. If these users are given too much attention, they may gain a larger spotlight and attract more followers. Many people today have spread misinformation and sparked outrage; that attention and reaction then carry their message to an even wider audience. Even if the majority disagrees with the message, it is likely to reach a few people who feel aligned with it and who then boost the account's platform.
Some conspiracy theories are entertaining to think about, and a few have turned out to be true in the past, so it would be wrong to dismiss every theory outright. Facebook has recently been trying harder to stop fake news on its site. It has partnered with fact-checkers, and when a post is found to contain false information it is not taken down; instead, its reach is reduced, which limits its impact on users. The same approach could be applied to many other social media platforms, including YouTube. However, fact-checking a video may take longer and may introduce new obstacles.
Automated Hate Speech Detection and the Problem of Offensive Language:
Summary:
Hate speech is a large problem on social media. Many detection approaches, such as lexical detection, overclassify posts as hate speech, resulting in low precision. The goal of this study was to better understand how to automatically distinguish among hate speech, offensive language, and neither. After the best model was built and tested, the authors concluded that future work must better account for context and for heterogeneity in hate speech.
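As a rough illustration of the general setup (not the authors' actual pipeline or data), a three-class tweet classifier could be sketched like this; the toy tweets and labels are invented.

```python
# Hedged sketch of a three-class tweet classifier (hate / offensive / neither).
# This is not the paper's model; the toy tweets and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["example hateful tweet", "example offensive tweet", "have a nice day"]
labels = ["hate", "offensive", "neither"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(tweets, labels)
print(model.predict(["another example tweet"]))
```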
Reflection:
We also found a small number of cases where the coders appear to have missed hate speech that was correctly identified by our model
I was surprised that the workers misclassified the tweet mentioned, considering that three people check each tweet. Since these human labels are used to train the final model, the model may learn to place tweets in the wrong categories, leading to skewed results. More reliable human annotation is needed to build a more accurate model.
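For instance, a simple majority vote over the three coders' labels, with non-unanimous tweets flagged for re-review, might look like the sketch below (the tweets and labels are hypothetical, not from the paper's data):

```python
# Sketch of majority-vote labeling with three coders per tweet; non-unanimous
# votes are flagged so they can be re-checked. Illustrative data only.
from collections import Counter

annotations = {
    "tweet_1": ["offensive", "offensive", "hate"],  # disagreement
    "tweet_2": ["hate", "hate", "hate"],            # unanimous
}

for tweet_id, votes in annotations.items():
    label, count = Counter(votes).most_common(1)[0]
    note = "" if count == len(votes) else "  <- not unanimous, worth re-checking"
    print(f"{tweet_id}: {label}{note}")
```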
We see that almost 40% of hate speech is misclassified
In the data, 31% of hate speech was categorized as merely offensive language. The report concludes that context matters in these social situations. A follow-up study could focus specifically on having AI read context. Natural language processing has been applied to this problem, and there have been numerous other approaches as well. In 2017, MIT researchers developed an AI that could reportedly detect sarcasm better than humans [1]. They, too, were trying to build an algorithm to detect hate speech on Twitter, but they concluded that the meaning of many tweets could not be properly understood without taking sarcasm into account. Future work could use such a sarcasm detector to help recognize hate speech.
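One way to see where these errors land is a per-class confusion matrix; a minimal sketch with invented labels (not the paper's results) is below.

```python
# Minimal sketch of inspecting per-class errors with a confusion matrix.
# The true/predicted labels are invented, not the paper's results.
from sklearn.metrics import confusion_matrix, classification_report

classes = ["hate", "offensive", "neither"]
y_true = ["hate", "hate", "offensive", "neither", "hate", "offensive"]
y_pred = ["offensive", "hate", "offensive", "neither", "offensive", "offensive"]

print(confusion_matrix(y_true, y_pred, labels=classes))
print(classification_report(y_true, y_pred, labels=classes, zero_division=0))
```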
Lexical methods are effective ways to identify potentially offensive terms but are inaccurate at identifying hate speech
Again, context is very important and could provide substantial support for this work; not only context within a sentence, but also context within a conversation. A single reply to a tweet does not always tell the whole story: if there is a long chain of replies, earlier replies may need to be read to understand the last one. Some tweets also reference current events without doing so explicitly, and that has to be taken into account as well. A small illustration of why purely lexical matching misses this is sketched below.
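In the sketch, any post containing a listed term is flagged regardless of the sentence or conversation around it; the word list is a harmless placeholder, not a real hate speech lexicon.

```python
# Sketch of naive lexical flagging: a post is flagged whenever it contains a
# listed term, with no awareness of context. Placeholder terms, not a real lexicon.
import re

lexicon = {"trash", "garbage"}

def flag(post: str) -> bool:
    tokens = set(re.findall(r"[a-z']+", post.lower()))
    return bool(tokens & lexicon)

print(flag("this movie was absolute garbage"))  # True: flagged, but clearly not hate speech
print(flag("people like you deserve nothing"))  # False: hostile, but no listed term
```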
[1] https://www.breitbart.com/tech/2017/08/07/new-mit-algorithm-used-emoji-to-learn-about-sarcasm/