What I found most interesting about “Early Public Responses to the Zika-Virus on YouTube: Prevalence of and Differences Between Conspiracy Theory and Informational Videos” is the possibility of taking its concepts and applying them tangentially to different problems. If we trace disease outbreaks, even at the regional or town level, through a machine learning algorithm, we could uncover trends that would allow us to be even more proactive in detecting and preventing outbreaks. For example, recent measles, mumps, and rubella outbreaks (https://www.who.int/news-room/detail/29-11-2018-measles-cases-spike-globally-due-to-gaps-in-vaccination-coverage) have been traced back to the increased incidence of anti-vaccination conspiracy theories and their effect on populations that choose not to vaccinate their children. Using these principles, we could study social media activity on many popular anti-vaccination pages to target hotbeds where these diseases could spread, and push specific media campaigns to dissuade people from joining this damaging movement. Continuing on this, we could also apply it to mental health awareness: a machine learning model could analyze troubling tweets or Facebook posts made by individuals to determine problem areas or even typical mental health triggers, be it environment, drugs, poverty, or abuse. This would give us a holistic view of the issues plaguing our society the most and bring them to the forefront. These are just two possible ideas, but with sentiment mapping we could also see how a commonly spreading disease like the flu has historically been represented on social media, and then develop rules that use that historical data to predict, or see from a bird’s-eye perspective, the current state of an outbreak and what we can do to influence it.
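To make the flu idea concrete, here is a minimal sketch of the kind of social-media outbreak signal described above. Everything here is an assumption for illustration: the symptom keyword list, the mock posts, and the region labels are all hypothetical, and a real system would use a trained sentiment or topic model over live data rather than keyword matching.

```python
from collections import Counter

# Hypothetical symptom lexicon -- a real pipeline would replace this
# with a trained sentiment/topic classifier.
FLU_TERMS = {"fever", "chills", "cough", "aching", "flu"}

def flu_signal(posts):
    """Count flu-related posts per region as a rough outbreak signal."""
    counts = Counter()
    for region, text in posts:
        words = set(text.lower().split())
        if words & FLU_TERMS:  # post mentions at least one symptom term
            counts[region] += 1
    return counts

# Mock posts as (region, text) pairs -- illustrative only, not real data.
posts = [
    ("townA", "home with a fever and chills today"),
    ("townA", "this cough will not quit"),
    ("townB", "great weather for a run"),
]
print(flu_signal(posts))  # Counter({'townA': 2})
```

Comparing these per-region counts against historical baselines is where the predictive rules mentioned above would come in: a region whose signal spikes well above its historical average would be flagged for attention.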
What I found most interesting is that this project used a no-context approach: not that the grammar had no context, but rather that it didn’t consider the user the words were being said to. This inspired an idea that could be an extension of the current paper. First, we could use image recognition to determine the race or ethnicity of the recipient, then use that context to figure out whether a hateful-sounding message is actually hateful or, as the paper mentions, simply part of a song. Continuing this, we could use classifiers over Facebook profile data to infer a user’s religion as well, and determine whether that user could be the recipient of hate speech on that basis. However, after researching this, it seems more difficult, as much facial recognition software has been trained predominantly on Caucasian models and so appears overtuned in that regard. This paper nevertheless gives us a great baseline to work with in preventing hate speech: we now have the grammar to figure out whether a certain tweet or post is in fact hate speech, so all that remains is to make the model more accurate by adding more context, perhaps by looking at the posting account to see if it has a history of posting hate speech, or at the recipient’s account to see if it has a history of receiving hate speech. All these details could be used to make the model more accurate by eliminating confounding factors like song lyrics, sarcasm, and quoting.
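The context features described above could be layered on top of the paper’s grammar-based detector as a simple score adjustment. The sketch below is purely illustrative: the weights, the `base_score` input (imagined as the grammar model’s output probability), and the history counts are all hypothetical assumptions, not tuned values from the paper.

```python
def contextual_score(base_score, poster_history, recipient_history, is_lyric):
    """Adjust a base hate-speech probability using contextual signals.

    base_score        -- probability from the grammar-based detector (0..1)
    poster_history    -- count of the author's prior confirmed hate posts
    recipient_history -- count of hate posts the target has received before
    is_lyric          -- True if the text matches known song lyrics

    All weights below are illustrative assumptions.
    """
    score = base_score
    # An author who has posted hate speech before raises suspicion.
    score = min(1.0, score + 0.10 * poster_history)
    # A target who has been attacked before also raises it, more weakly.
    score = min(1.0, score + 0.05 * recipient_history)
    # Quoted song lyrics are a known confound, so discount heavily.
    if is_lyric:
        score *= 0.5
    return score

# A borderline post from a repeat offender aimed at a prior target:
print(contextual_score(0.6, poster_history=2, recipient_history=1, is_lyric=False))
```

The design choice here is to keep the grammar model untouched and apply context as a post-hoc adjustment, which makes each confounding factor (lyrics, sarcasm, quoting) an independently tunable correction rather than something baked into the core model.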