Summary
The research paper “Antisocial Behavior in Online Discussion Communities” analyzes how users participate in online communities through posts, comments, votes, and likes. The researchers focus primarily on users who were eventually banned from these communities, often called “trolls.” User-generated content is essential to the growth of any online community, and trolls likely hinder that growth. The purpose of the research was to answer three questions: “Are there users that only become antisocial later in their community life, or is deviant behavior innate?”, “Does a community’s reaction to users’ antisocial behavior help them improve, or does it instead cause them to become more antisocial?”, and “Can antisocial users be effectively identified early on?” The researchers investigated CNN.com, Breitbart.com, and IGN.com by reading comments and threads and by analyzing a list of users banned from each site. They identified post features, activity features, and community features that can be used to predict which users will later be banned, and they found that identification becomes more accurate as more of a user’s posts become available. A possibility for future research is to develop a deeper understanding of such behavior and to better characterize how antisocial users change over their time in a community.
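To make the early-identification idea concrete, here is a minimal sketch of how the paper’s three feature groups could feed a simple classifier. Everything below is illustrative: the specific feature names, the synthetic data, and the choice of logistic regression are my assumptions for the sketch, not the authors’ actual features or pipeline.

```python
# A toy sketch (not the authors' pipeline): predict whether a user will be
# banned from a few hand-built features, grouped the way the paper groups them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users = 1000

# Hypothetical per-user features, one value per user:
readability = rng.normal(0.0, 1.0, n_users)    # post feature (text quality)
posts_per_day = rng.gamma(2.0, 1.5, n_users)   # activity feature
frac_deleted = rng.beta(1.0, 8.0, n_users)     # community feature (moderation)
mean_votes = rng.normal(0.0, 1.0, n_users)     # community feature (reception)

X = np.column_stack([readability, posts_per_day, frac_deleted, mean_votes])

# Synthetic labels standing in for real ban records: users with more deleted
# posts and lower-quality text are more likely to end up banned.
logit = 3.0 * frac_deleted - 0.5 * readability - 0.3 * mean_votes - 1.0
y = (rng.random(n_users) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# AUC is a natural metric here because banned users are a minority class.
probs = clf.predict_proba(X_test)[:, 1]
print(f"held-out AUC: {roc_auc_score(y_test, probs):.2f}")
```

In a real setting the labels would come from actual ban records and the features from observed posts, and one would track how accuracy changes as the number of observed posts per user grows, mirroring the paper’s finding that identification improves with more posts.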
Reflection
This paper discussed many problems, strategies, and possible solutions that I will be able to apply to my term project. Right now, we are thinking about helping social media platforms or communities identify and possibly mute offensive and toxic users. This research would help narrow down what we plan to do and how we should gather data and approach the problem. The paper also presented substantial data and graphs to support its findings, which is something all researchers should strive for. I would be interested to find out whether online communities like these could be structured so that moderators can manually find antisocial users. The characterization of different types of antisocial users led the researchers to conclude that human moderators may be the most effective way to remove antisocial posts. I agree with this to an extent, but perhaps in the future social media platforms could moderate and adjust themselves based on antisocial behavior, and even predict it before it happens.
Questions
- What ideas from this paper will I be able to apply to my term project?
- Which social media platforms and communities have been successful in identifying trolls and antisocial users?
- Is the use of human moderators feasible in communities like these?
- Are news websites like the ones studied more likely than other types of online communities to have antisocial users?
- Are left-leaning, right-leaning, or neutral news sites most likely to attract antisocial users?
- What percentage of users are antisocial, and how does that differ from one social media site to another?