Reading Response 9/12

Matthew Bernas

Summary

Antisocial behavior

The article “Antisocial Behavior in Online Discussion Communities” discusses the behaviors and characteristics of a specific category of online users who display antisocial behavior. These behaviors include making inflammatory posts and writing content intended to provoke other users into meaningless, off-topic debate. The writers conduct experiments on several online communities to gather and model trends exhibited by Future-Banned Users (FBUs), defined as users who show antisocial behavior and are ultimately banned from the community. The writers studied the posting habits of FBUs and Never-Banned Users (NBUs). They found that FBUs typically concentrate their activity in a smaller number of threads than NBUs, and that their posts are less readable and more likely to contain swearing and off-topic content. The article then discusses a prediction model the authors created that accurately identifies FBUs by observing their typical behavior. The model revealed interesting trends in the content quality of both FBUs and NBUs: content quality decreases over time for both types of users. The authors were also able to identify two types of FBUs, distinguished by how much of their content is deleted. The prediction model proves to be effective, able to predict that a user will become an FBU from as few as 10 posts.

Reflection

The work discussed in this article is very interesting in that the authors were able to create such a powerful prediction model, one that requires only 10 posts to identify FBUs early in their life cycle. This accomplishment came from their precise definitions of antisocial behavior and their study of how these behaviors change over time. Since they were able to predict whether users would later develop into FBUs, it shows that we can create categories of users describing their roles in online communities. The article showed that we can identify any type of user in a community by first observing the characteristics that define that type and then tracking trends over time.

Questions

  • What are the possible implications of the results from the experiment to predict future FBUs?
  • What can we do with the definitions that we can form from observing certain types of users?
  • Can we use a similar process to detect posts that are created by fake accounts?


Reading Reflection 9/12

Summary:

The paper “Antisocial Behavior in Online Discussion Communities” analyzes the behavior of banned users on three different online platforms, from the moment they joined to the moment they were banned. The three platforms used for this research were the general news site CNN, the political news site Breitbart, and the computer gaming news site IGN. Using this data, the paper found that banned users, before they were banned, tended to focus on a small number of threads, were more likely to post irrelevant information than other users, and were more successful at getting responses than other users. More interesting was what the authors found about how a banned user’s behavior changed over time: a future-banned user would write worse and worse, become less and less tolerated by the other members of the community, and exhibit even more antisocial behavior as community feedback became harsher.


Reflection:

The idea that antisocial behavior gets even worse as a community’s feedback to that behavior gets harsher is interesting. I suppose for many people, getting attacked by a community wouldn’t make them want to reevaluate their actions and be more social. Rather, it would probably make them feel they are being wronged for no good reason and come to resent the community. At that point, if it gets bad enough, they probably either leave the community or decide to bash and/or troll it until they get banned.


Questions:

  • What could be done for antisocial users as they are newly signed up to prevent them from potentially getting worse and eventually banned?
  • Should anything be done?
  • Are anonymous and ephemeral sites like we discussed last week a better fit for these antisocial users?


Reading Reflection: Anti-Social Behavior

Summary

“Antisocial Behavior in Online Discussion Communities”

This article discusses the background, methods, and results of analyzing the posting habits of trolls (anti-social online community members). The authors categorized users as FBUs (Future-Banned Users) and NBUs (Never-Banned Users) and compared and contrasted their behaviors. They further divided the FBUs according to low and high post deletion rates. After determining some of the patterns of trolls, they attempted to predict whether users would be blocked in the future based on their first few posts, and to design a means to traffic trolls away from or out of online communities. All or most of the results of their analysis were close to what they expected, demonstrating the plausibility of designing against anti-social users.


Reflection

This is an important step in the development of the online social network. As entertaining and commonplace as trolling is for some bored, sad members of society, it has no place in peace and progress. If it has any place, it is to show that it ought to be policed. The internet is a bottomless pit of distractions and an infinite pool of people to take advantage of. Troll trafficking might lead to a more wholesome and effectively developmental online community.


Questions

Is there really a way to completely safeguard against trolls? Won’t the determined, bored middle school student find a way?

Does the increase of anti-social behavior when transposing the interaction to online platforms reveal something inherent about society or humanity?


Reading Response 9/12

Summary

In the paper “Antisocial Behavior in Online Discussion Communities,” the authors focus on determining what causes a user to exhibit antisocial behavior, how an online community extinguishes or propagates this behavior, and whether it’s possible to accurately identify it. This is done by first splitting users into two groups, Future-Banned Users (FBUs) and Never-Banned Users (NBUs). These two groups help divide the users who were liked or tolerated by their online community from those who were disruptive or cruel. Then we can identify differences in their behaviors that may relate to antisocial behavior. One thing we see is that FBUs post more frequently than NBUs, and their posts are less well accepted, often including negative sentiment or some degree of profanity. Another thing that appears is that both FBUs and NBUs write lower-quality posts later in their posting lifetime, with FBUs showing a greater drop in quality than NBUs. Differences such as these provide a somewhat accurate way to identify users who are likely to exhibit antisocial behavior within as few as 5–10 posts. Along with this, how a user is received by his or her community, as well as the number of posts in a thread, are viable indicators of antisocial behavior.


Reflection

One of the things that surprised me in this paper was the general trends people with antisocial behavior follow in online communities. I can think back to reading some “discussions” in YouTube comments and seeing how active the original poster of a comment was. He would post a very controversial comment and wait for other users to reply with resentment. There was a joy for him in seeing either the short tempers of other users or a dispute taking place. Whatever the case, whenever another user would post, he would quickly reply. The behavior this user exhibited closely matched the behavior patterns mentioned in this paper. It was also surprising to me that the quality of posts from both FBUs and NBUs decreased as time progressed. It is possible that once users felt they were part of a certain community or thread, they could be more casual with their posts and less engaging. Another possibility is that users begin to lose excitement for a group as time progresses. This is only speculation, however, and some research could be done on this topic. One idea that I did not see mentioned in this paper is the user who shows antisocial behavior by trolling other users but is applauded for it. This user may cause embarrassment for a few users but provides humor to a large crowd. Depending on the website, this user may be banned or may be encouraged.


Questions

What causes some users to find joy in antisocial behavior and others to despise it?

Why do online discussion community members produce lower quality posts as time progresses?

Are some users who exhibit antisocial behavior accepted to be used as a source of comedy or to start conversations?


Reading Reflection 9/12

Summary

In “Antisocial Behavior in Online Discussion Communities,” the authors analyze and characterize the antisocial behavior of users over time in three online discussion communities: CNN, Breitbart, and IGN. Users of these communities are divided into two categories – those eventually banned (Future-Banned Users, FBUs) and those never banned (Never-Banned Users, NBUs). The characteristics that differentiate FBUs from NBUs include using more profanity and fewer positive words, concentrating their writing in individual threads, writing harder-to-understand posts, and posting content less similar to that of other users. By comparing posts written by FBUs and NBUs, it was found that FBUs are more likely to get off-topic and to write posts that are less readable. Similarly, it was made evident that FBUs are effective at engaging other users in irrelevant conversations. Another question the paper addresses is “how do FBUs and their effect on the community change over time?” Through a study of FBUs’ posts over 17 months, the authors determined that the text quality of posts by FBUs decreases, while that of NBUs does not. FBUs also became less tolerated by the community over time, and were banned from posting in threads. In order to identify antisocial users before they become FBUs, Cheng, Danescu-Niculescu-Mizil, and Leskovec establish four features – post readability, user activity, interaction with the community, and moderator involvement.
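Of those four features, post readability is the easiest to make concrete. As a rough illustration (my own sketch, not the authors’ code), here is how a readability score could be computed using the Automated Readability Index; the example texts are made up:

```python
import re

def automated_readability_index(text):
    """Approximate the Automated Readability Index (ARI), one common
    readability measure; higher scores mean harder-to-read text.
    ARI = 4.71*(chars/words) + 0.5*(words/sentences) - 21.43."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0
    chars = sum(len(w) for w in words)
    return (4.71 * chars / len(words)
            + 0.5 * len(words) / len(sentences)
            - 21.43)

simple = automated_readability_index("The cat sat. The dog ran.")
dense = automated_readability_index(
    "Notwithstanding preliminary considerations, epistemological "
    "ramifications predominate throughout contemporary discourse.")
```

A score for a user’s posts that is unusual relative to the rest of the thread would feed into the “post readability” feature family.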

Reflection

I found the passages about the effects of FBUs on the community interesting, because not only do FBUs have a negative effect on the community, communities also have a negative effect on FBUs. As the authors state in the paper, “communities play a part in incubating antisocial behavior,” which is definitely true. For instance, communities have a part in creating FBUs by excessively censoring them and reacting negatively to their posts (though in many cases, it may be for good reason). Furthermore, communities also foster antisocial behavior by reacting to FBUs’ posts, which can then lead to further provocations by FBUs and arguments between the parties. From firsthand experience, I think we all know it’s difficult not to respond to a comment that offends us or promotes an opposing viewpoint. I suppose that’s a good reason why discussion communities include downvoting – users don’t need to verbally express their disapproval of a post.

Questions

  • What makes FBUs want to continue writing harmful/fruitless posts in discussions?
  • Since there are methods to detect FBUs, are there ways to help potential FBUs remain NBUs?
  • How fast do moderators delete posts?


Reading Reflection 9/11

Summary

In “Antisocial Behavior in Online Discussion Communities,” Cheng and his coauthors take a look at what defines anti-social behavior and whether it can be predicted. Using three different websites – CNN, IGN, and Breitbart – they study the difference between Future-Banned Users (FBUs) and Never-Banned Users (NBUs). They find noticeable differences between the two: for instance, FBUs tend to use more negative language, post more often, write poorer-quality posts than NBUs, and have their posts deleted more often the longer they are online. They also looked at different types of anti-social behavior; some users, called Hi-FBUs, had their posts deleted more often than Low-FBUs due to the language in their posts. The authors outlined four features for identifying anti-social behavior:

  • Post readability
  • How often/where they post
  • Community interactions
  • If moderators delete the post

Using those features, they were able to predict whether a user would be banned based on the user’s first five posts, with an AUC of 0.8.
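To make that prediction step concrete, here is a minimal sketch of how a feature-based classifier of this kind could score a new user. The weights and exact feature names are hypothetical placeholders of my own, not the paper’s fitted model:

```python
import math

# Hypothetical weights -- illustrative only, not the paper's fitted
# coefficients. Features are assumed to be standardized scores.
WEIGHTS = {
    "readability": -0.8,       # less readable posts -> higher ban risk
    "posts_per_thread": 0.6,   # concentrated posting raises risk
    "fraction_replies": 0.4,   # drawing many responses raises risk
    "fraction_deleted": 1.5,   # moderator deletions: strongest signal
}
BIAS = -2.0

def ban_probability(features):
    """Logistic score over the four feature families
    (post, activity, community, and moderator features)."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

likely_fbu = ban_probability({"readability": -1.0, "posts_per_thread": 2.0,
                              "fraction_replies": 1.0, "fraction_deleted": 1.2})
likely_nbu = ban_probability({"readability": 1.0, "posts_per_thread": 0.2,
                              "fraction_replies": 0.3, "fraction_deleted": 0.0})
```

In the real system the weights would be learned from labeled FBU/NBU histories rather than set by hand.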


Reflection

One thing I found interesting was the difference between the websites themselves: CNN was much quicker to ban a user than IGN. I suppose this could be attributed to the credibility of the website, as CNN is slightly more legitimate or serious than IGN, or perhaps to the presence of moderators. I also thought it unsurprising that FBUs wrote poorer-quality posts than NBUs, since their purpose is probably to incite (and poorly written posts can do that on their own), so they wouldn’t take the time to write more eloquently.


Questions

Is there an alternative to moderators or community feedback that would help prevent anti-social behavior?

Would more crowdsourcing change or affect the outcome of some of their results?

How could we encourage anti-social users to be more social?


Reading Reflection 9/12 Mark Episcopo

Summary

In the article, Antisocial Behavior in Online Discussion Communities, the authors begin by discussing antisocial behavior and antisocial users. This includes information on trolls, users who bait others into arguments and generally engage in “negatively marked online behavior,” as well as the classification of antisocial users into a category: the Future-Banned User (FBU), a user who has been banned but whose previous posts are still visible. These FBUs serve as the basis of the study because it was possible to analyze their behavior and identify why they were banned. The authors then explain that they chose to study the comment sections of CNN, IGN, and Breitbart. The study brought forth a few findings. One is that FBUs’ posts are less similar to other posts on the same topic from users who were never banned, indicating that they like to steer the discussion off topic. FBUs are also less likely to use positive words in their posts. It was also found that these users get more replies than typical users, implying they are successful at garnering attention. They also tend to post heavily in the narrow selection of threads they participate in, because they like to keep the argument going. There was also evidence to suggest that FBU post quality deteriorates over time, which could tie into the community becoming more familiar with the troll and getting their posts deleted more often.

Analysis

I do think that this study was important to perform, but I didn’t find the results too surprising. I have seen from personal experience how trolls operate in the comment sections of most websites, so this study served to confirm those perceptions. It seems that most trolls or users who get banned like to stir up arguments, but I did like how the article mentioned that some people downvote and report others just because the other user has a differing opinion. People who are not offensive but believe strongly in a different opinion should not be banned or censored. I think that is why having a human moderator check the quality of posts is important, to avoid people getting wrongfully banned. I also thought it was interesting how the authors mentioned giving trolls a way to redeem themselves; I’m not too sure how well that would work.

Questions

  • Would providing a way to promote good behavior and allow trolls to redeem themselves be successful?
  • Would there be a way to automate the deletion of offensive posts effectively?
  • Do trolls have the right to participate in their behavior, as it is a public board? Is there a line to be crossed?


Reading Reflection 9/12

Summary

In the paper “Antisocial Behavior in Online Discussion Communities,” the focus is on the behavior patterns of users who eventually get banned from communities for posting inappropriate comments, raising irrelevant issues, and otherwise failing to comply with the norms of online community conversation. The three main questions the authors asked were:

(1) When does a user become antisocial – later in community life or from the start?

(2) What role does the community play in encouraging or discouraging antisocial behavior?

(3) Is it possible to identify antisocial users before they are banned?

These questions were imperative for the analysis because they are the fundamental concepts for understanding who a banned user is and how to prevent/identify trolls before they terrorize a community. The authors focused on three major websites, CNN.com, Breitbart.com, and IGN.com, to gather enough data to make accurate claims about banned users. Additionally, these sites give users the ability to report other users and to comment on and downvote posts. The authors created two groups of users within the community: Future-Banned Users (FBUs) and Never-Banned Users (NBUs). Although language patterns were broadly similar between FBUs and NBUs, the language FBUs used was more controversial and included words that provoke other users. The posting habits of the two groups also differed significantly: FBUs write more posts, concentrated in a few threads, instead of spread out as an NBU’s posts would be. There was also a trend of FBUs’ behavior worsening the more time they spent in a community. Furthermore, the authors found that post deletions and post reports were strong indicators of FBUs, while the number of comments and downvotes did not correlate as strongly, because those serve other purposes. They determined that within a user’s first 10 posts, they can fairly confidently determine what type of user this is.
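Two of the signals this summary mentions, concentration in a few threads and moderator deletions, are easy to compute from a post log. Here is a small sketch with made-up data (my own illustration, not the paper’s pipeline):

```python
from collections import Counter

# Toy post log of (user, thread_id, was_deleted) tuples. Hypothetical
# data for illustration, not drawn from the paper's dataset.
posts = [
    ("troll",   "t1", True),  ("troll",   "t1", False),
    ("troll",   "t1", True),  ("troll",   "t2", False),
    ("regular", "t1", False), ("regular", "t3", False),
    ("regular", "t4", False), ("regular", "t5", False),
]

def user_signals(log, user):
    """Return (fraction of the user's posts deleted by moderators,
    share of the user's posts in their single busiest thread)."""
    mine = [(t, d) for (u, t, d) in log if u == user]
    deleted_frac = sum(d for _, d in mine) / len(mine)
    busiest = Counter(t for t, _ in mine).most_common(1)[0][1]
    return deleted_frac, busiest / len(mine)

troll_del, troll_conc = user_signals(posts, "troll")
reg_del, reg_conc = user_signals(posts, "regular")
```

In this toy log the “troll” has half their posts deleted and 75% of their activity in one thread, while the “regular” user spreads out and loses nothing, matching the FBU/NBU contrast described above.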


Personal Reflection

Most of what this article finds does not surprise me; it simply confirms suspicions I had about members of online communities. What does surprise me is the trend that isolation within the community made some individuals’ behavior worse. I find this interesting because it makes sense, but I am also curious how many users get off on the wrong foot, feel isolated, and then lash out. Additionally, I am curious whether banned users are already isolated from society or whether they are functioning, just not online. Whether they are functioning or not, is there a way we can help them to function online?


Questions

Are all FBUs acting with malicious intent, or are some simply lacking the social skills that NBUs have or have learned to have? They clearly lack the social skills to function in an online community, but does that mean that someone with a mental disorder that affects their social skills can’t use a community because they don’t understand the norms? That isolates them from participating in online communities, and probably other social communities too. Which raises the question: is social media only connecting the already-connected within society, those who have learned the norms and know how to participate in a manner receptive to others? Could that possibly perpetuate more isolation?


For example, consider an individual with autism who functions within society but whose autism is still noticeable. If all of his friends are participating in online communities, would these studies see him as an FBU? Would he get banned for saying inappropriate things? Would that further isolate him from the social community he may have?


Reading Reflection 9/12

Summary:

In the article “Antisocial Behavior in Online Discussion Communities,” the authors discuss behavior on online communities where discussion between users is a fundamental or very large part of the site, specifically how behavior that can be categorized as antisocial or disruptive is handled and processed by the community, and what happens to the user who wrote the disruptive post. The authors look at the sites CNN.com, Breitbart.com, and IGN.com, all of which have forums and comment sections for the content they display. The study of users who went on to be permanently banned found that these users were given warnings or had their posts deleted by a moderator before they were banned, but in the process they tended to write worse and worse posts, and the community they were a part of became less and less tolerant of their rants. This was a large finding of the study: as time passed and the abusers were told to stop or had posts removed, they would continue to come back and get worse, creating a bigger and bigger problem until they were eventually banned. There were some cases of users being banned for a set amount of time and then allowed to return, but these cases were not studied in depth, which raises the question of how effective a timed ban is at deterring bad behavior.

Reflection:

I found this article to be a good verification of the behavior and trends that I have experienced myself in online communities. I have seen firsthand that when users abuse a community message system and are then told to stop, or get angry or disgusted reactions from the community, it just encourages the original abuser to not only continue the behavior but get worse. If a regular user has an incident where they went too far and are told to stop, it’s usually a very quick apology and then things go back to normal, but not for the trolls. They continue the behavior and only get worse, because those are the reactions they thrive on and seek from the community. The issue of trying to get rid of these trolls is a tricky one, because you have to give each user a fair chance at the beginning of that user’s account life. It wouldn’t be fair to ban someone outright over one thing, but that second or third chance is also what feeds the really bad posts and situations. I would also be very interested in a study of the cases where a timed ban actually helped the troll stop the bad behavior, to find the best way of dealing with abusers of the system: either lock them out or correct the behavior.

Questions:

  • Is a permanent ban or a timed ban more effective in dealing with abusers?
  • What would be the best way to discern the true trolls from a user that just got out of control for a time?
  • What draws trolls to come post on the sites that they do? Just for fun?
  • Could an effective bot be created to accurately moderate forums?


Reading Reflection 4

Summary

The research paper “Antisocial Behavior in Online Discussion Communities” discusses and analyzes user participation in posts, comments, votes, likes, etc. in online communities. The researchers chose to focus mostly on people, or “trolls,” who were banned from such communities. User-generated content is essential to the growth of any online community, and trolls most likely hinder that growth. The purpose of this research was to answer the questions: “are there users that only become antisocial later in their community life, or is deviant behavior innate?”, “does a community’s reaction to users’ antisocial behavior help them improve, or does it instead cause them to become more antisocial?”, and “can antisocial users be effectively identified early on?” They investigated CNN.com, Breitbart.com, and IGN.com by reading comments and threads and by analyzing a list of banned users from each site. The researchers identified post features, activity features, and community features as tools that can be used to identify antisocial users. They also found that it is easier to identify antisocial users when they post more than the average user. A possibility for future research is to find a deeper understanding of such behavior and to better characterize the lives of antisocial users over time.

Reflection

This paper discussed a lot of problems, strategies, and possible solutions that I will be able to apply to my term project. Right now, we are thinking about focusing on helping social media platforms or communities identify and possibly mute offensive and toxic users. This research would definitely help narrow down what we are planning to do and how we should go about gathering data and solving the problem. This paper also had a lot of research, data, and graphs to support its findings, which is definitely something that researchers should strive for. I would be interested to find out whether online communities like these could be moderated so that moderators can manually find antisocial users. The different types of antisocial users have helped researchers conclude that moderators may be the most effective way to delete antisocial posts, which I agree with to an extent, but maybe in the future we can change social media platforms so that they can moderate and modify themselves based on antisocial behavior, and even predict antisocial behavior before it happens.

Questions

  • What from this paper will I be able to apply to my term project?
  • Which social media platforms and communities have been successful in identifying trolls and antisocial users?
  • Is the use of moderators possible in communities like this?
  • Are news websites like the ones studied more likely to have antisocial users?
  • Are left leaning, right leaning, or neutral news sites most likely to have more antisocial users?
  • What percent of users are antisocial and how is that different from one social media site to another?
