9/12 Reflection #4

Summary

The article “Antisocial Behavior in Online Discussion Communities” describes a study that sought to characterize the behavior and evolution of antisocial web community users over time. The study focused on users from three websites: CNN, Breitbart, and IGN. Users under observation were split into three categories:

  • FBU (high): Future-Banned Users whose posts were deleted at a high rate before their eventual ban.
  • FBU (low): Future-Banned Users who were also eventually banned but had fewer posts deleted.
  • NBU: Never-Banned Users.

To aid their characterization, the authors observed how frequently these users posted, the readability of their posts, and the severity of the backlash they drew from the community. A point of focus was that users in the FBU category produced lower-quality content from the start of their membership compared to NBUs. FBU content quality and readability showed a noticeable downward trend over time, and these users tended to post more frequently in fewer discussion threads. This type of activity received greater backlash from the community and eventually led to their banning. The authors also derived from the data a method to potentially identify antisocial users early on: users who received a large amount of backlash, had many posts deleted, and had their posts deleted more quickly early in their membership were extremely likely candidates for a later ban.
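To make the early-warning method concrete, here is a minimal sketch, assuming a hypothetical `Post` record; it is not the authors' code, just one way the signals named above (deletion rate, speed of deletion, and community backlash) could be computed over a user's first posts.

```python
# Hypothetical sketch of the early-warning signals described above.
# The Post fields are illustrative stand-ins, not the paper's schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    deleted: bool
    seconds_until_deletion: Optional[float]  # None if never deleted
    downvotes: int

def early_warning_signals(first_posts: list[Post]) -> dict[str, float]:
    """Summarize a user's first posts into the ban-predictive signals."""
    n = len(first_posts)
    deleted = [p for p in first_posts if p.deleted]
    mean_deletion_time = (
        sum(p.seconds_until_deletion for p in deleted) / len(deleted)
        if deleted else float("inf")
    )
    return {
        # Many early deletions: a likely ban candidate.
        "deletion_rate": len(deleted) / n,
        # Posts deleted quickly after being made: an especially strong sign.
        "mean_seconds_until_deletion": mean_deletion_time,
        # Backlash proxy: average down-votes per post.
        "downvotes_per_post": sum(p.downvotes for p in first_posts) / n,
    }
```

A moderation tool built on such signals would presumably surface high-scoring accounts for human review rather than ban them outright.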

Reflection

Overall, this is a study that anyone from my generation would be able to relate to. Having observed similar trolling and flaming in my day-to-day online life, I found it interesting to see how these users can be characterized. The article is also helpful to me and my project partner, since our current project idea involves using automated moderation to identify and delete posts that veer from the discussion topic. While the article focuses mainly on negatively perceived posts and comments, it was intriguing to see how some trolling behavior was positively perceived by the community. I can imagine a right-wing user voicing an unpopular opinion on a left-leaning political site and then being chastised by a leftist user; it is not hard to see the community egging on such behavior. It makes me wonder how moderators should truly respond to this situation. If the behavior goes unpunished, a toxic community forms where attacks on unpopular opinions are allowed. If the behavior is punished, the website may lose the faith of its users for “supporting” the unpopular opinion rather than the opinion of the toxic user who is considered a true member of the community.

Questions

  • Breitbart is considered to be a far-right news source. What observations can be made about left-leaning users there who started out by stating an unpopular opinion and gradually became more aggressive in their behavior?
  • What observations can be made on those who witness antisocial behavior? Do they become more likely to engage in such behavior themselves? Do their responses to such behavior become more aggressive over time?
  • It was mentioned that users who were unjustly banned returned with much more antisocial behavior. The study claims that certain activity patterns can be used to identify antisocial users early on. Do you think taking such pre-emptive action could lead to unjust banning and thus the creation of new trolls and flamers?


Reading Reflection #3

This paper analyzes antisocial behavior by examining three online communities: CNN.com, Breitbart.com, and IGN.com. Among the questions the researchers are curious about are whether antisocial behavior is innate in a user or builds up over time, and how the community's reaction affects a potential FBU (Future-Banned User). A key characteristic of FBUs is that they focus on individual threads rather than trying to spread their message across an entire platform. The paper concludes by presenting a way to predict which users could become FBUs: keeping track of how much they post and how much gets deleted, as well as their content, which sometimes contains profanity or language that is derogatory toward other users.

I would have loved to have seen this study done on sites such as Reddit or Stack Overflow, since they are closer to pure-forum social networks. FBUs may not necessarily be trolls, though, as I've seen first-hand on the sites mentioned above. The community they interact with has the ability to tarnish their reputation; for example, some users on Stack Overflow have a complexity fixation and are not fond of 'simple' questions. Once I saw a post where a user was asking about string manipulation. He/she had shown their work too, so they clearly weren't out to just get the answer, yet many of the comments and replies were mean and quickly got upvotes. One comment was along the lines of “If you're really stuck doing this, you may want to change careers.” Further down the thread I read a newer comment from a user defending the OP, saying that at one point in their career everyone was an amateur or beginner. I viewed that user's history thinking it might be another account belonging to the OP, but it clearly wasn't, because the user had a much higher reputation. A half hour later this user's comment was deleted, probably because it got down-voted so much, or because, seeing the backlash he/she received, they proceeded to delete it.


What's a good way to tell a differing point of view in a particular community apart from an actual troll? (As seen in my reflection above.)

What other types of mechanisms, besides down-voting and banning, can social sites use to promote a more positive community?


Reading Reflection #4

Summary

The paper “Antisocial Behavior in Online Discussion Communities” focuses on antisocial behavior in online communities such as CNN, IGN, and Breitbart by analyzing users banned from those communities. These banned users negatively affect other users and the communities themselves through undesired behaviors such as trolling, flaming, bullying, and harassment. The paper describes how these users' behavior worsens over time, causing the communities to become less tolerant of them. The paper explains how it is possible to characterize, identify, and even predict antisocial users and their behaviors.

Reflection

I found this paper to be interesting and relatable. I have seen and dealt with antisocial behavior in almost every online community I've visited. This is especially true for communities that are anonymous or don't involve real-world identity; I think this makes antisocial behavior easier for some users, since they believe their behavior cannot be tied back to their real identity. Being able to predict and identify potential FBUs is a step in the right direction. It would allow online communities to focus more on growing their content than on worrying about antisocial behavior. It would also make communities safer, since users wouldn't have to deal with harsh posts or offline danger. Offline danger is a real problem, since users can be physically targeted by users with antisocial behaviors, with anything from written threats to SWATting. I am also interested in moderation systems, since these systems are themselves subject to antisocial behavior. There are many instances where they are abused, such as abusive moderators, one-sided voting, and false reporting.

Questions

  • In a system that uses down-votes to moderate bad posts, how often is the system abused?
  • Does banning users with antisocial behaviors actually help or worsen the situation?
  • Is it possible for a user’s post to be legitimate, but also be perceived as antisocial behavior because their view is different?
  • Does hiding a post have any impact over deleting a post?
  • What do FBUs have to gain?
  • Which type of moderating system is the most fair and efficient?
  • Which type of moderating system is most prone to abuse?


Reading Reflection 4

Summary

The article “Antisocial Behavior in Online Discussion Communities” studies antisocial users: their behavior in different settings and the feedback they receive. This helps people spot antisocial users at an early stage. The study tries to answer three questions: whether antisocial users are made or born, whether community reactions promote antisocial behavior, and whether antisocial users can be spotted at an early stage. The study was data-driven, and the data supports the points that antisocial users are a problem, that the community's response only makes it worse, and that, even so, there is a way to spot them early.

Reflection

This article ties in with the team project my group and I are interested in working on. We want to catch antisocial users and identify them with a label so other users will know to steer clear of them. We plan to scrub through the posts a user puts up, looking for vulgar language or degrading terms (a minimal sketch of this idea follows below). Instead of banning such users, we would like to give them a chance: they are still able to stay online, but the public will know of their reputation. This means they need to stop the antisocial behavior or they will keep being blocked by other users. This article showed me how much data can prove a point and help analyze users' behavior, which is similar to what my group and I must accomplish to understand where to head with our project.
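As a rough sanity check on that idea, here is a minimal sketch in Python. The word list, threshold, and label names are placeholders we would still have to tune, not a vetted lexicon.

```python
# Placeholder sketch of the post-scrubbing idea: flag users whose posts
# frequently contain vulgar or degrading terms, and label rather than ban.
import re

FLAGGED_TERMS = {"idiot", "moron", "trash"}  # placeholder examples only

def flagged_fraction(posts: list[str]) -> float:
    """Fraction of a user's posts containing at least one flagged term."""
    hits = 0
    for post in posts:
        words = set(re.findall(r"[a-z']+", post.lower()))
        if words & FLAGGED_TERMS:
            hits += 1
    return hits / len(posts) if posts else 0.0

def reputation_label(posts: list[str], threshold: float = 0.3) -> str:
    # Label, don't ban: the user stays online but carries a public marker.
    return "antisocial" if flagged_fraction(posts) >= threshold else "ok"
```

Plain keyword matching will obviously miss sarcasm and context, so this would only be a first pass before anything smarter.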

Questions

  • What should people look at to determine if a user is antisocial?
  • If one person is being antisocial, is there a chance that multiple users in that same conversation are also being antisocial, stirring up an argument?
    • Or can the antisocial user influence the rest of the community?
  • What difference in punishment should an antisocial user face versus an antisocial person in the real world?


Reading Reflection #4

Summary

In the article “Antisocial Behavior in Online Discussion Communities,” the authors characterize antisocial behavior in online discussion communities by analyzing users who were banned from CNN.com, Breitbart.com, and IGN.com. Banned users were found to use controversial language and to concentrate their posts in individual threads, which got more replies than the average user's. Additionally, banned users' behavior worsened over time as communities became less forgiving and tolerant, resulting in an increasing rate of their posts being deleted. Using the data collected on banned users' behavior, the authors were able to identify antisocial users based on their post history and certain habits. The data collected and analyzed in this article could assist in better understanding antisocial behavior and help maintain better, more positive online communities.

Reflection

I found this article very interesting, as I never thought there would be a correlation between trolls and antisocial behavior. I was especially interested in the authors' comment about the possibility that rejection from a community might feed a user's negative antics. Users who purposely try to get attention by posting controversial statements would feel encouraged by the lack of response to try harder to get the reaction they are looking for. It almost seems like they enjoy the attention and the fact that they caused other users to get riled up. Thus, I am curious how antisocial users could be given a chance to redeem themselves, as suggested in the article. It is possible that banning might encourage users to try again and make a new account to repeat the same problematic behavior, as they might see it as a challenge to avoid getting caught for as long as possible.

Questions

  • If ignoring a troll encourages more posting, then what is the best way to react to one that would allow them an opportunity to redeem themselves?
  • What do trolls gain from posting controversial posts?
  • Does banning problematic users actually make a difference in regard to the online community’s environment?


9/12 Reflection

Summary

In “Antisocial Behavior in Online Discussion Communities,” the authors discuss the multitude of ways they tried to identify antisocial user accounts such that the likelihood of a future ban could be predicted. The studies focused on three sites with fairly large user bases (IGN, Breitbart, and CNN) and divided users into two groups: never-banned users and future-banned users. They then divided future-banned users into two more distinct categories: those who experienced high rates of post removals and those who had fewer posts removed over the course of their online life leading up to the ban. Their studies also indicated that everyone's content quality slips as their life on a site goes on, but future-banned users had generally lower content quality to begin with.

Reflections

I found this article very interesting, but am not quite sure whether it has any implications for the prevention of trolling online. The authors didn't go into much detail on how they determine which comments are instigator comments, and I imagine it would be difficult to differentiate between responses, as more often than not the responder is more aggressive than the instigator. Some users create accounts with the express purpose of being a troll account, so the finding that deviant users put more effort into singular threads or conversations makes a lot of sense, since they are trying to instigate and inflame others. However, how would they differentiate between actual trolls and users who are simply voicing an unpopular opinion? I think this work is interesting from an analytical standpoint, but in practice it could create a whitewashed environment where unpopular or dissenting opinions are quashed before a person even has a chance to defend themselves.

Questions


How, if at all, would they differentiate between users with unpopular opinions and genuine trolls?

Some antisocial users' behavior degenerated due to responses from the community. What could be done to stop the degeneration or backlash?

Censorship is a problem, but is it okay to silence others whose statements you perceive as inflammatory?


Reading Reflection 9/12

Summary:

The article “Antisocial Behavior in Online Discussion Communities” analyzes undesirable user participation in online communities and how to detect such users early on, before they are banned. The goal is to minimize troll-like behavior, which results in more positive online communities. Methods for trying to prevent this already exist, such as reporting posts, down-voting, and blocking, yet there is still a large amount of trolling in online communities. The study uses three online discussion-based communities: CNN.com, Breitbart.com, and IGN.com. Users were categorized as Future-Banned Users (FBUs) or Never-Banned Users (NBUs). In going through the users' behaviors, the article evaluates three main questions regarding how and when users start deviant behavior online. NBUs and FBUs can be analyzed to find out whether or not someone will be banned, using features such as post content, user activity, community response, and the actions of community moderators. The Future-Banned Users are further split by the rate at which their posts are deleted, and depending on how fast a post is deleted, it can be predicted whether or not the account should be taken down.

Reflection:

The actions of the Future-Banned Users imply that they will produce deviant behavior, even before they are labeled “FBUs.” From using social media sites such as Twitter, most of the deviant content I come across does not make much sense; the irregular tweets are usually a mix of inappropriate words, either replying to a previous tweet or simply stated on their own. Most Future-Banned Users write much differently than the accounts I follow, and they are easy to spot. As discussed in the article, most deviant content is concentrated within individual threads. What if the threads are private? Is there an efficient way to monitor that without intruding on someone's privacy? I do think this is a major issue within online social communities, but it may be harder to solve given the privacy issues.

Questions:

Is there a way to prevent deviant content that is private?

Why do people feel the need to post deviant tweets?


Reading Response 9/12

Summary

‘Antisocial Behavior in Online Discussion Communities’ should perhaps be titled ‘Negatively Social Behavior in Online Discussion Communities’ (although this may just be me not understanding the terminology perfectly). It focuses primarily on attempting to figure out why people do things meant to purposely hurt or instigate others in online communities. The study also sets out to differentiate itself as a quantitative study, where most studies on the subject have been qualitative. The authors used CNN, IGN, and Breitbart as their sources, mostly because those sites had large enough obtainable datasets. They then go over various predictors of antisocial behavior, such as how users write, how their writing changes over time, and whether or not their writing changes significantly should they be censored. They spend quite some time reiterating ideas they have honestly already gone over, before finally ending with a discussion of how to identify future-banned users.

Reflection

It's interesting that future-banned users tend to post completely differently than others, that is, in small, antagonistic, focused quantities. It is also a bit odd that people who post this type of material worsen over time; this may suggest that accepting them into the community might help curb the antisocial behavior. CNN bans more users than Breitbart but deletes significantly fewer posts, especially when compared to the number of posts reported as inflammatory. Some of the research isn't particularly surprising from a logical standpoint: it turns out people who are there to be antagonistic don't use much non-definitive language, are much more likely to curse, and are much less likely to talk in a positive fashion. Angry people also don't write as well. Likewise, should someone have a post ‘unfairly’ censored, they are more likely to write poorly in the future, so there may be some link between post quality and general outrage at the site. A lot of the information isn't very surprising overall; much of it makes perfect sense when you think about it (mind you, most information does once it is presented to you). Some of the research is important, though. The fact that users who will be banned in the future use angrier, more hostile language makes basic sense, since they are often trying to inflame others. That said, the fact that future-banned users' writing deteriorates at a faster rate over time than normal users' is interesting.


Questions

-Do you think that acceptance or rejection plays, or would play, a role in antagonistic users' postings?

-What line do you think must be crossed for a user to have their posts deleted? Is deleting or censoring a user ever okay?

-Why do you think a large portion of “Antisocial Users” exists? To purposely inflame others, or because they actually have major differing opinions?


Reading Response 9/12

Matthew Bernas

Summary


The article “Antisocial Behavior in Online Discussion Communities” discusses the behaviors and characteristics of a specific category of online users who display antisocial behaviors. These behaviors include making inflammatory posts and writing content intended to provoke other users into meaningless, off-topic debate. The writers conduct experiments on several online communities to gather and model trends exhibited by “FBU” users, defined as users who show antisocial behaviors and are ultimately banned from the community. The writers studied the posting habits of FBUs and NBUs and found that FBUs are typically more active in a select number of threads than NBUs, and that their posts are less readable and more likely to contain swearing and off-topic content. The article also discusses a prediction model the authors created that accurately predicted FBU activity by observing typical FBU behavior. They found interesting trends in their model relating to the content quality of both FBUs and NBUs: content quality decreases over time for both types of users. They were also able to identify two types of FBUs that differ in how much of their content is deleted. The prediction model proves to be effective and is able to predict users becoming FBUs from an amazingly small sample of just 10 posts.

Reflection

The work discussed in this article is very interesting in that the authors were able to create such a powerful prediction model, one that requires only 10 posts to identify FBUs early in their life cycle (a rough sketch of what such a model could look like follows below). This accomplishment came from their precise definitions of antisocial behavior and their study of how these behaviors change over time. Since they were able to predict whether users would later develop into FBUs, it shows that we can create categories of users describing their roles in online communities. The article showed that we can identify any type of user in a community by observing the characteristics that define that type of user and then observing trends over time.
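For illustration only, here is a rough sketch of what such a ten-post prediction pipeline could look like, using scikit-learn. The four features loosely echo the paper's feature categories (content quality, activity, community response, moderator action), but the synthetic data and the choice of a random-forest classifier are my assumptions, not the authors' actual setup.

```python
# Toy sketch of a "first 10 posts" FBU predictor; the data and model
# choice here are assumptions, not the authors' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row summarizes one user's first 10 posts:
# [readability score, posts per thread, mean replies, fraction deleted]
X = rng.random((1000, 4))
# Synthetic labels: a higher deleted fraction makes a future ban likelier.
y = (X[:, 3] + 0.3 * rng.standard_normal(1000) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

On real data the interesting work is in the feature extraction, not the classifier; the model itself is a few lines either way.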

Questions

  • What are the possible implications of the results from the experiment to predict future FBUs?
  • What can we do with the definitions that we can form from observing certain types of users?
  • Can we use a similar process to detect posts that are created by fake accounts?


Reading Reflection 9/12

Summary:

The paper “Antisocial Behavior in Online Discussion Communities” analyzes the behavior of banned users on three different online platforms from the moment they joined to the moment they got banned. The three platforms used were the general news site CNN, the political news site Breitbart, and the computer gaming news site IGN. Using this information, the paper found that banned users, before they were banned, tended to focus on a small number of threads, were more likely to post irrelevant information than other users, and were more successful at getting responses than other users. More interesting was what they found about how a banned user's behavior would change over time: as time went on, a future-banned user would write even worse, become less and less tolerated by the other members of the community, and grow even more antisocial as community feedback became harsher.


Reflection:

The idea that antisocial behavior gets even worse as a community's feedback on that behavior gets harsher is interesting. I suppose for many people, getting attacked by a community wouldn't make them want to reevaluate their actions and be more social. Rather, it would probably make them feel like they are being wronged for no good reason and come to resent the community. At that point, if it gets bad enough, they will probably either leave the community or decide to bash and/or troll it until they get banned.


Questions:

  • What could be done for antisocial users as they are newly signed up to prevent them from potentially getting worse and eventually banned?
  • Should anything be done?
  • Are anonymous and ephemeral sites like the ones we discussed last week a better fit for these antisocial users?
