Reading Reflection: Anti-Social Behavior

Summary

“Antisocial Behavior in Online Discussion Communities”

This article discusses the background, methods, and results of analyzing the posting habits of trolls (antisocial online community members). The authors categorized users as FBUs (Future-Banned Users) and NBUs (Never-Banned Users) and compared and contrasted their behaviors. They further divided the FBUs according to low and high post deletion rates. After determining some of the patterns of trolls, they attempted to predict whether users would be banned in the future based on their first few posts, and to design a means of steering trolls away from or out of online communities. Most of the results of their analysis were close to what they expected, demonstrating the plausibility of designing against antisocial users.
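To make the grouping concrete, here is a minimal Python sketch (my own, not code from the paper) of how users might be bucketed into NBUs and low- versus high-deletion FBUs. The field names and the 0.3 deletion-rate cutoff are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        banned: bool         # was the user eventually banned?
        posts: int           # total posts written
        deleted_posts: int   # posts removed by moderators

    def categorize(user: User, hi_threshold: float = 0.3) -> str:
        # Never-banned users form their own group.
        if not user.banned:
            return "NBU"
        # Split banned users by how often moderators deleted their posts.
        # The 0.3 cutoff is a made-up illustration, not the paper's value.
        deletion_rate = user.deleted_posts / max(user.posts, 1)
        return "Hi-FBU" if deletion_rate >= hi_threshold else "Lo-FBU"

    print(categorize(User("troll42", banned=True, posts=50, deleted_posts=30)))   # Hi-FBU
    print(categorize(User("reader7", banned=False, posts=200, deleted_posts=2)))  # NBU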


Reflection

This is an important step in the development of online social networks. As entertaining and commonplace as trolling is for some bored, sad members of society, it has no place in peace and progress. If it has any place, it is to show that it ought to be policed. The internet is a bottomless pit of distractions and an infinite pool of people to take advantage of. Steering trolls away might lead to a more wholesome and productive online community.


Questions

Is there really a way to completely safeguard against trolls? Won’t the determined, bored middle school student find a way?

Does the increase in antisocial behavior when interaction moves to online platforms reveal something inherent about society or humanity?


Reading Response 9/12

Summary

In the paper “Antisocial Behavior in Online Discussion Communities”, the authors focus on determining what causes a user to exhibit antisocial behavior, how an online community extinguishes or propagates this behavior, and whether it is possible to accurately identify this antisocial behavior. This is done by first splitting users into two groups, Future-Banned Users (FBUs) and Never-Banned Users (NBUs). These two groups separate users who were liked or tolerated by their online community from those who were disruptive or cruel, so that differences in their behaviors that may relate to antisocial behavior can be identified. One thing we see is that FBUs post more frequently than NBUs, and their posts are less accepted by the community, likely because they include negative sentiment or some degree of profanity. Another thing that appears is that both FBUs and NBUs write lower-quality posts later in their posting lifetime, with FBUs showing a greater drop in quality than NBUs. Differences such as these provide a somewhat accurate way to identify users who are likely to exhibit antisocial behavior within as little as 5-10 posts. Along with this, how a user is received by his or her community, as well as the number of posts in a thread, are viable indicators of antisocial behavior.
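As a rough illustration of what judging a user by their first few posts could look like, the sketch below computes two of the signals named above (posting frequency and how well posts are received) from a user's first n posts. The post layout and field names are my assumptions, not the paper's data format.

    from datetime import datetime, timedelta

    def early_signals(posts: list[dict], n: int = 10) -> dict:
        # Look only at the user's first n posts, per the 5-10 post
        # early-detection idea above. Each post dict is assumed to be
        # {"time": datetime, "upvotes": int, "downvotes": int}.
        first = posts[:n]
        span_days = max((first[-1]["time"] - first[0]["time"]).days, 1)
        mean_score = sum(p["upvotes"] - p["downvotes"] for p in first) / len(first)
        return {
            "posts_per_day": len(first) / span_days,  # FBUs tend to post more often
            "mean_vote_score": mean_score,            # FBUs' posts are less accepted
        }

    start = datetime(2015, 1, 1)
    demo = [{"time": start + timedelta(hours=6 * i), "upvotes": 1, "downvotes": 4}
            for i in range(10)]
    print(early_signals(demo))  # high posting rate, negative mean vote score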


Reflection

One of the things that was surprising to me in this paper was the general trends people with antisocial behavior follow in online communities. I can think back to reading some “discussions” in YouTube comments and seeing how active the original poster of a comment was. He would post a very controversial comment and wait for other users to reply to it with resentment. There was a joy for him in seeing either the short temper of other users or a dispute taking place. Whatever the case, whenever another user would post, he would quickly reply. The behavior this user exhibited closely matched the behavior patterns mentioned in this paper. It was also surprising to me that the quality of posts from both FBUs and NBUs decreased as time progressed. It is possible that once users felt like they were part of a certain community or thread, they became more casual with their posts and less engaging. Another possibility is that users begin to lose excitement for a group as time progresses. This is only speculation, however, and some research could be done on this topic. One idea that I did not see mentioned in this paper is the user who shows antisocial behavior by trolling other users but is applauded for it. This user may cause embarrassment for a few users but provides humor to a large crowd. Depending on the website, this user may be banned or may be egged on.


Questions

What causes some users to find joy in antisocial behavior and others to despise it?

Why do online discussion community members produce lower quality posts as time progresses?

Are some users who exhibit antisocial behavior tolerated as a source of comedy or as conversation starters?


Reading Reflection 9/12

Summary

In “Antisocial Behavior in Online Discussion Communities,” the authors analyze and characterize the antisocial behavior of users over time in three online discussion communities: CNN, Breitbart, and IGN. Users of these communities are divided into two categories – those eventually banned (Future-Banned Users, FBUs) and those never banned (Never-Banned Users, NBUs). The characteristics that differentiate FBUs from NBUs include using more profanity and fewer positive words, concentrating their writing in a few individual threads, writing harder-to-understand posts, and writing posts less similar to those of other users. By comparing posts written by FBUs and NBUs, it was found that FBUs are more likely to get off topic and to write posts that are less readable. Similarly, it was made evident that FBUs are effective at engaging other users in irrelevant conversations. Another question the paper addresses is “how do FBUs and their effect on the community change over time?” Through a study of FBUs’ posts over 17 months, the authors determined that the text quality of posts by FBUs decreases over time, while that of NBUs does not. FBUs also became less tolerated by the community over time, and were banned from posting in threads. In order to identify antisocial users before they are banned, Cheng, Danescu-Niculescu-Mizil, and Leskovec establish four feature categories – post readability, user activity, interaction with the community, and moderator involvement.
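A hypothetical sketch of how those four feature categories could be turned into numbers for a classifier follows. The readability function is a crude Automated Readability Index stand-in, and every field name is an assumption of mine rather than the authors' actual feature set.

    import re

    def readability(text: str) -> float:
        # Rough Automated Readability Index; a stand-in for whatever
        # readability measure the paper actually uses.
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        sentences = max(len(re.findall(r"[.!?]+", text)), 1)
        chars = sum(len(w) for w in words)
        return 4.71 * chars / len(words) + 0.5 * len(words) / sentences - 21.43

    def feature_vector(posts: list[dict]) -> list[float]:
        # One number per feature family; each post dict is assumed to
        # carry text, thread_id, replies, upvotes, downvotes, deleted.
        n = len(posts)
        threads = {p["thread_id"] for p in posts}
        return [
            sum(readability(p["text"]) for p in posts) / n,         # post readability
            n / max(len(threads), 1),                               # user activity: posts per thread
            sum(p["replies"] for p in posts) / n,                   # community interaction: replies
            sum(p["upvotes"] - p["downvotes"] for p in posts) / n,  # community interaction: votes
            sum(p["deleted"] for p in posts) / n,                   # moderator involvement
        ]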

Reflection

I found the passages about the effects of FBUs on the community interesting because not only do FBUs have a negative effect on the community, the community also has one on FBUs. As the authors state in the paper, “communities play a part in incubating antisocial behavior,” which is definitely true. For instance, communities play a part in creating FBUs by excessively censoring them and reacting negatively to their posts (though in many cases, it may be for good reason). Furthermore, communities also foster antisocial behavior by reacting to FBUs’ posts, which can then lead to further provocations by FBUs and arguments between the parties. From firsthand experience, I think we all know it’s difficult not to respond to a comment that offends us or promotes an opposing viewpoint. I suppose that’s a good reason why discussion communities include downvoting – users don’t need to verbally express their disapproval of a post.

Questions

  • What makes FBUs want to continue writing harmful/fruitless posts in discussions?
  • Since there are methods to detect FBUs, are there ways to help potential FBUs remain NBUs?
  • How fast do moderators delete posts?


Reading Reflection 9/11

Summary

In “Antisocial Behavior in Online Discussion Communities”, Cheng takes a look at what defines antisocial behavior and whether it can be predicted. Using three different websites – CNN, IGN, and Breitbart – he studies the differences between future-banned users (FBUs) and never-banned users (NBUs). He finds that there are noticeable differences between the two groups. For instance, FBUs tend to use more negative language, post more often, write poorer-quality posts than NBUs, and have their posts deleted more and more often the longer they are active. He also looked at different types of antisocial behavior: some users, called Hi-FBUs, had their posts deleted more often than Low-FBUs due to the language they used. He outlined four ways to identify antisocial behavior:

  • Post readability
  • How often/where they post
  • Community interactions
  • If moderators delete the post

Using those features, they were able to predict whether a user would be banned based on only that user’s first five posts, with an AUC of 0.8.
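A generic sketch of that prediction step, run on synthetic data with scikit-learn, might look like the following. This is not the authors' pipeline; the random-forest choice and the made-up features here are mine.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data: one feature vector per user, built from
    # that user's first five posts, plus a banned/not-banned label.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 6))
    y = rng.integers(0, 2, size=1000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # An AUC of 0.8 means a randomly chosen FBU is ranked above a
    # randomly chosen NBU 80% of the time; on this random data the
    # score will hover near the 0.5 chance level.
    print(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))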


Reflection

One thing I found interesting was the difference between the websites themselves. CNN was much quicker to ban a user than IGN. I suppose this could be attributed to the credibility of the website, as CNN is slightly more legitimate or serious than IGN, or perhaps to the presence of moderators. I also found it unsurprising that FBUs wrote poorer-quality posts than NBUs: since their purpose is probably to incite (and poorly written posts themselves can do that), they wouldn’t take the time to write more eloquently.


Questions

Is there an alternative to moderators or community feedback that would help prevent antisocial behavior?

Would more crowdsourcing change or affect the outcome of some of their results?

How could we encourage antisocial users to be more social?


Reading Reflection 9/12 Mark Episcopo

Summary

In the article “Antisocial Behavior in Online Discussion Communities”, the authors begin by discussing antisocial behavior and antisocial users. This includes information on trolls – users who bait others into arguments and generally engage in “negatively marked online behavior” – as well as a classification of antisocial users into a category called Future-Banned Users (FBUs): users who have been banned but whose previous posts are still visible. These FBUs serve as the basis of the study because it was possible to analyze their behavior and identify why they were banned. The authors then explain that they chose to study the comments sections of CNN, IGN, and Breitbart. The study brought forth a few findings. One such finding is that FBUs’ posts are less similar to other posts in the same topic than those of users who were never banned, indicating that they like to steer the discussion off topic. FBUs are also less likely to use positive words in their posts. It was also found that these users get more replies than typical users, implying they are successful at garnering attention. They also tend to post heavily in the narrow selection of threads they participate in, because they like to keep the argument going. There was also evidence to suggest that FBU post quality deteriorates over time. This could tie into the community becoming more familiar with the troll and getting their posts deleted more often.

Analysis

I do think that this study was important to perform, but I didn’t find the results too surprising. I have seen from personal experience how trolls operate in the comment sections of most websites, so this study served to confirm those perceptions. It seems that most trolls, or users who get banned, like to stir up arguments, but I did like how the article mentioned that some people downvote and report others just because the other user has a differing opinion. People who are not offensive but believe strongly in a different opinion should not be banned or censored. I think that is why having a human moderator check the quality of posts is important, to avoid people getting wrongfully banned. I also thought it was interesting how the authors mentioned giving trolls a way to redeem themselves; I’m not too sure how well that would work.

Questions

  • Would providing a way to promote good behavior and allow trolls to redeem themselves be successful?
  • Would there be a way to automate the deletion of offensive posts effectively?
  • Do trolls have the right to participate in their behavior, as it is a public board? Is there a line to be crossed?


Reading Reflection 9/12

Summary

In the paper “Antisocial Behavior in Online Discussion Communities”, the focus was on the behavior patterns of users who eventually get banned from communities for posting inappropriate comments, irrelevant issues, and other behavior that does not comply with standard online community conversation norms. The three main questions they were asking were:

(1) When does a user become antisocial – later in community life or from the start?

(2) What role does the community play in encouraging or discouraging antisocial behavior?

(3) Is it possible to identify antisocial users before they are banned?

These questions were imperative to their analysis because they are fundamental to understanding who a banned user is and how to prevent/identify trolls before they terrorize a community. The authors focused on three major websites – CNN.com, Breitbart.com, and IGN.com – in order to gather enough data to make accurate claims about banned users. Additionally, these sites give users the ability to report other users and to comment on and down-vote posts. They created two groups of users within each community: Future-Banned Users (FBUs) and Never-Banned Users (NBUs). Although the language patterns of FBUs and NBUs overlapped, the language FBUs used was more controversial and included words that provoke other users. The posting habits of the two groups were also significantly different: FBUs write more posts, concentrated on a few threads, instead of spreading them out as NBUs do. Additionally, there was a trend that FBUs’ behavior worsens the more time they spend in a community. Furthermore, the authors indicated that post deletions and post reports were strong indicators of FBUs, while the number of replies and down-votes did not correlate as strongly because they serve other purposes. They determined that within a user’s first 10 posts, they can fairly confidently determine what type of user they are dealing with.
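The observation that FBUs' posts are concentrated in a few threads suggests a simple measurable quantity. Below is an illustrative sketch (mine, not the paper's) that scores how spread out a user's posting is using normalized entropy.

    from collections import Counter
    from math import log2

    def thread_spread(thread_ids: list[str]) -> float:
        # Normalized entropy of a user's posts across threads: 0.0 means
        # everything sits in one thread (the concentrated, FBU-like
        # pattern described above); 1.0 means posts are spread evenly.
        counts = Counter(thread_ids)
        if len(counts) <= 1:
            return 0.0
        n = len(thread_ids)
        entropy = -sum((c / n) * log2(c / n) for c in counts.values())
        return entropy / log2(len(counts))

    print(thread_spread(["t1"] * 9 + ["t2"]))       # ~0.47: concentrated
    print(thread_spread(["t1", "t2", "t3", "t4"]))  # 1.0: spread out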


Personal Reflection

Most of what this article confirms does not surprise me; it simply confirms suspicions I had about members of online communities. What does surprise me is that there were trends where isolation within the community made an individual’s behavior worse. I find this interesting because it makes sense, but I am also curious how many users get off on the wrong foot, feel isolated, and then lash out. Additionally, I am curious whether banned users are already isolated from society or whether they are functioning, just not online. If they aren’t functioning, is there a way we can help them function online?


Questions

Are all FBUs acting with malicious intent, or are some simply lacking the social skills that NBUs have or have learned? They clearly lack the social skills to function in an online community, but does that mean that someone with a mental disorder that affects their social skills can’t use a community because they don’t understand the norms? That isolates them from participating in online communities, and probably other social communities too. Which raises the question: is social media simply connecting those already connected within society – those who have learned the norms and know how to participate in a manner receptive to others? And could that possibly perpetuate more isolation?


For example, consider an individual with autism who functions within society but whose autism is still noticeable. If all of his friends are participating in online communities, would these studies flag him as an FBU? Would he get banned for saying inappropriate things? Would that further isolate him from whatever social community he may have?


Reading Reflection 9/12

Summary:

In the article “Antisocial Behavior in Online Discussion Communities”, the authors discuss behavior in online communities where discussion between users is a fundamental or very large part of the site – specifically, how behavior that can be categorized as antisocial or disruptive is handled and processed by the community, and what happens to the user who wrote the disruptive post. The authors look at the sites CNN.com, Breitbart.com, and IGN.com, all of which have forums and comment sections for the content they display. The study of users who went on to be permanently banned found that these users were given warnings or had their posts deleted by a moderator before they were banned, but in the process they tended to write worse and worse posts, and the community they were a part of became less and less tolerant of their rants. This was a large finding of the study: as time passed and the abusers were told to stop or had posts removed, they would keep coming back and get worse and worse, creating a bigger and bigger problem until they were eventually banned. There were some cases of users being banned for a set amount of time and then allowed to return, but these cases were not studied in depth, which raises the question of how effective a timed ban is at deterring bad behavior.

Reflection:

I found this article to be a good verification of the behavior and trends that I have experienced myself in online communities. I have seen firsthand that when users who abuse a community’s message system are told to stop, or get angry or disgusted reactions out of the community, it just encourages them not only to continue the behavior but to get worse. If a regular user has an incident where they went too far and are told to stop, it’s usually a very quick apology and then things go back to normal – but not for the trolls. They continue the behavior and only get worse, because those are the reactions they thrive on and seek from the community. The issue of trying to get rid of these trolls is a tricky one, because you have to give each user a fair chance at the beginning of that user’s account life. It wouldn’t be fair to ban someone outright over one thing, but that second or third chance is also what feeds the really bad posts and situations. I would also be very interested in a study of the cases where a timed ban actually helped the troll stop the bad behavior – finding the best way of dealing with abusers of the system, to either lock them out or correct the behavior.

Questions:

  • Is a permanent ban or a timed ban more effective in dealing with abusers?
  • What would be the best way to discern the true trolls from a user that just got out of control for a time?
  • What draws trolls to come post on the sites that they do? Just for fun?
  • Could an effective bot be created to accurately moderate forums?


Reading Reflection 4

Summary

The research paper “Antisocial Behavior in Online Discussion Communities” discusses and analyzes user and follower participation in posts, comments, votes, likes, etc. in online communities. The researchers chose to focus mostly on people, or “trolls”, who were banned from such communities. User-generated content is essential to the growth of any online community, and trolls most likely hinder that growth. The purpose of this research was to answer the questions: “are there users that only become antisocial later in their community life, or is deviant behavior innate?”, “does a community’s reaction to users’ antisocial behavior help them improve, or does it instead cause them to become more antisocial?”, and “can antisocial users be effectively identified early on?”. They investigated CNN.com, Breitbart.com, and IGN.com by reading comments and threads and also by analyzing a list of banned users from each of the sites. The researchers identified post features, activity features, and community features as tools that can be used to identify antisocial users. They also found that it is easier to identify antisocial users when they post more than the average user. A possibility for future research is to develop a deeper understanding of such behavior and to better characterize the lives of antisocial users over time.

Reflection

This paper discussed a lot of problems, strategies, and possible solutions that I will be able to apply to my term project. Right now, we are thinking about focusing on helping social media platforms or communities identify and possibly mute offensive and toxic users. This research would definitely help narrow down what we are planning to do and how we should go about gathering data and solving the problem. This paper also had a lot of research, data, and graphs to support its findings, which is definitely something that researchers should strive toward. I would be interested to find out whether online communities like this could be moderated so that moderators can manually find antisocial users. The different types of antisocial users have led the researchers to conclude that moderators may be the most effective way to delete antisocial posts. I agree with this to an extent, but maybe in the future we can change social media platforms so that they will be able to moderate and modify themselves based on antisocial behavior, and also be able to predict antisocial behavior before it happens.

Questions

  • What from this paper will I be able to apply to my term project?
  • Which social media platforms and communities have been successful in identifying trolls and antisocial users?
  • Is the use of moderators possible in communities like this?
  • Are news websites like the ones studied more likely to have antisocial users?
  • Are left leaning, right leaning, or neutral news sites most likely to have more antisocial users?
  • What percent of users are antisocial and how is that different from one social media site to another?


Reflection #3

Summary

The Chat Circles Series

This paper drafts the designs and observations of multiple stages of a text-based communication program. The impetus behind this software was to take the now mundane and relatively emotionless activity of texting and provide it with some semblance of the life found in face-to-face interaction. Various methods of user representation and graphical motion are used to emulate the experience of taking part in conversation with a group of people. There are features that express distance between two people, emotional tension, and the interactions of being in an environment with its own independent happenings (such as news that plays in the background and potentially stimulates conversation). From a minimal set of breathing circles, the software evolves into a chatroom as vibrant with stimulus and emotion as a real gathering.
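The distance feature mentioned above lends itself to a tiny sketch: deciding which users a listener can "hear". The coordinates, the radius, and the function itself are invented for illustration, not taken from the system.

    from math import dist

    # Invented positions for three users in the 2-D chat space; the
    # radius is made up too. The idea is that speech from users beyond
    # one's range is not shown in full.
    positions = {"ana": (0.0, 0.0), "ben": (3.0, 4.0), "cy": (40.0, 1.0)}
    HEARING_RANGE = 10.0

    def audible_to(listener: str) -> list[str]:
        # Users close enough for the listener to "hear" in full.
        me = positions[listener]
        return [name for name, p in positions.items()
                if name != listener and dist(me, p) <= HEARING_RANGE]

    print(audible_to("ana"))  # ['ben'] -- cy is too far away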

Social Translucence

This paper notes how the factors of visibility, awareness, and accountability drive certain aspects of interaction in real life. Furthermore, it explores how the absence of these factors affects virtual communication, as well as what can be done to remedy this absence. These elements engage two interacting people with a set of rules that defines acceptable behavior. Without them, the gloves are off and truly genuine interaction becomes difficult. The paper provides three forms of solutions:

  • Realist: Projecting social information from the physical domain directly into the digital one.
  • Mimetic: Representing physical social cues with digital analogs.
  • Abstract: Portraying social information in ways not closely tied to their physical analogs.

These solutions provide different modes of injecting virtual communication with the aforementioned factors.


Reflection

The Chat Circles Series

I’ve always been intrigued by how some developers tackle the issue of breathing life into virtual communication. Both articles share a common ideal in this regard. The various Chat Circles programs seem to take a cue from the points mentioned in Social Translucence, in that they seek to represent real-life social cues in the digital world through the continuous movement and manipulation of circles. I must also note that, while this goal is interesting, I question whether the general population desires it. I am under the impression that a lot of people appreciate the difference between real-life and virtual communication; different sets of rules afford them different abilities. For example, the proposed features of Chat Circles would let people see whether you’re listening to the conversation or not, whereas popular chat platforms like Facebook Messenger or GroupMe have no such indication. Many users appreciate that one can read a message as it pops up without clicking on it, thereby avoiding Facebook’s notification that the other person “has seen” the message.

Social Translucence

I can certainly recognize how it’s not just the tone and body language of a person that affect communication, but also the physical environment the speakers are in. Most social media websites are public in nature: the primary feature of Facebook, Twitter, and Instagram involves posting something for all of your friends to see. This wholly affects what you’re willing to say and how you say it, and does not really allow for intimate communication between two individuals without the use of a chat tool. It makes me wonder what a site would be like where a person has a separate page dedicated to each of their friends. Only the user and that specific friend could access and update that page, reminiscent of a private diary shared by two people. How would this affect their activities?

Questions

  • Does the general population really want more intimate virtual chatting or do they have an appreciation for the emotive disconnect that comes with it?
  • How effectively can the models proposed in Chat Circles evoke certain powerful emotions? The article noted aggression and disdain, but can such feelings be felt without true presence?
  • How much extra effort do these modes of communication require with the addition of such features? If people do desire these features, how readily will they accept the extra effort?


Reading Reflection #3

Summary

The article “Social Translucence: An Approach to Designing Systems that Support Social Processes” discusses the difficulties of digital communication and collaboration. As social creatures, people are sensitive to the actions and interactions of others; in the digital world, however, there are no social cues to observe. To help solve this social blindness, the authors created a prototype digital environment, called Babble, that would be socially translucent. An important aspect of Babble is the social proxy, a minimalist graphical representation of users’ presence and activities. In the social proxy, the conversation is represented by a large circle and the participants by colored dots. Users involved in the current conversation are represented by dots within the circle, while users who are logged in but engaged in different conversations are shown as dots outside the circle. After two years of daily usage, Babble was found to be an effective environment for supporting informal group conversations on various topics.
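As a toy rendering of that social proxy, the sketch below places each user's dot inside or outside the circle based on recent activity. The idle window is an invented parameter, and the real proxy moves dots gradually toward or away from the center rather than flipping them in and out.

    from math import cos, sin, tau

    def proxy_positions(last_active: dict[str, float], now: float,
                        idle_after: float = 300.0) -> dict[str, tuple[float, float]]:
        # Dots for recently active participants land inside the circle
        # (radius 0.5); idle or elsewhere-engaged users land outside it
        # (radius 1.5). The 300-second window is made up.
        placed = {}
        users = sorted(last_active)
        for i, name in enumerate(users):
            angle = tau * i / len(users)
            r = 0.5 if now - last_active[name] <= idle_after else 1.5
            placed[name] = (r * cos(angle), r * sin(angle))
        return placed

    print(proxy_positions({"ana": 995.0, "ben": 400.0}, now=1000.0))
    # ana (active 5 s ago) lands inside the circle; ben lands outside.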

The article “The Chat Circles Series: Explorations in Designing Abstract Graphical Communication Interfaces” discusses the development of a series of abstract graphical chat environments called Chat Circles. The series represents the projects’ growth toward more legible and engaging social environments, with each new project adding a different kind of feature or fulfilling a different purpose. The article not only describes the various projects but also discusses the differences between them and how those differences affected social communication. It was found that including group information, graphics, and online speech helped foster better communication and a more sociable atmosphere among users.

Reflection

I think both articles bring up a valid point about how, though digital communication has made it easier for people to connect and talk, the lack of in-person interaction can affect the conversation. A large feature that is missing in online conversation is tone of voice. The tone of someone’s voice can greatly affect how a message is conveyed and how someone responds. For example, if someone is asked a question in a harsh voice, that person is more likely to respond angrily or defensively than if the question were asked in a calmer voice. This kind of situation can easily be seen online, where textual conversation can come off as impersonal and cold due to the lack of context and tone, causing people to often misinterpret other people’s intentions, as explained in the second article.

The Babble prototype created in the first article reminded me of how a lot of online chat applications now have a way to see whether your message was sent and read by the other person. The first application I thought of was Facebook’s Messenger. Whenever you send a message to someone using Messenger, a small icon appears next to the message. The icon can be a clear circle, a clear circle with a check mark, a blue filled-in circle with a check mark, or a small circular version of the other user’s profile picture. These four icons represent the status of the message: a clear circle means the message is being sent, a clear circle with a check mark means the message has been sent but not yet received, a filled-in circle with a check mark means the message has been received but not read, and the circular profile picture means it has been read. The use of these icons, much like Babble’s circles and dots, helps users feel more involved and lessens the sense of disconnect.
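The icon sequence described above behaves like a small state machine; here is a sketch with hypothetical state names (Facebook's internal naming is not public, so this is illustrative only).

    from enum import Enum, auto

    class MessageState(Enum):
        SENDING = auto()    # clear circle
        SENT = auto()       # clear circle with a check mark
        DELIVERED = auto()  # filled blue circle with a check mark
        READ = auto()       # the recipient's profile picture

    # A message only ever moves forward through the sequence.
    NEXT = {MessageState.SENDING: MessageState.SENT,
            MessageState.SENT: MessageState.DELIVERED,
            MessageState.DELIVERED: MessageState.READ}

    state = MessageState.SENDING
    while state is not MessageState.READ:
        state = NEXT[state]
        print(state.name)  # SENT, then DELIVERED, then READ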

Questions

  • With the increased usage and popularity of emojis, is it possible that in the future people will move away from text-based messages in favor of graphics-based messages?
  • Would the usage of “likes” and comments be considered a form of social cue in a digital environment?
  • Is it possible that the lack of context and emotion felt through online messaging is due to how people tend to write less in online messages?
  • How can people tell a happy text-based message from a sad text-based message?
