Reading Reflection 5

Summary

In the paper “The Language that Gets People to Give: Phrases that Predict Success on Kickstarter”, the authors seek to identify the features that predict whether a crowdfunding project will succeed. They wisely observed that not just any data from Kickstarter, the crowdfunding website they performed their analysis on, could be used to draw conclusions. They needed a subset of projects and posts on Kickstarter, specifically projects that had reached their end date. From this list of projects, they pulled a little over nine million unique phrases. They then eliminated any phrase that was used fewer than 50 times or was topic specific, which left them with about twenty thousand phrases from which to draw their conclusions.
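
A minimal sketch (my own, not the authors' code) of that filtering step, assuming simple whitespace tokenization and n-grams up to length four; the cutoff of 50 uses comes from the summary above, counting each phrase once per project is my own simplification, and the topic-specific filter is omitted:

```python
# Hedged sketch of the phrase-filtering step: collect n-grams from each project
# description and keep only phrases that appear in at least `min_count` descriptions.
from collections import Counter

def extract_phrases(text, max_n=4):
    """Collect unigrams through 4-grams from one campaign description."""
    tokens = text.lower().split()
    phrases = []
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            phrases.append(" ".join(tokens[i:i + n]))
    return phrases

def filter_phrases(descriptions, min_count=50):
    """Drop any phrase used in fewer than `min_count` project descriptions."""
    counts = Counter()
    for text in descriptions:
        counts.update(set(extract_phrases(text)))  # count each phrase once per project
    return {phrase for phrase, count in counts.items() if count >= min_count}
```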

After analyzing these phrases and a set of control variables across successful and unsuccessful Kickstarter projects, the authors found some interesting results. Among other findings, they report that the “top 100 predictors of funded and not funded are solely comprised of phrases” rather than control variables. They also discuss the implications of this research and the data they plan to release. It is very possible that crowdfunding websites could surface the most predictive phrases somewhere on the site for campaign creators. Whatever the case, this research deepens our understanding of funded and not funded crowdfunding campaigns as well as persuasive design.

Reflection

I found this paper very informative concerning the phrases and variables that can be used to predict whether a crowdfunding campaign will succeed. I was slightly surprised that some phrases seemed to be much greater predictors of success than the control variables. I also realized that campaign descriptions carry much more significance than I had originally thought.

The idea of reciprocity, as the paper mentions, can play a huge role in crowdfunding success. From my own personal experience, I find it much more compelling to contribute when I feel like I am receiving something in return. While this “trade” may be greatly skewed to one side, framing the contribution as a trade makes it feel like a logical decision and puts us more at ease.

I like how this paper recognized that certain conditions could not be accounted for in the analysis. For example, project creators who post updates on their Kickstarter page and list their new backers may have a small influence on whether they reach their project goal. The authors acknowledged this limitation, and I find that crucial for academic papers that hope to see future research done in the area.

Questions

Are the factors that identify a successful crowdfunding project changing as they become more well-known?

How important is reciprocity in the workplace in terms of effort expended and praise given? How does this affect employee retention?

Can the excessive use of a success-predicting phrase have the opposite effect?

How does the layout of a particular crowdfunding website affect a person’s desire to contribute? Does a more professional layout entice different subsets of people?



10/19

Summary

The article dives into a new funding resource that many individuals and enterprises have used to achieve funding goals: crowdfunding. It specifically looks into the Kickstarter website.
After explaining what exactly Kickstarter does and how it works, they go into an analysis of what separates a successful campaign from an unsuccessful one; on Kickstarter, if you achieve your goal you get the money, and if not, you don’t get any of the funding. The main trend they found is that the language used to describe the campaign was a predictive factor in the campaign’s success.


Some of these persuasive language elements were reciprocity, scarcity, social proof, social identity, liking (a relationship with the host), and authority. Most of these did not come as a surprise if you think about what makes a verbal persuasive argument successful: offering someone something in return for their donation (reciprocity), giving them a time or product constraint (scarcity), showing that other people are donating (social proof), a feeling of belonging to a group that is donating (social identity), knowing and liking the person (liking), and expert opinions on the product being funded (authority). All of the data was gathered through multiple statistical analyses, and some other interesting things were found. At an enterprise level, creating a crowdfunding campaign led to more collaboration between departments, creating concern for the collective group instead of their own self-interests.


Personal Reflection

The data gathered was quite similar to what I would have predicted from studying persuasive arguments in a verbal context, and it is interesting that the same techniques have the same effect online. I am not interested in crowdfunding or gathering more data on it, but I do find it interesting how words and social behavior online can affect a user’s decision to do or not do something. The way they analyzed words and their meaning would be applicable as an extension to Tweetasaurus: being able to analyze the context of a sentence or the emotion behind it and then offer better suggestions based on that emotion (a rough sketch of this idea follows below). Additionally, could any of these persuasion techniques be used to encourage users of Tweetasaurus to use it for its better purpose rather than just as a thesaurus? For example, being able to see others changing negative words to better words would create a “social” environment of “social proof”.
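
Purely as an illustration of that Tweetasaurus idea (not code from the paper or from any existing Tweetasaurus implementation), here is a tiny sketch that scores the emotion of a draft tweet with a made-up sentiment lexicon and suggests milder synonyms when it skews negative; every word list and function name below is a hypothetical example:

```python
# Illustrative sketch only: a hand-made lexicon scores a draft's negativity and a
# small synonym table proposes gentler wording. Both tables are invented examples.
NEGATIVE_WORDS = {"hate": -2, "awful": -2, "stupid": -3, "bad": -1}
MILDER_SYNONYMS = {"hate": "dislike", "awful": "disappointing", "stupid": "unwise", "bad": "poor"}

def suggest_rewrites(tweet):
    """Return a crude negativity score and per-word replacement suggestions."""
    words = tweet.lower().split()
    score = sum(NEGATIVE_WORDS.get(w, 0) for w in words)
    suggestions = {w: MILDER_SYNONYMS[w] for w in words if w in MILDER_SYNONYMS}
    return score, suggestions

print(suggest_rewrites("this update is awful and i hate it"))
# (-4, {'awful': 'disappointing', 'hate': 'dislike'})
```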


Questions

How much does the wording itself matter? The paper analyzed the meaning of the wording, but what if someone was trying to appeal to a reader’s social identity and did so in an ineffective way? Is there a way to measure whether users are “trying” to embody these characteristics but failing?


I found it interesting that it brought people together at the enterprise level; is there a way to see whether there were any other improvements in the workplace?


9/12 Reflection #4

Summary

The article “Antisocial Behavior in Online Discussion Communities” describes a study that sought to characterize the behavior and evolution of antisocial web community users over time. The study focused on users from three websites: CNN, Breitbart, and IGN. Users under observation were split into three categories:

  • FBU (high): Users who were eventually banned and had many of their posts deleted.
  • FBU (low): Users who were eventually banned and had fewer posts deleted.
  • NBU: Users who were never banned.

To aid their characterization, the authors observed how frequently these users posted, the readability of their posts, and the intensity of the community’s backlash. A point of focus was how users in the FBU category had lower quality content from the start of their membership compared to NBUs. FBU content quality and readability saw a noticeable downward trend as time went on, and these users tended to post more frequently in fewer discussion threads. This type of activity received a greater backlash from the community and eventually led to their banning. From the data, the authors derived a method to potentially identify antisocial users early on: users who received a large amount of backlash, had many deleted posts, and had their posts deleted more quickly early in their membership were extremely likely candidates for banning later on.
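
The paper derives its early-warning signal from richer models than this, but as a hedged sketch of the idea, here is how those three cues (backlash, number of deleted posts, and how quickly posts were deleted) could feed a simple classifier. The feature definitions and training values below are invented for illustration, not numbers from the study:

```python
# Hedged sketch (not the authors' model): logistic regression over a few
# early-membership features to estimate the probability of a future ban.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [fraction of first posts deleted,
#            fraction of replies that were negative/backlash,
#            mean hours until a deleted post was removed]
X_train = np.array([
    [0.60, 0.80, 2.0],   # hypothetical future-banned user
    [0.40, 0.50, 5.0],   # hypothetical future-banned user
    [0.00, 0.10, 48.0],  # hypothetical never-banned user
    [0.05, 0.20, 72.0],  # hypothetical never-banned user
])
y_train = np.array([1, 1, 0, 0])  # 1 = later banned, 0 = never banned

model = LogisticRegression().fit(X_train, y_train)

new_user = np.array([[0.50, 0.70, 3.0]])  # features from a user's first few posts
print(model.predict_proba(new_user)[0, 1])  # estimated probability of a future ban
```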

Reflection

Overall, the study was one that anyone from my generation would be able to relate to. Having observed similar trolling and flaming in my day-to-day online life, I found it interesting to see how these users can be characterized. This article is helpful to me and my project partner, since our current project idea involves using automated moderation to identify and delete posts that veer from the discussion topic. While the article focuses mainly on negatively perceived posts and comments, it was intriguing to see how some trolling behavior was positively perceived by the community. I can imagine a right-wing user voicing an unpopular opinion on a left-leaning political site and then being chastised by a leftist user. It is not hard to see the community egging on such behavior, and it makes me wonder how moderators should respond to this situation. If the behavior goes unpunished, then a toxic community is formed where attacks on unpopular opinions are allowed. If the behavior is punished, the website may lose the faith of its users for “supporting” the unpopular opinion rather than the opinion of the toxic user who is considered a true member of the community.

Questions

  • Breitbart is considered to be a far-right news source. What observations can be made about leftist users who started out by stating an unaccepted opinion and gradually became more aggressive in their behavior?
  • What observations can be made on those who witness antisocial behavior? Do they become more likely to engage in such behavior themselves? Do their responses to such behavior become more aggressive over time?
  • It was mentioned that users who were unjustly banned returned with much more antisocial behavior. The study claims that certain activity patterns can be used to identify antisocial users early on. Do you think taking such pre-emptive action could lead to unjust banning and thus the creation of new trolls and flamers?


Reading Reflection #3

This paper analyzes antisocial behavior by examining three online communities: CNN.com, Breitbart.com, and IGN.com. Some of the questions the researchers are curious about are whether antisocial behavior is innate in a user or builds up over time, and how the community’s reaction affects a potential FBU (Future-Banned User). A key characteristic of FBUs is that they focus on individual threads rather than trying to spread their message across an entire platform. The paper concludes by presenting a way to predict which users could potentially become FBUs: keeping track of how much they post and how much of it gets deleted, as well as their content, which can sometimes contain profanity or derogatory language directed at other users.

I would have loved to see this study done on sites such as Reddit or Stack Overflow, as they are closer to pure forum-style social networks. FBUs may not necessarily be trolls, though, as I’ve seen first-hand on the sites I mentioned above. The community they interact with has the ability to tarnish their reputation; e.g., some users on Stack Overflow have a complexity issue and are not fond of ‘simple’ questions. Once I saw a post where a user was asking about string manipulation. He/she had shown their work too, so they clearly weren’t out to just get the answer, yet a lot of the comments and replies were mean and quickly got upvotes. One comment was along the lines of “If you’re really stuck doing this you may want to change careers.” Down the comment thread I read a new comment from a user defending OP, saying that at one point in their career everyone was an amateur or beginner. I viewed that user’s history because it might have been another account from OP, but it clearly wasn’t, because the user had a much higher reputation. Half an hour later this user’s comment was deleted; either it got downvoted too much, or after seeing the backlash he/she received they proceeded to delete it.


What’s a good way to tell a differing point of view in a particular community apart from an actual troll, as seen in my reflection above?

What other types of mechanisms besides down-voting and banning can social sites use to promote a more positive community?


Reading Reflection #4

Summary

The paper “Antisocial Behavior in Online Discussion Communities” focuses on antisocial behavior in online communities such as CNN, IGN, and Breitbart by analyzing users banned from those communities. These banned users negatively affect other users and the communities through undesired behaviors such as trolling, flaming, bullying, and harassment. The paper describes how these users’ behavior worsens over time, causing the online communities to become less tolerant of it. The paper explains how it is possible to characterize, identify, and even predict antisocial users and their behaviors.

Reflection

I found this paper to be interesting and relatable. I have seen and dealt with antisocial behavior in almost every online community I’ve visited. This is especially true for communities that are anonymous or don’t involve real-world identity; I think this makes antisocial behavior easier for some users, since they believe their behavior cannot be tied back to their real identity. Being able to predict and identify potential FBUs is a step in the right direction. This would allow online communities to focus more on growing their content than on worrying about antisocial behavior. It would also make online communities safer, since users wouldn’t have to deal with harsh posts or offline danger. Offline danger is a real problem, since users can be targeted physically by users with antisocial behaviors; this could be anything from written threats to SWATing. I am also interested in moderation systems, since these systems are themselves subjected to antisocial behavior. There are many instances where these systems are abused, such as abusive moderators, one-sided voting, and false reporting.

Questions

  • In a system of using down-votes to moderate bad posts, how often is this system abused?
  • Does banning users with antisocial behaviors actually help or worsen the situation?
  • Is it possible for a user’s post to be legitimate, but also be perceived as antisocial behavior because their view is different?
  • Does hiding a post have any impact over deleting a post?
  • What do FBUs have to gain?
  • Which type of moderating system is the most fair and efficient?
  • Which type of moderating system is most prone to abuse?


Reading Reflection 4

Summary

The article “Antisocial Behavior in Online Discussion Communities” studies antisocial users, their behavior in different settings, and the feedback they receive. This helps people spot antisocial users at an early stage. The study tries to answer three questions about antisocial users: whether they are created or born antisocial, whether community reactions promote antisocial behavior, and whether antisocial users can be spotted at an early stage. The study was data driven, and the data makes the point that antisocial users are a problem and that the community’s response only makes it worse; however, there is a way to spot them early.

Reflection

This article ties in with the team project my group and I are interested in working on. We want to catch antisocial users and identify them with a label so other users will know to steer clear of them. We will scrub through the posts a user puts up and look for vulgar language or degrading terms. Instead of banning them, we would like to give them a chance: they are still able to stay online, but the public will know their reputation, so they need to stop the antisocial behavior or they will keep being blocked by other users. This article showed me how much data can prove a point and help analyze users’ behavior, which is similar to what my group and I must accomplish to figure out where to head with our project.

Questions

  • What should people look at to determine if a user is antisocial?
  • If one person is being antisocial, is there a chance that there are multiple antisocial users in that one conversation stirring up an argument?
    • Or can the antisocial user influence the rest of the community?
  • What difference in punishment should an antisocial user face versus an antisocial person in the real world?


Reading Reflection #4

Summary

In the article “Antisocial Behavior in Online Discussion Communities”, the authors characterize antisocial behavior in online discussion communities by analyzing users who were banned from CNN.com, Breitbart.com, and IGN.com. Banned users were found to use controversial language and to concentrate their posts in individual threads, which received more replies than those of the average user. Additionally, banned users’ behavior worsened over time as communities became less forgiving and tolerant, resulting in an increasing rate of the banned users’ posts being deleted. Using the data collected on banned users’ behavior, the authors were able to identify antisocial users based on their post history and certain habits. The data collected and analyzed in this article could assist in better understanding antisocial behavior and help maintain better, more positive online communities.

Reflection

I found this article very interesting, as I had never thought that there would be a correlation between trolls and antisocial behavior. I was especially interested in the authors’ comment about the possibility that rejection from a community might feed a user’s negative antics. Users who purposely try to get attention by posting controversial statements would feel encouraged by the lack of response to try harder to get the reaction they are looking for. It almost seems like they enjoy the attention and the fact that they got other users riled up. Thus, I am curious how antisocial users would have a chance to redeem themselves, as suggested in the article. It is possible that banning might encourage users to try again and make a new account to repeat the same problematic behavior, since they might see it as a challenge to avoid getting caught for as long as possible.

Questions

  • If ignoring a troll encourages more posting, then what is the best way to react to one that would still allow them an opportunity to redeem themselves?
  • What do trolls gain from posting controversial posts?
  • Does banning problematic users actually make a difference in regard to the online community’s environment?


9/12 Reflection

Summary

In “Antisocial Behavior in Online Discussion Communities,” the authors discuss the multitude of ways they tried to characterize antisocial user accounts such that the likelihood of a future ban could be predicted. The studies focused on three sites with fairly large user bases (IGN, Breitbart, and CNN) and divided users into two groups: never-banned users and future-banned users. They then split future-banned users into two more distinct categories: those who experienced high rates of post removal and those who had fewer posts removed over the course of their online life leading up to the ban. Their studies also indicated that everyone’s content quality slips the longer they spend on a site, but future-banned users had generally lower content quality to begin with.

Reflections

I found this article very interesting, but I am not quite sure whether it has any implications for preventing trolling online. The authors didn’t go too much into how they determine which comments are considered instigator comments, but I imagine it would be difficult to differentiate among responses, as more often than not the responder is more aggressive than the original poster. Some users create accounts with the express purpose of being a troll account, so the analysis showing that deviant users pour more effort into singular threads or conversations makes a lot of sense, since they are trying to instigate and inflame others. However, how would they differentiate between actual trolls and users who are simply voicing an unpopular opinion? I think this work is interesting from an analytical standpoint, but in practice it could create a whitewashed environment where unpopular or dissenting opinions are quashed before a person even has a chance to defend themselves.

Questions


How, if at all, would they differentiate between users with unpopular opinions and genuine trolls?

Some antisocial users’ behavior degenerated due to the response from the community. What could be done to stop the degeneration or backlash?

Censorship is a problem, but is it okay to silence others whose statements you perceive as inflammatory?


Reading Reflection 9/12

Summary:

The article “Antisocial Behavior in Online Discussion Communities” analyzes undesirable user participation in online communities and how to detect such users early on, before they are banned. The goal is to minimize troll-like behavior, which results in more positive online communities. There are already methods of trying to prevent this, such as reporting posts, down-voting, and blocking, but even with these methods in place there is still a large amount of trolling in online communities. The study uses three online discussion-based communities: CNN.com, Breitbart.com, and IGN.com. Users were categorized as Future-Banned Users (FBUs) or Never-Banned Users (NBUs). While going through the users’ behaviors, the article evaluates three main questions about how and when users start deviant behavior online. NBUs and FBUs can be analyzed to learn more about whether or not someone will be banned; features such as post content, user activity, community response, and the actions of community moderators help do this. Never-Banned Users and Future-Banned Users differ in the rate at which their posts are deleted, and depending on how quickly a user’s posts are deleted, it can be predicted whether or not the account should be taken down.

Reflection:

The actions of Future-Banned Users imply that they will produce deviant behavior, even before we label them “FBUs”. From using social media sites such as Twitter, most of the deviant content that I come across does not make much sense. The irregular tweets are usually a mix of inappropriate words, either replying to a previous tweet or posted on their own. Most Future-Banned Users write much differently than the accounts I follow, and it is easy to spot. As discussed in the article, most deviant content is concentrated within individual threads. What if the threads are private? Is there an efficient way to monitor that without intruding on someone’s privacy? I do think this is a major issue within online social communities, but it may be harder to solve because of privacy concerns.

Questions:

Is there a way to prevent deviant content that is private?

Why do people feel the need to post deviant tweets?


Reading Response 9/12

Summary

‘Antisocial Behavior in Online Discussion Communities’ should perhaps be titled ‘Negatively Social Behavior in Online Discussion Communities’ (although this may just be me not understanding the terminology perfectly). It focuses primarily on attempting to figure out why people do things that are meant to purposely hurt or instigate others in online communities. The study also sets out to differentiate itself as a quantitative study, where most studies on the subject have been qualitative. The authors used CNN, IGN, and Breitbart as their sources, mostly because these sites had large enough obtainable datasets. They then go over various predictors of antisocial behavior, such as how users write, how their writing changes over time, and whether or not their writing changes significantly if they are censored. They spend quite some time reiterating ideas they have honestly already gone over, finally ending with a discussion of how to identify future-banned users.

Reflection

It’s interesting that future-banned users tend to post completely differently than others, that is, in small, antagonistic, focused quantities. It is also a bit odd that people who post this type of material worsen over time; this may suggest that accepting them into the community might help curb the antisocial behavior. CNN bans more users than Breitbart but deletes significantly fewer posts, especially when compared to the number of posts reported as inflammatory. Some of the research isn’t particularly surprising from a logical standpoint. It turns out people who are there to be antagonistic don’t use much non-definitive language, they’re much more likely to curse, and they’re much less likely to talk in a positive fashion. Angry people also don’t write as well. Likewise, should someone have a post ‘unfairly’ censored, they are more likely to write poorly in the future, so there may be some link between post quality and general outrage at the site. A lot of the information isn’t very surprising overall; much of it makes perfect sense when you think about it (mind you, most information does once it is presented to you). Some of the research is important, though. The fact that users who will be banned in the future use much angrier or more hostile language makes basic sense, since they are often trying to inflame others. That said, the fact that future-banned users’ writing deteriorates at a faster rate over time than that of normal users is interesting.


Questions

-Do you think that acceptance or rejection plays (or would play) a role in antagonistic users’ postings?

-What line do you think must be crossed for a user to have their posts deleted? Is deleting or censoring a user ever okay?

-Why do you think a large portion of “Antisocial Users” exists? To purposely inflame others, or because they actually have major differing opinions?
