Reflection #13 – [11/29] – [Neelma Bhatti]

Reference:
Lazer, D., & Radford, J. (2017). Data ex machina: Introduction to big data. Annual Review of Sociology, 43, 19-39.
Reflection:
This article reminded me of two things: why I love sociology/psychology and why I like literature surveys. Literature surveys, sociology, and big data seem like a good amalgamation, revealing interesting findings about the literature, groups of people, and patterns, respectively.
The article more or less puts the semester-long social computing class in one place by discussing big data, social systems, bots and sockpuppets creating the illusion of the perfect user, and social contagion, to name a few topics. The authors emphasize the strength of combining sociological studies with computer science to make the best use of big data. They explore the opportunities and threats, and also provide suggestions for addressing future challenges in big data research.
The authors did a great job of summarizing almost every aspect of the opportunities and vulnerabilities associated with big data. The role of big data in transforming learning and higher education seemed to be missing, though. Big data is still a niche topic in the field of education, but governments have started to produce reports about its potential there [1].
Although at a glance most of the studies designed to gather or examine big data to extract useful patterns look intrusive (like the behavioral study in which college students were provided with cell phones so their usage data could be investigated), I believe there is a need to educate people about the greater good of collaborating by providing passive, non-sensitive data for scientific and behavioral studies. We are generally skeptical about possible privacy invasions through our phones and social media accounts, yet we also want to remain foremost at the receiving end of scientific advancements.
Just as people are willing to offer monetary help in an emergency, they should also be educated to contribute by providing data to investigate the situation. This would greatly reduce vulnerabilities associated with big data, such as errors and misinterpretations arising from self-reported data.
[1] Eynon, R. (2013). The rise of Big Data: What does it mean for education, technology, and media research?


Reflection #12 – [10/23] – [Neelma Bhatti]

  1. Bond, Robert M., et al. “A 61-million-person experiment in social influence and political mobilization.” Nature 489.7415 (2012): 295.
  2. Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences (2014): 201320040.

Reflection:

The first paper resonated well with my experience on Facebook during the 2013 and 2018 general elections in Pakistan. Election campaigns are no longer limited to the real world, and political self-reflection on Facebook is more common than ever. During the 2013 election, a certain party seemed headed for a clean sweep as reflected by user polls on Facebook, and considering how much of the youth uses Facebook, the predictions reflected by the polls seemed entirely plausible. However, the results of the election were not even remotely similar to what the polls and the overall atmosphere suggested. Some possible reasons might be:

  • Not all people who were actively involved in the election campaign on Facebook actually voted: exhibiting political enthusiasm to stay in the loop outweighed actually leaving the house to vote in long queues and scorching heat.
  • Not all users who participated in the campaign were located in the country during the election; a large number of them were overseas Pakistanis and expats. The study in this paper didn’t seem to indicate whether this fact had been taken into account.

Also, a large group of people (who don’t qualify as ‘youth’, for lack of a better term) didn’t use Facebook, or were hard to influence due to their deep-rooted beliefs. This suggested that social media doesn’t really have an impact on the voting behavior of users beyond the apparent hype. However, the 2018 general elections had a higher voter turnout and more drastic changes in voter behavior and the overall political scenario. Some observations are as follows:

  • The youngsters influenced by the social media campaign several years earlier were not only able to change their parents’/elders’ opinions over the years with logical reasoning, but also made efforts to take their old or unwell parents/grandparents to polling stations. The live stream of pictures posted by voters on social media groups had a real-time effect on procrastinators, and several users commented on how they made an effort to actually go out and vote along with their families.
  • Group members casually posting to have a laugh about spending the day chilling instead of voting were severely bashed and sent on a guilt trip by other members. A study similar to the one in the paper, relating voter turnout to posting or interaction behavior, would have been able to measure the influence of group members (beyond the user’s close friends).
  • These observations also suggest that the influence of social media on voting behavior and political mobilization might not be immediate, but has more of a slow, seeping effect on people’s minds.

The second paper induced the following thoughts about the design of the study and the results:

  • Posts were tagged positive or negative if they contained at least one positive or negative word, which may give false positives about a post belonging to either emotional category. A sentence such as “you should know how NOT to give up” might be an example (a small sketch after this list illustrates the problem).
  • What about double negatives, such as “I did not see nothing”?
  • The study didn’t take into account the frequency of positive and negative emotions expressed on the pages liked or the groups joined by the user. A similar experiment that takes these measures into account could paint a better picture of users’ posting habits and emotional states.
  • Such an experiment could also take into account the sponsored posts and the video/graphic content seen by the user.
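To make the first two points concrete, here is a minimal sketch, entirely my own illustration rather than the paper’s actual pipeline: the word lists, the `naive_tag`/`negation_aware_tag` functions, and the polarity-flipping heuristic are all assumptions for demonstration. It shows how matching a single emotion word mislabels a negated sentence, and how even a crude negation check changes the outcome.

```python
# Illustrative sketch only -- not the paper's method. Word lists and the
# negation heuristic are assumptions for demonstration.
POSITIVE = {"love", "great", "win"}
NEGATIVE = {"give up", "sad", "fail"}
NEGATORS = {"not", "never", "no"}

def naive_tag(post: str) -> str:
    """Tag a post as positive/negative if it contains at least one matching term."""
    text = post.lower()
    if any(term in text for term in NEGATIVE):
        return "negative"
    if any(term in text for term in POSITIVE):
        return "positive"
    return "neutral"

def negation_aware_tag(post: str) -> str:
    """Same matching, but flip the label if a negator word appears in the post."""
    label = naive_tag(post)
    if label != "neutral" and NEGATORS & set(post.lower().split()):
        return "positive" if label == "negative" else "negative"
    return label

print(naive_tag("you should know how NOT to give up"))           # negative (false positive)
print(negation_aware_tag("you should know how NOT to give up"))  # positive
```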

 

 


Reflection #11 – [10/16] – [Neelma Bhatti]

  • Danescu-Niculescu-Mizil, C., Sudhof, M., Jurafsky, D., Leskovec, J., & Potts, C. (2013). A computational approach to politeness with application to social factors.
  • Zhang, J., Chang, J. P., Danescu-Niculescu-Mizil, C., Dixon, L., Hua, Y., Thain, N., & Taraborelli, D. (2018). Conversations Gone Awry: Detecting Early Signs of Conversational Failure.

The authors of the first paper strive to develop a computational framework that identifies politeness, or the lack thereof, on Wikipedia and Stack Exchange. They uncover connections between politeness markers and context as well as syntactic structure to develop a domain-independent classifier for identifying politeness. They also investigate the notion that politeness is inversely proportional to power: the higher one ranks in a social (online) setting, the less polite one tends to become.

Reflection:

  • In the introduction, the authors mention their findings about established results on the relationship between politeness and gender. The paper also claims that the prediction-based interactions are applicable to different communities and geographical regions. However, I didn’t quite understand how the results relate to gender roles in determining politeness. I am also skeptical about the computational framework being applicable to different communities and geographical regions, since languages vary greatly in their politeness markers and have different pronominal forms and syntactic structures; it is also worth noting that all human annotators in this experiment were residing in the US.
  • Stemming from the above comment is another research direction that seemed interesting to me: does a particular gender tend to be politer than the other in discussions? What about incivility? Is gender also a politeness marker in such a case?
  • The authors argue that politeness and power are inversely proportional by showing the increase in politeness of unsuccessful Wikipedia editors after the elections. This somehow doesn’t seem intuitively correct. What if some unsuccessful candidates feel that the results are unjust or unfair: will they still continue being politer than their counterparts? The results seem to indicate that all such aspiring editors keep striving for the position by being humble and polite, which might not always be the case.
  • Incorporating automatic spell checking and correction, by finding word equivalents for misspelled words, could help reduce false positives in the results produced by Ling. (the linguistically informed classifier); a small sketch of this normalization step follows this list.
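A minimal sketch of that normalization idea, under my own assumptions: the marker list, the `normalize_tokens` helper, and the similarity cutoff are illustrative, not part of the Ling. classifier described in the paper. Misspelled tokens are mapped to the closest known politeness-marker word before classification.

```python
# Illustrative sketch only -- the marker list and cutoff are assumptions,
# not part of the Ling. classifier from the paper.
from difflib import get_close_matches

POLITENESS_MARKERS = {"please", "thanks", "thank you", "sorry", "would you mind"}
MARKER_WORDS = sorted({word for marker in POLITENESS_MARKERS for word in marker.split()})

def normalize_tokens(tokens):
    """Map tokens that look like misspelled marker words to their known form."""
    normalized = []
    for tok in tokens:
        match = get_close_matches(tok.lower(), MARKER_WORDS, n=1, cutoff=0.8)
        normalized.append(match[0] if match else tok)
    return normalized

print(normalize_tokens(["can", "you", "plese", "review", "this", "thnaks"]))
# ['can', 'you', 'please', 'review', 'this', 'thanks']
```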

The second paper talks about detecting the derailing point of conversations between editors on Wikipedia. Although the authors talk at length about the approaches and limitations, there did not seem to be (at least to me) a strong motivation for the work. Possible applications that I could think of are as follows:

  • Giving early warnings to members (attackers) involved in such conversations before imposing a ban. A notion of ‘justice’ can then prevail in the community, since commenters are made aware of their conduct beforehand.
  • Another application can be muting/shadow banning such members by early detection of conversations which can possibly go awry, to maintain a healthy environment in discussion communities.

 

 


Reflection #10 – [10/02] – [Neelma Bhatti]

  • Starbird, K. (2017, May). Examining the Alternative Media Ecosystem Through the Production of Alternative Narratives of Mass Shooting Events on Twitter. In ICWSM (pp. 230-239).
  • Samory, M., & Mitra, T. (2018). Conspiracies Online: User discussions in a Conspiracy Community Following Dramatic Events.

Summary:

The authors of both papers explore the alternative narratives of events such as mass shootings, bombings, and other crises, and the users who engage with them by either creating or propagating such news in the name of challenging the ‘corporate controlled media’, presenting what they think are the actual ‘facts’.

Reflection:

Humans have a tendency to seek surprises, and from my understanding it arises from the boredom induced by the ‘regular’ and ‘mainstream’ news one consumes all the time. Believing in, or at least giving a thought to, a conspiracy theory also gives a sense of being a responsible citizen who takes all news sources into account before settling on one, unlike a regular news reader who readily believes mainstream media.

Some thoughts and future research directions that came to my mind (a relevant caveat: I’m not an active Reddit user) after reading Samory et al.’s paper are as follows:

  • What insights could be gained by taking into consideration the number of users who leave the subreddit? What forces or urges them to do so? Do they grow sick of hearing the ‘alternate’ versions of theories? Do some of the ‘joiners’ join only briefly to get the alternate version of a particular event and then leave? Studying these figures could produce more insight into what goes on in these communities and how they shape over time.
  • What is the ratio of real users to bots and sockpuppets in this subreddit? Do bots and sockpuppets exist so as to project a sense of diversity onto the spreading of alternate narratives of events?
  • What other, dissimilar subreddits have these users joined, and how do those subreddits relate to each other? Qualitative analysis of such data, combined with a graph like the one in Starbird’s paper, may produce valuable insight.
  • Also, since veterans have been in the game for a long time and may identify fellow veterans (or even some converts), do they join the subreddits joined by their favourite fellow veterans/converts, perhaps purely out of interest or friendship?

Also, since I have a great interest in human psychology, in what forms people’s opinions, and in how one thought leads to another, the first paper had me thinking about the following things:

  • This is an excellent piece of work, but what about users who create their own conspiracy theories? In the case of Twitter, do such users post the theories they come up with on their own blogs?
  • How does gender play out in this whole conspiracy theory thing?
  • Do some of the conspiracy theories ever turn out to be true?
  • What is such a user’s regular source of news, and how do they end up at the conspiracy-theorist news source? Is there a pattern in how one such news source links to another, so as to lead the user into the labyrinth of conspiracy theories?


Reflection #7 – [09/18] – [Neelma Bhatti]

  1. Donath, Judith, and Fernanda B. Viégas. “The chat circles series: explorations in designing abstract graphical communication interfaces.” Proceedings of the 4th conference on Designing interactive systems: processes, practices, methods, and techniques. ACM, 2002.
  2. Erickson, Thomas, and Wendy A. Kellogg. “Social translucence: an approach to designing systems that support social processes.” ACM transactions on computer-human interaction (TOCHI) 7.1 (2000): 59-83.

Reading reflections

Both papers are fairly old, and there has been advancement in terms of social translucence since. Applications, specifically social systems, have made their communication interfaces significantly more “visible”, resulting in a more “aware” user. Examples include adaptive chat windows where one can see whether someone is typing, has read our message, or whether the message is still pending or failed to deliver. The idea that “given clues that useful knowledge is present, interested parties could request summaries of the topic, petition for admission to the community, or simply converse with some of the community members” is also effectively implemented in Facebook groups now.

The idea of digital spaces having graphical wear that shows who has been doing what while there seemed really novel. But come to think of it, the internet is a haven for introverts, and the ability to interact privately is one of the reasons why. Such participants won’t be fond of a social system maintaining their conversation history for transparency, or transforming the temporal dimension into depth.

Some thoughts while reading the papers are as follows:

  • Erickson et al. write that “in the digital world we are socially blind.” However, I tend to disagree with this statement, as we are now more socially aware in the digital world than ever. In a physical setting, it is hard to locate a restaurant, a phone booth, or a grocery store that is out of our sight unless we have been there already. The digital world not only helps us locate the service of our choice, but also helps us find alternatives, displays a list of people who have used the service and what they say about it, and flags perceived obstacles (bad weather the next day, an alternate route in effect because of construction, a traffic jam, working hours, etc.), all of which help us reach a conclusion. It not only makes us better sighted, but helps us reach a decision well ahead of time, unlike the “crowded parking lot” example quoted in the paper.
  • Users in a digital world have the liberty to initiate and carry on multiple conversations simultaneously without one interrupting the other, unlike in the real world. Having textual conversations with several people at once in a digital space doesn’t hinder communication, since voices don’t overlap; nor does it offend the participants if one turns away from them temporarily, since most of the time it is unnoticeable. It also has to do with the fact that users tend to make the most of their time in the digital world, and it doesn’t require them to be physically present in one place. Although the whole concept of depicting real-world interactions in terms of hearing range, action traces, speaking rhythms, and other behavioral representations is appealing, it only lets the user strike up one conversation at a time.

 


Reflection #6 – [09/13] – [Neelma Bhatti]

  1. Sandvig, Christian, et al. “Auditing algorithms: Research methods for detecting discrimination on internet platforms.” Data and discrimination: converting critical concerns into productive inquiry (2014): 1-23.
  2. Hannak, Aniko, et al. “Measuring personalization of web search.” Proceedings of the 22nd international conference on World Wide Web. ACM, 2013.

Summary

Both papers set out to explore the invisibility and resulting personalisation (and in some cases, discrimination) of recommendation, search, and curation algorithms.

Sandvig et al. map the traditional auditing studies for detecting racial discrimination in housing onto finding the recommendation and search bias faced by users of e-commerce, social, and search websites. Hannak et al. develop and apply a methodology for measuring personalization in the web search results presented to users, and the features driving that personalization.

Reflection:

  • Having read the “folk theories” paper [1], one can’t help but wonder whether search engines also use “narcissism” as a personalisation feature, in addition to the several features examined by Hannak et al. The narcissism itself could be based on inferring similarities between the web traces, search histories, demographics, etc. of different users.
  • It would be interesting to quantitatively assess whether the level or type of personalisation differs based on the device used to log into a certain account (be it Amazon, Google, or Facebook). I know for a fact that it does.
  • Algorithmic audits could also be used to investigate fabricated shortages of products on e-commerce websites meant to generate false “hype” around certain products.
  • As the saying goes: “If you’re not paying for it, you become the product.” So are we (the products) even eligible to question what is being displayed to us, especially while using free social media platforms? Nothing that we consume in this world, from food to news to entertainment content, comes without a cost; in the case of these services, the cost may be our personal data and our right to access information deemed irrelevant by the algorithm.
  • A seemingly harmless, in fact beneficial, personalization strategy results in a more serious problem than filter bubbles. Relevant products being shown in the news feed based on what we talk about with friends (I once mentioned going to Starbucks and had offers related to Starbucks all over my news feed) or what we see (the exact products from an aisle I stood in front of for more than five minutes showing up in my news feed) invade user privacy. If algorithmic audits need to adhere to a website’s Terms and Conditions, I wonder if any Terms and Conditions exist about not invading the personal space of a user to the point of creepiness.
  • A study to determine whether an algorithm is possibly rigged¹ could have users tweak the publicly available settings that change the working of the algorithm and see whether the results still favor the owner.

¹ which probably all algorithms are, to a certain extent (quoting “Crandall’s complaint”: “Why would you build and operate an expensive algorithm if you can’t bias it in your favor?”)

[1] Eslami, M., Karahalios, K., Sandvig, C., Vaccaro, K., Rickman, A., Hamilton, K., & Kirlik, A. (2016, May). First I “like” it, then I hide it: Folk theories of social feeds. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 2371-2382). ACM.

 


Reflection #5 – [09/10] – [Neelma Bhatti]

  • Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130-1132.
  • Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., … & Sandvig, C. (2015, April). I always assumed that I wasn’t really that close to [her]: Reasoning about Invisible Algorithms in News Feeds. In Proceedings of the 33rd annual ACM conference on human factors in computing systems (pp. 153-162). ACM.

Reading reflections:

Most of us have at some point wondered whether a friend who no longer shows up in our Facebook news feed has blocked or restricted us. At times, we forget about them until they react to some post on our timeline, bringing their existence back to notice.

People are becoming more aware that some mechanism is used to populate their news feed with stories from their friends, the groups they have joined, and the pages they have liked. However, not all of them know whether the displayed content is just randomly selected, or whether there is a more sophisticated way of not only arranging and prioritizing what is displayed but also filtering out what Facebook “deems” unnecessary or uninteresting for us, namely a curation algorithm.

  • There needs to be some randomization in what is displayed to us to break the echo chambers and filter bubbles created around us. This applies both to the news we want to read and to the stories displayed in the news feed. It is just like going to Target to get a water bottle and finding an oddly placed but awesome pair of headphones in the aisle: one might not end up buying them, but they will certainly catch one’s attention and might even lead one to the electronics section to explore.
  • As regards political news, not all people choose to read only what aligns with their ideology. Some people prefer reading the opposite party’s agenda, if only to pick points to use against the opponent in an argument, or simply to stay in the know. Personalizing the news displayed to them based on what they “like” may not be exactly what they are looking for, whatever their intention for reading that news may be.
  • Eslami et al. talk about the difference in acceptance of the new knowledge, with some users demanding to know the back story, while more than half (n=21) ultimately appreciated the algorithm. While some users felt betrayed by the invisible curation algorithm, knowing about the existence of an algorithm controlling what is displayed in their news feed overwhelmed some participants. This rings true for some elderly people who haven’t been social media users for long, or for users who are not very educated. The authors also talk about future work on determining the optimal amount of information displayed to users “to satisfy the needs of trustworthy interaction” and “protection of propriety interest”. An editable log of the changes made to the news feed content by hiding a story, or by a lack of interaction with a friend’s/page’s/group’s stories, accessible to the user only if they choose to see it, seems to be a reasonable solution to this issue.
  • I liked the clear and interesting narrative, from participant selection to data analysis, in the second paper, especially after reading the follow-up paper [1]. I do think there should have been more information about how participants reacted to stories missing from the groups they follow or the pages they’ve liked, or about the extent to which they preferred keeping them as displayed. It would have given some useful insights into their thought process (or “folk theories”) about what they think goes on with the news feed curation algorithm.

 

[1] Eslami, M., Karahalios, K., Sandvig, C., Vaccaro, K., Rickman, A., Hamilton, K., & Kirlik, A. (2016, May). First I “like” it, then I hide it: Folk theories of social feeds. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 2371-2382). ACM.

 

 


Reflection #4 – [09/06] – [Neelma Bhatti]

Kumar, Srijan, et al. “An army of me: Sockpuppets in online discussion communities.” Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2017.

Summary

Kumar et al. focus on the identification, characterization, and prediction of sockpuppets in online discussion communities. They define sockpuppets as extra accounts created and managed by a single user for influencing or manipulating public opinion, igniting debates, or vandalizing content (in the case of Wikipedia). They characterize sockpuppets as pretenders vs. non-pretenders and supporters vs. dissenters, based on their linguistic traits, online activity, and reply network structure.

Reflection

Although this study nicely situates itself in the existing body of work on deception, I felt that it does not establish a very strong motivation for being carried out.

It would also be interesting to see whether a sockpuppet account, or a pair of them, is operated by more than one person interchangeably, which would not only make the concept of a single puppetmaster imprecise, but also weaken statistics computed from the data with a single user in mind. The hypothesis of puppetmasters leading a double life reminded me of Facebook, where spouses access each other’s accounts without any problem, sometimes simply to peek into the content of ladies-only groups, and even comment or react on different posts just for fun. Although very different from the topic under discussion, it poses the question of whether a study of the online behavior of such individuals would produce accurate results, given the multiple users associated with a single account.

The authors also used IP addresses as a means to cluster different sockpuppets; I was wondering whether users logging into the social platform through proxy servers would be as easy to identify using the same study. What if the puppetmaster uses both sockpuppets and bots to steer the discussion? In such a case, the detection system could be made more robust by incorporating mechanisms that consider not only linguistic traits and activity, but also the amount of customization in the user profile and geographical metadata [1]. This would not only help detect sockpuppets, but also distinguish bots from sockpuppets.

The authors also rightly point out that a study of behavior or personality traits would add another dimension to this research. The reasons for having more than one identity online can go beyond sadism; it can also be a product of sheer boredom or done for the sake of bragging in front of friends. The puppetmaster can also create multiple identities to avenge a previous ban.

[1] Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 US Presidential election online discussion.


Reflection #3 – [09/04] – [Neelma Bhatti]

  1. Garrett, R.K. and Weeks, B.E., 2013, February. The promise and peril of real-time corrections to political misperceptions. In Proceedings of the 2013 conference on Computer supported cooperative work (pp. 1047-1058). ACM.
  2. Mitra, T., Wright, G.P. and Gilbert, E., 2017, February. A parsimonious language model of social media credibility across disparate events. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (pp. 126-145). ACM.

Summary and Reflection for paper 1

Article one talks about how people tend to be picky and choosy when it comes to rumors and their correction. They find news hard to believe if it doesn’t align with their preconceived notions, and find it even harder to make amends for proliferating false news if it does align with their agenda or beliefs. The paper presents plausible recommendations about carefully integrating corrections into the user’s view so that they are more easily digestible and acceptable. I personally related to recommendation 2, about letting users know the risks associated with hanging on to the rumor, or their moral obligation to correct their views. However, do the same user profiling and preference-guessing algorithms work across news sources other than the traditional ones, i.e. Twitter, CNN, etc.?

Since delayed correction seemed to work better in most cases, could a system decide how likely a user is to pass the news on, based on his/her profile, and present real-time corrections to users who tend to proliferate fake news faster than others, using a mix of all three recommendations presented in the paper? A rough sketch of this idea follows.
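A minimal sketch of that idea, under my own assumptions: the `UserProfile` class, the `propagation_likelihood` score (presumably derived elsewhere from profile and posting history), and the threshold are all hypothetical; the sketch only shows how such a score could select between real-time and delayed corrections.

```python
# Illustrative sketch only -- the propagation-likelihood score and threshold
# are hypothetical, assumed to be computed elsewhere from the user's profile.
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    propagation_likelihood: float  # assumed 0.0-1.0; higher = spreads news faster

def correction_strategy(user: UserProfile, threshold: float = 0.7) -> str:
    """Choose a correction timing for a flagged story shown to this user."""
    if user.propagation_likelihood >= threshold:
        # likely to reshare quickly, so show the correction immediately
        return "real-time correction"
    # otherwise use the delayed correction the paper found more effective
    return "delayed correction"

print(correction_strategy(UserProfile("alice", 0.9)))  # real-time correction
print(correction_strategy(UserProfile("bob", 0.3)))    # delayed correction
```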

 

Summary for paper 2

As long as there is a market for juicy gossip and misinterpretation of events, rumors will keep spreading in one form or another. People have a tendency to anticipate, and to readily believe, things that are either consistent with their existing beliefs or give an adrenaline rush without potentially bringing any harm to them. Article 2 talks about using language markers and cues to assess the credibility of a piece of news or its source, which, when subsumed with other approaches to classifying credibility, can work as an early detector of false news.

Reflection and Questions

  • A credibility score could be maintained and publicly displayed for each user, starting at 0 and decreasing every time the user is reported for posting or spreading misleading news (a minimal sketch of such a tracker follows this list). Can such a credibility score be used to determine how factual someone’s tweets/posts are?
  • Can such a score be maintained for news too?
  • Can a more general language model be developed, one which also takes multilingual postings into account?
  • How can the number of words used in a tweet, its retweets, and its replies be an indicator of the authenticity of a piece of news?
  • Sometimes users put emoticons/emojis at the end of a tweet to indicate satire or mockery of otherwise seriously portrayed news. Does the model include their effect on the perceived authenticity of the news?
  • What about rumors posted via images?
  • So much propaganda is spread via videos or edited images on social media. Sometimes, all the textual news that follows is the outcome of a viral video or picture circulating around the internet. What mechanism could be developed to stop such false news from being rapidly spread and shared?
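Here is a minimal sketch of the publicly displayed credibility score proposed in the first bullet, under my own assumptions: the `CredibilityTracker` class and the fixed penalty are illustrative, and the verification of reports is assumed to happen elsewhere.

```python
# Illustrative sketch only -- report verification is assumed to happen
# elsewhere; this just tracks a per-user score starting at 0.
from collections import defaultdict

class CredibilityTracker:
    def __init__(self, penalty: int = 1):
        self.penalty = penalty
        self.scores = defaultdict(int)  # user_id -> score, starts at 0

    def report_misleading_post(self, user_id: str) -> int:
        """Decrease the user's score after a verified report and return the new score."""
        self.scores[user_id] -= self.penalty
        return self.scores[user_id]

    def score(self, user_id: str) -> int:
        return self.scores[user_id]

tracker = CredibilityTracker()
tracker.report_misleading_post("user42")
tracker.report_misleading_post("user42")
print(tracker.score("user42"))  # -2
```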


Reflection #2 – [08/30] – [Neelma Bhatti]

Assigned reading: Cheng, Justin, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec. “Antisocial Behavior in Online Discussion Communities.” ICWSM, 2015.

This paper talks about trolling and undesirable behavior in online discussion communities. The authors studied data from three online communities, namely CNN, Breitbart, and IGN, to see how people behave in these large discussion-based communities, with the aim of creating a typology of antisocial behavior seen in such communities and using it for early detection of trolls. The authors used complete timestamped data of user activity and the list of all banned users on these websites over the course of a year, obtained from Disqus.

Since services like Disqus help fetch data about user profiles, it would be interesting to have users’ demographics, to observe whether age or geographical orientation has anything to do with the reported antisocial behavior. These behaviors differ greatly from one type of community to another, e.g. gaming, personal, religion, education, politics, and news communities. The patterns also vary on gender-specific platforms.

Community bias is very real. In online discussion boards or groups on Facebook, people whose opinion differs from the admin’s/moderator’s are promptly banned. A person who is fairly harmless, in fact likeable, in one group can be banned in another. It has nothing to do with their content being profane; it’s more about a difference of opinion. There are also instances where people gang up against an individual and report/harass them in the name of friendship or to gain approval from the moderator or admin of the group. Analyzing or using such data to categorize users as trolls produces inequitable results.

Some other questions which need consideration are:

  • There is a fairly large number of people (without quoting exact stats) who have more than one account on social media websites. What if a banned user rejoins the community with a different identity? Can the suggested model do early detection of such users based on historical data?
  • Does the title correctly portray the subject matter of the study? The word “antisocial” refers to someone who is uncommunicative and avoids company, but based on the results, FBUs (future banned users) tend not only to post more, but also to attract a larger audience and steer the topic to their liking.
  • Having read previous articles revolving around identities and anonymity, would the results be the same for communities with anonymous users?
  • How can we control trolling and profanity in communities such as 4chan, where the actual point of unison among seasoned (but anonymous) users is trolling new users and posting explicit content?
  • The authors also try to assess whether excessive censorship makes users hostile or antisocial. Considering real-life instances such as a teacher calling out and scolding a student for someone else’s mistake, would some users be likely to refrain from posting altogether once they have been treated unfairly?