Video Reflection #9 – [09/27] – [Bipasha Banerjee]

The video we had been assigned, by Dr. Natalie Stroud, was about partisanship and the search for engaging news. We were exposed to the filter bubble effect in the previous readings, and this video gave another perspective on selective exposure, moral foundations, and the stereotype exposure model. News organizations these days are trying to engage with the audience, and yes, they do succeed in affecting the audience’s thought process. At least in India, political election results are often determined by the party the dominant news channel of the region supports. This support has led to stable parties being toppled by newly emerging political parties. I am focusing here on the influence of media and other outlets on the audience.

My major concern is that a newcomer to any community should have all the information before deciding whether to believe or oppose a particular ideology. This would make sure that the person does not unknowingly start following and believing the ideology. There is something known as a Media Bias Chart [1], which places all major media outlets on a chart based on their liberal or conservative stance. Similarly, a “bias index” could be created that would determine, from past history, the tendency of a particular media outlet to be biased, and not only in terms of political belief. It is a way to ensure that people are informed about the stance of the news outlet. Other affiliations should also be made transparent. Social media platforms like YouTube, Twitter, and Facebook have incorporated a sort of #Ad feature; this helps the user of the platform form an informed judgement.

Journalism should be unbiased; however, as the speaker pointed out, the factor of “What sells?” plays an important role. This determines which articles are promoted, what becomes breaking news, etc. It is true that human nature demands controversy; negative publicity is what is popular and is received well by a certain sector. So, to analyze how information is perceived by the audience, a live voting mechanism could be incorporated to measure people’s perception, something similar to the feedback model of Facebook Live, but for news outlets. It would be a simple upvote/downvote system, where users text in their “Yes” or “No” opinion and the vote counts change accordingly. This would help formulate the user rating into an index; let’s call it the “perception index”. It would give a comprehensive idea of what the audience feels about the media outlet. This would apply to live television broadcasts and digital media, not print media. Together, the perception index and the bias index would be a good measure of the partisanship of a news outlet.
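As a rough sketch of what I have in mind (my own illustration, not an existing system; the class name and the neutral default of 0.5 are assumptions), the vote tallying could be as simple as:

```python
from dataclasses import dataclass

@dataclass
class VoteTally:
    """Running tally of live 'Yes'/'No' votes for one news outlet."""
    yes: int = 0
    no: int = 0

    def add_vote(self, text: str) -> None:
        # Viewers text in "Yes" or "No"; anything else is ignored.
        vote = text.strip().lower()
        if vote == "yes":
            self.yes += 1
        elif vote == "no":
            self.no += 1

    def perception_index(self) -> float:
        """Share of positive votes, in [0, 1]; 0.5 (neutral) if nobody has voted yet."""
        total = self.yes + self.no
        return self.yes / total if total else 0.5

# Example: reactions texted in during a live broadcast.
tally = VoteTally()
for message in ["Yes", "no", "YES", "maybe", "No"]:
    tally.add_vote(message)
print(round(tally.perception_index(), 2))  # 0.5 (2 yes, 2 no, 1 ignored)
```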

The bias index would be computed from prior data (considering certain parameters for determining bias). The bias parameters should be known and available to the audience, both to avoid prejudice in determining the index and to promote awareness. The perception index, on the other hand, is an indicator of what is perceived at the moment rather than in the past. This would keep things transparent, as it would not rely on predefined parameters. Considering both indexes, a total index can be calculated. I believe that this sort of model would encourage people to get out of the filter bubble and be more aware.
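The two indexes could be combined into the total index with a simple weighted average. Here is a minimal sketch, assuming both indexes are normalized to [0, 1] with higher meaning “more trustworthy”, and that the equal weights are a free design choice rather than a fixed prescription:

```python
def total_index(bias_index: float, perception_index: float,
                w_bias: float = 0.5, w_perception: float = 0.5) -> float:
    """Weighted combination of the historical bias index and the live
    perception index. Both inputs are assumed to lie in [0, 1], oriented so
    that higher means 'more trustworthy'; the weights are illustrative only."""
    assert abs(w_bias + w_perception - 1.0) < 1e-9, "weights should sum to 1"
    return w_bias * bias_index + w_perception * perception_index

# e.g. an outlet with a fairly clean history but a poor live reception
print(round(total_index(bias_index=0.8, perception_index=0.3), 2))  # 0.55
```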

[1] https://www.adfontesmedia.com/media-bias-chart-3-1-minor-updates-based-constructive-feedback/


Reflection #8 – [09/25] – [Bipasha Banerjee]

[1] Garrett, R. Kelly (2009) – “Echo chambers online? Politically motivated selective exposure among Internet news users” – Journal of Computer-Mediated Communication.

[2] Resnick, Paul (2013) – “Bursting Your (Filter) Bubble: Strategies for Promoting Diverse Exposure” – Proceedings of CSCW ’13 Companion, Feb. 2013.

Summary

The topic for today’s discussion is polarization and selective exposure, which means people are exposed to a certain kind of news and are unaware of what is going on beyond that “exposure bubble”. The selective exposure offered by a filter bubble is a result of personalized search avoiding diversified and conflicting information. These papers survey the population based on their political beliefs. Previous research reveals that internet news users mostly read and watch news that conforms with their political beliefs (opinion reinforcement) rather than opinions that challenge those beliefs. However, people’s use of the internet to acquire political knowledge has raised concerns regarding the selective exposure problem. The authors conducted a web-administered behavior-tracking study to assess how one’s political attitude influences their use of online news. The study used five hypotheses (H1 to H5) based on existing research on exposure processes. It found that people are more inclined to look at, and spend more time reading, news stories containing more opinion-reinforcing information, and that they are less likely to look at, and be influenced by, opinion-challenging information. The last hypothesis concluded that a person will spend more time reading a story that contradicts or challenges their opinion, the reasoning being that the longer the exposure, the greater the opportunity for the person to scrutinize and refute the challenging arguments.

 

Reflection

We were first exposed to the filter bubble effect in the paper by Hannak et al. [3]: a user is exposed to information that is personalized based on their search history, possibly as a result of algorithmic bias. The echo chamber scenario arises mainly from web personalization. A user is exposed to beliefs and news that are relevant to their search history and, de facto, their personal beliefs. It was interesting to learn that in the political domain, people also spend time researching politically challenging material in order to critique the opposing idea further. Thus, being exposed to varied ideas helps break the filter bubble, yet people are not likely to alter their beliefs. The need for algorithmic audits is becoming clear to me.

Another area where selective exposure may have a much larger negative impact is the health sector. Be it beliefs about vaccination or other health procedures, if people who oppose them are exposed only to data that supports their hypothesis, the consequences may be severe. Similar to the example provided in the text, even if these groups are exposed to theories contrary to their beliefs, they might use the exposure as a form of opinion reinforcement. How can we make sure that, along with breaking the filter bubble, the person also becomes open to new thoughts?

[3] Hannak, Aniko et al. (2013) – “Measuring Personalization of Web Search” – Proceedings of the International World Wide Web Conference Committee (527-537).


Reflection #7 – [09/18] – [Bipasha Banerjee]

[1] Erickson, Thomas et al. (2000) – “Social Translucence: An Approach to Designing Systems that Support Social Processes” – ACM Transactions on Computer-Human Interaction (59-83).

[2] Donath, Judith and Viégas, Fernanda (2002) – “The Chat Circle Series”- DIS 2002, London

Summary

This week’s reading was on the design of social systems. There are certain properties of the physical world that enable human-to-human collaboration. In the digital world, however, there are substantial shortcomings hindering long-running, productive communication, because digital systems are opaque. Social translucence is a design approach intended to make socially significant information visible within digital systems. The authors try to implement translucence in the digital world, and for that they introduce the Babble prototype, which uses textual and graphical representations to make digital information more transparent. The second paper discusses various graphical chat models that could do away with the drawbacks of simple textual chat. The authors introduce the Chat Circles series, essentially a graphical model representing users in order to enhance social interaction. It uses 2D graphics in which a user’s words appear in circles that brighten and grow to accommodate the message. They introduce several other models with additional interface elements, primarily based on the Chat Circles series, namely Chat Circles II, Chatscape, and Tele-direction. Both papers aim at making digital design interactive and graphical in order to integrate well with human behavior.

Reflection

While reading both papers, I noticed that they talk about the importance of graphical representation and of integrating social digital interaction with human behavior in order to make the system translucent rather than opaque. One thing both papers have in common is that they were published in the early 2000s, when chat forums were in their nascent stage. Moreover, the data they worked on was likely from the late 90s. I believe the internet back then was not a very harmful place, and people followed basic etiquette. A graphical representation designed to exude emotion, as in face-to-face interaction, could have a far more negative impact in today’s communities.

The authors of the first paper say, “we believe that digital systems can become environments in which new social forms can be invented, adopted, adapted, and propagated”. This seemingly harmless statement can be interpreted differently in different contexts. These sorts of interactive systems can give rise to negative consensus, cyberbullying, etc. Moderation needs to be in place to monitor activity, and if a particular person is being targeted by the group, justified or not, someone should be held accountable. Privacy and security are far more relevant in today’s internet culture. Gone are the days when a community consisted purely of people who believed in the cause; sockpuppets, spammers, impersonators, and trolls are all common in today’s social media.

The second paper talks about creating a graphical chat interface in the form of chat circles. The idea is novel, and I believe it would improve social interaction and make it enjoyable. However, this approach would also require a centralized authority monitoring activity and addressing the common problems faced by social media. If a group is filled with malicious users, this method would become difficult to interact with. Today’s social media does a reasonable job of maintaining accountability and security: a Facebook or a Twitter profile is visible to all, and the settings can be changed accordingly. The integration of emoticons, GIFs, etc. does help express emotion more effectively than before and makes chat interesting.


Reflection #6 – [09/13] – [Bipasha Banerjee]

Readings assigned

[1] Sandvig, Christian et al.  (2014) – “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms” – Paper presented at the “Data and Discrimination: Converting Critical Concerns into Productive Inquiry,” a preconference at the 64th Annual Meeting of the International Communication Association.

[2] Hannak, Aniko et al. (2013) – “Measuring Personalization of Web Search” – Proceedings of the International World Wide Web Conference Committee (527-537).

Summary

The readings assigned to us mainly talk about algorithmic audits. The first paper discusses how algorithms can be rigged and used to favor or bias results in many instances. The authors discuss the example of SABRE, the computer system created by American Airlines and IBM to ease airline reservations. It was later found that the results were often biased, with American Airlines flights given unfair priority in the search results. This led the government to intervene and make the system more transparent and accessible to other airlines when information was sought. Algorithms can also be gamed, which is how the “reply girls” of YouTube became popular. The authors propose five different algorithmic audit methods along with their respective effectiveness and limitations. The second paper, on the other hand, focuses mainly on the effect that personalisation of an account or profile has on web search results. The authors conducted an experiment that examined features like basic cookie tracking, the browser user-agent, geolocation, and Google account attributes. They recruited Amazon Mechanical Turk workers and observed that about 11.7% of the search results differed due to personalisation.

Reflection

The need to understand how algorithms are designed and how they affect our interactions on social media, and on the internet in general, is immense. The paper by Motahhare Eslami [3] discussed how algorithms are in place to provide a personalized news feed on Facebook. Algorithms are used in all possible ways, be it sorting search results, prioritizing results, filtering, etc. However, to those outside the particular company, the term algorithm is just another ambiguous one. The non-disclosure clauses of many companies keep these systems mysterious to the outside world.

The authors of the first paper highlighted several algorithmic audits, namely the code audit, the non-invasive user audit, the scraping audit, the sock puppet audit, and the crowdsourcing audit, and they mentioned the challenges of each method. In my opinion, looking at only one method is not optimal. If we combine two or more audit methods, give a weight to each method (a sort of weighted mean), and at the end compute an “audit score”, the weight assigned to each audit can be adjusted depending on the priority of that audit method. The audit score thus generated would give a comprehensive idea of whether the algorithm in place is “rigged” or not. Algorithmic audits can also be used to depict the fairness of businesses [4].
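A minimal sketch of this weighted “audit score”, where the method names, weights, and per-method scores are all hypothetical placeholders rather than anything taken from the paper:

```python
def audit_score(scores: dict, weights: dict) -> float:
    """Weighted mean of per-method audit scores.

    `scores` maps each audit method to a score in [0, 1] (1 = no evidence of
    rigging found); `weights` reflects how much priority we give each method.
    Both dictionaries below are illustrative, not taken from the paper."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Hypothetical example combining two of the audit methods discussed.
scores = {"sock_puppet_audit": 0.6, "crowdsourcing_audit": 0.8}
weights = {"sock_puppet_audit": 2.0, "crowdsourcing_audit": 1.0}
print(round(audit_score(scores, weights), 2))  # 0.67
```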

The second paper talks about the ways in which search results might be altered. Although personalisation can alter search results, the majority of results were found to be unaltered, which is quite surprising to me. I believe a global data sampling would depict a completely different story, and comparing the two would give a better picture of exactly how personalisation is taken into account. Web search personalisation is an important and effective way to ensure a great user experience; however, the user needs to be aware of exactly how and where their data is being used. This is where companies need to be open and transparent.
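To make “results being different” concrete, here is a hedged sketch of one simple way to compare a personalized result list against a control list, position by position; the paper’s own metrics may differ, and this is only one of several reasonable measures:

```python
def fraction_differing(personalized, control):
    """Fraction of rank positions at which two result lists disagree.
    This is only one simple way to quantify personalization; overlap
    measures such as the Jaccard index are also commonly used."""
    n = min(len(personalized), len(control))
    if n == 0:
        return 0.0
    differing = sum(1 for a, b in zip(personalized, control) if a != b)
    return differing / n

# Toy example: two adjacent results are swapped, so 2 of 4 positions differ.
print(fraction_differing(["a.com", "b.com", "c.com", "d.com"],
                         ["a.com", "c.com", "b.com", "d.com"]))  # 0.5
```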

[3] Eslami, Motahhare et al. – “‘I always assumed that I wasn’t really that close to [her]’: Reasoning about invisible algorithms in the news feed”

[4] https://www.wired.com/story/want-to-prove-your-business-is-fair-audit-your-algorithm/


Reflection #5 – [09/11] – [Bipasha Banerjee]

Readings Assigned

  1. Bakshy, Eytan et al. (2015) – “Exposure to ideologically diverse news and opinion on Facebook” – Science, 5 Jun 2015: Vol. 348, Issue 6239, pp. 1130-1132. DOI: 10.1126/science.aaa1160
  2. Eslami, Motahhare et al. (2015) – “‘I always assumed that I wasn’t really that close to [her]’: Reasoning about invisible algorithms in the news feed” – Proceedings of CHI ’15 (153-162)

 

Summary

The article and the paper both discuss how the Facebook news feed is curated by an algorithm. The first article, published in Science, talks about how a Facebook user is exposed to ideologically diverse news and opinion; the authors conclude that social media does in fact expose users to ideologically diverse viewpoints. The authors of the second paper conducted a Facebook news feed case study comprising 40 Facebook users from diverse social and economic backgrounds. They classified the users as “aware” or “unaware” based on their knowledge that Facebook’s news feed is curated by an algorithm. The theme of this week’s reading was how users are influenced by the news feed, how they react upon learning about the existence of the algorithm, and their satisfaction level after being given a chance to see their unfiltered feed. Overall, it was found that the users were ultimately satisfied with the way the news feed was curated by the algorithm. All of them became more aware of the algorithm, and the experiment changed the way they used and interacted with posts, as that decision was now a much more informed one.

Reflection

It is true that computer algorithms expose us to content that sometimes influences our ideology, beliefs, and the way we perceive an issue. The only question that comes to my mind is: are algorithms reliable? We know that algorithms are used in almost everything we do on the internet. I read an article on how United Airlines had overbooked a flight [1], which led to more people possessing tickets than the number of seats available. Upon discovery of this, a passenger was removed from the plane by force. The reason the airline gave was that an algorithm had sorted through the passenger list, taking in parameters like the price of the ticket, whether the passenger was a frequent flyer, and the time of their check-in, and had output that the passenger who was removed was the one “least valuable” to them.

Additionally, algorithms are used profusely in every aspect of the internet, and social media news feeds are curated. One thing that companies could do to improve user awareness of these algorithms is simply to inform users about them. I do not mean the endless “terms and conditions”. What I mean is that, just as Facebook reminds one of memories and birthdays, it could remind or notify users about the algorithm. Since social media users vary in educational status and background, and not all come from a computer science background, it is the responsibility of the company to make sure users are “aware”.

Moreover, companies could also give users more flexibility in filtering the feed. I know such controls are already in place, but they are a bit ambiguous: they depend on users indicating their preference against each post. Such filtering should be made more user-friendly and easier. Similar to how we filter Amazon search results, something like that could be implemented globally on the homepage, not just against each post. The feed could be chronological by default, with customization on demand. Facebook in particular starts showing posts related to what we have recently liked or visited; this leads to the feed being monopolized by certain posts, which is one of the main reasons I am repelled by the platform. Advanced filtering settings could include these parameters as well, allowing users to customize rather than the algorithm choosing for us.

[1] https://99percentinvisible.org/episode/the-age-of-the-algorithm/


Reflection #4 – [09/06] – [Bipasha Banerjee]

Kumar, Srijan et al. (2017)- “An Army of me: Sockpuppets in Online Discussion Communities” – Proceedings of the International World Wide Web Conference Committee (IW3C2). ACM 978-1-4503-4913-0/17/04 http://dx.doi.org/10.1145/3038912.3052677

Summary

The authors mainly discuss how sockpuppets (a user account controlled by a person who owns at least one other account) engage in online discussions. They found that sockpuppets tend to write differently from normal users in general and use more first-person pronouns. To remove false positives while identifying sockpuppets, IP addresses shared by many users were discarded. The authors also introduced the Kmin concept to identify sockpuppet posts that are close in time and similar in length.

Sockpuppetry across nine discussion communities was studied, and it was found that these accounts are created early in a user’s lifespan, which in turn suggests they are not a consequence of the user’s social interaction in the community. Sockpuppets are more likely to swear and discuss controversial topics, and they use fewer parts of speech. They tend to generate a lot of communication and are treated harshly by the community, often receiving downvotes and at times being blocked by moderators. Sockpuppets owned by the same puppetmaster tend to contribute similar content, and those working in pairs try to increase each other’s popularity.

Reflection

This paper emphasizes distinguishing sockpuppets from normal users and gives us a way to understand sockpuppets in depth. They essentially differ from the antisocial users in online discussion communities discussed by Justin Cheng et al. [1]. Sockpuppets can be both pretenders and non-pretenders, which suggests that a part of the sockpuppet group is not trying to deceive; they created multiple accounts (often with similar usernames) to post differently in various discussion communities. Antisocial trolls, however, generally create accounts with the negative intention to disrupt. I believe that the non-pretenders, since some have similar usernames, are not trying to hide and are benign in their intention of creating a different account.

The most important concept that comes to my mind is a form of central digital authority that can moderate online accounts across the internet. (I know, it sounds a bit too ambitious. But please, hear me out!) In recent years India has introduced the Aadhaar card (the Indian take on the SSN). Since last year, the government has been trying to link all mobile accounts, bank accounts, and mobile wallets (Amazon Pay etc.) to the unique Aadhaar number to ensure authenticity. However, I would not recommend using the same ID for online purposes as well; once the online account is hacked, a person’s identity can easily be stolen. Instead, some kind of online digital signature could be introduced. This seems similar to Twitter’s and YouTube’s verified (blue tick) concept, but I want to emphasize that it should be central, with the same “digital signature” used across all kinds of social media, discussion forums, etc. A central authority would need to govern this digital signature generation, and the verification could be applied when a person first opens an email account. This way the account is linked virtually to a person, and impersonation, sockpuppetry, etc. would become significantly more difficult.

References

[1] Cheng, Justin et al. (2015) – “Antisocial Behavior in Online Discussion Communities”- Proceedings of the Ninth International AAAI Conference on Web and Social Media (61-70).


Reflection #3 – [09/04] – [Bipasha Banerjee]

Today’s topic of discussion is Credibility and Misinformation online.

Mitra, Tanushree et al. (2017) – “A Parsimonious Language Model of Social Media Credibility Across Disparate Events”- CSCW 2017 (126-145).

Summary

The paper mainly focuses on establishing the credibility of news across social media. The authors identified 15 theoretically grounded linguistic dimensions and used the CREDBANK corpus to construct a model that maps language to perceived levels of credibility. Credibility has been broadly described as believability, trust, and reliability, along with other related concepts; the term has been treated as either subjective or objective depending on the researcher’s area of expertise. CREDBANK [1] is a corpus of tweets, topics, events, and associated human credibility judgements, with credibility annotations on a 5-point scale (-2 to +2). The paper dealt with the perceived credibility (annotations marked “Certainly Accurate”) of the reported Twitter news for particular events. The proportion of such annotations was calculated as Pca = (“Certainly Accurate” ratings of an event) / (total ratings for that event). An event was rated “Certainly Accurate” if its Pca belonged to the “Perfect Credibility” class (0.9 ≤ Pca ≤ 1), and all events were assigned a credibility class from Low to Perfect (ranked Low < Medium < High < Perfect). The linguistic dimensions were treated as potential predictors of perceived credibility; the candidate credibility markers were modality, subjectivity, hedges, evidentiality, negations, exclusions and conjunctions, anxiety, positive and negative emotion, boosters, capitalization, quotation, questions, and hashtags. Nine control variables were used, namely the number, average length, and number of words of original tweets, retweets, and replies. The regression technique used an alpha (= 1) parameter to determine the distribution of weight amongst the variables. It was found that retweets and replies with longer message lengths were associated with higher credibility scores, whereas a higher number of retweets was correlated with lower credibility scores.
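As a small illustration of the Pca calculation and the class assignment, here is a sketch in which only the Perfect boundary (0.9 ≤ Pca ≤ 1) comes from the summary above; the other cut-offs are assumptions made purely for illustration:

```python
def pca(certainly_accurate: int, total_ratings: int) -> float:
    """Proportion of 'Certainly Accurate' annotations for an event."""
    return certainly_accurate / total_ratings

def credibility_class(p: float) -> str:
    """Map Pca to a credibility class. Only the Perfect range (0.9 <= Pca <= 1)
    is stated above; the other boundaries here are illustrative guesses."""
    if p >= 0.9:
        return "Perfect"
    if p >= 0.8:
        return "High"    # assumed cut-off
    if p >= 0.6:
        return "Medium"  # assumed cut-off
    return "Low"

# Example: 27 of 30 annotators rated the event 'Certainly Accurate'.
p = pca(27, 30)
print(round(p, 2), credibility_class(p))  # 0.9 Perfect
```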

Reflection

It has become increasingly common for people to experience news through social media, and with this comes the problem of the authenticity of that news. The paper dealt with a few credibility markers that assess the credibility of a particular post; it spoke about the variety of words used in a post and how they are perceived.

Firstly, I would like to point out that certain groups have their own jargon: millennials speak in a specific way, and a medical professional may use specialized language. Such language may be perceived as negative or dubious, which may in turn reduce the credibility score. Does the corpus include a variety of informal terms and group-specific language to avoid erroneous results?

Additionally, a statement in the paper says, “Moments of uncertainty are often marked with statements containing negative valence expressions.” However, negative expressions are also used to describe unfortunate events. Take the example of the missing plane MH370: people are likely to use negative emotion while tweeting about that incident, yet this certainly does not make the news uncertain or less credible.

Although this paper dealt with the credibility of news in the social media realm, namely Twitter, credibility of news is a valid concern for all forms of news sources. Can we apply this to television and print media as well? They are often accused of reporting unauthenticated news or even being biased in some cases. If a credibility score of such media were also measured, beyond the infamous “TRP” or rating, it would push these outlets to be credible as well. It would force news agencies to validate their sources, and the index or score would also help readers or followers of the network judge the authenticity of the news being delivered.

[1] Mitra, Tanushree et al. (2015) – “CREDBANK: A Large-scale Social Media Corpus With Associated Credibility Annotations” – Proceedings of the Ninth International AAAI Conference on Web and Social Media (ICWSM 2015).


Reflection #2 – [08/30] – [Bipasha Banerjee]

The paper for today’s discussion:

Cheng, Justin et al. (2015) – “Antisocial Behavior in Online Discussion Communities”- Proceedings of the Ninth International AAAI Conference on Web and Social Media (61-70).

Summary

The paper mainly focuses on analyzing antisocial behavior in large online communities, namely CNN.com, Breitbart.com, and IGN.com. The authors describe undesirable behavior such as trolling, flaming, bullying, harassment, and other unwanted online interactions as antisocial behavior. They categorize users who display such unwanted attitudes into two broad groups: Future-Banned Users (FBUs) and Never-Banned Users (NBUs). The authors used statistical modelling to predict which individual users will eventually be banned from the community. They collected data from the above-mentioned sites via Disqus for a period of about 13 months and based their measure of undesirable behavior on the posts that were deleted by moderators. The main characteristics of FBU posts are:

  • They post more than an average user would and contribute more posts per thread.
  • They generally post off-topic content, often expressing negative emotions.
  • Their posting quality decreases over time, possibly as a result of censorship.

It was also found that community tolerance changes over time, with the community becoming less tolerant of a user’s posts as time goes on.

The authors further classified FBUs into Hi-FBUs and Lo-FBUs, the names signifying the amount of post deletion that occurs. It was found that Hi-FBUs exhibit strong antisocial characteristics and their post deletion rates are consistently high, whereas for Lo-FBUs the post deletion rate stays low until the second half of their lives, when it rises; Lo-FBUs start to attract attention (the negative kind) later in life. Several groups of features were established in the paper for identifying antisocial users, namely post features, activity features, community features, and moderator features. Using these, the authors were able to build a system that identifies undesirable users early on.
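As a hedged sketch of what such an early-warning system could look like, here is a toy classifier built from one made-up feature per feature group; the feature names, values, and the choice of logistic regression are my own assumptions, not the paper’s actual model:

```python
# Hedged sketch: predicting future-banned users (FBUs) from one hypothetical
# feature per feature group. Values and model choice are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [post_readability, posts_per_thread,
#           fraction_posts_deleted, fraction_posts_downvoted]
X = np.array([
    [0.9, 1.2, 0.00, 0.05],   # well-behaved user
    [0.8, 1.5, 0.02, 0.10],
    [0.4, 4.0, 0.30, 0.60],   # user showing FBU-like signals
    [0.3, 5.5, 0.45, 0.70],
])
y = np.array([0, 0, 1, 1])    # 1 = eventually banned (FBU), 0 = never banned

model = LogisticRegression().fit(X, y)

# Score a new user's first few posts to flag them for moderator attention.
new_user = np.array([[0.5, 3.8, 0.25, 0.50]])
print(model.predict_proba(new_user)[0, 1])  # estimated probability of FBU
```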

Reflection.

This paper was an interesting read on how the authors conducted a data-driven study of antisocial behavior in online communities. The paper on identity and deception by Judith Donath had introduced us to online “trolls” and how their posts are not welcomed by the community, sometimes even leading system administrators to ban them. This paper delved further into the topic by analyzing the types of antisocial users.

One issue that comes to my mind is: how are moderators going to block users when the platform is anonymous? The paper on 4chan’s popular board /b/, which was also assigned as a reading, focused on the anonymity of users posting on threads and noted that much of the site attracted antisocial behavior. Is it possible to identify such users and ultimately block them from posting profanity on anonymous platforms?

One platform where I have witnessed such unwanted comments is YouTube. The famous platform by Google has a comment section where anyone with a Google account can post their views. I recently read an article, “Text Analysis of YouTube Comments” [1], which focused on videos from a few categories like comedy, science, TV, and news & politics. It was observed that news- and politics-related channels attracted the majority of the negative comments, whereas the TV category was mostly positive. This leads me to think that the subject of discussion matters as well. What kinds of topics generate the most antisocial behavior in discussion communities?

Social media in general has now become a platform for cyberbullying and unwanted comments. If these users and their patterns are detected, and such comments are automatically filtered out as “antisocial”, it would be a huge step in the right direction.

[1] https://www.curiousgnu.com/youtube-comments-text-analysis

 


Reflection #1 – [08/28]- [Bipasha Banerjee]

The main topic of discussion for today’s reflection is identity, deception, and anonymity. The papers assigned are:

  1. Donath, Judith S. (1999) – “Identity and Deception in the Virtual Community” – book chapter from “Communities in Cyberspace” (29-59).
  2. Bernstein, Michael S. et al. (2011) – “4chan and /b/: An Analysis of Anonymity and Ephemerality in a Large Online Community”. Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media (50-57).

Summary.

The first paper by Judith S. Donath talks about the identity and deception that are prevalent in the online community; for example, a person can claim to be an expert on a matter or can falsely impersonate someone else, and so on. The paper mainly focuses on the Usenet newsgroups, which are predominantly non-fiction-based virtual communities. It discusses how the reliability of a post is closely tied to the writer’s credibility, and how information from a post (e.g., the writer’s email address, the language and tone of the post itself, the signature) can be used to infer various attributes about the author, such as location, organization, and gender. Donath also describes how trolls are common in these forums, and how users have often contacted system administrators to act against such offenders.

The second paper focuses mainly on the anonymity and ephemerality of posts in the large online community of 4chan, and on its most popular board, /b/. The authors conduct two kinds of studies: one on the ephemerality of the posts themselves, and one on the identity and anonymity of users, to understand their effects. The work gives an idea of what kind of content the community wants, which results in a post having a relatively longer life and even being re-posted later on. The concepts of “bumping” and “sage” are described, which give users some control over the ephemerality of posts. It was found that over 90 percent of the posts on /b/ were completely anonymous; email signatures were also uncommon, with 98.3% of posts not containing an email, and only 0.05% of posts used tripcodes or pseudonyms.

Reflection.

The first paper gives an idea of how identity plays an important role in the virtual community. It also points out ways by which one can get a sense of whether a post is trustworthy or just a “troll”. One thing I could relate to right away is how I evaluate articles and posts on forums like Quora, Twitter, mac-forums, or Stack Overflow in a very similar way. I have noticed that I tend to look at the person’s name, email, and description. The blue tick on Twitter, the name and description on Quora, the tag attached to the author on mac-forums (Administrator, Moderator, Member, Premium Member, etc.), and the number of upvotes (or the green tick) on a Stack Overflow post make me decide whether I want to believe or follow the particular article. It is true that sometimes it turns out to be dubious and I am directed down completely the wrong route, which leads me to believe that trusting such users based only on signatures and other attributes can be erroneous.

The second paper mainly highlights how common anonymous posting is in the virtual world. There is a need for anonymous posting: people seeking help or advice without giving up their identity can benefit from it, and it helps keep certain sensitive matters private. On the other hand, in the absence of any form of authentication, a suspicious individual can exploit the system due to the lack of accountability. It was pointed out by the authors that /b/ is “crude”, with content often being “intentionally offensive”, because of its anonymous nature. The main point that stands out to me here is that when it comes to valuable information, user identification is of greater importance, and the worth of the information is related to the credibility of the one posting it.

The question of ephemerality is the one that concerns me the most. Although /b/ is ephemeral by design (a post is automatically deleted once it reaches the fifteenth page), today’s social media platforms are not ephemeral in general. Facebook keeps reminding me what I did 4 years ago or about my nth friendship anniversary with someone. This suggests that all our details and data are stored, analyzed, and later utilized to give us a personalized feed. Google’s “My Activity” records everything, even questions asked of a Google Home speaker. The concept of the “digital footprint” thus arises.
