Reflection #10 – [10/02] – [Vibhav Nanda]

Readings:

[1] Examining the Alternative Media Ecosystem through the Production of Alternative Narratives of Mass Shooting Events on Twitter

Summary:

In this paper the author examines the ecosystem around fake news/alternative news/conspiracy theories. To study this ecosystem, the author took an interpretivist approach — "blending qualitative, quantitative and visual methods to identify themes and patterns in the data." The data was collected from the Twitter Streaming API over a 10-month period, tracking words that could indicate a shooting — such as gunman, shooter, shooting, etc. — resulting in 58M tweets. An extremely high tweet count for a single topic was the result of three high-profile mass shootings — Orlando (FL), Munich (Germany), and Burlington (WA). To extract tweets related to alternative narratives, the author used keywords like false flag, crisis actor, hoax, etc., resulting in 99,474 tweets. After collecting the tweets, the author carried out a great deal of classification of the accounts and of the domains that the shared links point to, and created a network graph to fully understand the alternative news ecosystem. Interestingly enough, the author found thematic similarities between conspiracy theories and alternative narratives of real-life events.

Reflection/Questions:

This was an extremely interesting read for me, as conspiracy theories are my guilty pleasure, but only for entertainment reasons. 58M tweets were collected relating to shootings, yet only 99,474 were identified as being related to alternative narratives. Seeing how only an extremely small percentage (around 0.17%) were related to conspiracy theories, I would say this is not an epidemic YET. Whilst reading this paper, I started thinking about possible solutions or tools to raise awareness amongst readers without banning/blacklisting the websites/pages/users who indulge in such activities. I came up with the following system.

New Design:

Background: I would say there is a difference between information and news. Information includes both opinions and facts and may or may not be verified; news (in its typical sense) is only facts and is verified from multiple sources. Stemming from this difference, citizens should be allowed to freely disseminate all the information they want; however, they should not be allowed to disseminate news. Only authenticated citizens should be allowed to disseminate news — we can call them e-journalists. The same goes for websites and for pages on Facebook (and other social media websites). The system I am going to outline only focuses on websites.

Assumption: The user is male, and the platform under discussion is Twitter (the design can be scaled to other platforms as well).

Explanation of the system: The system has multiple facets to it; a minimal sketch of the point mechanics follows the list below.

A) Each time a user cites an authenticated website as a news source in his tweet, he gets some reward points (for being a responsible citizen). Each time a user cites an unauthenticated website as a news source, he gets penalized. If the user ends up at 0 points, he will not be allowed to cite any more unauthenticated websites until he gains some points by citing an authenticated source first. Let's call this point system "Karma points."

B) When the user posts a link to an unauthenticated website as a news source, he will get a warning pop-up window when he presses the tweet button. The pop-up will let him know that the news source is not authenticated, could include some more discouraging language, and will then have a confirm button — which will allow him to cite the website anyway. When the tweet is finally posted, it will carry a warning label next to it, letting other users know that this specific tweet has cited an unauthenticated website as its news source. This should discourage the user from citing such websites in the first place.

C) When a different user (the reader) clicks on the unauthenticated website, he will also get a pop-up warning saying that he is "about to enter a website that is not identified as an authentic news source." He would have to click the confirm button to move forward. The reader's Karma points will remain unaffected.
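Here is the promised minimal sketch, in Python, of the Karma-point mechanics in facets (A) and (B). Everything in it is a hypothetical illustration: the whitelist, the point values, and the KarmaLedger name are my assumptions, not part of any real platform API.

    # Hedged sketch of facets (A)-(B); all names and values are illustrative.
    AUTHENTICATED_SOURCES = {"apnews.com", "reuters.com"}  # placeholder whitelist
    CITE_REWARD = 1    # assumed reward for citing an authenticated source
    CITE_PENALTY = 1   # assumed penalty for citing an unauthenticated source

    class KarmaLedger:
        def __init__(self, starting_points: int = 10):
            self.points = starting_points

        def record_citation(self, domain: str) -> bool:
            """Apply the reward/penalty rule for one cited domain.
            Returns True if the citation is allowed to go through."""
            if domain in AUTHENTICATED_SOURCES:
                self.points += CITE_REWARD   # facet (A): reward
                return True
            if self.points <= 0:
                return False                 # facet (A): blocked at 0 points
            # Facet (B): the client would show the warning pop-up here and,
            # on confirmation, attach a warning label to the posted tweet.
            self.points -= CITE_PENALTY
            return True

Facet (C) is purely client-side: the reader's confirmation dialog fires on click and, as specified above, never touches the reader's ledger.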

Effects of the design: I believe such a design will caution people when entering an unauthenticated website cited as a news source. It will also dissuade people from sharing websites that tend to host fake news/conspiracy theories. The dissuasion will come first as a caution (before they post) and then as shame (when their post is marked with a warning label and their Karma points are reduced).


Reflection #9 – [09/27] – [Vibhav Nanda]

Video: Partisanship and the search for engaging news

Summary: In this blog I am proposing a system which will nudge readers towards the other side — based on their current emotional and mental state.

Introduction: Natalie Stroud's video inspired me to come up with a system that can encourage bipartisanship and burst the echo chamber. From the video and the previous papers I have read, I have gathered that we need to work on and worry about people with extreme political standpoints (extreme left-leaning and extreme right-leaning); people with a more balanced standpoint already read news from disparate sources — their balance is what makes them supporters of centrist politics. Extreme political takes can usually be traced back to belief systems, and nudging people out of their belief system is risky — sometimes leading to resentment towards others' belief systems. Howbeit, depending on an individual's mental and emotional state, they are more or less likely to try to understand the other side of the story. I am proposing a system that will track users' behavior online, learn how they usually behave in a given circumstance, and nudge them towards the other side when their behavior deviates from the usual.

Assumption: For the sake of simplicity and brevity, I am going to make the following assumptions:

  1. The system only tracks behavior in the comment section (inspiration drawn from the video)
  2. The user is a male liberal who is aggressive towards anyone opposing his opinion (on an online platform)

Explanation of the system through an example: Now let's say our hypothetical user gets his daily dose of news from articles shared on Facebook (an extremely realistic situation), and because of all the filters, the news he gets is usually published by CNN and MSNBC. He reads the news, scrolls through the comments section, and responds aggressively to users whose comments oppose either the article or the topic it is about (let's say gay rights). Aggression is the user's usual response to the top 5 opposing comments — this is our user's online persona, as recognized and modeled by our system. Now one day our user reads an article about gay rights and either doesn't respond so aggressively towards opposing comments or doesn't respond at all — an aberration that would be detected by our system and flagged as "open," meaning this user is open today to opposing ideas. Taking advantage of this open-mindedness, our system will subtly nudge the user towards a gay rights article written by Fox News.
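A hedged sketch of the "open" detector in this example: the per-day aggression score is assumed to come from some comment-level sentiment/toxicity classifier (not shown), and the history length and threshold are illustrative guesses of mine.

    # Flag the user as "open" when today's aggression is unusually low
    # relative to the user's own history (a simple z-score test).
    from statistics import mean, stdev

    def is_open_today(past_daily_aggression: list[float],
                      todays_aggression: float,
                      z_threshold: float = 1.5) -> bool:
        if len(past_daily_aggression) < 7:
            return False  # too little history to have learned a persona
        mu = mean(past_daily_aggression)
        sigma = stdev(past_daily_aggression)
        if sigma == 0:
            return todays_aggression < mu
        return (mu - todays_aggression) / sigma >= z_threshold

    # If is_open_today(...) returns True, the feed would subtly rank an
    # opposing-outlet article (e.g., one from Fox News) higher for this user.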


Novelty: The system leverages changes in moods and emotions to nudge readers towards the other side, instead of applying a constant nudge. A constant nudge can lead to the user ignoring the nudge, becoming frustrated enough to switch the feature off, or, if that is not possible, moving to a different platform. A timely nudge is important if the system is to succeed in prompting the user to be empathetic towards the other side and to engage in more civil and logical conversation.


Reflection #8 – [09/25] – [Vibhav Nanda]

Readings:

[1] Echo chambers online?: Politically motivated selective exposure among Internet news users

Summary:

This paper studies the widely known echo chamber phenomenon. To carry out his experiment, the author recruited subjects from the readerships of two different online news sites — one on each side of the aisle. The big question the author was trying to address was: are people more likely to read news that supports their opinions/beliefs, and do people actively try to avoid news that challenges their opinions? To answer the big question, he made five hypotheses and confirmed them using results from his experiment. The five hypotheses were:

  1.  The more opinion-reinforcing information an individual expects a news story to contain, the more likely he or she is to look at it.
  2.  The more opinion-reinforcing information a news story contains, the more time an individual will spend viewing it.
  3.  The more opinion-challenging information the reader expects a news story to contain, the less likely he or she is to look at it.
  4.  The influence of opinion-challenging information on the decision to look at a news story will be smaller than the influence of opinion-reinforcing information.
  5.  The more opinion-challenging information a news story contains, the more time the individual will spend viewing it.

Subjects recruited for the study were not representative of the larger US population; howbeit, the recruits from the two news sites shared various demographic similarities — hence making it possible for the author to generalize and confirm his hypotheses.

Reflection/Questions:

Whilst reading the paper I stumbled across a statistic that stood out to me — more than 85% of participants were white. The immediate question I had in mind was: what could be the reason for a majority-white representation? More access to technology? Widespread access to education, resulting in higher interest in news? That question was followed by another: how would the study be affected if the race of the participants were more diverse and more evenly distributed? This paper, and all the other papers I have read in this class, discusses filter bubbles and echo chambers — what kind of news people consume and whether they get enough exposure to opposing ideas — but this framing assumes a single topic (the same topic on both sides). Reading this paper made me interested in understanding how to deliver news topics that people might never be exposed to at all because of the various filters on online platforms. For instance, someone might be reading both sides of the gun control debate, but be so immersed in gun control that he never pays attention to poverty.

There are three more interesting ideas that I got from this paper:

  1. How many opinion-challenging news articles would a reader have to be exposed to, either consecutively or within a very short interval, before he disregards the source entirely and consequently never revisits it? (title of the article)
  2. When reading an opinion-challenging article, what kind of credibility does the reader ascribe to the source? And how does the perceived credibility of the source affect the reader's comprehension of the article? (source of the article)
  3. How do people react to different treatments of their views in an opinion-challenging article, and how does this impact their future take on opposing news? (content of the article)

I think all three of these questions are very important for designing a platform where the user is exposed to opinion-challenging news in a positive manner, without being driven away or becoming resentful towards such news. In fact, answers to these questions might help researchers and designers create a platform that encourages readers to read opinion-challenging news.

[2] Bursting Your (Filter) Bubble: Strategies for Promoting Diverse Exposure

Summary:

This paper talks about ways of nudging people towards more diverse exposure to news articles. The authors discuss two different approaches to nudging:

  1. Diversity-aware news aggregators
  2. Subtle nudges that encourage readers to choose more diverse news

After detailing different methods of nudging people, the authors move on to talk about various methods that address the motivated manner in which people process information. On this end, the authors further discuss three platforms — Reflect, OpinionSpace, and ConsiderIt — that have promoted more deliberate engagement with divergent opinions; in other words, promoting empathy.
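To make the first approach concrete, here is a toy Python sketch of what a diversity-aware re-ranking step could look like; the scoring rule and weights are my invention, not anything the paper specifies.

    # Penalize a candidate article when its outlet's lean matches what the
    # reader has already been shown. lean is coded -1 / 0 / +1.
    def rerank(articles, shown_leans, diversity_weight=0.5):
        """articles: list of (relevance, lean) pairs."""
        def score(item):
            relevance, lean = item
            overexposure = shown_leans.count(lean) / max(1, len(shown_leans))
            return relevance - diversity_weight * overexposure
        return sorted(articles, key=score, reverse=True)

    feed = rerank([(0.9, -1), (0.85, +1), (0.8, -1)], shown_leans=[-1, -1, 0])
    print(feed)  # the +1-leaning article now ranks first despite lower relevance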

Reflection/Questions:

In the paper, the authors suggest that a "news aggregator might set a higher quality threshold..", but this only makes me think that quality is an intrinsic, subjective property, and consequently can't be used as a measurable attribute. One thing that all three platforms (Reflect, OpinionSpace, and ConsiderIt) have in common is that a user has to go through fairly rigorous interaction with the platform for its intended operation to be meaningful — for instance, ConsiderIt has its users create a pros-and-cons list. Not everyone has that kind of patience and/or time, so one question I ask myself is: how can we design a platform that is not as time-intensive, but still engages readers (of opposing ideologies) in a more deliberate fashion?


Reflection #7 – [09/18] – [Vibhav Nanda]

Readings:

[1] The Chat Circles Series: Explorations in designing abstract graphical communication interfaces

Summary:

This paper focuses on the design aspects of a chat environment free of the limitations of more traditional text chat. The authors varied a plethora of aspects of their chat environment, resulting in different representations of the environment's appearance and different forms of user-interface interaction. Some of the primary environment elements the authors focused on were history, movement, communication channel, context, individual representation, and the environment itself. Working with these elements, the authors devised five chat environments: Chat Circles, Chat Circles II, Talking in Circles, Chatscape, and Tele-direction. The paper highlights the entire process of creating chat programs — intended to "foster rich, engaging environments for sociable communication online."

Reflection/Questions:

I am of the opinion that a real-life social setting can never be fully transposed to the digital realm — primarily because social interactions are based on a myriad of physical cues whose interpretation is distinct for every individual, based on their upbringing, their past experiences, and their current state of mind. Howbeit, as the authors describe, we can come close to simulating real-life social interactions, even if the context differs. A lot of our social understanding comes from interpreting not what the speaker is saying, but their underlying tone — a big reason voice messages over WeChat are so popular in China and are even considered a status symbol [1]. I think Talking in Circles is the best representation of a daily informal chat program, as it combines verbal cues with physical cues. When I started reading about Tele-direction in the paper, it reminded me of a game called Blue Whale, which turned fatal for many souls when the tele-director asked the tele-actor to take their own life in order to win the game. Ergo, the design, context, and environment of an application are of utmost importance. Whilst reading the paper I asked myself what I want in a daily informal social chat program. The first few quick answers were FaceTime, WhatsApp calls, and Google Duo; then I thought about how to enhance those experiences — say, with a 3D AR rendering of the person I am talking to instead of a 2D rendering. I have only thought about chat programs where I am talking to known individuals in an informal setting — hence context coming into play.

[2] Social Translucence: An Approach to Designing Systems that Support Social Processes

Summary:

This paper focuses on creating systems that enable large groups of people to communicate and collaborate over computer networks. To create such systems, the authors identify three key features — visibility, accountability, and awareness — that exist in the real world to aid social interaction. They also discuss how our individual constraints and our understanding of social constraints shape our social interactions in the physical world. The authors present a functioning platform called "Babble," and highlight the challenges that social translucence raises from a digital communication perspective.

Reflection/Questions:

Towards the conclusion of the paper, the authors write that "the digital world appears to be populated by technologies that impose walls between people." I am of the belief that these walls exist not because of design but because of the physical strictures digital life imposes in comparison to the physical world. The walls also exist because of our heightened self-consciousness: our activity in the virtual world persists and can be used against us in the future, whereas our spoken words are fleeting and people forget. Unless we have a full virtual/augmented reality setup where all communication is verbal, these walls will continue to exist. I believe new social rules/norms will surface as social media evolves and our integration with the virtual world deepens; howbeit, we are currently in the stone age of the internet (in terms of evolution).

It was interesting to read about the persistence of material on social media. It reminded me of the variety of cases in which popular people ran into PR problems because an old tweet resurfaced in a different light/context. Expanding on that point, all of us are very careful when posting on social media because companies go through our accounts — a problem that doesn't exist in the physical world. This makes me think of the next social science problem that might need a solution: how do we protect people from being cyberbullied when their tweet/message is highlighted in the wrong context? How do we allow people to freely share what they would in the physical world, without the fear of being harshly judged by others or screened out by companies? The authors also write that "in the digital world we are socially blind." I would agree, and I would add that we are socially manipulated: for instance, exposure to only the happy photos/messages/memories of others evokes jealousy in us. In addition, when people post on social media we are not aware of their current state of mind, so their most recent post might not reflect their current emotional state — again socially manipulating us.

[1] https://qz.com/443441/stop-texting-right-now-and-learn-from-the-chinese-theres-a-better-way-to-message/


Reflection #6 – [09/12] – [Vibhav Nanda]

Readings:

[1] Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms

Summary:

The central question this paper addresses is what mechanisms are available to scholars and researchers for determining how algorithms operate. The authors start by discussing the traditional reasons audits were carried out, explain how audits were traditionally conducted, and argue why it was acceptable to cross some ethical borders to find answers for the greater public good. They go on to detail how overly restrictive laws (e.g., the CFAA) and scholarly guidelines seriously impede studies today that would not have been bound by such laws and guidelines in the 1970s, consequently hindering social science researchers from answering the problems they need to solve. Throughout the paper the authors profile and detail five algorithm audit designs: the code audit, the noninvasive user audit, the scraping audit, the sock puppet audit, and the collaborative or crowdsourced audit.

Reflection/Questions:

Throughout the paper, the authors address algorithms as if they had a conscience, and this bothered me — for instance, the last question the author poses: "how do we as a society want these algorithms to behave?" Usage of the word "behave" is not apropos, in my view; a better-fitting word would have been "function," as in "how do we as a society want these algorithms to function?" The authors also address various issues regarding algorithmic transparency that I brought up in my previous blog and in class: "On many platforms the algorithm designers constantly operate a game of cat-and-mouse with those who would abuse or 'game' their algorithm. These adversaries may themselves be criminals (such as spammers or hackers) and aiding them could conceivably be a greater harm than detecting unfair discrimination in the platform itself." Within the text the authors contradict themselves: they first say that audits are carried out to find trends, not to punish any one entity, howbeit later they say that auditing a wide array of algorithms will not be possible, so researchers would have to resort to targeting individual platforms. I disagree that algorithms can incur any sort of bias, since biases stem from emotions and preconceived notions that are part of human conscience, and algorithms don't have emotions. On that note, let's say research finds a specific algorithm on a platform to be biased — who is accountable? The company? The developer? The developers who created the libraries? The manager of the team? Lastly, in my view Google's "screen science" was perfectly acceptable — one portion of a corporation supporting another, just like the concept of a donor baby.


[2] Measuring Personalization of Web Search

Summary:

In this paper the authors detail their methodology for measuring personalization in web search, apply it to numerous users, and finally dive into the causes of personalization on the web. Their methodology revealed that 11.7% of searches were personalized, mainly due to the user's geographic location and whether the user was logged into an account. The method also controlled for various sources of noise, hence delivering more accurate results. The authors acknowledge a drawback of their methodology: it only identifies positive instances of personalization and cannot confirm the absence of personalization.
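As I understand the methodology, the core move is a controlled comparison: issue the same query at the same time from a treatment profile and a control profile, and from two identical control profiles to estimate baseline noise. The Python sketch below illustrates that idea with a Jaccard similarity over top results; the metric is a simplification of mine, not the authors' exact measures.

    # Divergence beyond the control-vs-control baseline hints at personalization.
    def jaccard(a, b):
        sa, sb = set(a), set(b)
        return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

    def personalization_signal(treatment, control1, control2):
        noise_baseline = jaccard(control1, control2)  # noise between twin controls
        observed = jaccard(treatment, control1)       # treatment vs. control
        return noise_baseline - observed              # > 0 suggests personalization

    # Example: identical top results except one swapped-in local result.
    t = ["u1", "u2", "u3", "local-news"]
    c1 = ["u1", "u2", "u3", "u4"]
    c2 = ["u1", "u2", "u3", "u4"]
    print(personalization_signal(t, c1, c2))  # 1.0 - 0.6 = 0.4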

Reflection/Questions:

Filter bubbles and media go hand in hand. People consume what they want to consume. As I have said before, personalizing search output isn't the root of all societal evils. It almost seems as if personalization is being equated with manipulation, which is not the same thing. If search engines did not personalize, users would get frustrated and find a place that delivers the content they want. I would say there are two different types of searches: factual searches and personal searches. Factual searches have a factual answer that cannot be manipulated/personalized; personal searches involve feelings, products, ideas, perceptions, etc., and those results are personalized — rightly so, I think. The authors also write that there is a "possibility that certain information may be unintentionally hidden from users," which is not a drawback of personalization but reflective of real life, where a person is never exposed to all the information on one topic. Howbeit, the big questions I have about personalization are: what is the threshold of personalization? At what point is the search engine a reflection of our personality and not an algorithm anymore? At what point does the predictive analysis of searches become creepy?


Reflection #5 – [09/10] – [Vibhav Nanda]

Readings:

[1] “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in the news feed

Summary:

This paper focuses on a plethora of issues around our digital lives and the ubiquitous curation algorithms behind them. The authors discuss the varying awareness levels of different users, participants' pre-study and post-study conceptions of the Facebook News Feed, and participants' reactions to discovering a hidden curation algorithm and how that discovery changed their perceptions. To show the difference between raw and curated feeds, the authors built a tool called FeedVis that displays a user's unfiltered feed from friends and pages. By asking open- and close-ended questions, the authors were able to gauge the users' levels of understanding of the curation algorithm. The authors tackled three different research questions within one paper and delivered adequate answers along with directions for future work.

Reflection/Questions:

It was interesting for me to read that various users had started actively trying to manipulate the algorithm, especially because I am aware of it and it doesn't bother me at all. In the initial part of the paper the authors discuss disclosing the mechanisms of the curation algorithm in order to create a bond of trust between users and the platform; howbeit, I would argue that if the working mechanism of the curation algorithm were made public, then trolls, fake news agencies, and other malicious actors could use that information to further increase the reach of their posts/propaganda. The authors also describe their participants as "typical Facebook users," which I would disagree with, because the meaning of a "typical" Facebook user is fluid — it meant something different a few years ago (millennials) and means something different now (baby boomers and Generation X). In my view, Facebook should show users unfiltered results on some days and curated results on others, track their activity (increases or decreases in likes/comments/shares), and from that data decide whether the user prefers curated or unfiltered results. Facebook should also give users the option to tell the algorithm which friends/pages they are most interested in — this might also help the algorithm learn more about the user.
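A toy sketch of the alternating-feed experiment I propose above: assign each user-day to a condition, log engagement, and pick whichever condition the user responds to more. The names and the engagement metric are hypothetical; this is not Facebook's API.

    def assign_condition(user_id: int, day: int) -> str:
        # Deterministic per-user/day split so every user sees both conditions.
        return "unfiltered" if (user_id + day) % 2 == 0 else "curated"

    def preferred_feed(engagement_log: dict) -> str:
        """engagement_log maps condition -> list of daily engagement counts
        (likes + comments + shares)."""
        means = {cond: sum(v) / len(v) for cond, v in engagement_log.items() if v}
        return max(means, key=means.get)

    log = {"unfiltered": [12, 9, 14], "curated": [7, 8, 6]}
    print(preferred_feed(log))  # -> "unfiltered"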

[2] Exposure to ideologically diverse news and opinion on Facebook

Summary:

The authors of this paper focus on understanding how various Facebook users interact with news on social media, the diversity of news spread on Facebook in general, and the diversity of news spread within friend networks. The authors also study what information the curation algorithm decides to display to a user and how selective consumption of news affects the user. They explain that selective consumption is a combination of two factors: people tend to have more friends with the same ideologies, so they see reinforcing news; and the curation algorithm tends to display what it thinks the user will like most — which is news reinforcing the user's ideologies (I would argue this is the reason fake news will never die).

Reflection/Questions:

In my view, people with a certain ideological standpoint will never be able to fathom the other side, and hence [for the most part] will never even put effort into reading/watching news from a different ideological point of view. Historically we can see this in cable television: conservative people tend to watch Fox more often, and moderates/liberals tend to watch CNN. Each of these channels understood its user base and delivered content bespoke to it. Now, instead of companies determining news content, a curation algorithm does it for us. I don't think this is something that needs to be fixed or a problem that needs to be tackled (unless, of course, it is fake news). It is basic human psychology to find comfort in the familiar, and if users are forced to digest news content they are unfamiliar with, it will, on a very basic level, make them uncomfortable. I also think it would cross a line for developers to manipulate a user's news feed in a way that is not consistent with their usage of Facebook, their friend circle, and the pages they follow.


Reflection #4 – [09/06] – Vibhav Nanda

Reading:

[1] An Army of Me: Sockpuppets in Online Discussion Communities

Summary:

The authors of this paper devote their energy to sockpuppets in online discussion communities. To comprehensively study sockpuppets and their associated online behavior, the authors obtained data from nine different online discussion communities comprising 2,129,355 discussions, 2,897,847 users, and 62,744,175 posts. They identify sockpuppets using a combination of three signals: IP address, activity within a discussion, and the time at which comments were made. Using this combination of factors, they formally define a sockpuppet as "a user account that posts from the same IP address in the same discussion in close temporal proximity at least 3 times." Applying this definition and an analytical model, the authors identified 1,623 sockpuppet groups and 3,656 sockpuppets across the nine communities. The project yielded a plethora of intuitive but interesting results, including but not limited to the following (a minimal sketch of the detection rule appears after the list):

  1. Sockpuppets start fewer discussions and post more in existing discussions.
  2. Sockpuppets tend to participate in discussions on more controversial topics.
  3. Sockpuppets are treated harshly by the community.
  4. Sockpuppets in a pair interact with each other more.
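The operational definition quoted above is concrete enough to sketch in Python. In the sketch below, the 15-minute window and the data layout are my assumptions ("close temporal proximity" is the authors' term, and their actual pipeline differs).

    from collections import defaultdict
    from itertools import combinations

    WINDOW_SECS = 15 * 60   # assumed reading of "close temporal proximity"
    MIN_MATCHES = 3         # "at least 3 times", per the definition

    # posts: iterable of (user, ip, discussion_id, unix_timestamp)
    def find_sockpuppet_pairs(posts):
        by_key = defaultdict(list)          # (ip, discussion) -> [(time, user)]
        for user, ip, disc, ts in posts:
            by_key[(ip, disc)].append((ts, user))

        match_counts = defaultdict(int)     # (userA, userB) -> co-occurrences
        for bucket in by_key.values():
            bucket.sort()
            for (t1, u1), (t2, u2) in combinations(bucket, 2):
                if u1 != u2 and t2 - t1 <= WINDOW_SECS:
                    match_counts[tuple(sorted((u1, u2)))] += 1

        return [pair for pair, n in match_counts.items() if n >= MIN_MATCHES]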

Reflection and Questions:

I had really never thought about this area of research, and hence this reading ensnared my attention and interest. Howbeit, as I read through the paper it seemed more focused on the pretenders and less on the non-pretenders, and that is reflected in the way the authors define sockpuppets — which is totally fine, but in my view the authors should have mentioned this focus somewhere in the introduction. Since I didn't find much material on non-pretenders, I started thinking about how I would identify sockpuppets among non-pretenders. Assuming complete access to a user's profile, I would start by correlating the user's basic information — for instance, birthday, secret questions, name (in some cases), small variations in username, family information (if available), and contact information. Since non-pretenders do not masquerade and simply use different accounts for different use cases, I would assume they have no reason to manipulate their basic information — unless the platform forces them to (a sketch of this matching heuristic appears at the end of this reflection).

Whilst reading the paper, I contemplated what the emboldening factor behind puppetmasters could be. The only motivation I could think of was pushing their own or their sponsors' political and ideological agenda, or diluting an opponent's agenda. In both cases I would assume puppetmasters would write more articulately, to effectively sway the audience in either direction — so the paper's finding that sockpuppets write shorter sentences with more swear words and use more personal pronouns was counterintuitive to me. As I read the fifth section of the paper, it occurred to me to ask how long these accounts have been active, and how frequently a given puppetmaster creates new accounts. I am not sure what new things we might discover by seeking answers to these questions, but I think they are interesting. Another correlation I thought about was checking whether sockpuppets are recycled among different puppetmasters/groups. If we find this to be true, and do some analysis on the topics these sockpuppets try to propagate or undermine, then we can group the groups according to their affiliations; and if we add a spatial dimension to the groups of groups, we may be able to identify what kinds of ideologies are widespread in which parts of the world. We might also be able to find out whether a group is propagating its own region's ideology or demolishing another region's. For instance, if a group from country X is spreading hate towards topic Y, while topic Y is in fact appreciated in country X, then we know the group is demolishing an ideology in a different region; the opposite holds where topic Y is hated in country X.
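Here is the hedged sketch of the profile-correlation heuristic for non-pretenders mentioned above; the fields, weights, and threshold are illustrative assumptions of mine.

    from difflib import SequenceMatcher

    WEIGHTS = {"birthday": 0.3, "email": 0.3, "username": 0.2, "name": 0.2}

    def profile_similarity(a: dict, b: dict) -> float:
        """Score overlap of two accounts' basic information (0..1)."""
        score = 0.0
        for field, weight in WEIGHTS.items():
            va, vb = a.get(field, ""), b.get(field, "")
            if va and vb:
                score += weight * SequenceMatcher(None, va, vb).ratio()
        return score  # a high score suggests one person behind both accounts

    a = {"birthday": "1996-04-02", "username": "vibhav_n", "name": "Vibhav"}
    b = {"birthday": "1996-04-02", "username": "vibhav.n2", "name": "Vibhav"}
    print(profile_similarity(a, b) > 0.6)  # True for these toy profiles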


Reflection #3 – [09/04] – [Vibhav Nanda]

Reading:

[1] A Parsimonious Language Model of Social Media Credibility Across Disparate Events

Summary:

This paper is geared towards understanding the perceived credibility of information disseminated on social media platforms. To this end, the authors examine 66 million Twitter messages (tweets) associated with 1,377 events that occurred over a period of 3 months — between October 2014 and February 2015. To examine these tweets from a linguistic vantage point, the authors came up with fifteen linguistic dimensions that helped them build a model that "maps language cues to perceived credibility." In addition to the linguistic dimensions, the authors also highlight the importance of particular phrases within these dimensions. To establish the credibility of the tweets, the authors ran experiments in which subjects rated tweets on a 5-point Likert scale ranging from -2 (certainly inaccurate) to +2 (certainly accurate). The authors also employed nine control variables — in addition to the experimental results, the linguistic dimensions, and the identification of phrases within those dimensions — to account for the effect of content popularity. The culmination of this myriad of linguistic and statistical modeling is a definitive parsimonious language model — howbeit the authors warn against using the model on its own, while arguing that it serves as an important step towards a fully autonomous system.
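To visualize the general shape of such a model, here is a toy Python linear map from linguistic cue counts to a perceived-credibility score on the paper's -2..+2 Likert range. The features and coefficients below are invented placeholders; the authors' actual model is fit statistically over their fifteen dimensions and nine controls.

    COEFS = {"positive_emotion": 0.4, "hedges": -0.3,
             "evidentials": 0.5, "swear_words": -0.6}   # made-up signs/sizes
    INTERCEPT = 0.0

    def predicted_credibility(counts: dict) -> float:
        raw = INTERCEPT + sum(w * counts.get(f, 0) for f, w in COEFS.items())
        return max(-2.0, min(2.0, raw))  # clamp to the Likert scale

    print(predicted_credibility({"evidentials": 2, "hedges": 1}))  # 0.7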

Reflection and Questions:

Comprehending the fact that specific words, writing styles, and sentence constructions can alter the perceived credibility of a social media post surprises me — partly because it's a new finding for me, and partly because, as an engineer, I have never really paid attention to language formation, only to the facts in the text. Throughout the paper the authors speak of the credibility of the post/tweet; howbeit, in my understanding it is the source of the information that warrants "credibility," while the text presented by the source warrants "accuracy" and "reliability." The authors write that "words indicating positive emotion were correlated with higher perceived credibility"; the question then arises: what about posts bearing bad news — for instance, the death of a world leader? That news will not bear any "positive emotion." Whilst reading the paper I also came across a sentence stating that disbelief elicits ambiguity, which I disagree with: disbelief can be used in a variety of combinations, none of which, I think, elicit ambiguity.

Reading the paper, I couldn't help but wonder how this model handles slang. There could be a credible post written in slang — in my view, millennials are more prone to trust a post written in colloquial rather than formal language, unless the source is associated with mainstream media. This leads to the next question: how are slang and regional varieties of English in different countries taken into consideration? I ask because specific words have different interpretations in different countries, and in different regions of the same country, resulting in different perceived credibility. On the topic of interpretation across regions: is this model universally suitable for all the languages of the world (with slight alterations), or would different languages require different models? The main reason for this question is that people tweet in varied languages, and the language barrier could change the perceived credibility of a post/tweet. Let's hypothesize that a post is originally made in a language other than English, and English readers use the translate button on Twitter/Facebook to read it; now the perceived credibility of the post depends on the region the reader resides in and the accuracy of the platform's translate feature. How can multiple such considerations be synthesized into a perceived-credibility score better suited to a specific situation?



Reflection #1 – [08/28] – [Vibhav Nanda]

Readings:

[1] Identity and Deception in the Virtual Community

Summary:

In this paper the author sheds light on people's identities in the virtual world and the physical world by highlighting their behavior, motivations, gains, etc. The author goes on to talk about trust circles, deception, identifying signals, and physical cues, and explains how the absence of physical cues goes against human instincts and gives rise to new problems that need immediate understanding. The author uses the Usenet newsgroup as her muse to explain concepts such as deception, honesty, trust, identity concealment, and trolling. The author elucidates various psychological and sociological inferences that humans draw via physical interaction, which are not possible to draw in the virtual world — resulting in negative consequences.

Reflection:

This paper raised a lot of issues regarding identity, trust, and deception that really got me thinking about the societal impact of social media — how it has changed the way we interact with others in the real world, and how we perceive events that occur in the real world. It also got me thinking about how social media has eliminated loneliness for elderly people while giving rise to it for the younger generation, and about the motivating factors for online deception beyond monetary gain, personal vendettas, political gain, and political/international/corporate espionage.

Questions:

  • How does creating multiple fake social media accounts (assuming the accounts have different personas) impact the progenitor's self-identity? Does it lead to an identity crisis? Does it lead to other behavioral changes in the progenitor in the real world?
  • How quickly do AI tools need to find and delete fake accounts? Is it possible to stop the creation of fake accounts?
  • As pointed out by the author, interacting in the virtual world requires some degree of trust; how does this impact people's behavior in the real world? Do people become more or less skeptical of each other?
  • Extended interaction in the virtual world leads to physical isolation, causing more serious underlying behavioral and psychological problems. What can internet giants do to tackle this problem?
  • What are the harmful consequences of deception in the online world, and how does it affect people's psychology if they find out they have been deceived?
  • Why do people make adjustments in self-presentation in real life? If we are adjusting our behavior according to the receiver, is it still us?

[2] 4chan and /b/: An Analysis of Anonymity and Ephemerality in a Large Online Community

Summary:

The author of this paper makes ephemerality and anonymity its focal point, diving deep into the design of an ephemeral and anonymous community — 4chan. To support this analysis, the author performed content analysis on threads from /b/. The author also elaborates on how anonymity affects online communities and their users, and goes on to talk about how the design of many online communities goes against evolution. The author argues that anonymity and ephemerality increase equity in the community and foster a stronger communal identity, as opposed to "bond-based attachment with individuals."

Reflection:

Whilst reading the paper I gained perspective on how ephemerality in online communities mimics real-life situations, and how social media platforms with more permanent content go against the social norms of our physical world. I think that to perfectly emulate our physical world, a social media platform/virtual community needs a combination of identity and ephemerality. The author says that anonymity can result in a stronger sense of community, but I would argue that anonymity proliferates herd mentality and puts into question the basis of our physical community — ethics, morals, and mutual trust between individuals. Trusting anonymous individuals could also result in self-doubt, in my view.

Questions:

  • What drives people towards anonymous forums? In forums where anonymity is optional, why do people choose to be anonymous over being self-identified?
  • How does anonymity strengthen communal identity?
  • Does identity-based reputation lead to pro-social behavior, or does it set the stage for cyberbullying?
