Reflection #11 – [10/16] – [Viral Pasad]

PAPER:

Danescu-Niculescu-Mizil, C., Sudhof, M., Jurafsky, D., Leskovec, J., & Potts, C. (2013). A computational approach to politeness with application to social factors.

 

SUMMARY:

In this paper, the authors put forth a computational framework for identifying linguistic cues of politeness on conversational platforms such as Wikipedia and Stack Exchange. They build a classifier that can annotate new request texts with close-to-human accuracy, and they present findings on the relationship between politeness and (perceived) social status.

 

REFLECTION:

  • Firstly, as the authors mention, the amount of politeness currency spent or ‘invested’ decreases as a user’s perceived status rises. The question that can be asked, then, is: why not design something to ensure the same level of politeness from a user even after they rise in power and status?
    • These online conversational systems do have markers and provisions for reputation, power, and social status, but they fail to use them to emulate how these concepts operate in the real world.
    • The stakes and motivation for a user to invest in politeness must be in proportion to his/her rank/power/reputation.
    • There could be an alternate thought here: this should not make it permissible for newbies to be less polite. The point about raising the stakes does not insinuate that, because, as in the real world, an asker/imposer requesting certain knowledge or services will still (have to) be polite, given social norms and their need for the said information or services.
  • Very often, it also happens that people with high power, knowledge, or status are rude to new and naive users through sarcasm, thereby avoiding having to utter anything overtly rude or ‘bad’. Thus, if sarcasm detection could be coupled with politeness classifiers, the result would be a far more robust and deployable system.

The paper by Joshi et al. [1] presents an innovative and interesting approach to this: it employs sequence labelling to perform sarcasm detection in dialogue.

  • Another potential approach for further improving and/or applying the identification of linguistic cues of politeness is in the domain of speech recognition (via audio/video).
    • One can make use of intonation, pauses, volume, and tone in speech, combined with a transcription of what is being said, to train models that better understand politeness in speech.
    • Now, if the previously addressed problem of sarcasm in conversational text is handled as well, then one could detect not only politeness but also sarcasm in speech.
  • Lastly, Wikipedia and Stack Exchange are more or less similar platforms; such an analysis can and should also be carried out on platforms like Reddit and 4chan, where each subreddit and thread has its own norms and tolerances for rudeness and politeness. More insights into levels of tolerance could be gained from such a cross-platform analysis.
    • Furthermore, it might be wise to perform an ethnographic analysis on the results of the rudeness classifier to assess the norms followed by different demographics, since these are not the same across the entire population/userbase.
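The speech idea above (prosody plus transcript) could be sketched as a simple score fusion. Everything here is a hypothetical illustration: the cue word list, the prosodic features, and the weights are my own assumptions, not anything from the paper, which works on text alone.

```python
# Illustrative sketch: fusing lexical politeness cues with prosodic features.
# The cue words, prosodic features, and weights are hypothetical assumptions.

POLITE_MARKERS = {"please", "could", "would", "thanks", "sorry"}

def lexical_politeness(text):
    """Fraction of tokens that are politeness markers."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,?!") in POLITE_MARKERS for t in tokens) / len(tokens)

def fused_politeness(text, pause_ratio, mean_volume):
    """Combine a text score with prosody: calm, measured speech
    (more pauses, lower volume) nudges the score upward."""
    text_score = lexical_politeness(text)
    prosody_score = 0.5 * pause_ratio + 0.5 * (1.0 - mean_volume)
    return 0.7 * text_score + 0.3 * prosody_score

calm_request = fused_politeness("Could you please help me?", 0.4, 0.3)
shouted_demand = fused_politeness("Fix this now!", 0.05, 0.9)
```

A real system would replace these hand-set weights with a trained model over audio features and the text classifier's output, but the fusion step would have this general shape.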

[1] Joshi, Aditya, Vaibhav Tripathi, Pushpak Bhattacharyya, and Mark James Carman. “Harnessing Sequence Labeling for Sarcasm Detection in Dialogue from TV Series ‘Friends’.” CoNLL (2016).


Reflection #10 – [10/02] [Viral Pasad]

Paper-

[1] Examining the Alternative Media Ecosystem through the Production of Alternative Narratives of Mass Shooting Events on Twitter – Kate Starbird

Summary-

This paper constructs a network graph of various mainstream, alternative, left- and right-leaning, and government-controlled media sources using Twitter data. Insights and observations regarding conspiracy theories are extracted from the network graph.

Reflection –

 

  • Thanks to users being able to self-report stories via various platforms and mediums, it is very difficult to keep track of credible sources and genuine news content. This paper got me wondering whether there is a way to actually curb misinformation and fake news online. During a discussion in class, when lateral fact-checking was mentioned, I was inspired to think of a very ambitious project: designing an automated online fact checker to tackle misinformation in online sources. After reading this paper, however, I am sceptical about the feasibility and accuracy of such a system.

However, one approach that I believe can be employed to curb misinformation, fake news, and conspiracy theories is manipulating ranking weights and bias adjustments to counteract the Selective Exposure and Selective Judgement employed by human readers.

  • Further, one way to perhaps (not eliminate altogether, but) reduce the spread of alternative news is for actual stories with solid backing, foundations, and research proof to also use clickbait titles that grab readers’ attention by invoking Selective Exposure as well as Selective Judgement.
  • As mentioned, users being able to self-report stories via various platforms and mediums makes it very difficult to keep track of credible sources and genuine news content. However, going back to a previous reflection, the reputation and karma of users could probably be taken into account before/while displaying the stories posted by them.

This way, users can be made aware of the context in which the writer is posting and adjust their own bias accordingly (take it with a pinch of salt).

  • Further, as discussed in class, the geolocation, timestamp, and verbosity of a tweet can also be used to more accurately distinguish genuine reporting of events from fake alternative news spreading misinformation.


Reflection #9 – [09/27] – [Viral Pasad]

Natalie Jomini Stroud. “Partisanship and the Search for Engaging News”

Dr. Stroud’s work motivates me to think toward the following (no pun intended) research and solutions.

Selective Exposure and Selective Judgement can be hacked and are very susceptible to attack by sock puppets or bots. Inadvertent Selective Exposure by humans was something that social media platforms exploited to get more traction and engagement on their sites, but people try to understand or reverse-engineer the algorithm (for the display of posts on their feed) and hack into the system. If everyone knows that a generic social media or online news site mostly shows each user only what they find reasonable or agree with, then this becomes a very powerful tool for marketers and sock puppets (created with ulterior motives) to blindly put out content that they would like their (ideological) ‘followers’ to be ‘immersed’ in. This is a Black Mirror episode waiting to happen, bound to create Echo Chambers and incompletely informed opinions. And not only incompletely informed opinions: it also causes misinformation, as users who already agree with a certain ideology are unlikely to pick it apart in search of a shady clause or outright incorrect information!

This is what was employed via Facebook Dark Posts in 2016, where a ‘follower’ would see certain posts sent out by their influencers, but if those posts were forwarded to ‘non-followers’, they would simply be unable to open the links at all (because it is common knowledge that a non-follower would scrutinize that very post).

 

Thus, I would like to consider a design project/study in two parts, hoping to disrupt Selective Exposure and Selective Judgement. The two parts are as follows:

 

I] Algorithmic Design/Audit – How the posts are shown (selected)

This deals not with how users see their posts visually, but with how users’ feeds are curated to show certain kinds of posts more than others. With a three-phase design approach, we can attempt to understand Algorithmic Exploitation of the Selectivity Process, and user bias towards or against feeds which do not follow (or which over-exploit) the inherent Selectivity Process employed by users.

The users can be exposed to three kinds of feeds,

  • one, heavily exploiting the selectivity bias (almost creating an echo chamber)
  • two, a neutral feed, equivocating and displaying opposite opinions with equal weightage.
  • three, a hybrid custom feed, which shows the user, agreeable opinionated posts, but also a warning/disclaimer that “this is an echo chamber” and a way to get to other opinions as well, such as tabs or sliders saying “this is what others are saying”

With the third feed, we can also hope to learn behavioural tendencies of users when they learn that they are only seeing one side of the coin.
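As a concrete sketch, the three feed conditions might be assembled like this. The stance labels, condition names, and disclaimer wording are all hypothetical assumptions for illustration, not an existing system:

```python
# Hypothetical sketch of assembling the three experimental feeds from a pool
# of posts labeled with a stance ("agree"/"disagree" relative to the user).

def build_feed(posts, condition, size=4):
    """posts: list of (text, stance) tuples; returns a list of post texts."""
    agree = [p for p, s in posts if s == "agree"]
    disagree = [p for p, s in posts if s == "disagree"]
    if condition == "echo":
        # Condition 1: heavily exploit the selectivity bias.
        return agree[:size]
    if condition == "neutral":
        # Condition 2: equal weightage to both sides, interleaved.
        feed = []
        for a, d in zip(agree, disagree):
            feed.extend([a, d])
        return feed[:size]
    if condition == "hybrid":
        # Condition 3: agreeable posts, plus an explicit disclaimer and a
        # pointer to the other side.
        return (["[Notice: this feed leans toward your views]"]
                + agree[:size - 2]
                + ["[See what others are saying]"])
    raise ValueError("unknown condition")

pool = [("pro A", "agree"), ("pro B", "agree"), ("pro C", "agree"),
        ("con X", "disagree"), ("con Y", "disagree")]
```

In a real study, the same post pool would be shown to each group under a different condition, and interaction logs would measure how users respond to the disclaimer in the hybrid feed.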

 

II] Feed Design – How the posts are shown (visually)

This deals with how the posts are visually displayed on user feeds. The approach is to create an equivocating feed which puts the user in charge of their own opinion by showing just the facts.

Often, news conforming to the majority opinion has far more posts than news conforming to the minority opinion, and thus an inadvertent echo chamber is created. A News Aggregator could be employed to group the majority and minority posts in the feed. Selective Exposure will drive the user to peek into the agreeable opinion, while Selective Judgement will drive the user to scrutinize and pick apart the less agreeable opinion. This, I believe, can help disrupt Selective Exposure and Selective Judgement to a certain extent, (hopefully) creating a community of well-informed users.


Reflection #8 – [09/25] – [Viral Pasad]

Papers : 

[1] Garrett, R. Kelly. “Echo chambers online?: Politically motivated selective exposure among Internet news users.” Journal of Computer-Mediated Communication 14.2 (2009): 265-285.

[2] Resnick, Paul, et al. “Bursting your (filter) bubble: strategies for promoting diverse exposure.” Proceedings of the 2013 conference on Computer supported cooperative work companion. ACM, 2013.

 

Summary : 

In the first paper, Garrett addresses the presence of echo chambers in our social media feeds, studying politically motivated selective exposure among online news readers. The paper describes the effect of opinion reinforcement and opinion challenge on exposure to online news, as well as on the read time for each article depending on its content.

In the second paper, Resnick et al. deal with strategies to curb the effects of the said echo chambers in social media feeds by introducing the concept of news aggregators and subtle nudges to users. It describes approaches such as ‘ConsiderIt’, ‘Reflect’, and ‘OpinionSpace’ as mediums to do so.

 

Reflection :

The question which arises concerns the safety of the user data obtained, which contains the opinions of participants and how favourable they are to the reinforcement or challenge of a particular topic.

The topics of both papers take me to my idea for the project proposal. News Aggregators seem like useful, harmless, and subtle ways of curbing the Echo Chamber Effect on online platforms. Opinion Grouping could be performed to group articles and posts with similar interests and opinions into concise blocks (which can be expanded to the normal view on demand). The concise view thus clubs multiple posts and articles of the same majority opinion held by the user, thereby leaving space for contrary minority opinions that cause opinion challenge. This way, average users get balanced views of the subject at hand and are still able to scrutinize any opinion they agree or disagree with. A FeedVis-like interface could be developed to implement this and to compare which approach leads to a more informed user.
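The opinion-grouping idea could be sketched roughly as follows. The stance labels are assumed to come from an upstream classifier, and the concise-block layout is purely illustrative:

```python
# Sketch of the proposed opinion-grouping aggregator. Stance labels are
# assumed given by an upstream classifier; the layout is hypothetical.
from collections import Counter, defaultdict

def group_feed(posts):
    """posts: list of (title, stance). Collapse each stance into one
    concise block, majority opinion first, so minority views stay visible."""
    by_stance = defaultdict(list)
    for title, stance in posts:
        by_stance[stance].append(title)
    order = [s for s, _ in Counter(s for _, s in posts).most_common()]
    return [{"stance": s,
             "headline": by_stance[s][0],          # shown in concise view
             "collapsed": len(by_stance[s]) - 1}   # expandable on demand
            for s in order]

feed = group_feed([("A1", "pro"), ("A2", "pro"), ("A3", "pro"), ("B1", "con")])
```

Here three majority-opinion posts collapse into one block with two hidden behind an expander, so the single minority post still occupies visible feed space.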

One would think that users would spend more time only on opinion-reinforcing articles, but opinion-challenging articles also get long read times once the user clicks on them. This is because opinion-challenging articles make a user scrutinize more and dig deeper to find flaws.


Reflection #7 – [09/18] – Viral Pasad

Papers:

[1] Social Translucence: An Approach to Designing Systems that Support Social Processes – Erickson et al.

[2] The Chat Circles Series – Explorations in designing abstract graphical communication interfaces – Donath et al.

 

Summary:

The first paper deals with a design theory, “Social Translucence”, which can be characterized by Visibility, Awareness, and Accountability. These principles guided the design of a system called Babble, a socially translucent medium for interaction and conversation.

The second paper outlines various approaches towards building a socially translucent system, such as Chat Circles, Chat Circles II, Talking in Circles, and Tele Directions. With Circles mentioned so many times, what comes to mind is the infamous Google+, which implemented ‘circles’ as a denotation of friends.

 

Reflection:

Accountability is one of the key factors in the translucency of a social media platform, and accountability should apply not only to users but also to the platforms themselves. E.g., Facebook is as responsible for curbing fake news as the users themselves.

Another consideration that social media platforms make while making themselves translucent is visibility and ephemerality. On story-based platforms such as Snapchat, pictures disappear seconds after a user opens them. Not only that, the sender is notified of attempts by a given user to take a screenshot.

Further, when Donath et al. discuss Chat Circles and their approach, there are certain issues which, with the passage of time, need reconsideration. The system has visibility, awareness, and accountability, yet it may not always be suitable for long discussions with multimedia conversations.

Furthermore, features like History may or may not be included in the system design, based on whether the system is meant to be human-intuitive or utility-intuitive.


Reflection #5 – [09/10] – [Viral Pasad]

PAPERS:

  1. Eslami et al., “‘I always assumed that I wasn’t really that close to [her]’: Reasoning about invisible algorithms in the news feed,” 2015.
  2. Bakshy et al., “Exposure to ideologically diverse news and opinion on Facebook,” vol. 348, no. 6239, pp. 1130–1133, 2015.

 

SUMMARY:

The first paper deals with Facebook’s ‘algorithm’: how its operations are opaque, and yet how much power it possesses in influencing users’ feeds, and thereby their virtual interactions and perhaps opinions. Using FeedVis, the authors study the algorithm and how awareness of it influences the way people interact with Facebook and feel satisfied or otherwise.

The second paper deals with the algorithm’s tendency to turn the platform into an echo chamber, owing to homophily patterns among users with certain ideologies or opinions.

 

REFLECTION:

  • The paper itself mentions, and I agree, that the results from the FeedVis study were not longitudinal; more knowledge about user patterns could be gained by taking ethnography into account and studying how the algorithm works differently for different users.
  • Another factor which the paper briefly touches upon is that users speculate about this ‘opaque’ algorithm and find ways to understand and ‘hack’ into it, and thus into the respective feeds of their followers.
    • One such example is the entire YouTube and Instagram community constantly trying to figure out the algorithms of their respective platforms and adjusting their online activities accordingly.
    • Further, the lack of communication about such algorithms often diminishes the feeling of community among users, thereby affecting user ‘patriotism’ towards the platform.
      • This was observed in the YouTube Demonetization case, where several YouTubers, due to a lack of effective communication from YouTube, felt less important and changed their online activities.
  • Furthermore, I would have liked these studies to be conducted in today’s times, covering Dark Posts or Unpublished Posts: how the ‘algorithm’ treats them and how it bolsters the (often political) homophily among users.
  • The use of Dark Posts is very unethical, as it promotes ‘echo chambers’ on social media sites. Not only that: users whose ideologies differ from the targeted demographic will not even see the post organically, due to its ‘unpublished-ness’. Allegedly, even a link to the post will not take a user to it if the user’s interests have been profiled as different from the Dark Post’s target audience. Dark Posts can be used not only for targeted marketing but also for other unethical purposes. [1]

 

[1] – http://adage.com/article/digital/facebook-drag-dark-posts-light-election/311066/


Reflection #4 – [09/06] – [Viral Pasad]

Paper:

“An Army of Me: Sockpuppets in Online Discussion Communities” – Srijan Kumar, Justin Cheng, Jure Leskovec, V.S. Subrahmanian.

 

Summary:

The paper deals with the analysis of “a user account that is controlled by an individual (or puppetmaster) who controls at least one other user account.” The authors analyze various aspects of sockpuppets and their behavior across nine online discussion communities, using a dataset of 62,744,175 posts and studying the users and the discussions within them. They discuss how sockpuppets are often found in pairs, assuming the roles of primary and secondary, or supporter and dissenter.

 

Reflection:

  • The authors broadly define a sock puppet as a user account that is controlled by an individual (or puppet master) who controls at least one other user account. However, I prefer the traditional definition of the word: “a false online identity that is used for the purposes of deceiving others.”
  • Furthermore, it would be wise to highlight that sock puppets often operate under paid partnerships with companies to push their products; more often than not, they are also part of affiliate marketing, where they sell products and earn commissions.

Not only that, these “stealth influencers” could also potentially sway public opinion on a political issue/candidate.

  • Another interesting point about paired sock puppets that I pondered was the dissenting Good Cop–Bad Cop roles they might play: one disagrees with or puts down a product/feature, at which point the primary sock puppet can swoop in and make the same product shine by highlighting its pros (which were intentionally questioned by the secondary sock puppet). This is a dynamic between paired sock puppets that I would want to investigate.
  • Another metric worth investigating is the language/linguistic cues used by sock puppets to market products. Average marketing campaigns keep jargon to a bare minimum for the lay consumer (e.g., 10x faster, 2.5x lighter); sock puppets, though, while using impartial terms to seem unbiased and neutral, could also be using more jargon to come across as domain experts and intimidate users into thinking that they really know the technicalities of the product.
  • Furthermore, I know how difficult it is to obtain clean and complete datasets, but the Disqus dataset barely contains data referencing products and purchases. The metrics used in the paper, plus a few others, if applied to an Amazon Reviews or eBay Comments dataset, would yield a great amount of knowledge about sock puppets and their behavior.
  • Another great point to consider about sock puppets living a dual life is their activity in their ordinary versus fake accounts. A genuine user would have a legitimate profile history and personal data such as friend lists and interests other than the one topic being discussed in the post comments.
  • Another question worth asking concerns false positives and false negatives: how would one verify the results of such a system?


Reflection #3 – [9/4] – [Viral Pasad]

  1. “A Parsimonious Language Model of Social Media Credibility Across Disparate Events.” Tanushree Mitra, Graham P. Wright, Eric Gilbert

Mitra et al. put forth a study assessing the credibility of events and the related content posted on social media websites like Twitter. They present a parsimonious model that maps linguistic cues to perceived credibility levels, and the results show that certain linguistic categories and their associated phrases are strong predictors of credibility surrounding disparate social media events.

The model captures text used in tweets covering 1,377 events (66M tweets); labeled credibility annotations were obtained using Amazon Mechanical Turk. The authors trained a penalized logistic regression employing 15 linguistic and other control features to predict the credibility (Low, Medium, or High) of event streams.

The authors mention that the model is not deployable. However, the study is a great base for future work on this topic. It is a simple model dealing only with linguistic cues, and the Penalized Ordinal Regression seems like a prudent choice; coupled with other parameters such as location and timestamp, it could be designed into a complete system in itself.
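For intuition, the scoring step of such a penalized ordinal (cumulative-logit) model might look like the sketch below. The cue names, weights, and cutpoints are made up for illustration; the paper learns them from data with a penalty term:

```python
# Minimal sketch of the scoring step of an ordinal (cumulative-logit) model.
# Feature weights and cutpoints are invented; a real model learns them with
# a penalty (e.g., lasso) on annotated data.
import math

WEIGHTS = {"hedges": -0.8, "evidentials": 0.6, "question_marks": -0.5}
CUTPOINTS = [-0.5, 0.5]          # boundaries Low|Medium and Medium|High
LEVELS = ["Low", "Medium", "High"]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def credibility_level(features):
    """features: dict of linguistic-cue counts -> ordinal credibility label."""
    score = sum(WEIGHTS.get(k, 0.0) * v for k, v in features.items())
    # Cumulative logit: P(y <= j) = sigmoid(cutpoint_j - score). Return the
    # first level whose cumulative probability exceeds 0.5, else the top one.
    for cut, level in zip(CUTPOINTS, LEVELS):
        if sigmoid(cut - score) > 0.5:
            return level
    return LEVELS[-1]

hedged_tweet = credibility_level({"hedges": 2, "question_marks": 1})
evidential_tweet = credibility_level({"evidentials": 2})
```

The ordinal structure matters: unlike a plain three-way classifier, the cutpoints enforce that Low < Medium < High lie on a single credibility scale.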

  • The study mentions that the content of a tweet is a more reliable predictor of credibility than its source. This would hold true almost always, except when the account posting a certain news item is notorious for fake news or conspiracy theories. A simple additional classifier could weed out such outliers from general consideration.
  • A term used in the paper, ‘stealth advertisers’, stuck in my head and got me thinking about ‘stealth influencers’ masquerading as unbiased and reliable members of the community. They often use clickbait, and the linguistic cues they exhibit generally run to extremes, such as “Best Gadget of the Year!!” or “Worst Decision of my Life”.
  • Their tweets may often fool a naive user/model looking for linguistic cues to assess credibility. This relates to the study by Flanagin and Metzger, as there are characteristics worthy of being believed and then there are characteristics likely to be believed. [2] This begs the question: is the use of linguistic cues to signal credibility on social media hackable?
  • Further, location-based context is a great asset in assessing credibility. Consider the flash-flood thunderstorm warning issued recently in Blacksburg: a similar downpour or notification would not be taken as seriously in a place that experiences more intense rain. Thus, location-based context can be a great marker in the estimation of credibility.
  • The authors included the number of retweets as a predictive measure; however, if the reputation/verified status/karma of the retweeters were factored in, the prediction might become a lot easier. Multiple trolls retweeting a sassy/fiery comeback is different from reputed users retweeting genuine news.
  • Another factor is that linguistic cues picked up from a certain region/community/discipline may not be generalizable, as every community has a different way of speaking online, with its own jargon and argot. The community here may be a different academic discipline or ethnicity. The point is that linguistic-cue knowledge has to be learned per community and cannot simply be transferred.
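The retweeter-reputation idea above could be sketched as a simple weighted count. The reputation scale (0 to 1) and the verified-account bonus are illustrative assumptions, not anything from the paper:

```python
# Sketch of a reputation-weighted retweet feature: instead of a raw retweet
# count, weight each retweeter by reputation, so a swarm of throwaway troll
# accounts counts for less than a few reputable ones. The 0..1 reputation
# scale and the verified bonus are made-up assumptions.

def weighted_retweets(retweeters):
    """retweeters: list of dicts with 'reputation' in [0, 1] and 'verified'."""
    total = 0.0
    for user in retweeters:
        weight = user["reputation"]
        if user.get("verified"):
            weight += 0.5          # assumed bonus for verified accounts
        total += weight
    return total

trolls = [{"reputation": 0.05, "verified": False}] * 10   # raw count: 10
press = [{"reputation": 0.9, "verified": True}] * 2       # raw count: 2
```

By raw count the troll swarm wins 10 to 2, but the weighted feature ranks the two reputable accounts higher, which is exactly the distinction the bullet above argues for.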

    [2] – Digital Media and Youth: Unparalleled Opportunity and Unprecedented Responsibility Andrew J. Flanagin and Miriam J. Metzger


Reflection #2 – 08/30 – [Viral Pasad]

Justin Cheng, Cristian Danescu-Niculescu-Mizil, Jure Leskovec (2015) – “Antisocial Behavior in Online Discussion Communities”- Proceedings of the Ninth International AAAI Conference on Web and Social Media.

The paper discusses the analysis and early detection of antisocial behaviour in online discussion communities. The authors analyzed user data from three online discussion communities, namely IGN, CNN, and Breitbart. They mention that link spammers and temporary bans were excluded from the study; however, antisocial behavior could also involve posting media the community finds unpleasant, which is out of the scope of this study. The metrics they use are feature sets classified into Post, Activity, Community, and Moderator features, the strongest being the Moderator and Community features respectively. They used a random forest classifier. They also used a bag-of-words model with logistic regression trained on bigrams, which, despite performing reasonably well, is less generalizable across communities.
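For intuition, the bigram bag-of-words baseline boils down to counting word pairs and scoring them with learned weights. The tiny weight table below is a made-up illustration, not the paper's trained model:

```python
# Minimal sketch of the bigram bag-of-words step behind the logistic
# regression baseline. Real weights are learned from banned vs. kept users;
# this tiny weight table is a made-up illustration.
from collections import Counter

def bigrams(text):
    """Count adjacent word pairs in a post."""
    tokens = text.lower().split()
    return Counter(zip(tokens, tokens[1:]))

# Hypothetical learned weights: positive = evidence of antisocial posting.
WEIGHTS = {("you", "idiots"): 1.5, ("shut", "up"): 1.2, ("thank", "you"): -0.8}

def antisocial_score(text):
    """Linear score over bigram counts (the input to the logistic function)."""
    return sum(WEIGHTS.get(bg, 0.0) * n for bg, n in bigrams(text).items())

trollish = antisocial_score("shut up you idiots")
civil = antisocial_score("thank you for the detailed reply")
```

This also makes the generalizability problem concrete: weights tied to specific word pairs from one community's slang will not transfer to another community, which is why the paper's behavioral feature sets travel better.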

 

  • The paper repeatedly mentions and relies heavily on Moderators in the Online Discussion Community. It may be the case that the Online Communities that the study was conducted upon had reliable moderators, but that need not be the case for other Online Discussion Platforms.
  • Going back to the last discussion in class, In a platform which lacks Moderators, a set of (power-)users with reliably high karma/reputation points could perhaps be made to ‘moderate’ or answer surveys about certain potential Future Blocked Users (FBUs).
  • The early detection of such users begs the question: how soon would be too soon to ban them, and how late would be too late? Furthermore, could an FBU be put on a watchlist after receiving a warning or some hit to their reputation? (Extrapolating from the finding that unfair, draconian post deletions make some users’ writing worse, it is possible that warnings would make them harsher.)

But this would also probably eliminate some fraction of the 20% false positives that get identified as FBUs.

  • The study excluded the occurrences of multiple/temporary bans from the data, however, studying temporary bans could provide more insight regarding behavior change, and also, if temporary bans would worsen their writing just as well as unfair post deletion.
  • The paper states that “the more posts a user eventually makes, the more difficult it is to predict whether they will get eventually banned later on”. But using a more complex and robust classifier instead of random forest would perhaps shed light on behavior change and perhaps even increase the accuracy of the model!
  • Further, we could also learn about the role of communities in incubating antisocial behaviour by monitoring the kind of ‘virtual’ circles that the users interact with after the lift of their temporary ban. It would provide information as to what kind of ‘virtual’ company promotes or exacerbates antisocial behaviour.
  • Another useful insight for the study would be to study, self deletion of posts by the users.
  • Another thing to think about is the handling of false positives (innocent users getting profiled as FBUs) and false negatives (crafty users who instigate debates surreptitiously or use cleverly disguised sarcasm, which the model will be unable to detect).
  • Furthermore, I might be unnecessarily skeptical here, but I believe the accuracy of the same model might not translate to other communities or platforms (such as Facebook, Quora, or Reddit, which cater to multi-domain discussions and have different social dynamics compared to CNN.com, a general news site; Breitbart.com, a political news site; and IGN.com, a computer gaming site).

But then again, I could be wrong here, thanks to

  • Facebook’s comment mirroring and RSS feeds, due to which most Facebook comments would also get posted on the CNN or IGN threads.
  • The feature set used in the study which covers the community aspects as well.


Reflection #1 – [08/28] – [Viral Pasad]

  • Judith S. Donath. “Identity and Deception in the Virtual Community.”
  • Michael S. Bernstein, Andrés Monroy-Hernández, Drew Harry, Paul André, Katrina Panovich, Greg Vargas. “4chan and /b/: An Analysis of Anonymity and Ephemerality in a Large Online Community.”

The two papers revolve around Anonymity and Volatility as features of the design of a social media platform. The first paper discusses the concepts of Anonymity and Deception in the virtual community of Usenet, while the second paper discusses Ephemerality and Anonymity on the /b/ (random) board of 4chan.org.

Anonymity can best be explained as a transaction via cash (it cannot be traced back to you), while the use of usernames or identities is analogous to a credit card: every time you use your identity (card), you contribute towards your reputation (credit history/score). Having said that, let’s consider the following ideas regarding Anonymity and Ephemerality:

  • Keeping in mind those analogies, one would rather be anonymous if they plan to be involved in shady/rowdy/morally or politically incorrect dealings (like the ones widely prevalent on 4chan.org/b/)
  • Further, since anonymity does not contribute to user reputation, no user willing to build a reputation will stay on or migrate to a platform that makes them anonymous by default. Yes, agreed, users of such platforms soon devise ways to distinguish a regular user from a ‘newbie’, but that is still not enough motivation for anyone to ‘spend’ their time and effort asking and answering questions on the platform without being rewarded appropriately.
  • Thus, with no repercussions whatsoever, an anonymous platform is bound to be reputed for generation of memes and growing wild.
  • Even if a posted question gets answered, there is no way to validate/moderate the answer and distinguish a samaritan from the trolls, because there is no way to regulate such platforms. Therefore, any content one sees there is better taken with a pinch of salt.
  • The studies pave the way for more discourse on the grey area around Freedom of Speech. It is certainly best achieved with anonymity, but would we be comfortable with a platform that allows anonymous conversations on terrorism and/or gun laws? This is an entirely separate avenue. One may also consider Confession Pages on Facebook, wherein the identity may be recorded but not revealed.
  • Ironically, the best use of anonymity can be done when personal details are involved. Anonymous posting sites can be used to discuss personal issues or coping mechanisms without the fear of being judged.
  • Speaking of being judged, another strong merit of the anonymous scenario is the complete lack of user details, thereby making it impossible for inhibitions and discriminations to creep into interactions on the website.
  • However, in a pseudonymous or open identity setting, the reputation/karma of users may not only increase but also diminish, thereby validating and regulating the users, making the task of deception detection slightly easier.
  • An anonymous platform could promote content quality, but with ephemerality involved, the extra effort to bump or sage a post would not be advocated. There is no point in having an anonymous Reddit with ‘volatile’, timed upvotes.
  • This is where a non-volatile system would prove beneficial. Not only that, a non-volatile system may also allow the editing of posts and the storing of their edit history, making for a more information-rich system.
  • Further, owing to ephemerality, users might miss certain posts if they are not online. The system of having to bump posts up, even when they are inevitably going to die once the bump threshold is reached, is similar to the echo chambers on Twitter; the difference is that Twitter data can be scraped. It may be exciting to consider solutions to the pseudo-ephemerality caused by a high post rate.
  • Ephemeral posts can be recorded or saved and replayed infinitely or even reposted at will.
  • Another aspect to consider is that, with tech giants expanding their storage facilities, ephemerality would go relatively easy on storage servers while still providing scalability.
  • Last, but not the least, a major aspect of ephemeral posts, is that the data can not be used for advertising and pattern identification as prevalent with the recent trends.

Thus, the design concepts of Ephemerality and Anonymity can be used to design social media platforms as per their merits and demerits. Instagram and Snapchat, ephemeral yet allowing saved posts, can be considered hybrid systems. A hybrid system on a wider domain (not just images), allowing the best of both worlds, could also be designed, which would be thought-provoking.

 
