Reflection #12 – [10/23] – Subhash Holla H S

[1] R. M. Bond et al., “A 61-million-person experiment in social influence and political mobilization,” Nature, vol. 489, no. 7415, pp. 295–298, 2012.

[2] A. D. I. Kramer, J. E. Guillory, and J. T. Hancock, “Experimental evidence of massive-scale emotional contagion through social networks,” Proc. Natl. Acad. Sci., vol. 111, no. 24, pp. 8788–8790, 2014.

The two papers discuss social contagion and its effects from two perspectives that I feel complement each other. The first paper examines how political self-expression is influenced by different levels of engagement on social media.

This reminds me of:

“In individuals, insanity is rare; but in groups, parties, nations, and epochs, it is the rule.”

― Friedrich Nietzsche

This was a quote my mother used to cite every time I went on trips with my friends, warning me of her self-coined (I believe) phenomenon of “temporary group insanity”. She would narrate how, in her childhood, monkeys near her home would act crazy after watching other monkeys do so. I believe what drives humans to follow their “close friends” is a social norm propagating through a network, together with the need for increased “tie strength” with nodes (in this case humans) that are perceived to be our network’s “center” or the important nodes defining our network structure.

This is a psychological and philosophical take on the findings in the paper. The tool called ‘sense-making’ is a Human-Computer Interaction perspective on how humans reason with cues in the environment before processing them to make a decision. A meta-cognitive process involving situation awareness and cue saliency is, I believe, the concoction that controls an individual’s decision to follow a friend or group of friends into political self-expression. As the paper states, this does not improve their social status, or at least the research gives no indication that it does. There is no compelling social reason, such as peer pressure, for us to click the “I Voted” button. But I believe that since we are accustomed to following our formed set of heuristics, it is easy for us to remain consistent and go with what is available, more familiar, and appealing to our ‘affect’. These heuristics generally point us toward decisions that help us see ourselves as part of our perceived group of ‘friends’ or ‘close friends’.

The question of whether we are at the “end of theory” resonated even more when all the paper did was use millions of data points to infer a finding, rather than hypothesize something or use the data to build a grounded theory about the behavior of the individuals or groups.

In terms of the quantitative analysis procedure the paper follows, I believe the group did a good job of designing the experiment with its control, informational, and social message groups, backed by an inferred definition of ‘friends’ and ‘close friends’. I would like to see whether the same would translate to other venues, such as Amazon.

  • Would information about our ‘close friends’ or other people we relate to buying a product influence our decision to buy or not buy a product?
  • Would it be the same with movies on Netflix?
  • Can we use information scraped from people who log in to Amazon or Netflix with their Facebook IDs, about their close friends, to map the sort of products people would want to buy, helping improve their experience?
  • Would it be overreach to use an algorithm that keeps track of an individual’s social network and then curates the content he or she sees?
  • Is this a new type of profiling?
  • Is this ethical?

The reason I ask so many questions is the alarming realization one can come to from the conclusions the paper draws. After the propaganda that the authors discuss towards the end, they allude to the fact that social influence can impact behavior not only online but offline as well. This reminds me of the dystopian society depicted in the TV series “Black Mirror”, where corporations control the way we think and behave.

Coming back to reality and putting the researcher hat back on, the second paper catered more to my interests. Here the authors try to show the existence of emotional contagion. The paper’s attempt at establishing influence and interaction on a social media platform is one of interest to me. The question of whether a person’s expression is captured only by the text they type on their walls or in comments on others’ posts is very important. Do things like the videos one clicks on, likes expressed, emojis sent, etc. influence the positive and negative emotion one expresses? If there is a shift in a community’s norm, a scenario one can draw from the paper, where many community members are expressing negative emotions, does this influence our “behavior” just in that community, or overall?

For example, suppose John is your average inquisitive, adventurous teenager and is part of an online community on e-sports. Initially, the community is well controlled, monitored, and moderated. After a few months, the new members and a few old ones (let’s say a majority) start expressing a lot of aggression. At this point, John identifies himself with the community and does not want to feel left out, so he puts on a facade of aggression for the members to accept him. When he is not in the community, he spends his time watching cat videos and, as a youth volunteer, helping the elderly.

Now the question is, “Does exposure to the negative emotion on the community show emotional contagion and influence his behavior?”


Reflection #11 – [10/16] – Subhash Holla H S

[On a tangent to the paper] At the recently concluded Annual HFES Meeting, Matthew Gombolay commented on “context”: that we humans cannot explain the context behind our actions in a uniform manner. Politeness is, to me, one such contextual problem. Why do we thank the bus driver who opens the door for us when we get off? Is it not his or her job to ferry us? Do we do this because of an inherent politeness we have as humans? Or do we do it because we saw other people do it and wanted to be part of the group? Something as simple as thanking another individual can be contextualized in many ways. I feel this very lack of consensus on how to contextualize our actions is why we have not been able to teach a machine agent “context”.

Back to the topic of politeness that the paper deals with: I would like to reflect on the content in two halves, first discussing the notion of politeness and referring to the paper where necessary, and then critiquing the procedure adopted in the research study.

There are two central questions that I wish to address.

What is politeness?

Is it, as Brown and Levinson describe, the emotional investment we make to save ‘face’? Is it an acknowledgment of the power disparity with the person we converse with? Is it a complex mix of many things? I feel that for us to converse about politeness we first need to agree on the notion. For the sake of this article, I will give a definition to which I will try to stick.

I will define politeness as the behavioral characteristics an individual portrays, often through linguistic choice, that are civil relative to the people observing this individual.

By this definition, I give the word a relative measure because it changes with the observers. One group might consider abstaining from cuss words polite, while another might take the use of words like “Please”, “Thank you”, etc. in one’s conversations as the measure of politeness.

Do humans assume or adopt politeness to reach a higher power level?

From my understanding of the paper’s implications, humans tend to be polite as a means to an end. They are polite essentially to cause a disparity. This raises my question: “Are we subconsciously aware that we will be in a higher power state than the other party when we are polite to them?”

My answer to the above question is that humans are polite to cause a disparity in power, but I am open to having my opinion changed.

Changing direction, I have the following critique of the paper:

  • The paper deals with requests only. Is this enough to comment on “politeness” as a whole? Is it already sampling in a biased manner? I feel the work did not sufficiently defend its choice of requests, at least for me. I would have liked a more concrete mention of why they think requests generalize to other contexts.
  • The test for inter-annotator agreement is interesting. The pairwise correlation test definitely corroborates some of the claims that the authors make. One aspect I failed to understand: when the classifier was compared to human performance on newly collected data, was inter-annotator agreement used as the measure of human performance? If so, is that not problematic? Humans are inherently variable, while a machine always conforms to a given behavior. Is it not trivial to say, based on disagreement alone, that the machine performs close to or better than humans? (A sketch of this style of comparison follows the list.)
  • I am curious about the linguistic background questionnaire that the authors used and would definitely try to learn more about it.
  • The binary perception section mentions that the ends of the spectrum received more hits than the middle region. This reminded me of signal strength in signal detection theory, a Human Information Processing take: humans easily detect signals at the extremes, but when signals are hard to tell apart from each other, they are bad at judging whether there is a signal or there isn’t. Only by designing the signal to have redundant dimensions can designers ensure the right judgment is made.
  • I question the choice of not using any data from the second domain for training. Would it not have made the model more domain-agnostic to use some of the second domain’s data as well?
  • The paper talks about analyzing the requests made in these domains, but I did not see any analysis of the responses these requests received. I feel that analyzing whether the requests were fulfilled would give valuable insight from the two parties involved, rather than relying on the retrospective annotation of a Mechanical Turk worker. An analysis of this sort would present a good baseline for comparison.
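To make the inter-annotator point concrete, here is a minimal sketch (with entirely made-up scores, not the paper’s data) of how mean pairwise correlation among annotators compares with scoring a classifier against the annotator mean; the asymmetry between the two measures is what my question above is getting at.

```python
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

# Hypothetical politeness scores: rows = requests, columns = 5 annotators.
rng = np.random.default_rng(0)
truth = rng.normal(size=200)                      # latent politeness per request
ratings = truth[:, None] + rng.normal(scale=0.7, size=(200, 5))

# Agreement among humans: mean pairwise Pearson correlation.
pairs = [pearsonr(ratings[:, i], ratings[:, j])[0]
         for i, j in combinations(range(ratings.shape[1]), 2)]
print(f"mean pairwise human correlation: {np.mean(pairs):.3f}")

# A fairer "human performance" baseline: each annotator scored against the
# mean of the remaining annotators, the same way a classifier would be.
loo = [pearsonr(ratings[:, i], np.delete(ratings, i, axis=1).mean(axis=1))[0]
       for i in range(ratings.shape[1])]
print(f"mean leave-one-out human score:  {np.mean(loo):.3f}")

# Stand-in classifier output, scored against the annotator mean.
preds = truth + rng.normal(scale=0.8, size=200)
print(f"classifier vs. annotator mean:   {pearsonr(preds, ratings.mean(axis=1))[0]:.3f}")
```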

REFERENCES:

C. Danescu-Niculescu-Mizil, M. Sudhof, D. Jurafsky, J. Leskovec, and C. Potts, “A computational approach to politeness with application to social factors,” in Proc. 51st Annu. Meet. Assoc. Comput. Linguist., 2013, pp. 250–259.

P. Brown and S. C. Levinson, “Universals in language use: Politeness phenomena,” in Questions and Politeness: Strategies in Social Interaction, E. N. Goody, Ed. Cambridge: Cambridge University Press, 1978, pp. 56–311.


Reflection #10 – [10/02] – Subhash Holla H S

“When you are young, you look at television and think, there’s a conspiracy. The networks have conspired to dumb us down. But when you get a little older, you realize that’s not true. The networks are in business to give people exactly what they want.”

― Steve Jobs

The paper is a good precursor to the research project that Shruthi and I are interested in. I would like to analyze it in terms of a few key points, capturing what the paper mentions and sharing my own insights and possible explanations.

Conspiracy theory:

The paper alludes to the idea that people online, and websites in general, cater to “conspiracy theories” because that is what has been seen to draw people’s attention. But what are conspiracy theories? The paper categorizes them under “alternative narratives” without giving a formal definition of what is being considered a “conspiracy theory”. I will define it as “any propagation of misinformation”, which I believe is broadly the meaning the paper works with as well. A couple of interesting points the paper makes that I feel are worth addressing are:

  • “Once someone believes in a conspiracy theory of an event, it is extremely hard to dissuade them from this belief”. Here I would like to further substantiate that some opinions and beliefs are ingrained in a person; unless they are given a logical explanation over a long period of time, reinforcing the notion that they might be wrong, it will be difficult to get them to change their stance. This effort is a necessary one, and I feel that painting a picture of where the information is coming from will help push back against the idea that they are right. If we do not put in this effort, then “belief in one conspiracy theory correlates with an increased likelihood that an individual will believe in another” will turn out to be true as well.
  • The definition of conspiracy theorists as part of an ‘alternative to “corporate-controlled” media’ is one I do not agree with. This raises a philosophical debate as to where we draw the line. Should we draw a line at all, or should we look for methods that do not try to draw one?

Bias:

“first author is a left-leaning individual who receives her news primarily through mainstream sources and who considers that alternative narratives regarding these mass shooting events to be false” was, to me, a revelation in the field of human behavioral modeling. Being part of the Human Factors community and having interacted with many Human Factors professionals, this is the first time I have seen an author explicitly mention an inherent bias in an effort to eliminate it. Acknowledgment is the first step to elimination. I think the elimination of bias should follow a procedure similar to the time-tested 12-step model used in addiction recovery. That could be an interesting study, as such a model could shed some light on my hypothesis that “humans are addicted to being biased”.

Another point is the use of a confusion matrix based on signal detection theory. We could use this to build a conservative or liberal model of a human and then use this generalized model to help design tools that foil the propagation of misinformation and “alternative narratives”. A toy illustration of the signal-detection reading of a confusion matrix is sketched below.
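As a toy illustration (the counts below are invented, not from the paper), a 2x2 confusion matrix of a reader judging stories as misinformation or legitimate can be converted into the standard signal detection measures: sensitivity d' and response criterion c, where c > 0 marks a conservative judge and c < 0 a liberal one.

```python
from scipy.stats import norm

# Hypothetical confusion matrix: "misinformation" is the signal,
# "legitimate" is the noise.
hits, misses = 40, 10                       # signal trials
false_alarms, correct_rejections = 15, 35   # noise trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

# Signal detection theory: d' measures sensitivity, c measures bias.
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")  # c > 0: conservative observer
```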

General Reflection:

In general, I found a couple more observations that resonated with the previous readings of this semester.

The overall discussion of misinformation propagation, coupled with the video lecture where the author presents Deep Fakes as an example of misinformation propagation, sent me back to a question I have been asking myself of late. All of the research we have analyzed is on text data. What if the same were video data? Especially since we get some, if not all, of our information from YouTube and other such video platforms. Will this research translate directly to that case? Is there existing and/or ongoing research on it? Is there a need for such research?

Theme convergence was another concept that caught my attention, as I would be really interested in understanding how diverse domains converge on common themes. This would help build better group behavioral models and avoid the Simpson’s paradox trap that researchers fall into, especially when dealing with human data.

PAPER:

K. Starbird, “Examining the alternative media ecosystem through the production of alternative narratives of mass shooting events on Twitter,” in Proc. 11th Int. AAAI Conf. Web and Social Media (ICWSM ’17), 2017, pp. 230–239.


Reflection #8 – [09/25] – Subhash Holla H S

[1] R. K. Garrett, “Echo chambers online?: Politically motivated selective exposure among Internet news users,” J. Comput.-Mediat. Commun., vol. 14, no. 2, pp. 265–285, 2009.

[2] P. Resnick, R. K. Garrett, T. Kriplean, S. A. Munson, and N. J. Stroud, “Bursting your (filter) bubble: Strategies for promoting diverse exposure,” in Proc. 2013 Conf. Computer Supported Cooperative Work Companion (CSCW ’13), 2013, pp. 95–100.

The first paper talks about selective exposure in great detail. The novelty presented in this paper concerns opinion-challenge avoidance, with the explanation “that people’s desire for opinion reinforcement is stronger than their aversion to opinion challenges”. The best way to capture the entirety of the argument is to present the different hypotheses of the paper, since the authors present defending them as their goal.

H1: The more opinion-reinforcing information an individual expects a news story to contain, the more likely he or she is to look at it.

H2: The more opinion-reinforcing information a news story contains, the more time an individual will spend viewing it.

H3: The more opinion-challenging information the reader expects a news story to contain, the less likely he or she is to look at it.

H3a: The influence of opinion-challenging information on the decision to look at a news story will be smaller than the influence of opinion-reinforcing information.

H4: The more opinion-challenging information a news story contains, the more time the individual will spend viewing it.

This is reminiscent of a set of design principles that every practitioner is asked to follow, the Gestalt Principles. They are:

  • Similarity
  • Continuation
  • Closure
  • Proximity
  • Figure and ground

The above principles can be interpreted to fit the current context. Humans generally try to find similarity in information in order to perceive it. They also form mental models about most subjects and tasks, which they try to associate with the real world. In the current context, this relates to the first hypothesis, which is reaffirmed by cognitive dissonance theory as well. Humans have a tendency to see continuity in information, even where it might not inherently exist. This follows the same line of thought as hypothesis 3a, where opinion-reinforcing information is assumed to have more influence than opinion-challenging information. The fact that humans always try to find closure, which I would link to trying to read between the lines, is reflected in the fourth hypothesis: people generally want to know the whole story, if only so they can twist it to their own narrative when necessary. The second hypothesis can be directly linked to proximity, and figure and ground could in a way be said to map to the third hypothesis, as we always see what we want as the figure and dissociate the rest into the ground.

In general, when I try to dissect the paper, there are a few queries that I am left with.

  • Why were the subjects not allowed to go back once their answers were submitted on a page? Would going back not reveal that a participant was uninterested, given that the paper treats this as a dichotomous variable?
  • What was the rationale behind the 15-minute limit? Since the entire study was carefully planned out, was a test done to determine the time allotted to participants?
  • With the demographics of the audience being a definite skewing factor on the user data, why was no matching conducted to normalize the data and make it more representative?

A supporting theory I was reminded of is groupthink, presented in a 1982 book by Irving Janis, which explains how an individual’s judgment can be overridden by the group’s. It is relevant here because it explains how some people may be so invested in thinking a particular way that, even if they genuinely believe an opinion-challenging article, they just might not go with it. Another is the paper by Z. Kunda on motivated reasoning, in which the author describes how people search for things that confirm and reaffirm what they already believe rather than searching for the actual truth.

This is a good transition to the second reading, a conference panel paper on diverse exposure. It is essentially a proponent of ways to ensure that we do not fall into selective exposure. Though most of the panel paper’s background is already discussed above, the suggestion of using engagement tools like ConsiderIt, Reflect, and OpinionSpace is very interesting. At the end of this reading, I have a couple of questions on which I hope to get people’s opinions.

  1. Should social and news media nudge users to have diverse exposure? If yes, how much?
  2. Does educating people about selective exposure solve this problem?

DISCLAIMER: I ask these under the assumption that diverse exposure is good.


Reflection #7 – [09/18] – [Subhash Holla H S]

[1]        T. Erickson and W. A. Kellogg, “Social translucence: An approach to designing systems that support social processes,” ACM Trans. Comput.-Hum. Interact., vol. 7, no. 1, pp. 59–83, 2000.

[2]        J. Donath and F. Viégas, “The chat circles series,” in Proc. Conf. Designing Interactive Systems: Processes, Practices, Methods, and Techniques (DIS ’02), 2002, p. 359.

 

In the first paper, the premise is built around the concept of “social translucence”, a term that the authors claim has the characteristics of visibility, awareness, and accountability. The paper’s structure seems pedagogical in how the authors explain the central term and later go on to talk about knowledge management and knowledge communities. Finally, the entire theoretical base is implemented in a system called “Babble”.

The use of urban design and architecture was reminiscent of the Open Spaces video that we watched and studied in class. It reinforced how those domains could influence HCI for the better and how past researchers have explored such possibilities. The most important question of the paper for me was “Why is it that we speak of socially translucent systems rather than socially transparent systems?”. The paper mentions that privacy and visibility are in vital tension, warranting social translucence. The power of constraints is an important idea, as I feel there is a fine line between users feeling free under reasonable restrictions and feeling restricted. For developers and/or designers, this is an important quality for the platforms we build to have. Making the constraints we establish evident to all users, and clearly indicative of their reasonable basis, is difficult to achieve.

Knowledge management is one front on which organizations have improved since the time of publication, but analyzing this on a deeper level, in light of the previous readings, leads me to the line “But it is interesting to think about the possibilities of a system that was designed to ‘know’ about the notions of authorship, citation, and research communities.” If we were to design an autonomous agent that could act as such a knowledge management system, I can see a similar argument being made for it.

The latter part of the paper feels like the inspiration for many currently existing online communities, with one in particular jumping out at me: Reddit. The needs explained in “Activity Support”, “Conversation Visualization and Restructuring”, and “Organizational Knowledge Spaces” are all captured in such social platforms to a large extent. The paper gives a strong rationale for these platforms’ particular structure.

Finally, the three approaches to implementation that the paper mentions (realistic, mimetic, and abstract) can, I feel, be read as three particular stages in HCI. The article was published at a time when the first two approaches were not feasible given the state of technology. I feel that the next groundbreaking social platform will take the mimetic approach.

The approaches describe different levels of interaction, which gives a good transition into the second paper. Here the entire paper discusses the “evolutionary” development of a platform. The paper could essentially serve as a very good case study on how to implement the theoretical base of the first paper in practice. The concept of a graphical environment is not a new one, but such environments do not seem to prosper long enough, or reach widely enough, to contest the existing, more abstract communication platforms. Even so, I feel that with the shifting paradigm and users open to the mimetic approach, the next platform could be built following the Chat Circles procedure, since that platform was built from the ground up with each and every feature justified. At every step, the paper tries to capture the key interface elements that were listed.

As a student of human factors, I see the possible implications of each and every claim. From multiple hues that add visual vibrancy, to informative backgrounds, all the way to immersive visual scenes, these elements stimulate different levels of response in the human, and I can understand why and how such models are required. With current technological prowess, I feel this approach could produce a platform that sees much more success: even though current platforms include elements mentioned in the paper, an entire platform built with this approach is yet to appear.


Reflection #5 – [09/11] – Subhash Holla H S

PAPERS:

  • E. Bakshy, S. Messing, and L. A. Adamic, “Exposure to ideologically diverse news and opinion on Facebook,” Science, vol. 348, no. 6239, pp. 1130–1133, 2015.
  • M. Eslami et al., ““I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in the news feed,” in Proc. ACM Conf. Human Factors in Computing Systems (CHI ’15), 2015.

SUMMARY:

Paper 1:

The central question of the paper was “How do [these] online networks influence exposure to perspectives that cut across ideological lines?”, for which de-identified data from 10.1 million U.S. Facebook users were measured for ideological homophily in friend networks. Measuring the influence of ideologically discordant content and its relationship with the heterogeneity of friend networks led the authors to conclude that “individuals’ choices played a stronger role in limiting exposure to cross-cutting content.”

The comparisons and observations were captured in:

  • comparing the ideological diversity of the broad set of news and opinion shared on Facebook with that shared by individuals’ friend networks
  • comparing this with the subset of stories that appear in individuals’ algorithmically ranked News Feeds
  • observing what information individuals choose to consume, given exposure on News Feed.

A point of interest as a result of the study was the suggestion that the power to expose oneself to perspectives from the other side (liberal or conservative) in social media lies first and foremost with individuals.

Paper 2:

The objective of the paper was to find “whether it is useful to give users insight into these [social media] algorithms’ existence or functionality and how such insight might affect their experience”. The development of a Facebook application called FeedVis for this purpose helped them answer three questions:

  • How aware are users of the News Feed curation algorithm and what factors are associated with this awareness?
  • How do users evaluate the curation of their News Feed when shown the algorithm outputs? Given the opportunity to alter the outputs, how do users’ preferred outputs compare to the algorithm’s?
  • How does the knowledge users gain through an algorithm visualization tool transfer to their behavior?

During the study, usability tools such as think-alouds, walkthroughs, and questionnaires were employed to extract information from users, and the statistical tools of Welch’s test, the chi-square test, and Fisher’s exact test helped corroborate the findings (a quick sketch of these tests appears below). Features, both passive and active, were extracted as potential explanations for two questions: while all the participants were exposed to the algorithm’s outputs, why was the majority not aware of the algorithm’s existence? And were there any differences in Facebook usage associated with being aware or unaware of the News Feed manipulation?
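For reference, a quick sketch of the three tests with made-up numbers (none of these figures come from the paper); scipy covers all three.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Welch's t-test: say, minutes on Facebook for algorithm-aware vs. unaware
# users; equal_var=False is what makes it Welch's rather than Student's.
aware = rng.normal(30, 8, size=25)
unaware = rng.normal(24, 12, size=40)
print(stats.ttest_ind(aware, unaware, equal_var=False))

# Chi-square test of independence on a 2x2 awareness-by-usage table.
table = np.array([[12, 13],
                  [28, 12]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")

# Fisher's exact test, preferred when expected cell counts are small.
odds, p_exact = stats.fisher_exact(table)
print(f"odds ratio={odds:.2f}, p={p_exact:.3f}")
```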

REFLECTIONS:

My reflection on this paper might be biased, as I am under the impression that the authors are also stakeholders in the findings, resulting in a conflict of interest. I would like to support this impression with a few of the claims reported in the paper:

  • The suggestion that individual choice determines the content one consumes implies that the algorithm is not controlling what individuals see, but that humans indirectly are, which essentially argues against the second paper we read.
  • The limitations as stated by the authors make it seem as if we are being led to believe in the findings of a model that is not robust and has the potential to be skewed.

I acknowledge that the authors have a basis for their claims about cross-cutting content; if a more robust model that compensates for all the mentioned drawbacks yields the same findings, I will be inclined to side with them.

The notion of echo chambers and filter bubbles points us to the argument made by the second paper, which shows through a study the need for explainability and the option to choose. This was a paper I gave a lot of attention to, as it hits close to home. I feel the paper is a proponent of explainable AI. It tries to address the black-box nature of most ML and AI algorithms, where even industry leaders are aware only of the inputs and outcomes, unable to completely reason about the mechanics of the processing agent or algorithm. As someone who sees explainability as a requirement for building interactive AI, I found some of the paper’s findings obvious at points. The fact that people expressed anger and concern falls in line with a string of previous findings resulting in the work in [1], [2], [11]–[13], [3]–[10]. Reading through these papers helps one understand the need of the hour. To make the idea concrete, a small post-hoc explanation sketch appears below.
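Since [2] in the references below is the LIME paper, here is a minimal, hypothetical sketch of what such post-hoc explanation looks like in practice; the tiny training set and labels are invented, and any classifier exposing predict_proba would do.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy stand-in classifier over raw text.
texts = ["great helpful post", "terrible spam link",
         "useful answer thanks", "awful troll comment"]
labels = [1, 0, 1, 0]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# LIME perturbs the input text and fits a local linear model, yielding
# per-word weights that "explain" this one prediction to a layman.
explainer = LimeTextExplainer(class_names=["bad", "good"])
exp = explainer.explain_instance("helpful answer but spam link",
                                 clf.predict_proba, num_features=4)
print(exp.as_list())
```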

The paper also approaches the problem from a Human Factors perspective rather than an HCI one, which I feel is warranted. I would argue that a textbook approach is not what is required here; I would tangentially propose a new approach for a new field. Expecting one to stick to design principles and analysis techniques coined in an era when current algorithms were science fiction is, in my view, ludicrous. We need to approach the analysis of such human-centered systems partly with Human Factors, partly with psychology, and mostly with HCI.

I would be really interested in working on developing more understandable AI systems for the layman.

 

REFERENCES:

[1]        J. D. Lee and K. A. See, “Trust in automation: Designing for appropriate reliance,” Hum. Factors, vol. 46, no. 1, pp. 50–80, 2004.

[2]        M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier,” pp. 1135–1144, 2016.

[3]        A. Freedy, E. DeVisser, G. Weltman, and N. Coeyman, “Measurement of trust in human-robot collaboration,” in 2007 International Symposium on Collaborative Technologies and Systems, 2007, pp. 106–114.

[4]        M. Hengstler, E. Enkel, and S. Duelli, “Applied artificial intelligence and trust-The case of autonomous vehicles and medical assistance devices,” Technol. Forecast. Soc. Change, vol. 105, pp. 105–120, 2016.

[5]        K. A. Hoff and M. Bashir, “Trust in automation: Integrating empirical evidence on factors that influence trust,” Hum. Factors, vol. 57, no. 3, pp. 407–434, 2015.

[6]        E. J. de Visser et al., “Almost human: Anthropomorphism increases trust resilience in cognitive agents,” J. Exp. Psychol. Appl., vol. 22, no. 3, pp. 331–349, 2016.

[7]        M. T. Dzindolet, S. A. Peterson, R. A. Pomranky, L. G. Pierce, and H. P. Beck, “The role of trust in automation reliance,” Int. J. Hum. Comput. Stud., vol. 58, no. 6, pp. 697–718, 2003.

[8]        L. J. Molnar, L. H. Ryan, A. K. Pradhan, D. W. Eby, R. M. St. Louis, and J. S. Zakrajsek, “Understanding trust and acceptance of automated vehicles: An exploratory simulator study of transfer of control between automated and manual driving,” Transp. Res. Part F Traffic Psychol. Behav., vol. 58, pp. 319–328, Oct. 2018.

[9]        A. Freedy, E. DeVisser, G. Weltman, and N. Coeyman, “Measurement of trust in human-robot collaboration,” in 2007 International Symposium on Collaborative Technologies and Systems, 2007, pp. 106–114.

[10]      T. T. Kessler, C. Larios, T. Walker, V. Yerdon, and P. A. Hancock, “A Comparison of Trust Measures in Human–Robot Interaction Scenarios.”

[11]      M. Lewis, K. Sycara, and P. Walker, “The Role of Trust in Human-Robot Interaction.”

[12]      D. B. Quinn, “Exploring the Efficacy of Social Trust Repair in Human-Automation Interactions.”

[13]      M. Lewis et al., “The effect of culture on trust in automation: Reliability and workload,” ACM Trans. Interact. Intell. Syst., 2016.


Reflection #4 – [09/06] – [Subhash Holla H S]

Paper: S. Kumar, J. Cheng, J. Leskovec, and V. S. Subrahmanian, “An Army of Me: Sockpuppets in Online Discussion Communities,” in Proc. 26th Int. Conf. World Wide Web (WWW ’17), 2017.

Summary: The goal of the paper is to analyze the online activity of “a user account that is controlled by an individual (or puppetmaster) who controls at least one other user account.” In it, the authors identify, characterize, and predict sockpuppet behavior. The adopted definition of sockpuppets differs from the one commonly understood at the mention of the word. Whether a pair of accounts are sockpuppets is methodically established by:

  • First, identifying them using IP addresses, the time signatures of comments, and the discussions posted in. This was limited to discussions with at least 3 recurring posts.
  • Second, characterizing them using hypothesis testing to infer that sockpuppets do not lead double lives. Linguistic traits helped differentiate them from normal users, showing that they mostly use first- and second-person singular personal pronouns. Activity analysis of these sockpuppets led to the conclusions that they start fewer discussions, participate in controversial topics, are treated harshly by the community, and interact with each other a lot.

Reflections: The past few readings have probed similar areas of social computing platforms, trying to answer a security-based question: ideally, all platforms want to know the origins of each user, their behavior patterns, and their likely future use patterns. This paper essentially introduces another possible concern in the same area. While a lot of research (which is becoming more and more apparent with the readings) addresses the problem from a statistical standpoint, the question that popped into my head is whether it can be viewed from another viewpoint. Maybe we need to wear a different hat to get some new information. My answer came from my home base of Human Factors. I wish to give three possible viewpoints that a combination of Human Factors and Human-Computer Interaction advocates for:

  • Ontology: This is a generalized map of the behavior patterns one would display given a set of traits. In the case of sockpuppeteers, this would essentially mean generalizing them into categories and learning their behavior model to predict the behavior of future sockpuppeteers. This could help in the automated filtering of fake accounts, probing into non-human sockpuppets that spread misinformation, etc. For this, we would first need to build a persona of the common sockpuppeteer and then draw conclusions based on it.
  • Work Domain Analysis: The social computing platform can be considered a work domain in which the task is to post information. Since there is no normative, “one best way” to analyze it, we can take a “formative” approach, similar to Kim J. Vicente in his book on Cognitive Work Analysis. This could help us understand the different strategies sockpuppeteers use, their social organization and cooperation, and their competencies.
  • Social Network Theory: Social network theory can help identify the string of sockpuppets a user could potentially be running, which could prove a useful tool for finding the root of a group of accounts. It could also help us understand the interaction patterns of these accounts, giving valuable insight for building a behavioral model of such individuals. (A minimal sketch of this idea follows the list.)
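A minimal sketch of the social-network-theory viewpoint, using hypothetical evidence edges (the account names and links are invented): if we connect accounts whenever the paper’s identification signals co-occur, each connected component becomes a candidate sockpuppet group.

```python
import networkx as nx

# Hypothetical evidence links: tie two accounts when they share an IP and
# post in the same discussion within a short time window.
evidence = [
    ("user_a", "user_b"),
    ("user_b", "user_c"),
    ("user_x", "user_y"),
]

G = nx.Graph()
G.add_edges_from(evidence)

# Each connected component is a candidate group of accounts plausibly
# run by a single puppetmaster.
for group in nx.connected_components(G):
    if len(group) > 1:
        print(sorted(group))
```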

Another area where I have a few burning questions after reading this paper, and where I hope to get some insight, is trolling.

  1. Who is a troll?
  2. How is a troll different from a sockpuppet?
  3. Can one become the other?
  4. Do they ever interact?
  5. What is their relationship?

I am hoping to get a better understanding with more reading on the topic. I think it will be interesting to study the above-mentioned interactions.


Reflection #3 – [09/04] – [Subhash Holla H S]

In [1], the work presented makes a very strong argument for the need for language models in social computing platforms. It can be deconstructed using the following two sections:

  • SUMMARY: The paper first gives a theoretical base for the concepts it uses, along with a survey of related work. Modality, subjectivity, and the other linguistic measures used are defined to capture the different perceived dimensions of a language model, and the claims for all of them are warranted with the help of previous work. The statistical framework treats the problem as ordered logistic regression while handling phrase collinearity (a common property of natural language expressions); a minimal sketch of this model family appears after this list. The performance of the model is well documented, with a sound defense of its validity. The model’s overall accuracy is a clear indicator of its usefulness against the considered baseline classifiers. Implications are drawn for each of the defined measures based on the model’s inferential statistical results.
  • REFLECTION: As a proponent of credibility-level assessments of social media content, I favor the establishment of well-founded metrics to filter content. The paper is a strong step in that direction, with a detailed account of the design process for a good linguistic model. A few immediate design opportunities that emerge from the paper are:
    • The creation of a deployable automated system for content analysis adopting such a model. This could be a very interesting project in which a multi-agent machine learning model, using the CREDBANK corpus for its supervised learner, helps classify tweets in real time, assigning credibility to the source of the content. This would be monitored by another agent that reinforces the supervised learner, essentially creating a meta-learner [2]–[5].
    • Adaptation of an ensemble of such models to form a global system that cross-verifies and credits information not just from a single platform but across multiple ones, giving the metaphorical “global mean” as against the “local mean” of information [6].
    • The model should account for linguistic chaos, even with newly created words like “Purrrrr” or “covfefe”. These lexical outliers could be captured with the use of chaos theory in reinforcement learning, which could be an entirely new avenue of research.
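As promised above, a minimal sketch of the ordered (proportional-odds) logistic regression family the paper builds on, fit here on synthetic data; the feature names and the 5-point label are stand-ins for the paper’s linguistic measures and CREDBANK-style credibility classes, not its actual variables.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 500

# Hypothetical per-tweet linguistic features.
X = pd.DataFrame({
    "hedges": rng.poisson(1.5, n),
    "evidentials": rng.poisson(0.8, n),
    "subjectivity": rng.uniform(0, 1, n),
})

# Synthetic ordinal credibility label (0..4) driven by a latent score.
latent = (-0.6 * X["hedges"] + 0.4 * X["evidentials"]
          - 0.8 * X["subjectivity"] + rng.logistic(size=n))
y = pd.cut(latent, bins=5, labels=False)

# Ordered logistic regression: one coefficient per feature, ordered cutpoints.
res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```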

The paper also helped me understand the importance of capturing the different dimensions of a language model and corroborating them with evidence using tools of statistical inference.

[1]        T. Mitra, G. P. Wright, and E. Gilbert, “A Parsimonious Language Model of Social Media Credibility Across Disparate Events,” Proc. 2017 ACM Conf. Comput. Support. Coop. Work Soc. Comput. – CSCW ’17, pp. 126–145, 2017.

[2]        D. Li, Y. Yang, Y.-Z. Song, and T. M. Hospedales, “Learning to Generalize: Meta-Learning for Domain Generalization,” Oct. 2017.

[3]        F. Sung, L. Zhang, T. Xiang, T. Hospedales, and Y. Yang, “Learning to Learn: Meta-Critic Networks for Sample Efficient Learning,” Jun. 2017.

[4]        R. Houthooft et al., “Evolved Policy Gradients,” Feb. 2018.

[5]        Z. Xu, H. van Hasselt, and D. Silver, “Meta-Gradient Reinforcement Learning,” May 2018.

[6]        M. Michailidis, StackNet Meta-Modelling Framework, 2017. [Online]. Available: https://github.com/kaz-Anova/StackNet


Reflection #2 – [08/30] – Subhash Holla H S

J. Cheng, C. Danescu-Niculescu-Mizil, and J. Leskovec, “Antisocial behavior in online discussion communities,” in Proc. Ninth Int. AAAI Conf. Web and Social Media (ICWSM ’15), 2015, pp. 61–70. [Online]. Available: http://www.aaai.org/ocs/index.php/ICWSM/ICWSM15/paper/view/10469

The focus of the paper is categorizing antisocial behavior in online discussion communities. I appreciate the paper’s inferential statistical approach of corroborating all claims with statistics. The approach itself needs to be picked apart with a fine-tooth comb, both to understand the method followed and to point out a few discrepancies.

The paper claims to have adopted “retrospective longitudinal analyses”. The long-term observational study in subjects’ naturalistic environment hits close to home, as my current research hopes to study the “evolution of trust”. A few key takeaways here are:

  • The pool of study is limited to online discussion forums and not extended to general social media platforms. Since the authors have not claimed such an extension or provided any evidence for its possibility, it is safe to say that this model is not completely generalizable. On platforms like Twitter, where the site structure may be similar, the model adopted here might fail; a possible reason could be the option of retweeting on Twitter.
  • The use of propensity scores to determine causal effects by matching is, to my understanding, a representational and reductional technique. It is representational because it considers a section of the data to represent all of it, and reductional because it discards the section of the data not used for the matching. I wonder if this data loss has an impact on the outcome. (A minimal sketch of propensity-score matching follows the list.)
  • The use of Mechanical Turk is always a good way to complete work that is not yet possible for artificial intelligence. For the Human Intelligence Task above, the paper mentions using 131 workers, with each post’s label averaged over three workers. The question that seemed important is whether this is required when a model is being built for another platform not covered by the ones mentioned in the paper. As human hours can be expensive, an alternative could be to compromise on label quality while building a better, more robust model.
  • The main question in the paper that I hoped would be clearly answered, but felt was not, was “Can antisocial users be effectively identified early on?”. Answering it would be a huge boon for any social media platform developer and/or designer; the promise of very few or no trolls is like giving customers Charlie’s Chocolate Factory.
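As flagged in the propensity-score bullet above, here is a minimal sketch of score-based matching on synthetic data (covariates, treatment, and sample sizes are all invented), which also makes the representational/reductional point visible: unmatched controls are simply discarded.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)

# Hypothetical user covariates (e.g., activity level, post length, account
# age) and a confounded "treatment" assignment.
X = rng.normal(size=(1000, 3))
treated = rng.random(1000) < 1 / (1 + np.exp(-X[:, 0]))

# 1. Estimate propensity scores: P(treated | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated user to the control with the nearest score;
#    everything else in the control pool is dropped (the "reduction").
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
_, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
matched_controls = c_idx[match.ravel()]

# Covariate means should now be roughly balanced across groups.
print("treated means:        ", X[t_idx].mean(axis=0).round(2))
print("matched control means:", X[matched_controls].mean(axis=0).round(2))
```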

I wonder if this could be achieved by introducing an “Actor-Critic Reinforcement Learning algorithm” [1]. A reinforcement learning algorithm lets the AI agent venture into the dark maze to find an exit: by rewarding the classification or flagging of a user into the right category, we push it to train itself into a good classifier of antisocial behavior. The advantage of this model is that the critic ensures the actor, i.e., the agent performing the classification, does not learn too quickly and learns only the right things, taking care of any anomalies that could occur. A minimal sketch of the idea appears below. If the possibility exists, I feel this is an area definitely worth pursuing through a course project.
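A minimal sketch of that idea, with everything invented (features, labels, reward): a one-step actor-critic where the actor is a softmax classifier rewarded for correct flagging and the critic is a linear baseline whose TD error scales the policy update.

```python
import numpy as np

rng = np.random.default_rng(4)
n_features, n_classes = 8, 2              # user features; {normal, antisocial}

W_actor = np.zeros((n_classes, n_features))   # softmax policy weights
w_critic = np.zeros(n_features)               # linear value baseline
alpha_actor, alpha_critic = 0.05, 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(5000):
    # Hypothetical user-feature vector; feature 0 is the predictive one.
    x = rng.normal(size=n_features)
    label = int(x[0] > 0)

    probs = softmax(W_actor @ x)
    action = rng.choice(n_classes, p=probs)
    reward = 1.0 if action == label else 0.0  # reward correct flagging

    # Critic: one-step TD error = reward - baseline value of this state.
    td_error = reward - w_critic @ x
    w_critic += alpha_critic * td_error * x

    # Actor: policy-gradient step, scaled by the critic's TD error so only
    # better-than-expected classifications are reinforced.
    grad_log = -probs[:, None] * x[None, :]   # d log(pi) / d W, all rows
    grad_log[action] += x                     # plus the chosen action's term
    W_actor += alpha_actor * td_error * grad_log

# Greedy policy accuracy on fresh toy data.
test = rng.normal(size=(1000, n_features))
acc = np.mean((test @ W_actor.T).argmax(axis=1) == (test[:, 0] > 0))
print(f"toy accuracy: {acc:.2f}")
```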

REFERENCES:

[1] V. R. Konda and J. N. Tsitsiklis, “On actor-critic algorithms,” SIAM J. Control Optim., vol. 42, no. 4, pp. 1143–1166, 2003. https://doi.org/10.1137/S0363012901385691
