PAPERS:
- E. Bakshy, S. Messing, and L. A. Adamic, “Exposure to ideologically diverse news and opinion on Facebook,” Science, vol. 348, no. 6239, pp. 1130–1133, 2015.
- M. Eslami et al., “‘I always assumed that I wasn’t really that close to [her]’: Reasoning about invisible algorithms in news feeds,” in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15), 2015.
SUMMARY:
Paper 1:
The paper’s central question was “How do [these] online networks influence exposure to perspectives that cut across ideological lines?” To answer it, the authors measured ideological homophily in the friend networks of 10.1 million de-identified U.S. Facebook users. Analyzing exposure to ideologically discordant content and its relationship to the heterogeneity of users’ friend networks led the authors to conclude that “individuals’ choices played a stronger role in limiting exposure to cross-cutting content.”
The comparisons and observations were captured in three steps (see the sketch after this list):
- comparing the ideological diversity of the broad set of news and opinion shared on Facebook with that shared by individuals’ friend networks
- comparing this with the subset of stories that appear in individuals’ algorithmically ranked News Feeds
- observing what information individuals choose to consume, given exposure in the News Feed.
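To make the three-stage comparison concrete, here is a minimal sketch of the underlying idea: measuring what fraction of items is “cross-cutting” (ideologically opposed to the user) at each stage. The data records, alignment scores, and the opposition rule are hypothetical illustrations, not the paper’s actual pipeline or measures.

```python
# Minimal sketch of comparing the share of cross-cutting items a user encounters
# at three stages: content shared by friends, content surfaced by the ranked
# feed, and content actually clicked. All records below are hypothetical.

def cross_cutting_share(items, user_alignment):
    """Fraction of items whose ideological alignment opposes the user's."""
    if not items:
        return 0.0
    opposing = [it for it in items if it["alignment"] * user_alignment < 0]
    return len(opposing) / len(items)

# Hypothetical user and content (negative = liberal, positive = conservative).
user_alignment = -0.6
shared_by_friends = [{"alignment": -0.8}, {"alignment": 0.5},
                     {"alignment": 0.7}, {"alignment": -0.2}]
ranked_in_feed = [{"alignment": -0.8}, {"alignment": 0.5}, {"alignment": -0.2}]
clicked = [{"alignment": -0.8}, {"alignment": -0.2}]

for stage, items in [("shared by friends", shared_by_friends),
                     ("ranked into feed", ranked_in_feed),
                     ("selected (clicked)", clicked)]:
    print(f"{stage:>20}: {cross_cutting_share(items, user_alignment):.2f} cross-cutting")
```

Comparing these fractions stage by stage is what lets one attribute the narrowing of exposure either to the ranking step or to the individual’s own selection step.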
A notable takeaway from the study was the suggestion that, on social media, the power to expose oneself to perspectives from the other side (liberal or conservative) lies first and foremost with individuals.
Paper 2:
The objective of the paper was to determine “whether it is useful to give users insight into these [social media] algorithms’ existence or functionality and how such insight might affect their experience.” To this end, the authors developed a Facebook application called FeedVis, which helped them answer three questions:
- How aware are users of the News Feed curation algorithm and what factors are associated with this awareness?
- How do users evaluate the curation of their News Feed when shown the algorithm outputs? Given the opportunity to alter the outputs, how do users’ preferred outputs compare to the algorithm’s?
- How does the knowledge users gain through an algorithm visualization tool transfer to their behavior?
During the study, usability-research tools such as think-aloud protocols, walkthroughs, and questionnaires were employed to gather information from users. Statistical tests (Welch’s t-test, the chi-square test, and Fisher’s exact test) helped corroborate the findings; a sketch of how such tests are run appears below. Both passive and active features were extracted as potential explanations for two questions: given that all participants were exposed to the algorithm’s outputs, why were the majority not aware of the algorithm’s existence? And were there any differences in Facebook usage associated with being aware or unaware of the News Feed curation?
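For readers unfamiliar with these tests, the following sketch shows how the three named tests might be run to compare aware and unaware participants. The numbers and group labels are made up for illustration and are not the paper’s data.

```python
# Illustrative use of the three tests mentioned above, with made-up numbers.
from scipy import stats

# Welch's t-test: compare a continuous measure (e.g., minutes on Facebook per day)
# between aware and unaware groups without assuming equal variances.
aware_usage = [55, 80, 65, 90, 70, 60]
unaware_usage = [40, 35, 50, 45, 30, 55]
t_stat, p_welch = stats.ttest_ind(aware_usage, unaware_usage, equal_var=False)

# Chi-square test of independence: awareness vs. a categorical usage factor,
# arranged as a contingency table [[aware_yes, aware_no], [unaware_yes, unaware_no]].
table = [[18, 7], [9, 16]]
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Fisher's exact test: the same kind of 2x2 comparison when cell counts are small.
odds_ratio, p_fisher = stats.fisher_exact([[3, 9], [10, 4]])

print(f"Welch's t-test p = {p_welch:.3f}")
print(f"Chi-square p     = {p_chi2:.3f}")
print(f"Fisher exact p   = {p_fisher:.3f}")
```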
REFLECTIONS:
My reflection on this paper may be biased, as I am under the impression that the authors are also stakeholders in the findings, which creates a conflict of interest. I would like to support this impression with a few observations about the paper’s reporting:
- The suggestion that individuals’ choices determine the content they consume implies that the algorithm is not controlling what individuals see but that humans indirectly are, which essentially argues against the second paper we read.
- The limitations as stated by the authors make it seem as if they are leading us to believe in a model’s findings that are not robust and have the potential to be skewed.
I will acknowledge that the authors have a basis for their claims about cross-cutting content; if a more robust model that compensates for all the drawbacks mentioned yields the same findings, I will be inclined to side with the authors’ conclusions.
The notion of echo chambers and filter bubbles points us to the argument made by the second paper, whose study demonstrates the need for explainability and the option to choose. This was a paper I gave a lot of attention to, as it hits close to home. I feel that the paper is a proponent of explainable AI. It tries to address the black-box nature of most ML and AI algorithms, where even industry leaders are aware only of the inputs and outcomes and cannot fully reason about the mechanics behind the processing agent or algorithm. As someone who sees explainability as a requirement for building interactive AI, I found the paper’s findings almost obvious at points. The fact that people expressed anger and concern falls in line with a string of previous findings, which resulted in the work in [1]–[13]. Reading through these papers helps one understand the need of the hour.
The paper also approaches the problem from a Human Factors perspective rather than an HCI one, which I feel is warranted. I would argue that a textbook approach is not what is required; I would tangentially propose a new approach for a new field. Expecting one to stick to design principles and analysis techniques coined in an era when current algorithms were science fiction is, to me, ludicrous. We need to approach the analysis of such human-centered systems partly with Human Factors, partly with psychology, and mostly with HCI.
I would be very interested in working on developing more understandable AI systems for the layperson.
REFERENCES:
[1] J. D. Lee and K. A. See, “Trust in Automation: Designing for Appropriate Reliance,” Hum. Factors, vol. 46, no. 1, pp. 50–80, 2004.
[2] M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier,” in Proc. 22nd ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD ’16), 2016, pp. 1135–1144.
[3] A. Freedy, E. DeVisser, G. Weltman, and N. Coeyman, “Measurement of trust in human-robot collaboration,” in 2007 International Symposium on Collaborative Technologies and Systems, 2007, pp. 106–114.
[4] M. Hengstler, E. Enkel, and S. Duelli, “Applied artificial intelligence and trust-The case of autonomous vehicles and medical assistance devices,” Technol. Forecast. Soc. Change, vol. 105, pp. 105–120, 2016.
[5] K. A. Hoff and M. Bashir, “Trust in automation: Integrating empirical evidence on factors that influence trust,” Hum. Factors, vol. 57, no. 3, pp. 407–434, 2015.
[6] E. J. de Visser et al., “Almost human: Anthropomorphism increases trust resilience in cognitive agents,” J. Exp. Psychol. Appl., vol. 22, no. 3, pp. 331–349, 2016.
[7] M. T. Dzindolet, S. A. Peterson, R. A. Pomranky, L. G. Pierce, and H. P. Beck, “The role of trust in automation reliance,” Int. J. Hum. Comput. Stud., vol. 58, no. 6, pp. 697–718, 2003.
[8] L. J. Molnar, L. H. Ryan, A. K. Pradhan, D. W. Eby, R. M. St. Louis, and J. S. Zakrajsek, “Understanding trust and acceptance of automated vehicles: An exploratory simulator study of transfer of control between automated and manual driving,” Transp. Res. Part F Traffic Psychol. Behav., vol. 58, pp. 319–328, Oct. 2018.
[9] A. Freedy, E. DeVisser, G. Weltman, and N. Coeyman, “Measurement of trust in human-robot collaboration,” in 2007 International Symposium on Collaborative Technologies and Systems, 2007, pp. 106–114.
[10] T. T. Kessler, C. Larios, T. Walker, V. Yerdon, and P. A. Hancock, “A Comparison of Trust Measures in Human–Robot Interaction Scenarios.”
[11] M. Lewis, K. Sycara, and P. Walker, “The Role of Trust in Human-Robot Interaction.”
[12] D. B. Quinn, “Exploring the Efficacy of Social Trust Repair in Human-Automation Interactions.”
[13] M. Lewis et al., “The Effect of Culture on Trust in Automation: Reliability and Workload,” ACM Trans. Interact. Intell. Syst., 2016.