02/26/2020 – Vikram Mohanty – Will you accept an imperfect AI? Exploring Designs for Adjusting End-user Expectations of AI Systems

Authors: Rafal Kocielnik, Saleema Amershi, Paul Bennett

Summary


This paper discusses the impact of end-user expectations on the subjective perception of AI-based systems. The authors conduct studies to better understand how different types of errors (i.e. False Positives and False Negatives) are perceived differently by users, even though accuracy remains the same. The paper uses the context of an AI-based scheduling assistant (in an email client) to demonstrate 3 different design interventions for helping end-users adjust their expectations of the system. The studies in this paper showed that these 3 techniques were effective in preserving user satisfaction and acceptance of an imperfect AI-based system. 

Reflection

This paper is essentially an evaluation of the first two guidelines from the “Guidelines for Human-AI Interaction” paper, i.e., making clear what the system can do and how well it can do what it does.

Even though the task in the study was artificial (i.e., using workers from an internal crowdsourcing platform instead of real users of a system, and subjecting them to an artificial task instead of a real one), the study design, the research questions, and the inferences from the data initiate a conversation about giving special attention to the user experience in AI-infused systems. Because the tasks were artificial, we could not assess scenarios where users actually have a dog in the fight, e.g., they miss an important event by over-relying on the AI assistant and start to depend less on the AI suggestions.

The task here was scheduling events from emails, which is relatively simple in the sense that users can almost immediately assess how good or bad the system is at it. Furthermore, the authors manipulated the dataset to prepare the High Precision and High Recall versions of the system. Conducting this study in a real-world scenario would require a better understanding of user mental models with respect to AI imperfections. It becomes trickier when these AI imperfections cannot be accurately assessed in a real-world context, e.g., search engines may retrieve pages containing the keywords but fail to account for context, and thus may not always give users what they want.
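
To make the precision/recall distinction concrete, here is a minimal sketch (my own illustration, not the paper’s dataset manipulation, assuming a scikit-learn-style classifier) of how the same model can be turned into a “High Precision” or “High Recall” variant simply by shifting its decision threshold:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Toy stand-in for a meeting-request detector; features and labels are made up.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]

for threshold in (0.3, 0.5, 0.8):
    preds = (scores >= threshold).astype(int)
    print(f"threshold={threshold}: "
          f"precision={precision_score(y, preds):.2f}, "  # higher threshold -> fewer false positives
          f"recall={recall_score(y, preds):.2f}")         # lower threshold -> fewer false negatives
```

With a high threshold the detector rarely highlights a sentence unless it is confident (fewer false positives), while a low threshold rarely misses a real meeting request (fewer false negatives); the paper’s finding is that users tended to prefer the latter behavior.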

The paper makes an excellent case for digging deeper into error recovery costs and correlating them with why participants in this study preferred a system with a high false positive rate. This is critical for system designers to keep in mind while dealing with uncertain components like an AI core, and it matters even more in high-stakes scenarios.

Questions

  1. The paper starts off with the hypothesis that avoiding false positives is considered better for user experience, and therefore systems are optimized for high precision. The findings however contradicted it. Can you think about scenarios where you’d prefer a system with a higher likelihood of false positives? Can you think about scenarios where you’d prefer a system with a higher likelihood of false negatives?
  2. Did you think the design interventions were exhaustive? How would you have added on to the ones suggested in the paper? If you were to adopt something for your own research, what would it be? 
  3. The paper discusses factoring in other aspects, such as workload, both mental and physical, and the criticality of consequences. How would you leverage these aspects in design interventions? 
  4. If you used an AI-infused system every day (to the extent that it’s subconsciously a part of your life):
    1. Would you be able to assess the AI imperfections purely on the basis of usage? How long would it take for you to assess the nature of the AI? 
    2. Would you be aware if the AI model suddenly changed underneath? How long would it take for you to notice the changes? Would your behavior (within the context of the system) be affected in the long term? 


02/26/2020 – Ziyao Wang – Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning

As machine learning models are deployed across a variety of industry domains, it is important to design interpretability tools to help model users, such as data scientists and machine learning practitioners, better understand how these models work. However, there has been little research focused on evaluating how well these tools perform. The authors of this paper conducted experiments and surveys to fill this gap. They interviewed 6 data scientists from a large technology company to identify the most common issues faced by data scientists. They then conducted a contextual inquiry with 11 participants, built around those common issues, using the InterpretML implementation of GAMs and the SHAP Python package. Finally, they surveyed 197 data scientists. With these experiments and surveys, the authors highlighted the problems of misuse and over-trust, and the need for communication between members of the HCI and ML communities.
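
For readers unfamiliar with the two tools, the sketch below (my own, using synthetic data and default settings rather than the paper’s actual study setup) shows roughly what working with InterpretML’s glass-box GAMs and the SHAP package looks like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from interpret.glassbox import ExplainableBoostingClassifier  # InterpretML's GAM-style glass-box model
import shap

# Synthetic tabular data standing in for the datasets used in the contextual inquiry.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

# InterpretML: a glass-box model whose per-feature contributions can be inspected directly.
ebm = ExplainableBoostingClassifier().fit(X, y)
global_explanation = ebm.explain_global()  # can be rendered in a notebook with interpret.show(...)

# SHAP: post-hoc feature attributions for an otherwise black-box (here tree-based) model.
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X[:10])  # attributions for the first 10 instances
```

The paper’s concern is less with these mechanics and more with how easily such outputs are misread or over-trusted when taken at face value.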

Reflection:

Before reading this paper, I held the view that interpretability tools should be able to cover most of data scientists’ needs. However, I now see that the tools for interpretation are not designed by the ML community, which can result in a lack of accuracy in the tools. When data scientists or machine learning practitioners want to use these tools to learn how the models operate, they may run into problems like misuse or over-trust. I do not think this is the users’ fault. Tools are designed to make tasks more convenient for users. If the tools confuse users, the developers should change the tools to give users a better experience. In this case, the authors suggested that members of the HCI and ML communities should work together when developing the tools. This requires the members to leverage their respective strengths so that the resulting tools let users understand the models easily while remaining user-friendly. Meanwhile, comprehensive instructions should be written to explain how users can use the tools to understand the models accurately and easily. Finally, both the efficiency and the accuracy of the tools, and of the model implementations, will improve.

From the data scientists’ and machine learning practitioners’ point of view, they should try to avoid over-trusting the tools. The tools cannot fully explain the models, and there may be mistakes. Users should always be critical of the tools instead of fully trusting them. They should read the instructions carefully and understand how to use the tools and what they are for, what the models are being used for, and how to use the models. If they think carefully when using these tools and models, instead of guessing at the meaning of the tools’ outputs, the number of misuse and over-trust cases will drop sharply.

Questions:

  1. How should a proposed interactive interpretability tool be designed? What kinds of interactions should be included?
  2. How can we design a tool that lets users conveniently dig into the models, instead of letting them use the models without knowing how the models work?
  3. How can we design tools that best leverage the strengths of users’ mental models?


02/26/2020 – Dylan Finch – Will You Accept an Imperfect AI? Exploring Designs for Adjusting End-user Expectations of AI Systems

Word count: 556

Summary of the Reading

This paper examines the role of expectations, and of focusing on certain types of errors, in shaping how people perceive AI. The aim of the paper is to figure out how setting expectations can help users better see the benefits of an AI system. Users will feel worse about a system that they think can do a lot and then fails to live up to those expectations than about a system that they think can do less and then succeeds at accomplishing those smaller goals.

Specifically, this paper lays out some ways to better set user expectations: an Accuracy Indicator that helps users anticipate what the accuracy of the system will be, an explanation method based on examples to help increase user understanding, and the ability for users to adjust the performance of the system. The authors also show the usefulness of these 3 techniques, and that systems tuned to avoid false positives are generally perceived as worse than those tuned to avoid false negatives.

Reflections and Connections

This paper highlights a key problem with AI systems: people expect them to be almost perfect, and companies market them as such. Many companies that have deployed AI systems have not done a good job managing expectations for their own AI systems. For example, Apple markets Siri as an assistant that can do almost anything on your iPhone. Then, once you buy one, you find out that it can really only do a few very specialized tasks that you will rarely use. You are unhappy because the company sold you on a much more capable product. With so many companies doing this, it is understandable that many people have very high expectations for AI. Many companies seem to market AI as the magic bullet that can solve any problem. But the reality is often much more underwhelming. I think that companies that develop AI systems need to play a bigger role in managing expectations. They should not sell their products as systems that can do anything. They should be honest and say that their product can do some things but not others, and that it will make a lot of mistakes; that is just how these things work.

I think that the most useful tool this team developed was the slider that allows users to choose between more false positives and more false negatives. I think that this system does a great job of incorporating many of the things they were trying to accomplish into one slick feature. The slider shows people that the AI will make mistakes, so it better sets user expectations. But, it also gives users more control over the system which makes them feel better about it and allows them to tailor the system to their needs. I would love to see more AI systems give users this option. It would make them more functional and understandable. 

Questions

  1. Will AI ever become so accurate that these systems are no longer needed? How long will that take?
  2. Which of the 3 developed features do you think is most influential/most helpful?
  3. What are some other ways that AI developers could temper the expectations of users?


02/26/2020 – Sushmethaa Muhundan – Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment

The paper explores how people make fairness judgments of ML systems and the impact that different explanations can have on these judgments. The paper also explores how providing personalized and adaptive explanations can support such fairness judgments of ML systems. It is extremely important to ensure algorithmic fairness, and there is a need to consciously work towards avoiding the risk of amplifying existing biases. In this context, providing explanations can be beneficial in two ways: not only do they provide implementation details that would otherwise be a “black box” to a user, but they also facilitate better human-in-the-loop experiences by enabling people to identify fairness issues. The COMPAS recidivism data was utilized for the study, and four different explanation styles were examined: input-influence based, demographic-based, sensitivity-based, and case-based. Through the study, it is highlighted that there is no one-size-fits-all solution for an effective explanation. The dataset, context, kinds of fairness issues, and user profiles vary and need to be addressed individually. The paper proposes providing hybrid explanations as a solution to this problem, giving both an overview of the ML model and information about specific cases to help aid accurate fairness judgments.

While there has been a lot of research focus on developing non-discriminatory ML algorithms, this paper specifically deals with the human aspect which is necessary to identify and remedy fairness issues. I feel that this is equally important and is often overlooked. It was interesting to note that they auto-generated the explanations, unlike previous studies. 

With respect to the different explanation styles used, I found the sensitivity-based explanation particularly interesting since it clearly shows how the prediction would differ if certain attributes were modified. In my view, this form of explanation, out of the four proposed, is extremely effective in bringing out any bias that may be present in the ML system.
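
To make this concrete, a sensitivity-style explanation essentially answers the question “would the prediction change if this attribute were different?”. The following is a minimal sketch of that idea (my own, with invented features and data rather than the paper’s COMPAS attributes):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative data: columns are [age, prior_count, group_flag]; labels are synthetic,
# not the COMPAS attributes used in the paper.
X = np.array([[25, 3, 1], [40, 0, 0], [30, 5, 1], [50, 1, 0]] * 25, dtype=float)
y = np.array([1, 0, 1, 0] * 25)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def sensitivity_explanation(instance, feature_index, new_value):
    """Report whether changing a single attribute flips the model's prediction."""
    original = clf.predict([instance])[0]
    altered = instance.copy()
    altered[feature_index] = new_value
    flipped = clf.predict([altered])[0]
    if original != flipped:
        return (f"Changing feature {feature_index} to {new_value} flips the "
                f"prediction from {original} to {flipped}.")
    return f"The prediction ({original}) is unchanged by this attribute."

print(sensitivity_explanation(np.array([25.0, 3.0, 1.0]), feature_index=2, new_value=0))
```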

I felt that the input-influence based explanation was also effective since it had +/- markers corresponding to the features that matched the particular case, giving users a clearer picture of which attributes specifically influenced the result and thereby exposing the implementation details to a certain extent.

The study results document various insights from participants, and I found some of them extremely fascinating. While some participants believed that certain predictions were biased, others found it normal for those verdicts to be predicted. It truly captured the diversity of opinions and perspectives on the same ML system, based on the different explanations provided.

  1. Through this study, it is revealed that the perception of bias is not uniform and is extremely subjective. Given this lack of agreement on the definition of moral concepts, how can a truly unbiased ML system be achieved?
  2. What are some practices that can be followed by ML model developers to ensure that the bias in the input dataset is identified and removed?
  3. Apart from gender-bias and ethnic-bias, what are some other prevalent biases in existing ML systems that need to be eradicated?


2/26/20 – Jooyoung Whang – Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment

The paper presents research on fairness, Explainable Artificial Intelligence (XAI), and changes in people’s judgments. The authors introduce a preprocessing method to reduce the bias of a dataset for known bias-inducing attributes. They also show four explanation methods for the classification results: Sensitivity, Input-Influence, Case, and Demographic. Using different combinations of the above configurations, AI classifications of the COMPAS data were presented to MTurk workers for feedback. As a result, the paper reports that case-based explanations were often seen as less fair than other explanation methods. The authors also found that sensitivity explanations are the most effective at addressing unfairness. Finally, the paper shows that an evaluator’s prior position on machine learning heavily impacts his or her reaction to a classifier output and its explanations.

When I looked at the paper’s sample sensitivity explanation, it gave me a strong impression that the system was racist. I think many others would have a similar thought, especially if they do not have enough knowledge about machine learning and regression. Because of this, it concerned me that some people may, as a knee-jerk reaction, be pushed toward making the opposite decision from the one the AI made. This is clearly adding another bias in the opposite direction. I believe an explanatory model should only give helpful information about the model instead of introducing bias. Thinking of a possible solution, the authors could have rephrased the same information in a different way. For example, instead of bluntly saying that the classifier would have made a different decision, the system could have reported the probability for each label. This provides the same information but adds less obvious bias. Another solution would be preprocessing the data so that it does not have the bias in the first place, as the authors suggested.
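
As a small illustration of the rephrasing suggested above (my own sketch, assuming a scikit-learn-style classifier and invented data), the same information can be surfaced as a probability per label rather than as a blunt counterfactual statement:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for the classifier discussed above; the data and labels are invented.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]] * 30)
y = np.array([1, 0, 1, 0] * 30)
clf = LogisticRegression().fit(X, y)

instance = np.array([[0.9, 0.2]])
# Instead of "the classifier would have decided differently if ...",
# surface how confident the model is in each possible label.
for label, p in zip(clf.classes_, clf.predict_proba(instance)[0]):
    print(f"P(label={label}) = {p:.2f}")
```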

I liked the idea of comparing the subjects’ prior positions on using ML with their judgments of the classifier. This relates to a reflection I made last week, where I raised the possibility that people may put more weight on cases where the model makes a wrong decision. As I expected, the paper reported that prior positions do in fact make a huge difference in a user’s judgment. Either building more trust with the users or building the software to effectively address both kinds of users would be needed to deal with this issue.

The following are the questions I had while reading the paper:

1. Could preprocessing the data add bias to the data instead of removing it? What if an attribute that was thought to be unneeded for the classification was actually crucial to the judgment?

2. The authors state that one of the limitations of their study is conducting it with MTurk workers and not the actual users of the software. Do you think this was really a limitation? The attributes used for the classifier and explanations in their experiment seemed general enough for non-professionals to make a meaningful judgment.

3. If you were to design a classifier with an explanation model, which explanation method would you pick? (Out of Sensitivity, Input-Influence, Case, and Demographic) What do you like about the chosen method?


02/26/2020 – Bipasha Banerjee – Will You Accept an Imperfect AI? Exploring Designs for Adjusting End-user Expectations of AI Systems

Summary

The paper talks about user expectations when it comes to end-user applications. It is essential to make sure that user expectations are set to an optimal level so that the user does not find the end product underwhelming. Most of the related work in this area highlights the fact that user disappointment occurs when initial expectations are set too high. Initial expectations can originate from advertisements, product reviews, brands, word of mouth, etc. The authors tested their hypotheses on an AI-powered scheduling assistant. They created an interface similar to the Microsoft Outlook email system. The main purpose of the interface was to detect whether an email was sent with the intention of scheduling a meeting. If so, the AI would automatically highlight the meeting-request sentence and then allow the user to schedule the meeting. The authors designed three techniques, namely an accuracy indicator, example-based explanations, and a control slider, for adjusting end-user expectations. Most of their hypotheses proved to be true. Yet it was found that an AI system based on high recall had better user acceptance than one based on high precision.

Reflection

The paper was an interesting read on adjusting end-user expectations. The AI scheduling assistant was used as a UI tool to evaluate users’ reactions and expectations of the system. The authors conducted various experiments based on the three design techniques. I was intrigued to find that the high-precision version did not result in a higher perception of accuracy. A practitioner with an ML background instinctively looks at precision (i.e., avoiding false positives). From this, we can infer that the task at hand should be the judge of which metric we focus on. It is certainly true that here, displaying a wrongly highlighted sentence would annoy the user less than completely missing the meeting details in an email. Hence, I would say this kind of high-recall priority should be kept in mind and adjusted according to the end goal of the system.
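
To spell the trade-off out with made-up numbers (not figures from the paper): in this setting a false positive is a wrongly highlighted sentence and a false negative is a missed meeting request, and the two metrics weigh these errors differently.

```python
# Hypothetical confusion-matrix counts for a high-recall configuration of the assistant.
tp = 80   # meeting requests correctly highlighted
fp = 30   # sentences wrongly highlighted (cheap for the user to dismiss)
fn = 5    # meeting requests missed entirely (costly for the user)

precision = tp / (tp + fp)   # ~0.73: some spurious highlights
recall = tp / (tp + fn)      # ~0.94: almost no important meeting is missed
print(f"precision={precision:.2f}, recall={recall:.2f}")
```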

It would also be interesting to see how such expectation-oriented experiments perform for other, more complex tasks. This AI scheduling task was straightforward, in that there can be only one correct answer. It is necessary to see how the expectation-based approach fares when the task is subjective. By subjective, I mean that the success of the task would vary from user to user. For example, in the case of text summarization, the judgment of the quality of the end product would be highly dependent on the user reading it.

Another critical thing to note is that expectations can also stem from a user’s personal skill level and their resulting expectations of a system. For a crowd worker, a wrongly highlighted line might not matter much when the number of emails and tasks is small. How likely is this to annoy busy professionals who have to deal with a lot of emails and messages containing meeting requests? Having multiple incorrect highlights a day is undoubtedly bound to disappoint the user.

Questions 

  1. How does this extend to other complex user-interactive systems?
  2. Certain tasks are relative, like text summarization. How would the system evaluate success and gauge expectations in such cases where the task at hand is extremely subjective?
  3. How would the expectation vary with the skill level of the user? 


2/26/20 – Jooyoung Whang – Will You Accept an Imperfect AI? Exploring Designs for Adjusting End-user Expectations of AI Systems

This paper seeks to study what an AI system could do to gain user acceptance even if it is not perfect. The paper focuses on the concept of “expectation” and the discrepancy between an AI’s ability and a user’s expectation of the system. To explore this problem, the authors implemented an AI-powered scheduling assistant that mimics the look of MS Outlook. The agent detects whether an e-mail contains an appointment request and asks the user if he or she wants to add the event to the calendar. The system was intentionally made to perform worse than the originally trained model in order to explore mitigation techniques that boost user satisfaction given an imperfect system. After trying out various methods, the authors conclude that users prefer a system tuned for high recall over one tuned for high precision in this setting, and that users like systems that give direct information about themselves, show explanations, and support a certain measure of control.

This paper was a fresh approach that appropriately addresses the limitations that AI systems will likely have. While many researchers have looked into methods of maximizing system accuracy, the authors of this paper studied ways to improve user satisfaction even without a high-performing AI model.

I did get the feeling that the designs for adjusting end-user expectations were a bit too static. Aside from the controllable slider, the other two designs were basically texts and images with either an indication of the accuracy or a step-by-step guide on how the system works. I wonder if having a more dynamic version where the system reports for a specific instance would be more useful. For example, for every new E-mail, the system could additionally report to the user how confident it is or why it thought that the E-mail included a meeting request.

This research reminded me of one of the UX design techniques: think-aloud testing. In all of their designs, the authors’ common approach was to close the gap between user expectation and system performance. Think-aloud testing is also used to close that gap by analyzing how a user would interact with a system and adjusting from the results. I think this research approached it in the opposite way. Instead of adjusting the system, the authors’ designs try to adjust the user’s mental model.

The following are the questions that I had while reading the paper:

1. As I’ve written in my reflection, do you think the system would be accepted more if it reported some information about itself for each instance (e-mail)? Do you think the system may appear to be making excuses when it is wrong? In what way would this dynamic version be more helpful than the static designs from the paper?

2. In the generalizability section, the authors state that they think some parts of their study are scalable to other kinds of AI systems. What other types of AI could benefit from this study? Which one would benefit the most?

3. Many AI applications today are deployed after satisfying a certain accuracy threshold which is pretty high. This can lead to more funds and time needed for development. Do you think this research will allow the stakeholders to lower the threshold? In the end, the stakeholders just want to achieve high user satisfaction.


02/26/2020 – Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment – Yuhang Liu

This paper mainly explores the unfairness of machine learning results. These injustices are usually reflected in gender and race, so in order to make the results of machine learning better serve people, the authors conducted an empirical study with four types of programmatically generated explanations to understand how they impact people’s fairness judgments of ML systems. The four explanation styles have different characteristics, and after the experiment the authors report the following findings:

  1. Some explanations are inherently considered less fair, while others can increase people’s confidence in the fairness of the algorithm;
  2. Different explanations more effectively expose different fairness issues, such as model-wide fairness issues versus case-specific fairness disparities;
  3. There are differences between people: different people hold different prior positions, and their perspective on things affects how they respond to different explanation styles.

In the end, the authors conclude that in order to make the results of machine learning generally fair, different corrections are needed in different situations, and differences between people must be taken into account.

Reflection:

In another class this semester, the teacher assigned three readings on machine learning results and increased discrimination. In the discussion of those three articles, I remember that most students thought the reason for the discrimination was not the inaccuracy of the algorithm or model, and I even thought that machine learning simply analyzes things objectively and displays the results, and that the main reason people feel uncomfortable, or even find the results immoral, is that they are unwilling to face those results. It is often difficult for people to have a clear view of the whole picture, and when these unnoticed aspects are brought to the table, people are shocked or even condemn others, yet rarely think carefully about the underlying cause. After reading this paper, however, I think my previous understanding was narrow. First, the results of an algorithm and the interpretation of those results can indeed be wrong and discriminatory in some cases, so only by resolving this discrimination can the results of machine learning better serve people. At the same time, I agree with the ideas and conclusions in the article: different explanation methods and different emphases do affect the perceived fairness of an interpretation, and the prerequisite for eliminating injustice is to understand its causes. I also think the main responsibility for eliminating injustice still rests with the researchers. The reason I find computers fascinating is that they can deal with problems rationally and objectively; people’s responses to different results, and the influence different people have on different model predictions, are the key to eliminating this injustice. Of course, part of the cause of this injustice is also the injustice of our own society. When people think that the results of machine learning carry discrimination based on race, sex, religion, etc., we should also reflect on that discrimination itself, and on whether we pay enough attention to gender equality and ethnic equality, rather than only on how to make the results look better.

Question:

  1. Do you think this unfairness arises more because the results of machine learning mislead people, or because it has existed in people’s society for a long time?
  2. The article proposes that in order to get fairer results, more people need to be considered. What changes should users make?
  3. How can the strengths of different machine learning explanations be combined to create a fairer explanation?


02/26/2020 – Subil Abraham – Explaining models

A big concern with the usage of current ML systems is the issue of fairness and bias in their decisions. Bias can creep into ML decisions through either the design of the algorithm or through training datasets that are labeled in a way that biases against certain kinds of things. The example used in this paper is the bias against African Americans in an ML system used by judges to predict the probability of a person re-offending after committing a crime. Fairness is hard to judge when ML systems are black boxes, so this paper proposes that if ML systems expose the reasons behind their decisions (i.e. the idea of explainable AI), the user can make a better judgement of the fairness of a decision. To this end, this paper examines the effect of four different kinds of explanations of ML decisions on people’s judgements of the fairness of those decisions.

I believe this is a very timely and necessary paper in these times, with ML systems being used more and more for sensitive and life-changing decisions. It is probably impossible to stop people from adopting these systems, so the next best thing is making explainability of ML decisions mandatory, so people can see and judge whether there was potential bias in the ML system’s decisions. It is interesting that people were mostly able to perceive that there were fairness issues in the raw data. You would think that would be hard, but the generated explanations may have worked well enough to help with that (though I do wish they could have shown an example comparing a raw data point and a processed data point, to show how their pre-processing cleaned things). I did wonder why they didn’t show confidence levels to the users in the evaluation, but their explanation that it was something they could not control for makes sense. People could have different reactions to confidence levels, some thinking that anything less than 100% is insufficient, while others think that 51% is good enough. So keeping it out is limiting but logical.

  1. What other kinds of generated explanations could be beneficial, outside of the ones used in the paper?
  2. Checking for racial bias is an important case for fair AI. In what other areas is fairness and bias correction in AI critical?
  3. What would be ways that you could mitigate any inherent racial bias of the users who are using explainable AI, when they are making their decisions?


02/26/2020 – Subil Abraham – Will you accept an imperfect AI?

Reading: Rafal Kocielnik, Saleema Amershi, and Paul N. Bennett. 2019. Will You Accept an Imperfect AI? Exploring Designs for Adjusting End-user Expectations of AI Systems. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), 1–14. https://doi.org/10.1145/3290605.3300641

Different parts of our lives are being infused with AI magic. With this infusion, however, come problems, because the AI systems deployed aren’t always accurate. Users are used to software systems being precise and doing exactly the right thing. Unfortunately, they can’t extend that expectation to AI systems, because these are often inaccurate and make mistakes. Thus it is necessary for developers to set users’ expectations ahead of time so that the users are not disappointed. This paper proposes three different visual methods of setting the user’s expectations of how well the AI system will work: an indicator depicting accuracy, a set of examples demonstrating how the system works, and a slider that controls how aggressively the system should work. The system under evaluation is a detector that identifies and suggests potential meetings based on the language in an email. The goal of the paper isn’t to improve the AI system itself, but rather to evaluate how well the different expectation-setting methods work given an imprecise AI system.

I want to note that I really wanted to see an evaluation of the effects of mixed techniques. I hope that it will be covered in possible future work, but I am also afraid that such work might never get published because it would be classified as incremental (unless they come up with more expectation-setting methods beyond the three mentioned in this paper and do a larger evaluation). It is useful to see that we now have numbers to back up that high-recall applications are, under certain scenarios, perceived as more accurate. It makes intuitive sense that it would be more convenient to deal with false positives (just close the dialog box) than false negatives (having to manually create a calendar event). Also, seeing the control slider brings to mind the trick that some offices play where they have the climate control box within easy reach of the employees but it actually doesn’t do anything. It’s a placebo to make people think it got warmer/colder when nothing has changed. I realize that the slider in the paper is actually supposed to do what it advertises, but it makes me think of other places where a placebo slider could be given to a user to make them think they have control when in fact the AI system remains completely unchanged.

  1. What other kinds of designs can be useful for expectation setting in AI systems?
  2. How would these designs look different for a more active AI system like medical prediction, rather than a passive AI system like the meeting detector?
  3. The paper claims that the results are generalizable for other passive AI systems, but are there examples of such systems where it is not generalizable?
