02/26/2020 – Dylan Finch – Will You Accept an Imperfect AI? Exploring Designs for Adjusting End-user Expectations of AI Systems

Word count: 556

Summary of the Reading

This paper examines how setting expectations and emphasizing certain types of errors affect user perceptions of AI. The aim of the paper is to figure out how expectation setting can help users better see the benefits of an AI system. Users feel worse about a system that they believe can do a lot and then fails to live up to those expectations than about a system that they believe can do less and then succeeds at those smaller goals.

Specifically, the paper lays out several ways to better set user expectations: an Accuracy Indicator, which helps users calibrate how accurate the system is likely to be; an example-based explanation method that increases user understanding; and a control that lets users adjust the performance of the system. The authors demonstrate the usefulness of these three techniques and show that systems tuned to avoid false positives are generally perceived as worse than those tuned to avoid false negatives.

Reflections and Connections

This paper highlights a key problem with AI systems: people expect them to be almost perfect, and companies market them as such. Many companies that deploy AI systems have done a poor job of managing expectations for their own products. For example, Apple markets Siri as an assistant that can do almost anything on your iPhone. Then, once you buy one, you find that it can really only handle a few specialized tasks that you will rarely use. You are unhappy because the company sold you on a much more capable product than the one you received. With so many companies doing this, it is understandable that many people have very high expectations for AI. Many companies market AI as a magic bullet that can solve any problem, but the reality is often much more underwhelming. I think that companies that develop AI systems need to play a bigger role in managing expectations. They should not sell their products as systems that can do anything. They should be honest: their products can do some things but not others, and they will make mistakes; that is simply how these systems work.

I think that the most useful tool this team developed is the slider that allows users to choose between more false positives and more false negatives. This feature does a great job of folding many of the paper's goals into one slick control. The slider shows people that the AI will make mistakes, so it sets user expectations better. It also gives users more control over the system, which makes them feel better about it and allows them to tailor the system to their needs. I would love to see more AI systems give users this option; it would make them more functional and more understandable.
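The paper does not spell out how the slider is implemented, but one natural reading is that it adjusts the classifier's decision threshold: lowering the threshold trades false negatives for false positives, and raising it does the opposite. Below is a minimal Python sketch of that idea; the function names, the 0-100 slider scale, and the email-scheduling example are my own assumptions, not the authors' code.

# Minimal sketch: mapping a false-positive/false-negative slider to a
# classifier's decision threshold. All names and scales here are
# assumptions for illustration, not the paper's implementation.

def slider_to_threshold(slider_value: float) -> float:
    """Map a 0-100 slider position to a decision threshold in [0, 1].

    0   -> threshold 0.0 (flag everything: no false negatives, many false positives)
    100 -> threshold 1.0 (flag nothing: no false positives, many false negatives)
    """
    return slider_value / 100.0

def classify(score: float, slider_value: float) -> bool:
    """Flag an item (e.g., an email as containing a meeting request)
    when the model's confidence score clears the slider-derived threshold."""
    return score >= slider_to_threshold(slider_value)

# The same model score yields different outcomes depending on where the
# user sets the slider:
score = 0.6  # hypothetical model confidence for one email
print(classify(score, slider_value=40))  # True: lenient setting, fewer false negatives
print(classify(score, slider_value=80))  # False: strict setting, fewer false positives

The design point is that the model itself never changes; the user is only moving where its confidence scores get cut into "flag" versus "ignore," which is what makes the tradeoff both visible and adjustable.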

Questions

  1. Will AI ever become so accurate that these systems are no longer needed? How long will that take?
  2. Which of the three features the team developed do you think is the most influential or most helpful?
  3. What are some other ways that AI developers could temper the expectations of users?

One thought on “02/26/2020 – Dylan Finch – Will You Accept an Imperfect AI? Exploring Designs for Adjusting End-user Expectations of AI Systems”

  1. I agree with your reflections: companies can definitely do a better job of setting expectations for their AI products to lessen users’ disappointment later on. Regarding the second question, I feel that all three features are great for realistic expectation setting. I found the example-based explanation particularly impactful, since users are shown example sentences along with their predictions. This gives a sense of what to expect from the system and also showcases scenarios where the AI fails to predict correctly. This would help shape the user’s mental model of the system and help them make informed decisions about accepting or rejecting the system’s predictions.