02/26/20 – Nan LI – Will You Accept an Imperfect AI? Exploring Designs for Adjusting End-user Expectations of AI Systems

Summary:

The key motivation of this paper is to investigate the factors that influence user satisfaction with and acceptance of an imperfect AI-powered system; the example used in the paper is an email Scheduling Assistant. To achieve this goal, the authors conducted a number of experiments based on three techniques for setting expectations: Accuracy Indicator, Example-based Explanation, and Performance Control. Before the experiments, the authors pose three main research questions: how High Precision (low False Positives) versus High Recall affects perceived accuracy and acceptance; which design techniques are effective for setting appropriate end-user expectations of AI systems; and what impact expectation-setting intervention techniques have. A series of hypotheses is also made before the experiments. Finally, the experimental results indicate that the expectation adjustment techniques demonstrated in the paper impacted the intended aspects of expectations and were able to enhance user satisfaction and acceptance of an imperfect AI system. Contrary to expectation, the conclusion is that a High Recall system increases user satisfaction and acceptance more than a High Precision system does.
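
The precision/recall trade-off at the heart of the paper can be made concrete with a small sketch. The counts below are hypothetical (not from the paper's experiments) and just illustrate why lowering False Positives and lowering False Negatives pull in opposite directions:

```python
# Illustrative sketch: precision vs. recall for a hypothetical
# meeting-request detector. tp/fp/fn counts are made up.
def precision_recall(tp, fp, fn):
    """Compute precision and recall from raw detection counts."""
    precision = tp / (tp + fp)  # fraction of flagged emails that were real requests
    recall = tp / (tp + fn)     # fraction of real requests that were flagged
    return precision, recall

# A "High Precision" tuning: few false alarms, but more missed requests.
p1, r1 = precision_recall(tp=80, fp=5, fn=20)

# A "High Recall" tuning: catches almost every request, with more false alarms.
p2, r2 = precision_recall(tp=95, fp=30, fn=5)
```

Under these made-up counts, the first tuning gives high precision with lower recall, and the second the reverse, which is exactly the contrast the paper's two system versions embody.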

Reflection:

I think this paper addresses a critical concern of AI-powered systems from an interesting and practical direction. The whole approach reminds me of a previous paper that presented a summary of guidelines for Human-AI interaction. The first guideline is to give users a clear idea of what the AI can do, and the second is to help users understand how well the AI can do what it can do. Thus, I think the three expectation-adjusting techniques are designed to give the user clear cues about these two guidelines. However, instead of using text alone to inform the user, the authors designed three interfaces based on the principles of combining visualization and text while striving for simplicity.

These designs inform the user of the system's accuracy very intuitively. Besides, they also allow the user to control the detection accuracy, so that users can apply their own requirements. Thus, through several rounds of adjusting the control and experiencing the feedback, the user can eventually bring their expectations into line with an appropriate setting. I believe this is the main reason these techniques successfully increase user satisfaction and acceptance of an imperfect AI system.

However, as the authors mention in the paper, the conclusion that users are more satisfied with and accepting of a High Recall system than a High Precision one rests on the fact that, in their experimental platform, users can recover from a False Positive more easily than from a False Negative. In my view, the preference between High Recall and High Precision should differ across AI systems. Moreover, AI systems are nowadays widely applied in high-stakes domains such as health care or criminal prediction. For these systems, we might want to tune different systems to optimize for different goals.

Questions:

  1. Can you think of any other related guidelines that apply to the expectation adjustment techniques designed in the paper?
  2. Is there any other way to adjust users' expectations of an imperfect AI system?
  3. What do you think are the key factors that can decrease user expectations? Have you had a similar experience?

Word Count: 525
