The perception and acceptance of AI systems are shaped by the expectations users hold about the system as well as by their prior experience working with AI. Setting expectations before users interact with the system is therefore pivotal: inflated expectations can lead to disappointment when they are not met. The paper uses a Scheduling Assistant as its running example and explores methods for shaping users' expectations before they use the system, studying the impact on user acceptance. It also examines how different kinds of AI imperfection affect users, specifically false positives versus false negatives. Three expectation-adjustment techniques are proposed and evaluated: an accuracy indicator, an example-based explanation, and a performance control. The studies conclude that setting expectations appropriately before use, by highlighting the system's limitations up front, reduces the likelihood of disappointment.
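To make the false-positive versus false-negative distinction concrete, here is a minimal sketch (the counts are hypothetical and not taken from the paper) of how two versions of a meeting-request detector can share the same overall accuracy while erring in opposite directions: one favoring precision (few false positives) and one favoring recall (few false negatives).

```python
# Hypothetical confusion matrices for a meeting-request detector.
# Both versions have the same overall accuracy (50%) but very different
# error profiles. Illustrative numbers only; not taken from the paper.

def metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

# High-precision version: rarely flags a non-request email (few false positives),
# but misses many real meeting requests (many false negatives).
high_precision = metrics(tp=30, fp=5, fn=45, tn=20)

# High-recall version: catches most meeting requests (few false negatives),
# but also flags many emails that are not requests (many false positives).
high_recall = metrics(tp=45, fp=45, fn=5, tn=5)

for name, (acc, prec, rec) in [("high precision", high_precision),
                               ("high recall", high_recall)]:
    print(f"{name}: accuracy={acc:.2f}, precision={prec:.2f}, recall={rec:.2f}")
```

Both versions report 50% accuracy, yet a user would experience them very differently, which is why the kind of imperfection matters as much as the headline accuracy number.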
The study assumes that participants are new to the environment and dedicates time at the start of the experiment to explaining the interface. I felt this was helpful, since participants could follow along; it was something I found missing in some of the earlier papers we read, which assumed readers already had sufficient prior knowledge. Also, although the system's initial performance was ninety-three percent on the test dataset, the authors deliberately lowered its effective accuracy to fifty percent in order to gauge users' sentiments and evaluate their expectation-setting hypothesis. I felt this greatly increased the scope for disappointment, which in turn helped them validate the expectation-setting techniques and their effects. The decision to use visualizations along with a short summary of intent in their explanations was also helpful, since it eliminated the need for users to read lengthy summaries and offered better support for their decisions. It was also good to note the authors' take on deception and marketing as means of setting false expectations. This study went beyond such techniques and focused on shaping expectations by explaining the system's accuracy, a perspective I felt was more ethical than other approaches in this area.
- Apart from user expectations, what other factors influence the perception and acceptance of AI systems?
- What are some other techniques, visual or otherwise, that can be adopted to set users' expectations of AI systems?
- How can AI system developers tackle trust and acceptance issues? Given that perceptions and individual experiences are so diverse, is it possible for an AI system to completely satisfy all of its users?
I agree with your comment that the paper's perspective was more ethical. Yes, the authors went beyond traditional methods of setting expectations, namely brand trust, word of mouth, marketing, advertisements, and so on. I believe that setting the accuracy to fifty percent was pivotal in keeping expectations low. I also liked your question about how AI developers can tackle acceptance and satisfy all users. Tools and software in the AI/ML domain are mostly built for a specific group; the applications tend to be narrow, so I think they won't be able to satisfy the majority. That said, performing customer discovery early on would help developers better understand what people are looking for in such a system, essentially a survey conducted before anything is built to understand users' needs and expectations.