Summary
In this paper, the authors try to understand the user experience of conversational agents (CAs) by examining the factors that motivate users to work with these agents, and propose design considerations to overcome current limitations and improve human–agent interaction. In their study, they found a wide gap between user expectations and how conversational agents actually operate.
They also found that studies of how agents are used on a daily basis are limited, and that most existing work focuses on technical architecture, language learning, and other areas rather than on user experience.
The authors conducted interviews with 14 individuals who use conversational agents regularly, ranging in age from 25 to 60 years. Some of these individuals had in-depth technical knowledge, while the others were everyday users of technology.
They found that the key motivation for using conversational agents was saving time: users ask the CA to execute simple tasks that normally require multiple steps, such as checking the weather, setting reminders, setting alarms, or getting directions. They also found that users often began their engagement through playful interaction, such as asking the CA to tell a joke or play music. Only a few users, those with technical knowledge, reported using these systems for basic work-related tasks.
Users' interactions were mainly limited to non-critical tasks, and participants reported that the agents were not very successful when asked to execute complex tasks. The study shows that users do not trust conversational agents to execute critical tasks, like sending emails or making phone calls, and need visual confirmation to complete these kinds of tasks. Participants also mentioned that these systems do not accept feedback and offer no transparency about how things work internally.
As areas for future investigation and development, the authors suggest considering ways to reveal system intelligence, reconsidering the interactional promise made by humorous engagement, considering how best to indicate capability through interaction, and rethinking system feedback and design goals in light of the dominant use case.
Reflection
I found the results reported by the study very interesting. Most users learned to use these CA systems as they went, trying different words and keywords until something worked, and the conversational agents failed to support natural interaction with humans.
I had also thought that companies like Google, Amazon, Microsoft, and Facebook had developed conversational systems that can do much more than answer simple questions while struggling with complex ones, but it appears that is not the case. These companies have developed very sophisticated AI systems and services, so it seems to me that limitations such as computational power or latency considerations are preventing these systems from performing well.
I agree with the authors that providing feedback can improve human interaction with CA systems, and that communicating capability can lower expectations, which reduces the gap between expectation and operation.
Questions
- The authors mention that most users felt unsure as to whether their conversational agents had the capacity to learn. Can we use reinforcement learning to help a CA adapt and learn while engaging with a user in a single session? (A minimal sketch of what this could look like appears after these questions.)
- The authors mentioned that CA systems are generally good with simple tasks but not with complex tasks, and that they struggle to understand human requests. Do you think there are technical limitations or other factors preventing these systems from performing well with humans? What are these factors?
- The authors mentioned that in most instances, the operation of the CA systems failed to bridge the gap between user expectation and system operation. If that is the case for conversational agents, do you think we are still far from deploying autonomous cars, which are far more complicated than CAs, in real-world settings, since they interact directly with the environment?
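To make the first question concrete, here is a minimal sketch of in-session adaptation using an epsilon-greedy bandit, one of the simplest forms of reinforcement learning. Everything here is hypothetical and illustrative: the "arms" stand in for candidate ways a CA might handle a request, and the reward stands in for implicit user feedback (accepting the result versus rephrasing or canceling). None of it reflects the paper's systems or any real CA API.

```python
import random

class SessionBandit:
    """Epsilon-greedy bandit that adapts within one session (illustrative).

    Each 'arm' is a hypothetical way the CA could handle a request;
    the reward is implicit feedback (1.0 if the user accepts the
    result, 0.0 if they rephrase or cancel). All state is discarded
    when the session ends.
    """

    def __init__(self, arms, epsilon=0.2):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}    # times each arm was tried
        self.values = {a: 0.0 for a in self.arms}  # running mean reward per arm

    def choose(self):
        # Explore occasionally; otherwise exploit the best-known arm.
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean update from this turn's implicit feedback.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


# Toy session: within a few turns the CA learns which handler the user
# responds well to (the handler names are made up for illustration).
session = SessionBandit(["web_search", "calendar_skill", "smalltalk"])
for turn in range(10):
    arm = session.choose()
    reward = 1.0 if arm == "calendar_skill" else 0.0  # simulated feedback
    session.update(arm, reward)
print(session.values)  # calendar_skill's estimate should dominate
```

Even this toy example shows the trade-off a real CA would face: exploring alternative interpretations mid-session risks more failed turns, which is exactly the kind of breakdown the paper's participants found frustrating.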