03/24/2020 – Akshita Jha – All Work and No Play? Conversations with a Question-and-Answer Chatbot in the Wild

Summary:
“All Work and No Play? Conversations with a Question-and-Answer Chatbot in the Wild” by Liao et al. examines conversational agents and their interactions with end-users. The end-user of a conversational agent might want something more than just information from these chatbots, such as playful conversation. The authors study a field deployment of a human resources chatbot and discuss the areas of interest that users have with respect to the chatbot. They also present a methodology involving statistical modeling to infer user satisfaction from the conversations. This feedback from the user can be used to enrich conversational agents and improve their interactions with end-users so as to better ensure user satisfaction. The authors primarily discuss two research questions: (i) what kinds of conversational interactions did users have with the conversational agent in the wild, and (ii) what signals given by users to the conversational agent can be used to study user satisfaction and engagement. The findings show that the main areas of conversation include “feedback-giving, playful chit-chat, system inquiry, and habitual communicative utterances.” The authors also discuss various functions of conversational agents, design implications, and the need for adaptive conversational agents.

Reflection:
This is a very interesting paper because it highlights the surprising dearth of research on the gap between user interactions in the lab and those in the wild. It points out the differences between the two scenarios and the varying expectations that end-users might have while interacting with a conversational agent. The authors also mention that the conversation is almost always initiated by the conversational agent, which might not be the best design depending on the situation. They raise another interesting point: the conversational agent mostly functions as a question-answering system, which is far from ideal and prevents the user from having an organic conversation. To drive this point home, the authors compare and contrast the signals of an informal, playful conversation with those of a functional conversation in order to provide a meaningful and nuanced understanding of user behavior that can be incorporated into the chatbot. The authors note that their results were based on survey data collected in a workplace environment, and they do not claim generalization. They also study only working professionals, so the results might not hold for a population from a different age group. An interesting point here is that users strive for human-like conversations. This got me thinking: is this a realistic goal to strive for? What would the research direction look like if we modified our expectations and treated the conversational agent as an independent entity? It might help not to evaluate conversational agents against human-level conversation skills.

Questions:
1. Have you interacted with a chatbot? What has your experience been like?
2. Which feature do you think is a must-have that should be incorporated in every chatbot?
3. Is it realistic to strive for human-like conversations? Why is that so important?

4 thoughts on “03/24/2020 – Akshita Jha – All Work and No Play? Conversations with a Question-and-Answer Chatbot in the Wild”

  1. Hello, Akshita.

    Those are a lot of nice reflections about the paper. I did not notice as many shortcomings in the paper as you did, but they all make sense. It is interesting that you bring up treating the agents as separate entities. That is far more unconstrained than what the authors did (more “wild”). I am also concerned about whether it would be possible to extract good information from a study like that. I think the authors had to put the basic constraints in place due to this difficulty.

    As for your first question, I have interacted with telephone bots (the kind you get when calling a bank or phone service). I felt that the experience was too slow-paced and boring because you needed to listen until the bot finished its sentence. I agree it would be better if the experience were more organic.

  2. My experience with most chatbots has usually been trying to steer the conversation to a point where a human operator would take over, but that comes from prior experience with customer care, where I run into the barrier of “press XYZ digits,” etc. This almost makes me wonder: should chatbots be deceptive to begin with? As in, should a chatbot disguise itself as a human being and then, when stuck, say, “let me transfer you to the person above me”?

  3. My experience with chatbots, or support bots in general, is that I first try out some sample queries to understand their methods and then update my mental models to utilize the bots to their full extent. For example, learning the best way to set alarms or ask about the weather.
