03/25/2019 – Nurendra Choudhary – All Work and No Play? Conversations with a Question-and-Answer Chatbot in the Wild

Summary

In this paper, the authors study a Human Resources chatbot to analyze the interactions between the bot and its users. Their primary aim is to use the study to enhance conversational agents' interactions along behavioral dimensions such as playfulness and information content. Additionally, the authors emphasize the adaptability of such systems to individual users' conversational data.

For the experiments, they adopted an agent called Chip (Cognitive Human Interface Personality). Chip has access to all the company related assistance information. The human subjects for this experiment are new employees that need constant assistance to orient themselves in the company. Chip is integrated into the IM services of the company to provide real-time support.

Although Chip is primarily a question-and-answer agent, the authors are more interested in the behavioral tics in the interaction, such as playful chit-chat, system inquiry, feedback, and habitual communicative utterances. They utilize the information from such tics to further enhance the conversational agent and improve its human-like behavior, rather than focusing solely on answer-retrieval efficiency.

Reflection

"All Work and No Play" is a very appropriate title for the paper. Chip is primarily applied in a formal context where social interactions are considered unnecessary, if not inappropriate. However, human interactions always include playful elements that improve the quality of communication. No matter the context, human conversation is hardly ever devoid of behavioral features; these features convey emotions and significant subtext. Given the setting, this is a good study for analyzing the effectiveness of conversational agents that exhibit human behavioral features. However, the study's limitations include selection bias (as the paper itself acknowledges): the authors pick conversation subparts that they subjectively consider to include human behavioral features. That said, I do not see a better contemporary method in the literature for efficiently avoiding this selection bias.

Additionally, I see this study as part of a wider move in the community toward adding human-like behavior to AI systems. If we look at currently popular conversational agents like Alexa, Siri, and Google Assistant, we find a common aim to include human-specific features with limited utilitarian value, such as jokes and playful tics. I believe this kind of learning also reduces the amount of adaptation humans need before becoming comfortable with a system. In previous classes, we have seen how users adapt their mental models to a given AI tool. If AI systems behave more like humans and learn accordingly, humans would not need significant learning to adopt these tools in their daily lives. For example, before voice assistants included such features, they were significantly less prevalent than they are today, and their market is only projected to widen.

Questions

  1. How appropriate is it to have playful systems in an office environment? Is it sufficient to have efficient conversational agents or do we require human-like behavior in a formal setting?
  2. The features seem even more relevant for regular conversational agents. How will the application and modeling differ in those cases?
  3. The authors classify phrases or conversational interactions as playful or informal based on their own algorithm. How does this affect the overall analysis setup? Is it fair? Can it be improved?
  4. We are trying to make AI more human-like rather than using it simply as a tool. Is this part of a wider trend, with human-likeness as a future area of growth in AI?

Word Count: 590

One thought on “03/25/2019 – Nurendra Choudhary – All Work and No Play? Conversations with a Question-and-Answer Chatbot in the Wild”

  1. I believe having “playful” systems in an office environment would be truly beneficial to the workplace mentality. In a scenario where there is a lack of people (e.g., remote work) or a lack of communication, I believe a “friendly” conversation would help raise workers' morale. Given the current scenario, I think this puts a bigger emphasis on pushing AI out to handle more mundane or routine tasks, such as surfacing common questions that have already been answered somewhere. A more “playful” system would also raise human adoption, since it is more usable.