2/5/2020 – Jooyoung Whang – Guidelines for Human-AI Interaction

The paper is a good distillation of design recommendations for human-AI interaction systems that have accumulated over more than 20 years since the rise of AI. The authors ran four iterations of filtering to arrive at a final set of 18 guidelines that have been thoroughly reviewed and validated. Their sources of data include commercial AI products, user reviews, and related literature. Over the four iterations, the authors:

1. Extracted the initial set of guidelines

2. Reduced the set through internal evaluation

3. Performed a user study to verify relevance and clarity

4. Tested the guidelines with experts in the field

The authors provide a nicely summarized table containing all the guidelines along with examples of each. Rather than going in depth on the resulting guidelines themselves, the authors focus more on the process and the feedback they received. They conclude by stating that the provided guidelines are intended for general design cases rather than specialized ones.

When I was examining the guideline table, I liked how it was divided into four cases along the design iteration. In a usability engineering class that I took, I learned that a product’s design lifecycle consists of Analyze, Design, Prototype, and Evaluate, in that order (and the cycle can repeat). I could see that the guidelines focus heavily on Analyze, Design, and Evaluate. It was interesting that prototyping wasn’t strongly implied in the guidelines. One reason may be that each pass through the whole design iteration was itself considered a round of prototyping. It may also be that a system involving artificial intelligence is too hard to capture in a low-fidelity prototype. The point of prototyping is to quickly filter out what works and what doesn’t, but because artificial intelligence requires extensive training and complicated reasoning, a single prototyping pass will accordingly take longer than for other kinds of products.

It is very interesting that the guidelines (for the long term) instruct that the AI system should inform users of its actions. In my experience using AI systems such as voice recognition, before I knew anything about machine learning techniques, the systems mostly appeared as black boxes. I have also observed many people who intentionally avoided these kinds of systems out of suspicion. I think revealing portions of this information and giving control to the users is a very good idea. It will allow more people to adjust to the system quickly.

The following are the questions that came to mind while I was reading the paper:

1. As I noted in my reflection, it is expensive to go through an entire design iteration for human-AI systems. Would there be a good workaround for this problem?

2. How much control do you think is appropriate to give to the users of the system? The paper mentions informing users of how the system will react to certain actions and allowing them to choose whether or not to use the system. But can we, and should we, allow further control?

3. The paper focuses on general cases of designing human-AI systems. The authors note that they intentionally left out special cases. What kinds of specialized systems do you think would not need to follow the guidelines?
