02/05/2020 – Nurendra Choudhary – Guidelines for Human-AI Interaction

Summary

In this paper, the authors propose a set of 18 guidelines for human-AI interaction design. The guidelines were codified from more than 150 AI-related design recommendations collected from diverse sources. The authors also validate the guidelines from both the user and expert perspectives.

For the user perspective, the guidelines were evaluated by 49 HCI practitioners, each testing a familiar AI-driven feature of a product. The goal was to count how many guidelines the feature followed or violated. The feedback form also had a “does not apply” option with a corresponding explanation field, and the review included a clarity component to surface ambiguity in the guidelines. From this empirical study, the authors conclude that the guidelines were largely clear and hence applicable to human-AI interactions. They then revised the guidelines according to the feedback and conducted an expert review.

The guidelines are especially relevant when deploying ML systems in the real world. Generally, researchers in the AI community see no immediate, concrete benefit in developing user-friendly systems. However, once such systems are deployed for real-world users, the user experience, or human-AI interaction, becomes a crucial part of the overall mechanism.

For the expert perspective, both the original and revised guidelines were presented, and the experts preferred the revised versions for all but one (G15). From this, the authors conclude that the revision process was effective.

Reflection

Studying their applicability is important (as the authors did in the paper), because I do not feel all of the guidelines are necessary across such a diverse range of applications. It is interesting that for photo organizers, most of the guidelines are already followed, and that they also receive the most “does not apply” responses. E-commerce, on the other hand, seems plagued with issues. I think this stems from a gap in transparency: the AI in photo organizers is advertised to users and directly affects their decisions, whereas in e-commerce the AI works in the background to influence user choices.

AI systems steadily learn new things, and their behavior is often not interpretable even by the researchers who built them, so I believe this is an unfair ask. However, as the AI research community pushes for increased interpretability, I believe it will become possible and will definitely help users. Imagine if you could explicitly set the features attached to your profile to improve your search recommendations.
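The idea of explicitly user-set profile features could be sketched as a transparent recommender: the user declares feature weights directly, and items are ranked by those weights instead of an opaque learned profile. This is a minimal, hypothetical sketch; all names and the scoring scheme are illustrative assumptions, not something the paper prescribes.

```python
# Hypothetical sketch: a user explicitly sets profile feature weights,
# and the recommender ranks items by those transparent weights.

def score(item_features, user_weights):
    """Dot product of an item's features with the user's chosen weights."""
    return sum(user_weights.get(f, 0.0) * v for f, v in item_features.items())

def recommend(items, user_weights, k=3):
    """Return the top-k items under the user's explicit profile."""
    return sorted(items, key=lambda it: score(it["features"], user_weights),
                  reverse=True)[:k]

# A user who opts in to "outdoor" content and opts out of "electronics":
user_weights = {"outdoor": 1.0, "electronics": -1.0}

items = [
    {"name": "tent",   "features": {"outdoor": 0.9}},
    {"name": "laptop", "features": {"electronics": 0.8}},
    {"name": "boots",  "features": {"outdoor": 0.6}},
]

top = recommend(items, user_weights, k=2)
# "tent" and "boots" rank above "laptop", matching the stated preferences.
```

The point of the sketch is interpretability: because the user wrote the weights, every ranking decision can be explained in their own terms.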

Similarly, “relevant social norms” and “mitigate social biases” are not a major focus at present, but I believe they will grow over time into a dominant area of ML research.

I think we can use these guidelines as tools to diversify AI research into more avenues focusing on building systems that inherently maintain these principles. 

Questions

  1. Can we study the feasibility and cost-to-benefit ratio of making changes to present AI systems based on these guidelines?
  2. Can such principles be evaluated from the other perspective? Can we give better data guidelines for AI to help it learn?
  3. How frequently does the list need to evolve with the evolution of ML systems?
  4. Do users always need to know about changes in the AI? Consider interactive systems, where the AI learns in real time: wouldn’t there be too many notifications for a human user to track? Would it become something like spam?

Word Count: 569
