02/05/2020 – Palakh Mignonne Jude – Guidelines for Human-AI Interaction

SUMMARY

In this paper, the authors propose 18 design guidelines for human-AI interaction, with the aim that these guidelines serve as a resource for practitioners. The authors codified over 150 AI-related design recommendations and then, through multiple phases of refinement, distilled this list into 18 generally applicable principles. In the first phase, the authors reviewed AI products, public articles, and relevant scholarly papers. They obtained a total of 168 potential guidelines, which were then clustered into 35 concepts. This was followed by a filtering process that reduced the set to 20 guidelines. In the second phase, the authors conducted a modified heuristic evaluation, attempting to identify both applications and violations of the proposed guidelines across 13 AI-infused products/features. This phase helped them merge, split, and rephrase guidelines, reducing the total to 18. In the third phase, the authors conducted a user study with 49 HCI practitioners to assess whether the guidelines were applicable across multiple products and to obtain feedback on their clarity. The authors ensured that the participants had experience in HCI and were familiar with discount usability testing methods. The guidelines were then revised based on the user-study feedback regarding their clarity and relevance. In the fourth phase, the authors conducted an expert evaluation of these revisions. The experts were people with work experience in UX/HCI who were well versed in discount usability methods. With their help, the authors assessed whether the 18 guidelines were easy to understand. After this phase, they published the final set of 18 guidelines.

REFLECTION

After reading the 1999 paper ‘Principles of Mixed-Initiative User Interfaces’, I found that the study performed in this paper was much more extensive, as well as more relatable, since the AI-infused systems considered were ones I had some knowledge of, as compared to the LookOut system, which I have never used. I felt that the authors performed a thorough comparison and included various important phases in order to formulate the best set of guidelines. I found it interesting that this study was performed by researchers from Microsoft 20 years after the original 1999 paper (also done at Microsoft). I believe that the authors provided a detailed analysis of each of the guidelines, and it was good that they included identifying applications of the guidelines as part of the user study.

I felt that some of the violations reported by participants were very well thought out; for example, one report noted that an explanation was provided but was inadequate in a navigation product: a ‘best route’ was suggested, but no criteria were given for why that route was the best. I feel that such notes provided by the users were definitely useful in helping the authors formulate good and generalizable guidelines.

QUESTION

  1. Which of the 18 guidelines did you, in your experience, find to be most important? Was there any guideline that appeared ambiguous to you? For those who have limited experience in the field of HCI, were there any guidelines that seemed unclear or difficult to understand?
  2. The authors mention that they did not explicitly include broad principles such as ‘build trust’, but instead took an indirect approach by focusing on specific, observable guidelines that are likely to contribute to building trust. Is there a more direct evaluation that could be performed to measure trust?
  3. The authors mention that it is essential for designers to evaluate the influence of AI technologies on people and society. What methods could be implemented to ensure that this evaluation is performed? What are the long-term impacts of designers not performing this evaluation?
  4. For the user study (as part of phase 3), 49 HCI practitioners were contacted. How was this done and what environment was used for the study?