2/5/20 – Lee Lisle – Principles of Mixed-Initiative User Interfaces

Summary

               The author, Horvitz, proposes twelve principles for mixed-initiative (AI-assisted) interfaces that he argues should underlie all future AI-assisted programs. He also presents a program called LookOut, which focuses on email messaging and scheduling. It automatically parses emails (and, it seems, other messaging services), extracts possible event data, and offers to add the event to the user’s calendar, inferring dates and locations when needed. It also includes an intermediary step where the user can edit the suggested event fields (time, location, etc.). In describing LookOut’s benefits, the paper clearly lays out the probability theory behind how it guesses what the user wants, and explains why each behind-the-scenes AI function is performed the way it is.
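The core of that probability theory is an expected-utility calculation: given an inferred probability that the user actually has a goal (e.g., wants to schedule an event), the agent weighs doing nothing, opening a dialog, or acting autonomously. Here is a minimal sketch of that idea; the utility numbers are made-up placeholders, not values from the paper.

```python
# Illustrative sketch of expected-utility action selection, in the spirit
# of Horvitz's framework. p is the inferred probability the user has the
# goal; the payoff numbers below are assumed for illustration only.

def expected_utilities(p):
    return {
        "no_action":  p * 0.0 + (1 - p) * 1.0,   # safe, but misses the goal
        "dialog":     p * 0.8 + (1 - p) * 0.4,   # helpful, mildly intrusive
        "autonomous": p * 1.0 + (1 - p) * -0.5,  # best if right, costly if wrong
    }

def choose_action(p):
    utilities = expected_utilities(p)
    return max(utilities, key=utilities.get)

# Low confidence -> stay quiet; medium -> ask; high -> act.
print(choose_action(0.1))   # no_action
print(choose_action(0.6))   # dialog
print(choose_action(0.95))  # autonomous
```

The crossover points between regions fall out of the utility values, which is why the paper can tune how eager or conservative the assistant is without changing the inference machinery.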

Personal Reflection

               I was initially surprised by this paper’s age; I had thought this field was defined later than it apparently was. For example, Google was founded only a year before this paper was published. It was even more jarring to see Windows 95 (98?) in the figures. Furthermore, when the author started describing LookOut, I realized that this capability is baked into many email systems today, such as Gmail and Apple Mail, which can automatically create links that add events to your calendars. The other papers we have read for this class tend toward overviews or surveys of the literature, rather than a deep dive into a single example and its features.

It is interesting that “poor guessing of user’s goals” has been an issue for this long. This problem is extremely persistent and speaks to how hard it is to algorithmically decide or understand what a user wants or needs. For example, LookOut was trained on 1,000 messages, while today’s services are likely trained on millions, if not orders of magnitude more. While I imagine performance is much better today, I’m curious what the comparative false-positive and false-negative rates are.

This paper was strong overall, with a deep dive into a single application rather than an overview of many. Furthermore, it made arguments that are, for the most part, still relevant to the design of today’s AI-assisted programs. However, I would have liked the author to explicitly call out the principles as they came up in the design of his program. For example, he could have noted that he was fulfilling his fifth principle in the “Dialog as an Option for Action” section. Still, this is a small quibble.

               Lastly, while AI assistants may occasionally benefit from an embodiment, the Genie metaphor (along with Clippy™-style graphics) has thankfully been retired and should not be revived.

Questions

  1. Are all of the principles listed still important today? Is there anything missing from this list that may have arisen from faster and more capable hardware/software?
  2. Do you think it is better to correctly guess what a user wants or is it better to have an invocation (button, gesture, etc.) to get an AI to engage a dialog?
  3. Would using more than one example (LookOut, in this case) have strengthened the paper’s argument about which design principles were needed? Why or why not?
  4. Can an AI take an incorrect action without bothering the user? How, and in what instances, might LookOut accomplish this?
