2/5/2020 – Jooyoung Whang – Principles of Mixed-Initiative User Interfaces

This paper examines when a human-computer interaction system should favor direct user manipulation versus automated services (agents). The author arrives at the concept of mixed-initiative user interfaces: systems that seek maximum efficiency by combining the strengths of both approaches through collaboration. In the proposal, the author argues that the major factors to consider when providing automated services are handling uncertainty about the agent's performance and predicting the user's goals. According to the paper, many poorly designed systems fail to gauge when to provide automated services and misinterpret user intentions. To overcome these problems, the paper states that automated services should be offered only when it is sufficiently certain that they will provide greater benefit than the user performing the task manually. The author also writes that systems should support effective and natural transfer of control to the user so that, upon encountering errors, users can efficiently recover and continue toward their goals. The paper also provides a use case of a system called "LookOut."
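To make that decision rule concrete, below is a minimal Python sketch of the expected-utility comparison the paper describes: the agent acts only when the inferred probability of the user's goal makes action worth more, in expectation, than staying out of the way. The utility values and the should_act helper are my own illustrative assumptions, not the paper's numbers, and the paper's full analysis also considers intermediate options such as engaging the user in dialog.

```python
# Hedged sketch of expected-utility action selection.
# All utility values below are invented for illustration.

def eu_action(p_goal, u_act_goal=1.0, u_act_no_goal=-0.8):
    """Expected utility of taking the automated action, given P(goal)."""
    return p_goal * u_act_goal + (1 - p_goal) * u_act_no_goal

def eu_inaction(p_goal, u_wait_goal=-0.3, u_wait_no_goal=0.0):
    """Expected utility of doing nothing and leaving the task to the user."""
    return p_goal * u_wait_goal + (1 - p_goal) * u_wait_no_goal

def should_act(p_goal):
    """Act only when action beats inaction in expectation."""
    return eu_action(p_goal) > eu_inaction(p_goal)

# With these utilities, the break-even probability p* is about 0.38:
# below it the agent stays quiet, above it the agent acts.
print(should_act(0.2), should_act(0.6))  # False True
```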

I greatly enjoyed and appreciated the example that the author provided. I have personally never used LookOut, but it seemed like a good program from reading the paper. I liked that the program gracefully handled subtleties such as recognizing phrases like "Hmm.." to sense that a user is thinking. It was also interesting that the paper tries to infer a user's intentions using a probabilistic model. I recognized keywords such as utility and agents that also frequently appear in the machine learning context. In my previous machine learning experience, an agent acted according to policies that lead to maximum utility scores. The paper's approach is similar, except that it involves user input and the utility is tied to the user's goal achievement or intention. The paper was a nice refresher of what I learned in AI courses, as well as an exercise in putting humans into that context.
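As a companion to the sketch above, here is a toy illustration of the inference side: estimating P(goal | evidence) from a user's text, in the spirit of LookOut's analysis of email content. The cue words, likelihoods, and prior below are all invented for illustration; LookOut's actual model is a trained Bayesian text classifier, not a hand-written word list.

```python
# Toy naive-Bayes-style update of P(scheduling goal) from cue words.
# All numbers are assumptions, not LookOut's real parameters.
GOAL_PRIOR = 0.3  # assumed prior that an email implies a scheduling goal

# (P(word | goal), P(word | no goal)) for a few hypothetical cue words
LIKELIHOODS = {
    "meet":    (0.40, 0.05),
    "tuesday": (0.30, 0.08),
    "lunch":   (0.25, 0.10),
}

def p_goal_given_words(words, prior=GOAL_PRIOR):
    """Update P(goal) from the cue words observed in a message."""
    p_goal, p_no_goal = prior, 1 - prior
    for word in words:
        if word in LIKELIHOODS:
            l_goal, l_no_goal = LIKELIHOODS[word]
            p_goal *= l_goal
            p_no_goal *= l_no_goal
    return p_goal / (p_goal + p_no_goal)  # normalize

# "shall we meet tuesday" -> roughly 0.93 with these made-up numbers
print(round(p_goal_given_words(["shall", "we", "meet", "tuesday"]), 2))
```

An estimate like this would then feed directly into the should_act check sketched earlier.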

The following are the questions that I came up with while reading the paper:

1. The paper puts a lot of effort into accurately acquiring user intention. What if the intention were provided in the first place? For example, the user could start using the system by selecting their goal from a concise list. Would this benefit the system and user satisfaction? Would there be cases where it would not (for example, if the system misinterprets even the explicitly provided goal)?

2. One of the previous week's readings introduced the idea of affordances (what a computer or a human is each better at doing than the other). How does this align with the choice between automated services and direct human manipulation? For example, since computers are better at processing big data, tasks of that kind would preferably be automated.

3. The paper seems to assume that the user always has a goal in mind when using the system. What about purely exploratory systems? In scientific research settings, investigators often don't know what they are looking for; they are simply exploring the data to see if there is anything interesting. One could claim that this is still some kind of goal, but it is a very ambiguous one, as the researchers don't know in advance what would count as interesting. How should the system handle these kinds of cases?
