02/05/20 – Runge Yan – Principles of Mixed-Initiative User Interfaces

Appropriate collaboration between user and machine promises gains in efficiency, and the case for developing "agents" is persuasive. Using the real-world LookOut system as a case study, the paper demonstrates several principles and methods.

To make sure an additional interface is worth using, the system should follow certain principles in its design and implementation: automation should add significant value; the user's attention directly influences the effectiveness of the service; a trade-off between costs and benefits often determines the action taken; a variety of user goals should be understood; a continuous learning process should be maintained; and so on.

The LookOut system provides calendaring and scheduling services based on email content and user behavior: in an interactive situation, the system performs a two-phase analysis to decide whether assistance is needed and which level of service fits best (manual operation, automated assistance, or acting as a social agent). The probability that the user has a goal, together with the expected benefit of the machine providing a service (action or dialog), determines the thresholds for the best course of action.
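The threshold idea above can be sketched as a small expected-utility comparison. This is an illustrative toy, not LookOut's actual code: the option names and all utility values are made-up assumptions, chosen only so that a low goal probability favors doing nothing, a middling one favors a dialog, and a high one favors autonomous action.

```python
# Illustrative sketch (assumed values, not from the paper): given the
# inferred probability p that the user has a goal, compare the expected
# utilities of doing nothing, opening a dialog, or acting autonomously.

def expected_utility(p_goal, u_if_goal, u_if_no_goal):
    """Expected utility of an option given probability p_goal of the goal."""
    return p_goal * u_if_goal + (1 - p_goal) * u_if_no_goal

def choose_service_level(p_goal):
    # Hypothetical utilities: acting correctly is most valuable, but
    # acting when the user has no goal is the most costly intrusion.
    options = {
        "no_action": expected_utility(p_goal, u_if_goal=0.0, u_if_no_goal=0.0),
        "dialog":    expected_utility(p_goal, u_if_goal=0.7, u_if_no_goal=-0.1),
        "action":    expected_utility(p_goal, u_if_goal=1.0, u_if_no_goal=-0.8),
    }
    return max(options, key=options.get)
```

With these assumed numbers, `choose_service_level(0.1)` yields `"no_action"`, `choose_service_level(0.5)` yields `"dialog"`, and `choose_service_level(0.9)` yields `"action"`, mirroring how the thresholds partition the probability range into levels of service.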

With these principles and problems addressed, the combination of machine reasoning and direct user manipulation is likely to be improved further.

Reflection

I'm quite surprised that this paper was published in 1999. Even by that time, the concepts and guidelines of HCI were clearly articulated. Some of the points are exactly what we have today, while others have since developed into today's ideas. Although the task is simple compared to the interactions we encounter today, the details on "agents" and on both sides of the interaction are quite comprehensive. The principles take several crucial elements into consideration: added value, user attention, decision thresholds, the learning process, etc. These are basically what come to mind when I think about a complicated interaction process. A two-phase analysis is essential for an "agent" we'd like to count on; the several modalities fit well in real-time situations; and a failure-recovery mechanism and an evolving learning process complete the design.

When I used Windows 98 and XP on my father's PC, I saw a cute lion icon on the desktop, provided by a popular antivirus program, "Rising (Rui Xing)". The lion was quite smart, as I look back at it: it wouldn't bother you while your mouse was navigating another program's window; if your mouse passed by and lingered near it, it would gently ask whether you needed any service or just wanted to play a bit; it was also draggable and would settle in the area where you usually left it. The most amazing thing was that if I stopped what I was working on and stared at the lion for a little while, it would get all sleepy and begin snoring in a really cute way!

I already knew several basic ideas about HCI, and now I have so many "that was amazing" moments looking back. I hadn't realized that my (indirect, seemingly meaningless) behavior largely determined the machine's actions.

Questions:

  1. If (as I see it) this paper addressed such important guidelines in HCI so early, what has held back the (fast) development of such systems as a whole? What can we do better to accelerate this process?
  2. How important is it to make users feel natural as they interact with a machine? Should users be notified about what's going on (e.g., "if you play a lot with the lion, it will infer what you want at a certain time based on your behavior")? Is that one of the reasons companies collect our data, and why we are uncomfortable with it?
