Summary of the Reading
This paper seeks to address some of the issues present in software automation. Often, when a user tries to automate an action using an agent or tool, they may not get the result they were expecting. The paper lists many of the key issues with the then-current implementations of such systems.
The paper points out many of the issues that can plague systems that try to take action on behalf of the user. These include failing to add value for the user, not considering the agent’s uncertainty about the user’s goals, ignoring the status of the user’s attention when suggesting an action, not inferring the ideal action in light of costs and benefits, not employing dialog to resolve key conflicts, and many others. After listing these key problems, the authors describe a system that tries to solve many of them.
Reflections and Connections
I think that this paper does a great job of listing the obstacles that exist for systems that try to automate tasks for a user. It can be very hard for a system to do some tasks for the user automatically, because the user’s intentions are often unknown. For example, an automatic calendar agent might create a hold for the birthday party of a person the user does not care about and has no intention of visiting. A user’s actions often depend on much more than what is in an email or on the screen, which is why it is so important to account for the fact that the automated system could be wrong.
I think that the authors of this paper did a great job of planning for and correcting cases where the automated system is wrong. Many of the key issues they identify have to do with the agent correctly guessing when the user actually needs the system, and with what to do when that guess is wrong. The most important issues they list, in my view, are the ones concerning error recovery. No system will be perfect, so there should at least be a plan for what happens when the system is wrong. The system they describe is excellent in this department: it automatically goes away if the user does not need it, and it uses dialogs to gather missing information and correct mistakes. This is exactly what such a system should do when it encounters an error or does something wrong; there should be a way out and a way for the user to correct the error.
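The idea of weighing automatic action, a clarifying dialog, and staying out of the way can be sketched as a simple expected-utility calculation. This is my own illustration of the cost-benefit reasoning, not code from the paper, and all the utility numbers are invented for the example:

```python
def best_response(p_wants_help: float) -> str:
    """Pick the agent behavior with the highest expected utility, given the
    estimated probability that the user actually wants help.
    The payoff numbers are hypothetical, chosen only to illustrate the idea."""
    utilities = {
        # Acting helps if the guess is right, but annoys the user if wrong.
        "act": p_wants_help * 1.0 + (1 - p_wants_help) * -1.0,
        # A dialog is less useful than just acting, and costs a little
        # attention when help was not wanted, but never acts wrongly.
        "dialog": p_wants_help * 0.7 + (1 - p_wants_help) * -0.2,
        # Doing nothing neither helps nor hurts in this toy model.
        "nothing": 0.0,
    }
    return max(utilities, key=utilities.get)

for p in (0.1, 0.5, 0.9):
    print(p, best_response(p))  # low p -> nothing, mid p -> dialog, high p -> act
```

With these made-up payoffs, the agent stays quiet when it is unsure the user wants help, asks a question at intermediate confidence, and acts only when it is quite confident, which mirrors the behavior the authors argue for.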
Questions
- Which of the critical factors listed in the paper do you think is the most important? The least?
- Do you think that the system they developed does a good job of addressing all of the issues they brought up?
- Agents are not as popular as they once were, and this article is quite old. Do you think these ideas still hold relevance today?