02/05/20 – Dylan Finch – Principles of mixed-initiative user interfaces

Summary of the Reading

This paper seeks to help solve some of the issues present with automation in software. Often, when a user tries to automate an action using an agent or tool, they may not get the result they were expecting. The paper lists many of the key issues with the then-current implementations of such systems.

The paper points out many of the issues that can plague systems that try to take action on behalf of the user. These include failing to add value for the user, ignoring the agent’s uncertainty about the user’s goals, ignoring the status of the user’s attention when suggesting an action, failing to infer the ideal action in light of costs and benefits, failing to employ a dialog to resolve key uncertainties, and many others. After listing these key problems, the authors go on to describe a system that tries to solve many of them.

Reflections and Connections

I think that this paper does a great job of listing the obstacles that exist for systems that try to automate tasks for a user. It can be very hard for a system to do some tasks for the user automatically. Many times, the intentions of the user are unknown. For example, an automatic calendar-event-creation agent may create a calendar hold for a birthday party that the user has no interest in attending. There are many times when a user’s actions depend on much more than simply what is in an email or what is on the screen. That is why it is so important to take into account the fact that the automated system could be wrong.

I think that the authors of this paper did a great job planning for and correcting cases where the automated system is wrong about something. Many of the key issues they identify have to do with the agent trying to correctly guess when the user actually needs the system, and what to do when that guess is wrong. I think that the most important issues they list are the ones that have to do with error recovery. No system will be perfect, so you should at least have a plan for what happens when the system is wrong. The system they describe is excellent in this department. It will automatically go away if the user does not need it, and it will use dialogs to get missing information and correct mistakes. This is exactly what a system like this should do when it encounters an error or does something wrong. There should be a way out and a way for the user to correct the error.

Questions

  1. Which of the critical factors listed in the paper do you think is the most important? The least?
  2. Do you think that the system they developed does a good job addressing all of the issues they brought up?
  3. Agents are not as popular as they used to be, and this article is quite old. Do you think these ideas still hold relevance today?


02/05/2020 – Palakh Mignonne Jude – Principles of Mixed-initiative User Interfaces

SUMMARY

This paper, published in 1999, reviews principles that can be used when coupling automated services with direct manipulation. Multiple principles for mixed-initiative UI are listed, such as developing significant value-added automation; inferring ideal action in light of costs, benefits, and uncertainties; and continuing to learn by observing. The author focuses on the LookOut project – an automated scheduling service for Microsoft Outlook – which was an attempt to aid users in automatically adding appointments to their calendar based on the messages currently being viewed. He then discusses the system’s decision making under uncertainty – LookOut was designed to parse the header, subject, and body of a message and employ a probabilistic classification system in order to identify the intent of the user. The LookOut system also offered multiple interaction modalities, including direct manipulation, basic automated assistance, and a social-agent modality. The author also discusses inferring beliefs about user goals as well as mapping these beliefs to actions, and the importance of timing automated services so that they are not invoked before the user is ready for the service.
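The probabilistic classification the summary mentions can be sketched with a tiny hand-rolled Naive Bayes classifier over a message’s words. Everything below is illustrative: the toy training messages, the labels, and the smoothing setup are my assumptions, not details from the paper (which trained on 500 relevant and 500 irrelevant messages).

```python
from collections import Counter
import math

# Hypothetical toy data standing in for LookOut's training set:
# label 1 = user likely wants to schedule, label 0 = irrelevant.
TRAIN = [
    ("shall we meet tuesday at noon", 1),
    ("can we schedule a call for friday", 1),
    ("lunch tomorrow at the cafe", 1),
    ("here is the report you asked for", 0),
    ("thanks for the update", 0),
    ("see attached invoice", 0),
]

def train_naive_bayes(examples, smoothing=1.0):
    """Fit per-class word counts with Laplace smoothing."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in examples:
        priors[label] += 1
        counts[label].update(text.split())
    vocab = set(counts[0]) | set(counts[1])
    return {"priors": priors, "counts": counts, "vocab": vocab,
            "smoothing": smoothing, "n": len(examples)}

def prob_relevant(model, text):
    """Posterior probability that a message is scheduling-relevant."""
    logp = {}
    for label in (0, 1):
        logp[label] = math.log(model["priors"][label] / model["n"])
        total = sum(model["counts"][label].values())
        denom = total + model["smoothing"] * len(model["vocab"])
        for word in text.split():
            c = model["counts"][label][word]  # Counter returns 0 if absent
            logp[label] += math.log((c + model["smoothing"]) / denom)
    # Normalize the two log scores into a posterior for class 1.
    m = max(logp.values())
    exp = {k: math.exp(v - m) for k, v in logp.items()}
    return exp[1] / (exp[0] + exp[1])

model = train_naive_bayes(TRAIN)
print(round(prob_relevant(model, "can we meet friday"), 2))  # → 0.94
```

A real system would of course use a far larger corpus and richer features (LookOut also weighed header and subject fields), but the shape of the inference – turning message text into a probability of intent – is the same.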

REFLECTION

I found it very interesting to read about these principles of mixed-initiative UI considering that they were published in 1999 – which, incidentally, was when I first learnt to use a computer! I found that the principles being considered were fairly wide-ranging given the year of publication. However, principles such as ‘considering uncertainty about a user’s goals’ and ‘employing dialog to resolve key uncertainties’ could perhaps have been addressed by performing behavior modeling. I was happy to learn that the LookOut system had multiple interaction modalities that could be configured by the user, and was surprised to learn that the system employed an automated speech recognition system that was able to understand human speech. It did, however, make me wonder how this system performed with respect to different accents; even though the words under consideration were basic ones such as ‘yes’, ‘yeah’, and ‘sure’, I wondered about the performance of the system. I also thought that it was nice that the system was able to identify if a user seemed disinterested and that it waited in order to obtain a response. I also felt that implementing a continued training mechanism was a good design strategy, and that it was helpful that users could dictate a training schedule for the same. However, if the user were to dictate a training schedule, I wonder if it would cause a difference in the user’s behavior versus if they were to act without knowing that their data would be monitored at that given point in time (consent would be needed, but perhaps randomly observing user behavior would ensure that the user is not made too conscious of their actions).

QUESTIONS

  1. Not having explored the AI systems of the 90s, I am unaware about the way these systems work. The paper mentions that the LookOut system was designed to continue to learn from users, how was this feedback loop implemented? Was the model re-trained periodically?
  2. Since the data – and any bias present in the data – used to train a model is very important, how were the messages used in this study obtained? The paper mentions that the version of LookOut under consideration was trained using 500 relevant and 500 irrelevant messages – how was this data obtained and labeled?
  3. With respect to the monitoring of the length of time between the review of a message and the manual invocation of the messaging service, the authors studied the relationship based on the size of the message and the time users dwell on the same. What was the demographic of the people used as part of this study? Would there exist a difference in the time taken when considering native versus non-native English speakers?


02/05/2020 – Sushmethaa Muhundan – Principles of Mixed-Initiative User Interfaces

There has been a long-standing debate in the field of user-interface research regarding which area to focus on: building entirely automated interface agents or building tools that enhance direct manipulation by the users. This paper reviews key challenges and opportunities for building mixed-initiative user interfaces that enable users and intelligent agents to collaborate efficiently by combining the above-mentioned areas into one hybrid system that leverages the advantages of both. The expected costs and benefits of the agent’s action are studied and compared to the costs and benefits of the agent’s inaction. A large portion of the paper deals with managing uncertainties that agents may have regarding the needs of the user. Three categories of actions are adopted according to the inferred probability the agent calculates regarding the user’s intent: no action is taken if that probability is low, the agent engages the user in a dialog if the intent is unclear, and the agent goes ahead and performs the action if the inferred probability of the user wanting it is high. The LookOut system for scheduling and meeting management, an automation service integrated with Microsoft Outlook, is used as an example to explain this approach.
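The three-way policy described above – no action, dialog, or action depending on the inferred probability – can be sketched as a simple pair of thresholds. The threshold values here are hypothetical placeholders, not numbers from the paper:

```python
def choose_action(p_goal, act_threshold=0.8, dialog_threshold=0.3):
    """Map the inferred probability that the user wants the service
    to one of three responses. Threshold values are illustrative."""
    if p_goal >= act_threshold:
        return "act"        # high confidence: perform the service
    if p_goal >= dialog_threshold:
        return "dialog"     # intermediate: ask the user to clarify
    return "no_action"      # low confidence: stay out of the way
```

For example, `choose_action(0.9)` returns `"act"` while `choose_action(0.1)` returns `"no_action"`; in the paper’s framing, where exactly the thresholds sit would fall out of the context-dependent costs and benefits of acting versus not acting.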


The paper describes multiple interaction modalities and their attributes, which I found interesting. These range from a manual operation mode to a basic automated-assistance mode to a social-agent mode. An alerting feature in the manual operation mode made the user aware that the agent would have taken some action at that point had it been in an automated mode. I found this particularly interesting since it gives users a sense of how the system reacts. If that happened to be helpful to the user, their trust level would automatically increase and, as a result, the likelihood of them using the system the next time increases. I also liked engaging the user in a dialog as an option for action. During times of uncertainty, rather than guessing the intent of the user and risking damage to the user’s trust in the system, I feel that it is better to clarify the user’s intent by asking. It was also good to know that the automated system analyzes the user’s behavior and learns from it. The system takes past interactions into account and modifies its parameters to make a more informed decision the next time around.

While the agent learns from the relationship established between the length of the email and the time taken by the user to respond to it, I feel that a key factor missing is the current attention span of the user. While this is hard to gauge and learn from, I feel that it plays an equally important role in determining the time taken to respond to an email thereby affecting the agent’s judgment. 

  • Does the system account for scenarios involving the absence of the user immediately after the opening of an email? Would the lack of response have a negative impact on the feedback loop of the system?
  • Apart from the current program context, the user’s sequence of actions and choice of words used in queries, what are some additional attributes that can be considered by the agent while inferring the goal of the user?
  • I found the concept of context-dependent changes to utilities extremely interesting. Have there been studies conducted that involve a comprehensive set of scenarios that can affect the utility of a system based on context?


02-05-2020 – Yuhang Liu – Principles of Mixed-Initiative User Interfaces

This paper discusses the prospects of human-computer interaction. The paper notes that the direction of the field is controversial: some people believe that effort should focus on developing new metaphors and tools that enhance users’ abilities to directly manipulate objects, while others believe that directing effort toward developing interface agents that provide automation is much more important. The paper then proposes a new system, LookOut. This system is an email-based scheduling assistant that creates a new interaction mode by combining the two ideas. The paper describes the factors behind the system and the ideas for realizing each factor; the implementation principles and mathematical models for these factors are described in the second half of the paper. The factors include: value-added automation; considering the user’s goals; considering the user’s attention in the timing of services; inferring ideal action; using dialog to resolve key uncertainties; allowing invocation and termination; minimizing the cost of poor guesses; scoping precision of service to match uncertainty and variation in goals; providing mechanisms for efficient agent-user collaboration to refine results; employing socially appropriate behaviors for agent-user interaction; maintaining working memory; and continuing to learn by observing.

Aiming at these factors and methods mentioned in this article, I would like to put forward my views on several of them.

  1. Continuing to learn by observing. I think this will be the inevitable direction of development for human-computer interaction, and even for other fields of computer science. Because people have different habits and educational backgrounds, a universal interaction system is not good enough for everyone. In an interaction, people are required to adapt to the system, but at the same time, the system should also learn human habits so it can serve people more conveniently.
  2. Maintaining working memory. I think this idea came from computer caching. Introducing the concept of caching into the system will undoubtedly serve people better. At the same time, I think there are two other benefits. First, recording previous operations can provide a training set for machine learning; since we want the system to become more intelligent and to adapt to different people, it is necessary to record each person’s operations. Second, this idea also inspires us to bring functions that have been successfully practiced in one field into other fields, which might have a great impact.
  3. Most of the remaining factors are designed around human uncertainty. In my opinion, in the development of human-computer interaction, these factors can be merged into one: human uncertainty. It can be seen from these factors that the difficulty of human-computer interaction is concentrated in human diversity and uncertainty, so overcoming this problem will directly affect the interaction experience.

Questions:

1. For the two directions mentioned in the paper, which one do you think is more important, or is a combination of the two best?

2. Do you think it is urgent to add machine learning concepts to human-computer interaction in order to better serve human beings?

3. Besides the factors of the LookOut system mentioned in the paper, which factor do you think could be extended?


02/05/20 – Runge Yan – Principles of Mixed-Initiative User Interfaces

An appropriate collaboration between user and machine promises efficiency, and the case for devoting effort to developing “agents” is persuasive. Through the real-world case of the LookOut system, several principles and methods are demonstrated.

To make sure an additional interface is worth using, the system should follow certain principles in design and implementation: significant value should be added by the automation; the user’s attention directly influences the effect of the service; negotiation between costs and benefits often determines the action; a variety of goals should be understood; a continuous learning process should be maintained; etc.

The LookOut system provides calendaring and scheduling services based on emails and user behaviors: in an interactive situation, the system goes through a two-phase analysis to decide whether assistance is needed and what level of service would suit best (manual operation, automated assistance, or social agent). The probability of the user having a goal and the expected value of the machine providing service (action/dialog) together determine the threshold for best practice.

With these principles and problems addressed, the combination of reasoning machinery and direct user operation is likely to be improved further.

Reflection

I’m quite surprised that this paper was published in 1999. Even by that time, the concepts and guidelines of HCI were clearly addressed. Some of the points are exactly what we have today, while others have since developed into today’s ideas. Although it is a simple task compared to the interactions we encounter today, the details on “agents” and both sides of the interaction are quite comprehensive. The principles take several crucial elements into consideration: additional value, user attention, decision thresholds, the learning process, etc. These are basically what come to mind when I think about the complicated interaction process. A two-phase analysis is essential for an “agent” we’d like to count on; the several modalities fit well in real-time situations; and there are a failure-recovery mechanism and an evolving learning process.

When I was using Windows 98 and XP on my father’s PC, I saw a cute lion icon on the desktop provided by a popular antivirus software, “Rising (Rui Xing).” The lion was quite smart, as I look back at it: it wouldn’t bother you when your mouse was navigating through another software window; if your mouse passed by and stayed around it, it would gently ask whether you needed any service or just wanted to play a bit; also, it was draggable and would stay in the area where you usually left it. The most amazing thing was that if I stopped what I was working on and stared a little at the lion, it would become all sleepy and begin snoring in a really cute way!

I already knew several basic ideas about HCI, and now I have so many “that was amazing” moments looking back. I didn’t know that my (indirect, seemingly meaningless) behavior largely determined the actions of the machine.

Question:

  1. If (as I see it) this paper addresses such important guidelines in HCI, what holds back the (fast) development of the entire system? / What can we do better to accelerate this process?
  2. How important is it to make users feel natural as they interact with a machine? Should users be notified about what’s going on? (Like “If you play a lot with the lion, it will infer what you want at a certain time based on your behavior.”) Is that one of the reasons why companies collect our data and we are uncomfortable with it?


02/05/2020 – Donghan Hu – Principles of Mixed-Initiative User Interfaces


Some researchers are aiming at the development and application of automated services, which can sense users’ activities and then take automated actions. Other researchers are focusing on metaphors and conventions that can enhance users’ ability to directly manipulate information to invoke specific services. This paper states principles that provide a method for integrating research in direct manipulation with work on interface agents.

The author listed 12 critical factors for combining automated services with direct manipulation interfaces: 1) developing significant value-added automation; 2) considering uncertainty about a user’s goals; 3) considering the status of a user’s attention in the timing of services; 4) inferring ideal action in light of costs, benefits, and uncertainties; 5) employing dialog to resolve key uncertainties; 6) allowing efficient direct invocation and termination; 7) minimizing the cost of poor guesses about action and timing; 8) scoping precision of service to match uncertainty and variation in goals; 9) providing mechanisms for efficient agent-user collaboration to refine results; 10) employing socially appropriate behaviors for agent-user interaction; 11) maintaining the working memory of recent interactions; and 12) continuing to learn by observing. Guided by these factors, the author designed mixed-initiative user interfaces that enable users and intelligent agents to collaborate efficiently in the LookOut system. This system elucidates difficult challenges and promising opportunities for improving HCI through the elegant combination of reasoning machinery and direct manipulation.

I have read several papers about ubiquitous computing recently. One of its core features is that users gradually stop noticing the computing and technologies that surround them. Hence, I think that applications and systems that can sense users’ activities and then provide specific services will become prevalent in the future. Especially with the development of machine learning and artificial intelligence, we may not even need mixed interfaces or software anymore. Accordingly, I consider “considering uncertainty about a user’s goals” the most important factor. Humans are complex. It is impossible to fulfill everyone’s motivations and goals with a few common services. Hence, customizing features while accounting for uncertainty about a user’s goals is really significant. What’s more, I think that maintaining the working memory of recent interactions is a great design claim that can assist users with the process of self-reflection. Users should know and understand what they did in the past.

Among these 12 critical factors for leveraging automated services and direct manipulation interfaces, which do you consider the most important?

If you were an HCI researcher, would you prefer to focus on developing applications that sense users’ activities and offer services, or on designing tools that allow users to manipulate interfaces directly to access information and then invoke services?

What do you think about the “dialog” interaction between the user and the system? Do you think it is useful or not?


2/5/20 – Lee Lisle – Principles of Mixed-Initiative User Interfaces

Summary

The author, Horvitz, proposes a list of twelve principles for mixed-initiative (AI-assisted) programs that should underlie all future AI-assisted programs. He also designs a program called LookOut, which focuses on email messaging and scheduling. It automatically parses emails (and, it seems, other messaging services) and extracts possible event data so the user can add the event to their calendar, inferring dates and locations when needed. It also has an intermediary step where the user can edit the suggested event fields (time/location/etc.). In the description of LookOut’s benefits, the paper clearly lays out some of the probability theory behind how it guesses what the user wants. It also lays out why each behind-the-scenes AI function is performed the way it is in LookOut.

Personal Reflection

I was initially surprised by this paper’s age; I had thought that this field was defined later than it apparently was. For example, Google was founded only a year before this paper was published. It was even more jarring to see Windows 95 (98?) in the figures. Furthermore, when the author started describing LookOut, I realized that this functionality is baked into a lot of email systems today, such as Gmail and the Apple Mail application, which can automatically create links that add events to your various calendars. The other papers we have read for this class tend toward overviews or surveys of the literature rather than a deep dive into a single example and its features.

It is interesting that “poor guessing of user’s goals” has been an issue for this long. This problem is extremely persistent and speaks to how hard it is to algorithmically decide or understand what a user wants or needs. For example, LookOut was trained on 1000 messages, while today’s services are (likely) trained on millions, if not orders of magnitude more. While I imagine the performance is much better today, I’m curious what the comparative rates of false positives/negatives are.

This paper was strong overall, with a deep dive into a single application rather than an overview of many. Furthermore, it made arguments that are, for the most part, still relevant in the design of today’s AI-assisted programs. However, I would have liked the author to specifically mention the principles as they came up in the design of his program. For example, he could have said that he was fulfilling his 5th principle in the “Dialog as an Option for Action” section. However, this is a small quibble with the paper.

Lastly, while AI assistants should likely have an embodiment occasionally, the Genie metaphor (along with Clippy™-style graphics) is gladly retired now and should not be used again.

Questions

  1. Are all of the principles listed still important today? Is there anything they missed with this list, that may have arisen from faster and more capable hardware/software?
  2. Do you think it is better to correctly guess what a user wants or is it better to have an invocation (button, gesture, etc.) to get an AI to engage a dialog?
  3. Would using more than one example (LookOut, in this case) have strengthened the paper’s argument about what design principles were needed? Why or why not?
  4. Can an AI take action incorrectly and not bother a user?  How, and in what instances for LookOut might this be performed?


02/04/20 – Mohannad Al Ameedi – Principles of Mixed-Initiative User Interfaces

Summary

In this paper, the author first presents two research efforts related to human interaction. The first focuses on direct manipulation by the user and the second on automated services. The author suggests an approach that integrates both by allowing the user to directly manipulate user-interface elements while also using automation to decrease the amount of interaction necessary to finish a task. The author presents factors that can make the integration between automation and direct manipulation more effective. These include developing value-added automation, considering uncertainty about user goals, considering the user’s attention in the timing of the automated service, considering the costs and benefits of action under uncertainty, involving direct dialog with the user, and other factors.

The author proposes a system called LookOut which can help users schedule meetings and appointments by reading the user’s emails, extracting useful information related to meetings, and auto-populating some fields, like the meeting time and recipients, which saves the user mouse and keyboard interaction. The system uses a probabilistic classification system to help reduce the amount of interaction necessary to accomplish scheduling tasks. It also uses a speech recognition feature developed by Microsoft Research to offer additional help to users during direct manipulation of the suggested information. The LookOut system combines automated services with the ability to directly manipulate the system. The system also assesses the user’s interaction to decide when the automated service will not be helpful, to make sure that uncertainty is well considered. The LookOut system improves human-computer interaction through the combination of reasoning machinery and direct manipulation.

Reflection

I found the idea of maintaining a working memory of user interactions interesting. This approach learns from the user’s experience, as stated in the paper, but it could also use machine learning methods to predict the next required action for a new or existing user by learning from all other users.

I also found the LookOut system very interesting since it integrates direct user input with an automated service. The speech recognition feature developed by Microsoft Research means the user is not limited to interacting via mouse and keyboard; it provides another interaction medium, which is extremely important for users with disabilities.

Cortana, Microsoft’s intelligent agent, parses email too and checks whether the user sent a message containing a phrase like “I will schedule a meeting” or “I will follow up on that later”; it will then remind the user to follow up and present two buttons, asking the user to interact directly with the alert by either dismissing it or asking the system to send another follow-up the next day.

Questions

  • Can we use the human interaction data used in LookOut as labeled data and develop a machine learning algorithm that can predict the next user interaction?
  • Can we use the LookOut idea in a different domain?
  • The author suggests a dozen factors for integrating automated services with direct manipulation – which factors do you think could be useful to crowdsourcing users?


2/5/2020 – Jooyoung Whang – Principles of Mixed-Initiative User Interfaces

This paper seeks to find when it is good to allow direct user manipulation versus automated services (agents) in a human-computer interaction system. The author arrives at the concept of mixed-initiative user interfaces: a system that seeks to draw maximum efficiency from the strengths of both sides and their collaboration. In the proposal, the author claims that the major factors to consider when providing automated services are addressing performance uncertainty and predicting the user’s goals. According to the paper, many poorly designed systems fail to gauge when to provide automated service and misinterpret user intention. To overcome these problems, the paper argues that automated services should be provided only when it is likely they will give additional benefit over the user performing the task manually. The author also writes that an effective and natural transfer of control to the user should be provided, so that users can efficiently recover from errors and keep stepping toward their goals. The paper also provides a use case of a system called “LookOut.”
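The act-only-when-it-helps idea summarized above can be sketched as an expected-utility comparison: each option gets a payoff depending on whether the user actually has the goal, and the agent picks the option with the highest expected utility given its inferred probability. The payoff numbers below are purely illustrative assumptions, not values from the paper:

```python
def expected_utility(p_goal, u_goal, u_not_goal):
    """Expected utility of an option given P(user has the goal)."""
    return p_goal * u_goal + (1 - p_goal) * u_not_goal

def best_option(p_goal, utilities):
    """Pick the option with the highest expected utility.
    `utilities` maps each option to a pair:
    (utility if the user has the goal, utility if not)."""
    return max(utilities,
               key=lambda o: expected_utility(p_goal, *utilities[o]))

# Illustrative payoffs: acting helps a lot when wanted but hurts when
# not; a dialog is a mild cost either way; inaction forgoes the benefit.
UTILITIES = {
    "act":       (1.0, -0.8),
    "dialog":    (0.6, -0.1),
    "no_action": (0.0,  0.0),
}
print(best_option(0.9, UTILITIES))  # → act
```

A nice property of this formulation is that the action/dialog/no-action thresholds are not hand-picked: with these payoffs, sweeping `p_goal` from low to high moves the best option from `no_action` through `dialog` to `act` automatically.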

I greatly enjoyed and appreciated the example that the author provided. I personally have never used LookOut, but it seemed like a good program from reading the paper. I liked that the program gracefully handled subtleties, such as recognizing phrases like “Hmm…” to sense that a user is thinking. It was also interesting that the paper tries to infer a user’s intentions using a probabilistic model. I recognized keywords such as utility and agents that also frequently appear in the machine learning context. In my previous machine learning experience, an agent acted according to policies leading to maximum utility scores. The paper’s approach is similar, except that it involves user input and the utility is the user’s goal achievement or intention. The paper was a nice refresher for reviewing what I learned in AI courses, as well as for putting humans into the context.

The following are questions that I came up with while reading the paper:

1. The paper puts a lot of effort into trying to accurately acquire user intention. What if the intention were provided in the first place? For example, the user could start using the system by selecting their goal from a concise list. Would this benefit the system and user satisfaction? Would there be a case where it wouldn’t (such as the system misinterpreting even the provided goal)?

2. One of the previous week’s readings provided the idea of affordances (what a computer or a human is each better at doing than the other). How does this align with automated service versus direct human manipulation? For example, since computers are better at processing big data, tasks related to this would preferably need to be automated.

3. The paper seems to assume that the user always has a goal in mind when using the system. How about purely exploratory systems? In scientific research settings, there are a lot of times when the investigators don’t know what they are looking for. They are simply trying to explore the data and see if there’s anything interesting. One could claim that this is still some kind of a goal, but it is a very ambiguous one as the researchers don’t know what would be considered interesting. How should the system handle these kinds of cases?


02/05/20 – Vikram Mohanty – Principles of Mixed-Initiative User Interfaces

Paper Authors: Eric Horvitz

Summary

This is a formative paper on how mixed-initiative user interfaces should be designed, taking into account the principles surrounding users’ abilities to directly manipulate the objects, and combining it with principles of interface agents targeted towards automation. The paper outlines 12 critical factors for the effective integration of automated services with direct manipulation interfaces, and illustrates these points through different features of LookOut, a piece of software that provides automated scheduling services from emails in Microsoft Outlook.

Reflection

  1. This paper has aged well over the last 20 years. Even though this work has led to updated renditions that take recent developments in AI into account, the core principles outlined here (being clear about the user's goals, weighing costs and benefits before intervening in the user's actions, letting users refine results, etc.) still hold true today.
  2. The AI research landscape has changed a lot since this paper came out. For context, modern AI techniques such as deep learning were not yet prevalent, owing to the lack of both datasets and computing power, and the internet was nowhere near as big as it is now. Automating everything back then was bottlenecked by the scarcity of datasets, which feels like a strong motivation for aligning automated actions with the user's goals and factoring in context-dependent costs and benefits. For example, the paper assigns a likelihood that an email message that has just received the focus of attention is in the goal category of "User will wish to schedule or review a calendar for this email" versus "User will not wish to schedule or review a calendar for this email," based on the content of the message. This approach is predominantly goal-driven and involves exploring the problem space to generate the necessary dataset. Today we are no longer bottlenecked by computing power or dataset availability, but if we ignore the paper's advice about aligning automated actions with the user's goals and factoring in context, we may end up with meaningless datasets or unnecessary automation.
  3. These principles do not treat agent intervention lightly at all. In a fast-paced world, in the race towards automation, this point can easily get lost. For LookOut's intervention with a dialog or action, multiple studies were conducted to identify the most appropriate timing of messaging services as a function of the nature of the message. Carefully handling the presentation of automated agents is crucial for a positive user experience.
  4. The paper highlights how the utility of the system taking action when a goal is not desired can depend on any combination of the user's attention status, the available screen real estate, and how rushed the user is. This does not seem like something the system or algorithm developers can easily determine on their own. System designers may have a better understanding of such real-world scenarios, which calls for researchers from both fields to work together towards a shared goal.
  5. Uncertainties and the limitations of AI should not stand in the way of solving hard problems that can benefit users. Designing intelligent user interfaces that leverage the complementary strengths of humans and AI can help solve problems that neither party can solve alone. HCI researchers have long been at the forefront of thinking about how humans will interact with AI, and of building interfaces that allow them to do so effectively.
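The cost–benefit reasoning in points 2 and 4 follows the paper's expected-utility test for when an agent should act: intervene only when the expected utility of acting, given the inferred probability that the user holds the goal, exceeds the expected utility of doing nothing. A minimal sketch of that test in Python (the utility values in the example are illustrative assumptions, not numbers from the paper):

```python
def should_act(p_goal, u_act_goal, u_act_nogoal, u_noact_goal, u_noact_nogoal):
    """Act only when the expected utility of acting beats doing nothing."""
    eu_act = p_goal * u_act_goal + (1 - p_goal) * u_act_nogoal
    eu_noact = p_goal * u_noact_goal + (1 - p_goal) * u_noact_nogoal
    return eu_act > eu_noact

def threshold(u_act_goal, u_act_nogoal, u_noact_goal, u_noact_nogoal):
    """Goal probability p* above which acting becomes worthwhile."""
    gain = u_act_goal - u_noact_goal       # benefit of acting when the goal is held
    loss = u_noact_nogoal - u_act_nogoal   # cost of intervening when it is not
    return loss / (gain + loss)

# Illustrative values: a wrong intervention is moderately costly (-0.6)
# relative to the benefit of a correct one (+1.0), so the agent needs
# p_goal > 0.375 before it should step in.
p_star = threshold(1.0, -0.6, 0.0, 0.0)   # 0.375
```

Context-dependent factors like the user's attention status can then be folded in by adjusting the utilities (e.g., making a wrong intervention costlier when the user is busy), which raises the threshold the agent must clear before acting.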

Questions

  1. Which principles, in particular, do you find useful when designing a system where the intelligent agent aids users in open-ended problems that have no clear predetermined right/wrong solution, e.g., search engines or Netflix recommendations?
  2. Why don’t we see the “genie” or “Clippy” anymore? What does that tell us about “employing socially appropriate behaviors for agent-user interaction”?
  3. A) For folks who work on building interfaces, do you feel some elements could be made smarter? How do you see using these principles in your work? B) For folks who develop intelligent algorithms, do you consider end-user applications in your work? How do you see using these principles there? Can you imagine scenarios where your algorithm isn’t 100% accurate?
