02/05/2020-Donghan Hu-Guidelines for Human-AI Interaction

Guidelines for Human-AI Interaction

In this paper, the authors address the problem that human-AI interaction research needs to be revisited in light of advances in this growing technology. To that end, they propose 18 generally applicable design guidelines for designing and studying human-AI interactions, and they test the validity of these guidelines through a user study with 49 participants. The 18 guidelines are: 1) make clear what the system can do, 2) make clear how well the system can do what it can do, 3) time services based on context, 4) show contextually relevant information, 5) match relevant social norms, 6) mitigate social biases, 7) support efficient invocation, 8) support efficient dismissal, 9) support efficient correction, 10) scope services when in doubt, 11) make clear why the system did what it did, 12) remember recent interactions, 13) learn from user behavior, 14) update and adapt cautiously, 15) encourage granular feedback, 16) convey the consequences of user actions, 17) provide global controls, and 18) notify users about changes. After the user study, the authors revised the wording of several guidelines based on participants' feedback.
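To make a couple of these guidelines concrete, here is a minimal sketch of how guideline 2 ("make clear how well the system can do what it can do") and guideline 10 ("scope services when in doubt") might look in a hypothetical text-suggestion feature. The function names, thresholds, and confidence values are my own illustrative assumptions, not something from the paper.

```python
# Minimal sketch of guidelines 2 and 10 for a hypothetical text-suggestion
# feature. Names and thresholds are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Suggestion:
    text: str
    confidence: float  # the model's own quality estimate, surfaced to the user (guideline 2)


def present_suggestion(s: Suggestion,
                       full_threshold: float = 0.8,
                       partial_threshold: float = 0.5) -> Optional[str]:
    """Scope the service to the model's confidence (guideline 10)."""
    if s.confidence >= full_threshold:
        # High confidence: offer the full completion, but label its quality.
        return f"Suggested (confidence {s.confidence:.0%}): {s.text}"
    if s.confidence >= partial_threshold:
        # Medium confidence: degrade gracefully to a shorter, safer suggestion.
        return f"Possible start (confidence {s.confidence:.0%}): {s.text.split('.')[0]}"
    # Low confidence: stay quiet rather than interrupt with a poor guess.
    return None


print(present_suggestion(Suggestion("Thanks for the update. I will review it today.", 0.87)))
print(present_suggestion(Suggestion("Thanks for the update. I will review it today.", 0.35)))
```

The design choice in this sketch is graceful degradation: as confidence drops, the system offers less, and below a floor it simply stays quiet instead of interrupting with a poor guess.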

After reading this paper, I am somewhat surprised that the authors were able to propose 18 guidelines for human-AI interaction design. I am most interested in the "During interaction" category, which focuses on factors such as timing, context, personal data, and social norms. In my opinion, providing users with specific services that assist their interactions, such as accessibility and assistive features, should also be considered in this part. In addition, considering social norms is a great idea. The individuals who use an AI system come from many backgrounds and differ in abilities and values, so we cannot treat every person with the same application or system design. Allowing users to shape their preferred interfaces, features, and functions within one general system is a promising but challenging research question, and I think it will be an important topic in the future. At present, many applications and systems already let users customize features on top of the provided defaults: players can build their own mods for games on platforms like Steam, and Chrome users can design their own themes based on their motivations and goals. I believe this kind of customization can be achieved in many human-AI interaction systems as well.

Among these 18 guidelines, I notice that an AI application does not have to satisfy all of them. Do some of the guidelines carry more weight than others, or should researchers treat each of them equally during the design process?

In your opinion, which guidelines are more important and worth focusing on in the future? And which guidelines might you have overlooked in your previous research?

In this paper, the authors mention the tradeoff between generality and specialization. How do you think this tension should be resolved?

Will these guidelines become less useful as applications and systems grow increasingly specialized in the future?


02/05/2020-Donghan Hu-Principles of Mixed-Initiative User Interfaces.

Principles of Mixed-Initiative User Interfaces.

Some researchers aim at developing automated services that can sense users' activities and then take automated actions. Other researchers focus on metaphors and conventions that enhance users' ability to directly manipulate information and invoke specific services. This paper states principles that provide a way to integrate research on direct manipulation with work on interface agents.

The author lists 12 critical factors for combining automated services with direct-manipulation interfaces: 1) developing significant value-added automation, 2) considering uncertainty about a user's goals, 3) considering the status of a user's attention in the timing of services, 4) inferring ideal action in light of costs, benefits, and uncertainties, 5) employing dialog to resolve key uncertainties, 6) allowing efficient direct invocation and termination, 7) minimizing the cost of poor guesses about action and timing, 8) scoping the precision of service to match uncertainty and variation in goals, 9) providing mechanisms for efficient agent-user collaboration to refine results, 10) employing socially appropriate behaviors for agent-user interaction, 11) maintaining a working memory of recent interactions, and 12) continuing to learn by observing. As a demonstration, the author applied these principles in the LookOut system, a mixed-initiative interface that enables users and an intelligent agent to collaborate efficiently. The system illustrates both the difficult challenges and the promising opportunities for improving HCI through an elegant coupling of automated reasoning machinery and direct manipulation.
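As a rough illustration of factor 4 (inferring ideal action in light of costs, benefits, and uncertainties), the sketch below chooses among acting automatically, asking the user via a dialog, and doing nothing by comparing expected utilities under an inferred probability that the user actually holds the goal. This is my own simplification rather than LookOut's actual implementation, and all utility numbers are invented for illustration.

```python
# Rough sketch of expected-utility action selection under goal uncertainty.
# The utilities are made-up illustrative values, not numbers from the paper.

def expected_utility(p_goal: float, u_goal: float, u_no_goal: float) -> float:
    """Expected utility of an option when it is uncertain whether the user has the goal."""
    return p_goal * u_goal + (1.0 - p_goal) * u_no_goal


def choose_action(p_goal: float) -> str:
    options = {
        # utility if the user has the goal, utility if they do not
        "act automatically": expected_utility(p_goal, u_goal=1.0,  u_no_goal=-0.8),
        "ask via dialog":    expected_utility(p_goal, u_goal=0.7,  u_no_goal=-0.2),
        "do nothing":        expected_utility(p_goal, u_goal=-0.5, u_no_goal=0.0),
    }
    return max(options, key=options.get)


for p in (0.1, 0.5, 0.9):
    print(f"P(goal) = {p}: {choose_action(p)}")
```

Running it shows the threshold structure the paper argues for: at low probability the agent stays out of the way, at intermediate probability it asks a clarifying question, and only at high probability does it act automatically.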

I have read several papers about ubiquitous computing recently. One of its core features is that users gradually stop noticing the computing and technologies that surround them. Hence, I think that applications and systems that can sense users' activities and then provide specific services will become prevalent in the future. Especially with the development of machine learning and artificial intelligence, we may not even need mixed-initiative interfaces or explicit software controls anymore. For this reason, I consider "considering uncertainty about a user's goals" the most important factor. Humans are complex, and it is impossible to satisfy everyone's motivations and goals with a few common services, so customizing features while accounting for uncertainty about a user's goals is really significant. What's more, I think that maintaining a working memory of recent interactions is a great design claim that can assist users in the process of self-reflection: users should know and understand what they did in the past.

Among these 12 critical factors for coupling automated services with direct-manipulation interfaces, which do you consider the most important?

As an HCI researcher, would you prefer to focus on developing applications that sense users' activities and offer services, or on designing tools that let users directly manipulate interfaces to access information and then invoke services?

What do you think about the "dialog" interaction between the user and the system? Do you find it useful or not?


01/29/2020-Donghan Hu-Human Computation: A Survey and Taxonomy of a Growing Field

In this paper, the authors address the problem that, despite the rapid growth of computing technology, the field lacks a single framework for understanding each new human computation system in the context of older ones. Based on this research question, the authors categorize a wide range of human computation systems, aiming to identify parallels between different systems, classify systems along different dimensions, and expose gaps in existing systems and work. The authors also compare human computation with related ideas, terms, and areas, for example differentiating it from social computing and crowdsourcing. For the classification, they organize systems along six dimensions: motivation, quality control, aggregation, human skill, process order, and task-request cardinality. For each dimension, they explain sample values and list an example. As human computation continues to develop, new systems can be categorized within the current dimensions, or new dimensions and sample values may be created in the future.
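To make the taxonomy concrete, here is a small illustrative data structure for the six dimensions. The dimension names come from the paper, but the sample values I attach to Mechanical Turk below reflect my own reading rather than the authors' exact classification table.

```python
# Illustrative sketch of the survey's six classification dimensions.
# The values assigned to Mechanical Turk are my own interpretation.

from dataclasses import dataclass
from typing import List


@dataclass
class HumanComputationSystem:
    name: str
    motivation: List[str]          # e.g., pay, altruism, enjoyment, reputation
    quality_control: List[str]     # e.g., redundancy, reputation systems, requester review
    aggregation: List[str]         # how individual contributions are combined
    human_skill: List[str]         # e.g., visual perception, language understanding
    process_order: str             # ordering of requester, worker, and computer
    task_request_cardinality: str  # e.g., one-to-one, many-to-many


mturk = HumanComputationSystem(
    name="Amazon Mechanical Turk",
    motivation=["pay"],
    quality_control=["redundancy", "requester review"],
    aggregation=["collection of independent answers"],
    human_skill=["visual perception", "language understanding"],
    process_order="requester -> worker -> requester",
    task_request_cardinality="many-to-many",
)

print(mturk)
```

Writing a system out this way also makes the reflection below more tangible: a single platform naturally takes several values within a dimension, which is part of why clean classification is hard.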

From this paper, I learned that human computation is a broad topic that is hard to define clearly. Two conditions make up the definition: 1) the problems fit the general paradigm of computation, and 2) the human participation is directed by the computational system or process. Human computation thus binds human activities and computers tightly. For the six dimensions, I am somewhat confused about how the authors categorized systems into them; I think the authors need to say more about the how and the why. From the classification table, I can see that one system can fall under multiple dimensions and values because of its complex features, Mechanical Turk being one example. I think this is one reason why systems in human computation are hard to classify cleanly: a single system may address many human computation problems and implement multiple features, which makes its context harder to understand. What's more, I am quite interested in the "process order" dimension, because it helps me understand how people interact with computers. Different process orders raise different questions for people to solve, and it is impossible to come up with one solution that works well as a panacea for every process order. We should consider issues like feedback, interaction, learning effects, curiosity, and so on.

What's more, I am interested in the idea that focusing on only one style of human computation may become a tendency that causes us to miss more suitable solutions to a problem. Thinking in multiple directions would help us solve research questions more quickly; we should not limit ourselves to one narrow topic or a single area.

Question 1: How can we use this classification of human computation systems?

Question 2: How and why did the authors come up with these six dimensions? I think more explanation is needed.

Question 3: If one system is classified under multiple dimensions and sample values, can I treat these values equally, or is there one dominant value or dimension?


01/29/2020-Donghan Hu-An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

Many researchers' primary goals are to develop tools and methodologies that facilitate human-machine collaborative problem solving, and to understand and maximize the benefits of such partnerships as problems grow in size and complexity. The first question is: how do we tell whether a problem would benefit from a collaborative technique? The paper notes that although deploying various collaborative systems has led to many novel approaches to difficult problems, it has also led to the investment of significant time, expense, and energy on problems that might have been solved better by humans or machines alone. The second question is: how do we decide which tasks to delegate to which party, and when? The authors state that we still lack a language for describing the skills and capacity of the collaborating team. For the third question, how does one system compare to others trying to solve the same problem, the lack of a common language or measures by which to describe new systems is an important obstacle. As for research contributions, the authors selected 49 publications out of 1,271 papers that represent the state of the art in the study of human-computer collaboration and human computation. They then identified groupings based on human- and machine-intelligence affordances, which form the basis of a common framework for understanding and discussing collaborative work. Finally, the authors discuss unexplored areas for future work. Each existing framework is specific to a subclass of collaborative systems, which makes it hard to extend to the broader class of human-computer collaborative systems.
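As a toy illustration of the affordance idea, the sketch below describes a system by the human and machine affordances it leverages. The affordance labels are limited to the ones discussed in this post (plus "large-scale data processing" as a hypothetical machine-side example), and the two example systems are made up; the paper's full affordance vocabulary is larger.

```python
# Toy sketch: describing collaborative systems by the affordances they leverage.
# Affordance labels and example systems are illustrative, not the paper's full set.

HUMAN_AFFORDANCES = {"visual perception", "sociocultural awareness", "creativity"}
MACHINE_AFFORDANCES = {"bias-free analysis", "large-scale data processing"}


def describe(system_name, human, machine):
    """Return the affordance profile of a system, keeping only known labels."""
    return {
        "system": system_name,
        "human": set(human) & HUMAN_AFFORDANCES,
        "machine": set(machine) & MACHINE_AFFORDANCES,
    }


# Two hypothetical systems tackling the same problem with different affordance profiles,
# which is exactly the comparison difficulty raised in the questions below.
a = describe("System A", {"visual perception"}, {"large-scale data processing"})
b = describe("System B", {"creativity", "sociocultural awareness"}, {"bias-free analysis"})
print(a)
print(b)
```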

Based on the definition of "affordance," I understand that both humans and machines bring opportunities for action to the partnership, and each must be able to perceive and access these opportunities for them to be effectively leveraged. It is not surprising to me that the bandwidth of information presentation is potentially higher for visual perception than for any of the other senses. I consider visual perception the most important information-processing channel for humans in most cases, which is why a plethora of research studies draw on human visual processing to solve various problems. I am also quite interested in the concept of sociocultural awareness: individuals understand their actions in relation to others and to the social, cultural, and historical context in which they are carried out. I think this is a paramount perspective in the study of HCI. Different individuals in different environments and with different cultural backgrounds will interact differently with the same computers. In the future, I believe cultural background should become an important factor in HCI studies.

I found that various applications are categorized under multiple affordances. If so, how can the authors answer the third question? For example, if two systems try to solve the same problem but leverage different human or machine affordances, how can I say which one is better? Do different affordances carry different weights, or should we treat them equally?

Fewer tools are designed around the creativity, social, and bias-free affordances. What does this mean? Does it mean these affordances are less important, or that researchers are still working on these areas?


01/22/2020-Donghan Hu-Ghost Work

In the introduction to ghost work, the book describes what ghost work is, how it works, and why we need it. Ghost work refers to task-based, content-driven work that can be completed through the Internet and APIs. This kind of work includes various tasks such as labeling, editing, sorting, and proofreading. While technology develops rapidly and people look forward to a future of robots and AI, ghost work currently plays an important role not only in computer science but in society as a whole.
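As a rough sketch of what posting such a task "through an API" can look like, the snippet below uses boto3's Mechanical Turk client against the requester sandbox. The endpoint, reward, timing values, and task form are illustrative assumptions on my part and are not drawn from the book.

```python
# Rough sketch: a requester posting a single labeling microtask via an API,
# using boto3's Mechanical Turk client in the sandbox. Values are illustrative.

import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint so no real workers (or money) are involved; check the AWS
    # docs for the current URL before running.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# The task form itself is defined in an external question XML file (omitted here).
question_xml = open("label_image_question.xml").read()

hit = mturk.create_hit(
    Title="Label the object in this image",
    Description="Choose the word that best describes the main object.",
    Keywords="image, labeling, quick",
    Reward="0.05",                    # paid per assignment, in USD
    MaxAssignments=3,                 # redundancy: three workers label the same image
    LifetimeInSeconds=24 * 60 * 60,   # how long the task stays available
    AssignmentDurationInSeconds=300,  # time a worker has once they accept
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```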

In chapter 1, the authors give several examples of how people do ghost work for different companies, institutions, and platforms: MTurk (Amazon), UHRS (Microsoft), LeadGenius, AMARA, and Upwork. The chapter describes in detail how a person acts as a ghost worker across various short-term jobs. First, a person must verify their identity to qualify as a ghost worker. After verification, they can pick their preferred work from hundreds of choices. Finally, if they do a good job, they are paid for their effort. The chapter also profiles the people who do ghost work and examines why requesters prefer to find ghost workers rather than hire someone formally.

After reading these two chapters, I am interested in the idea of the "paradox of automation's last mile." From this view, it seems impossible to fully achieve automation in computer science. In many cases, AI cannot identify the best choice, while humans can select the best option based on their personal situation. However, I consider this view a motivation for people working in computer science to keep chasing the goal of automation: we can try our best to push this "last mile" further and further out. In addition, I really like the point that, compared with computers, humans have creativity and a capacity for innovation that can never be replaced. This idea makes me somewhat proud as an HCI student. As for ghost work, I used to think that these trivial tasks were finished by computers automatically; now I know that thousands of ghost workers have made great efforts to improve our software, applications, and the Internet. I agree with AMARA's approach of allowing workers to return their tasks, especially for macro-task ghost work. If someone takes on a task they are not familiar with, nobody can guarantee its correctness, and with wrong results the company has to assign the work to other people again.

For the question part, I am curious whether, after a person finishes and submits their ghost work, other people will check the work again.
If not, what happens if someone does the work carelessly or selects a wrong option by mistake?
The authors mention that requesters know nothing about the person behind each piece of ghost work, so I am curious about the responsibility and correctness problems, especially in the Uber example.
In addition, for macro-tasks that involve multiple people working together, I wonder about the effectiveness of collaborating with several anonymous people.
