02/05/2020 – The Role of Humans in Interactive Machine Learning – Subil Abraham

Reading: Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. 2014. Power to the People: The Role of Humans in Interactive Machine Learning. AI Magazine 35, 4: 105–120. https://doi.org/10.1609/aimag.v35i4.2513

Machine learning systems are typically built through collaboration between domain experts and ML experts. The domain experts provide data to the ML experts, who carefully configure and tune the ML model; the model is then sent back to the domain experts for review, who recommend further changes, and the cycle continues until the model reaches an acceptable accuracy level. However, this tends to be a slow and frustrating process, and there is a need to get the actual users involved in a more active manner. Hence, the study of interactive machine learning arose to identify how users can best interact with and improve ML models through faster, interactive feedback loops. This paper surveys the field, looking at what users like and don’t like when teaching machines, what kinds of interfaces are best suited for these interaction cycles, and what unique interfaces can exist beyond the simple labeling-learning feedback loop.
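To make the contrast concrete, below is a minimal sketch of such a tight feedback loop. This is my own illustration rather than code from the paper; `get_next_item`, `get_user_label`, and `featurize` are hypothetical stand-ins for the surrounding interface.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()        # linear model that supports incremental learning
classes = np.array([0, 1])     # assumed binary labeling task

def interactive_loop(get_next_item, get_user_label, featurize, rounds=50):
    """Each round: show the model's current guess, collect a corrective
    label from the user, and fold it into the model immediately."""
    for _ in range(rounds):
        item = get_next_item()
        x = featurize(item).reshape(1, -1)
        if hasattr(model, "coef_"):           # model has seen >= 1 label
            print("model guess:", model.predict(x)[0])
        y = np.array([get_user_label(item)])  # user confirms or corrects
        model.partial_fit(x, y, classes=classes)  # rapid, incremental update
```

Each pass through the loop is one of the paper’s rapid interaction cycles: the user sees the model’s behavior change right after every label, instead of waiting for an ML expert to retrain offline.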

When reading about the novel interfaces that exist for interactive machine learning, I find an interesting parallel between the development of the “Supporting Experimentation of Inputs” type of interface and that of text editors. The earliest text editor was the typewriter, where an input, once entered, could never be taken back; a correction required starting over or the use of ugly whiteout. With electronics came text editors where you could edit only one line at a time. And finally, today we have advanced, feature-rich editors and IDEs with autocomplete suggestions, inline linting, and automatic type checking and error feedback. It would be interesting to see what the next stage of ML model editing would look like if it continued on this trajectory, going from simple “backspace key” style experimentation to features parallel to what modern text editors have for words. The idea of allowing “Combining Models” as a way to create models draws another interesting parallel to car manufacturing, where cars went from being handcrafted to being built on an assembly line with standardized parts.

I also think their proposal to create a universal language connecting the different ML fields might end up producing a language that is too general. The different fields, though initially unified, might split off again, either by using only subsets of the language that don’t overlap with each other or by coining new words because the language has nothing specific enough.

  1. Is the task of creating a “universal language” a good thing? Or would we end up with something too general to be useful and cause fields to create their own subsets?
  2. What other kinds of parallels can we see in the development of machine learning interfaces, like the parallels to text editor development and car manufacturing?
  3. Where is the “Goldilocks zone” for ML systems that give context to the user for the sake of transparency? There is a spectrum from “label this photo with no context” to “here is every minute detail: number of pixels, exact GPS location, and all sorts of other useless info.” How do we decide which information the ML system should provide as context?


02/05/2020 – Sushmethaa Muhundan – Power to the People: The Role of Humans in Interactive Machine Learning

The paper promotes the importance of studying users and having ML systems learn interactively from them. The effectiveness of systems that take their users into account and learn from them is often better than that of traditional systems, and this is illustrated using multiple examples. The authors feel that the involvement of users leads to better user experiences and more robust learning systems. Interactive ML systems offer more rapid, focused, and incremental model updates than traditional ML systems by involving the end user, who interacts with and drives the system toward the intended behavior. In traditional ML systems this was often restricted to skilled practitioners, which led to delays in incorporating end-user feedback. The benefits of interactive ML systems are two-fold: not only do they help validate the system’s performance with real users, but they also help in gaining insights for future improvement. User interaction with interactive ML was studied in detail, and common themes are presented in this paper. Novel interfaces for interactive ML are also discussed that aim at leveraging human knowledge more effectively and efficiently. These involve new methods for receiving inputs as well as providing outputs, which in turn give the user more control over the learning system and make the system more transparent.

Active learning is an ML paradigm in which the learner chooses the examples from which it learns (a minimal sketch appears below). It was interesting to learn about the negative impacts of this paradigm, which led to frustration among users in the interactive learning setting: users found the constant stream of questions annoying. On one hand, users want to get involved in such studies to better understand the ecosystem, while on the other hand, certain models receive negative feedback. Another aspect I found interesting was that users were open to learning about the internal workings of the system and how their feedback affected it. The direct impact of their feedback on subsequent iterations of the model motivated them to get more involved. It was also good to note that, given the choice, users were willing to give detailed feedback rather than just help with classification.
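For context, here is a minimal sketch of uncertainty sampling, one common way an active learner picks its next question. This is my own illustration with synthetic data, not code from the paper, and `ask_user` is a hypothetical stand-in for the human.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_most_uncertain(model, X_pool):
    """Return the index of the unlabeled example the model is least
    sure about, i.e. where no class probability dominates."""
    proba = model.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)
    return int(np.argmax(uncertainty))

X_seed = np.random.randn(10, 5)   # a few labeled examples to start
y_seed = np.array([0, 1] * 5)
X_pool = np.random.randn(100, 5)  # the unlabeled pool

model = LogisticRegression().fit(X_seed, y_seed)
i = query_most_uncertain(model, X_pool)
# y_new = ask_user(X_pool[i])  # repeating this every round produces the
#                              # "stream of questions" users found annoying
```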

Regarding future work, I agree with the authors that standardization of the work done so far on interactive ML across different domains is required in order to avoid duplication of effort by researchers. Converging on and adopting a common language is the need of the hour to help accelerate research in this space. Also, given the subjective nature of the studies explained in this paper, I feel that a comprehensive study, with a thorough round of testing involving a diverse group of people, is necessary before adopting any new interface, since we do not want the new interface to be counter-productive, as it was in several cases cited here.

  • The paper talks about the trade-off between accuracy and speed in research on user interactions with interactive machine learning, given the requirement for rapid model updates. What are some ways to handle this trade-off?
  • While interactive ML systems involve interaction with end-users, how can the expertise of skilled practitioners be leveraged and combined with these systems to make the process more effective?
  • What are some innovative methods that can be used to experiment with crowd-powered systems to investigate how crowds of people might collaboratively drive such systems?


02/05/2020 – Yuhang Liu – Power to the People: The Role of Humans in Interactive Machine Learning

This paper presents a machine learning paradigm: interactive machine learning. The ability to build this kind of learning system is largely driven by advances in machine learning, but more and more researchers are becoming aware of the importance of studying the users of these systems. In this paper, the authors promote this method and demonstrate how it can lead to a better user experience and a more effective learning system. After exploring many examples, the authors reach the following conclusions:

  1. This machine learning mode differs from the traditional one. Because the user participates, the interaction cycle is faster than the traditional machine learning cycle, which increases the opportunities for interaction between the user and the machine.
  2. Researching users is the key to advancing this area. Knowing the users makes it possible to design better systems that respond to people better.
  3. The interaction between the learning system and the user should not be restricted, because a more open, transparent interaction process produces better results.

First of all, from the text, we know that models in interactive machine learning are updated more rapidly and in a more focused way, because the user interactively checks the results and adjusts subsequent inputs. Due to these fast interaction cycles, even users with little or no machine learning expertise can steer machine learning through low-cost trial and error or focused experimentation on inputs and outputs. This shows that the foundation of interactive machine learning is rapid, focused, and incremental interaction cycles. These cycles help users participate in the process of machine learning, and they also lead to such tight coupling between users and the system that the system cannot be studied in isolation. Therefore, in the new paradigm the machine and the user influence each other, and in my opinion there will be more and more research on users in the future. People will eventually pay more attention to the user, because the user experience ultimately determines the quality of a product; in this kind of system, the user can influence the machine learning, and the feedback from the machine to the user ultimately determines the quality of the learning process.

Secondly, the paper mentions that a common language should be developed across diverse fields, which coincides with last week’s paper, “An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.” Although the domains mentioned are different and this paper came later, I think both reflect the same idea: we should establish a common language. For example, in interactive machine learning there are many ways to analyze and describe the various interactions between humans and machine learners, so there is an important opportunity to come together and adopt a common language to help accelerate research and development in this area, and in other areas as well. In the process of cross-disciplinary integration, we will also make new discoveries and have new impacts.

Questions:

1. Do you think that frequent interactions necessarily have a positive impact on machine learning?

2. For beginners in machine learning, do you think this kind of interactive machine learning is beneficial?

3. In machine learning, which has a more significant impact on the learning result: the human or the model’s efficiency?


02/05/20 – Nan LI – Power to the People: The Role of Humans in Interactive Machine Learning

Summary:

In this paper, the authors indicate that interactive machine learning can promote the democratization of applied machine learning, enabling users to make use of machine-learning-based systems to satisfy their own requirements. However, achieving effective end-user interaction through interactive machine learning brings new challenges. To address these challenges and highlight the role and importance of users in the interactive machine learning process, the authors present case studies and a discussion based on the results. The first section of the case studies indicates that end users always expect richer involvement in the interactive machine learning process than just labeling instances or acting as an oracle. Besides, transparency about how the system works can improve the user experience and the accuracy of the resulting models. The case studies in the following sections indicate that richer user interactions were beneficial within limited boundaries and may not be appropriate for all scenarios. Finally, the authors discuss the challenges and opportunities for interactive machine learning systems, such as the desire to develop a common language across diverse fields.

Reflection:

Personally, I am not very familiar with machine learning. However, after reading this paper, I think interactive machine learning systems could greatly amplify the effects of machine learning on our daily lives. In particular, letting users with little or no machine learning knowledge get involved in the learning process could not only improve the accuracy of learning outcomes but also enrich the interaction between users and products.

One typical example of interactive machine learning that I have experienced is a feature of the Netease Cloud Music player: Private Radio. The private radio recommends music you may like based on your playlist and then asks for your feedback, namely like or dislike. The more feedback you provide, the more likely you are to like the next recommendation (a toy sketch of such a loop follows below). Thus, the user study result presented in the paper, that end users would like richer interaction, is reasonable. I would also like to tag the recommended music with more than like or dislike, including the reason, such as liking a song for its melody or lyrics.
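As a rough illustration of how a like/dislike loop might steer recommendations (purely my assumption of one possible mechanism; Netease’s actual algorithm is unknown to me), each vote could nudge per-attribute preference scores:

```python
from collections import defaultdict

prefs = defaultdict(float)   # preference score per song attribute
STEP = 0.1

def feedback(song_attrs, liked):
    """Nudge every attribute of the rated song up (like) or down (dislike)."""
    delta = STEP if liked else -STEP
    for attr in song_attrs:
        prefs[attr] += delta

def score(song_attrs):
    """Candidate songs with higher totals get recommended next."""
    return sum(prefs[a] for a in song_attrs)

feedback({"pop", "fast-tempo"}, liked=True)
feedback({"pop", "slow-tempo"}, liked=False)
print(score({"pop", "fast-tempo"}))  # 0.1: slightly favored next time
```

Richer feedback of the kind suggested above (“I like the melody”) would simply update finer-grained attributes instead of nudging every attribute of the song equally.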

I also agree with the finding that transparency can help people provide better labels. In my opinion, transparency about how the system works has the same effect as giving users feedback on how their actions influenced the system. A good understanding of the impact of their actions allows users to proactively give more accurate feedback. Returning to the music player example, if my private radio keeps recommending music I like, then in order to hear more good music I will be more willing to provide feedback. Conversely, if my feedback has no influence on the radio’s recommendations, I will just give up on the feature.

Questions:

  • Do you have similar experiences with interactive machine learning systems?
  • What are your expectations of these systems?
  • What do you think of the tradeoff between machine learning and human-computer interaction in these interactive learning systems?
  • Discuss any of the challenges faced by interactive learning systems that are described at the end of the paper.


02/05/20 – Dylan Finch – Power to the People: The Role of Humans in Interactive Machine Learning

Summary of the Reading

Interactive machine learning is a form of machine learning that allows for much more precise and continuous updates to the model, rather than large, infrequent updates that drastically change it. In interactive machine learning models, domain experts are able to continuously update the model as it produces results, reacting to the predictions it makes in almost real time. Examples of this type of machine learning system include online recommender systems like those on Amazon and Netflix.

In order for this type of system to work, there needs to be an oracle who can correctly label data. Usually this is a person. However, people do not like being an oracle, and in some cases they can be quite bad at it. Humans would also like richer, more rewarding interactions with the machine learning algorithms. The paper suggests some ways that these interactions could be made richer for the person training the model.

Reflections and Connections

At the end of the paper, the authors say that these new types of interaction with interactive machine learning are a potentially powerful tool that needs to be applied in the right circumstances. I completely agree. This technology, like all technologies, will be useful in some places and not in others. In the case of a simple recommender system, most people are happy to just give a rating or answer a survey question every now and then; richer interactions would take away from the simplicity and usefulness of the system. But in other cases, it would be nice to be able to work with the machine learning model to generate better answers in the future.

I also think that in some fields, technologies like the ones presented in this paper will be extremely valuable. In life, it is very easy to get stuck in a rut and keep doing things the way we always have, but pushing technology forward requires breaking out of that. We have always thought of machine learning as an algorithm asking an oracle about specific examples; when we created interactive machine learning, we replaced the oracle with a person and applied the same ideas. But, as this paper points out, people are not oracles, and they don’t like to be treated like them. So the ideas in this paper could be very important for unlocking new ways of using machine learning in conjunction with people. And the more we play to the strengths of people, the better the machine learning algorithms we will be able to create to take advantage of those strengths.

Questions

  1. What is one place you think could use interactive machine learning besides recommender systems?
  2. Which of the presented models for new ways for people to interact with machine learning algorithms do you think has the most promise?
  3. Can you think of any other new interfaces for interactive machine learning not mentioned in the paper?


02/05/2020 – Nurendra Choudhary – Power to the People: The Role of Humans in Interactive Machine Learning (Amershi et al.)

Summary

The authors discuss the relatively new area of interactive machine learning systems. The previous ML development workflow relied on a laborious cycle of development by ML researchers, critique and feedback by domain experts, and a return to fine-tuning and development. Interactive ML enables faster feedback and its direct integration into the learning architecture, making the process much faster. The paper describes case studies of the effects these systems have from both the human’s and the algorithm’s perspectives.

For the system, instant feedback provides a more robust learning method in which the model can fine-tune itself in real time, leading to a much better user experience.

For humans, labelling data is a very mundane task; interactivity makes it a little more complex, but this increases attention and thought, making the entire process more efficient and precise.

Reflection

The part that I liked the most was “humans are not oracles.” This calls into question the assumed reliability of labeled datasets. ML systems treat datasets as the ground truth, but this can no longer be taken for granted; we need to apply statistical measures like confidence intervals even to human annotation (a small sketch follows below). Furthermore, this means ML systems are going to mimic the limitations and problems that plague human society (discrimination and hate speech are examples). In my opinion, the field of Fairness will rise to significance as more complex ML systems reveal the clear biases they learn from human annotations.
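As a concrete example of such a statistical measure, a Wilson score interval (a standard formula, not something from the paper) can quantify how much trust a majority label deserves given the annotator votes behind it:

```python
import math

def wilson_interval(agree, total, z=1.96):
    """Approximate 95% CI for the true agreement rate, given that
    `agree` of `total` annotators chose the majority label."""
    p = agree / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    margin = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return center - margin, center + margin

# 8 of 10 annotators agreed on the label; the wide interval warns
# against treating it as certain ground truth.
print(wilson_interval(8, 10))  # roughly (0.49, 0.94)
```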

Another important aspect is the change in human behaviour caused by machines, which I think is not emphasized enough. When we got to know the inner mechanism of search, we modified our queries to match something the machine could understand. This signals our inherent tendency to adapt to machines, which can be observed throughout the development of human civilization (technology changing our routines, politics, entertainment, and even conflicts). Interactive ML simulates this adaptation in the context of AI.

Another point, “People Tend to Give More Positive Than Negative Feedback to Learners,” is also interesting. It means people give feedback according to their nature; it comes naturally to us, and people have different methods of teaching and understanding. However, AI does not differentiate between feedback based on the nature of its trainers. I think we need to study this more closely and model our AI to handle human nature (a naive sketch follows below). The interesting part to study is the triviality or complexity of modeling human behavior in conjunction with the primary problem.
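One naive way to start accounting for this (purely my own sketch, not a method from the paper) is to recenter each trainer’s feedback against their personal baseline, so a habitually positive trainer still produces an informative signal:

```python
from collections import defaultdict

history = defaultdict(list)  # raw feedback values seen per trainer

def normalized_reward(trainer_id, raw_reward):
    """Return feedback relative to this trainer's running average;
    the first value from any trainer normalizes to zero."""
    history[trainer_id].append(raw_reward)
    h = history[trainer_id]
    return raw_reward - sum(h) / len(h)

# A trainer who always says +1 soon stops inflating the learner's reward:
print(normalized_reward("alice", 1.0))  # 0.0
print(normalized_reward("alice", 1.0))  # 0.0
print(normalized_reward("alice", 0.0))  # about -0.67: a rare "bad" stands out
```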

Regarding the transparency of ML systems, the area has seen a recent push toward interpretability, a field of study focused on understanding the architecture and function of models in a deterministic way. I believe transparency will bring more confidence in the field. Popular questions like “Is AI going to end the world?” and “Are robots coming?” tend to arise from the lack of transparency in these non-deterministic architectures.

Questions

  1. Can we use existing games/interactive systems to collect more complex data for machine learning algorithms?
  2. Can we model the attention of humans to understand how it might have affected the previous annotations?
  3. Can we trust datasets if human beings lose attention over a period of time?
  4. From an AI perspective, how can we improve AI systems to account for human error rather than believing human labels to be ground truth?

Word Count: 574


02/05/2020 – Sukrit Venkatagiri – Power to the People: The Role of Humans in Interactive Machine Learning

Paper: Power to the People: The Role of Humans in Interactive Machine Learning

Authors: Saleema Amershi, Maya Cakmak, W. Bradley Knox, Todd Kulesza

Summary:
This paper talks about the rise of interactive machine learning systems, and how to improve these systems while also improving users’ experiences, through a set of case studies. Typically, a machine learning workflow involves a complex, back-and-forth process of collecting data, identifying features, experimenting with different machine learning algorithms, tuning parameters, and then having the results examined by practitioners and domain experts. The model is then updated to take in their feedback, which can affect performance and start the cycle anew. In contrast, feedback in interactive machine learning systems is iterative, rapid, and explicitly focused on user interaction and the ways in which end users interact with the system. The authors present case studies that explore multiple interactive and intelligent user interfaces, such as gesture-based music systems as well as visualization systems like CueFlik and ManiMatrix. Finally, the paper concludes with a discussion of how to develop a common language across diverse fields, distilling principles and guidelines for human-machine interaction, and a call to action for increased collaboration between HCI and ML.

Reflection:
The case studies are interesting in that they highlight the differences between typical machine learning workflows and novel IUIs, as well as the differences between humans and machines. I find it interesting that most workflows often leave the end user out of the loop for “convenience” reasons, even though it is often the end user who is the most important stakeholder.

Similar to [1] and [2], I find it interesting that there is a call to action for developing techniques and standards for appropriately evaluating interactive machine learning systems. However, the paper does not go into much depth on this. I wonder if that is because the highly contextual nature of IUIs makes it difficult to develop common techniques, standards, and languages. This in turn highlights some epistemological issues that need to be addressed within both the HCI and ML communities.

Another fascinating finding is that people valued transparency in machine learning workflows, but this transparency did not always equate to higher (human) performance. Indeed, it may just be a placebo effect, where humans feel that “knowledge is power” even when it would not have made any difference. Transparency has benefits beyond accuracy, however. For example, transparency in how a self-driving car works can help determine whom exactly to blame in the case of an accident: perhaps the algorithm was at fault, or a pedestrian, or the driver, or the developer, or it was due to unavoidable circumstances, such as a force of nature. With interactive systems, it is crucial to understand human needs and expectations.

Questions:

  1. This paper also talks about developing a common language across diverse fields. We notice the same idea in [1] and [2]. Why do you think this hasn’t happened yet?
  2. What types of ML systems might not work for IUIs? What types of systems would work well?
  3. How might these recommendations and findings change for systems with more than one end user, for example, an IUI that helps an entire town decide zoning laws, or an IUI that enables a group of people to book a vacation together?
  4. What was the most surprising finding from these case studies?

References:
[1] R. Jordon Crouser and Remco Chang. 2012. An Affordance-Based Framework for Human Computation and Human-Computer Collaboration. IEEE Transactions on Visualization and Computer Graphics 18, 12: 2859–2868. https://doi.org/10.1109/TVCG.2012.195
[2] Jennifer Wortman Vaughan. 2018. Making Better Use of the Crowd: How Crowdsourcing Can Advance Machine Learning Research. Journal of Machine Learning Research 18, 193: 1–46. http://jmlr.org/papers/v18/17-234.html


02/05/20 – Runge Yan – Power to the People: The Role of Humans in Interactive Machine Learning

When given a pattern and clear instructions on classification, a machine can learn a certain task quickly. Case studies are presented to provide a sense of the users’ role in interactive machine learning: how the machine influences the users and vice versa. Then, several characteristics of people involved in interactive machine learning are stated as guidelines for understanding the end-user effect on the learning process:

People are active: they tend to give positive rewards, and they want to be a model for the learner. Also, given the nature of human intelligence, they want to provide extra information even in a rather simple decision. This leads to another feature: proper transparency in the system is valued by people and therefore helps reduce the error rate in labeling.

Several guidelines are presented for interactivity. Instead of a small number of professionals designing the system, people can be more involved in the process and collect the data they want. A novel interactive machine learning system should be flexible on input and output: users could try inputs with reasonable variation, assess the quality of the model, and even query the model directly; outputs can be evaluated by the users rather than by “experts,” a possible explanation of an error case can be provided by users, and modification of the models is no longer off-limits to users.

Details are discussed to further suggest which methods fit better in a more interactive system: a common language, principles and guidelines, techniques and standards, volume handling, algorithms, collaboration with HCI, etc. This paper lays a comprehensive foundation for future research on this topic.

Reflection

I once contributed to a dataset on sense-making and explanation. My job was to write two similar sentences where only one word (or phrase) differed: one of them was common sense and the other was nonsense. For further information, I wrote three sentences trying to explain why the nonsense sentence was nonsense, with only one of them best describing the reason. The model should understand the five sentences, pick the nonsense, and find the best explanation. I was asked to be somewhat extreme, for example, to write a pair like “I put an eggplant in the fridge” and “I put an elephant in the fridge”; a mild difference was not allowed, for example, “I put a TV in the fridge.” A model will learn quickly from extreme comparisons; however, I’d prefer an iterative learning process where the difference narrows down (with one sentence still being nonsense and the other common sense).

When I tried to be a contributor on Figure Eight (previously CrowdFlower), the tutorial and intro task were quite friendly. I was asked to identify a LinkedIn account: whether he/she was still working in the same position at the same company. The assistance with the decision made me feel comfortable; I knew what my job was and some possible obstacles along the way, and I could tell that the difficulties increased in a reasonable way. When there was more information that could not be captured by the given options, I was able to provide additional notes to the system, which made me feel that my work was valuable.

More interactivity is needed to take the model to the next level, but given previously restricted rules and their restricted outputs, how open the system should be is a crucial point to determine.

Questions

  1. More flexibility means more workload on the system and more requirements on users. How do we balance user contributions? For example, if one user wants to try an experimental input and another user is unwilling to, will the system accept both inputs or only those from qualified users?
  2. How do we acknowledge the contributions of the users?


02/05/2020 – Bipasha Banerjee – Power to the People: The Role of Humans in Interactive Machine Learning

Summary 

The article was an interesting read on interactive machine learning, published by Amershi et al. in AI Magazine in 2014. The authors point out the problems with traditional machine learning (ML), in particular the time and effort wasted in getting a single job done. The process involves time-consuming interactions between machine learning practitioners and domain experts. To make this process efficient, continuous interactive approaches are needed to make the model interactive. The authors mention that updates in the interactive setting are quicker, driven directly by user feedback. Another benefit of this approach that they point out is that users with little or no ML experience can interact, since the idea is input-output driven. They give several case studies of such applications, like the Crayons system, and mention observations that demonstrate how the end users’ involvement affected the learning process. Some novel interfaces are also proposed in this article for interactive machine learning, like assessment of model quality and timing of queries to users, among others.

Reflection

I feel that interactive machine learning is a very useful and novel approach to machine learning. Having users involved in the learning process indeed saves time and effort compared to the traditional approach, where collaboration between practitioners and domain experts is not seamless. I enjoyed reading about interaction-based learning and how users are directly involved in the learning process. Case studies like learning gesture-based music or image segmentation demonstrate how users provide feedback to the learner immediately after looking at the output. In traditional ML algorithms, we do involve a human component during training, mainly in the form of annotated training labels. However, whenever domain-specific work is involved (e.g., the clustering of low-level protein problems), the task of labeling by crowd workers becomes tricky. Hence, this method of involving experts and end users in the learning process is productive. This is essentially a human-in-the-loop approach, as mentioned in the “Ghost Work” text, although that kind of human involvement is different from what occurs when humans actively interact with the system. The article mentions various observations about dealing with humans, and it was interesting to see how humans behave and hold biases, a point also brought forward by last week’s reading about affordances. We find that humans tend to have a bias, in this case a positive bias, whereas machines tend to have an unbiased opinion (debatable, since machines are trained by humans, and that data is prone to bias).

Questions

  1. How to deal with human bias effectively?
  2. How can we evaluate how well the system performs when the input data is not free from human errors? (E.g., humans tend to demonstrate how the learner should behave, which may/may not be the correct approach. They tend to have biases too)
  3. Most of the case studies mentioned are interactive in nature (teaching concepts to robots, interactive Crayons System, etc.). How does this extend to domains that are non-interactive like text analysis?


02/05/20 – Vikram Mohanty – Power to the People: The Role of Humans in Interactive Machine Learning

Paper Authors: Saleema Amershi, Maya Cakmak, W. Bradley Knox, Todd Kulesza

Summary

This paper highlights the usefulness of intelligent user interfaces and the power of human-in-the-loop workflows for improving machine learning models, and makes the case for moving from traditional machine learning workflows to interactive machine learning platforms. Domain experts, the potential users of such applications, can implicitly provide high-quality data points. To facilitate that, the role of user interfaces and user experience is illustrated via numerous examples. The paper outlines some challenges and future directions of research for understanding how user interfaces interact with learning algorithms and vice versa.

Reflections

  1. The case study with proteins and biochemists illustrates a classic case of the frustration associated with iterative design while striving to align with user needs. In this example, the problem space was focused on getting an ML model right for the users, and as the case study showed, interactive machine learning applications seemed to be the right fit for solving this problem, as opposed to having the experts iteratively tune the model by hand. The research community is rightfully moving in the direction of producing smarter applications, and in order to ensure more (better?) intelligibility of these applications, building user interfaces/applications for interactive machine learning seems to be an effective and cost-efficient route.
  2. In the realm of intelligent user interfaces, human users are good for much more than providing quality training data, but my reflection will center on the “human-in-the-loop” aspect to keep the discussion aligned with the paper’s narrative. The paper, without explicitly mentioning it, also shows how we can get good-quality training labels without relying solely on crowdsourcing platforms like AMT or Figure Eight, by focusing instead on the potential users of such applications, who are often domain experts. The trade-off between collecting data from novice workers on AMT and from domain experts is pretty obvious: quality vs. cost.
  3. The authors, through multiple examples, also make an effective argument about the inevitable role of user interfaces in ensuring a stream of good-quality data. The paper further stresses the importance of user experience in generating rich and meaningful datasets.
  4. “Users are People, Not Oracles” is the first point, and it seems to be a pretty important one. If applications are built with the sole intention of collecting training data, there is a risk of the user experience being sacrificed, which may compromise data quality, and the cycle ceases to exist.
  5. Because it is difficult to decouple the contributions of the interface design and the chosen algorithm, coming up with an effective evaluation workflow seems like a challenge. It appears to be very context-dependent, and following recent guidelines such as https://pair.withgoogle.com/ or https://www.microsoft.com/en-us/research/project/guidelines-for-human-ai-interaction/ can go a long way in improving these interfaces.

Questions

  1. For researchers working on crowdsourcing platforms, even for a simple labeling task, how did you handle poor-quality data? Did you ever re-evaluate your task design (interface/user experience)?
  2. Let’s say you work in a team with domain experts. The domain experts use an intelligent application in their everyday work to accomplish a complex task A (the main goal of the team), and as a result, you get data points (let’s call them A-data). As a researcher, you see the value of collecting data points B-data from the domain experts, which may improve the efficiency of task A. However, in order to collect B-data, domain experts have to perform task B, which is an extra task that deviates from A (their main objective and what they are paid for). How would you handle this situation? [This is pretty open-ended]
  3. Can you think of any examples where collecting negative user feedback (which can significantly improve the learning algorithm) also fits the natural usage of the application?
