02/04/2020 – Akshita Jha – Power to the People: The Role of Humans in Interactive Machine Learning

Summary:
“Power to the People: The Role of Humans in Interactive Machine Learning” by Amershi et al. discusses the tightly coupled interaction between systems and end users, and how to improve the user experience while also improving system performance. The workflow for conventional machine learning involves a long, drawn-out process of training/pre-training, fine-tuning, and iteratively tuning hyper-parameters to improve the target metrics. In comparison, feedback in the interactive machine learning workflow is rapid, focused, and incremental. Prominent real-world examples of interactive machine learning systems include recommender systems such as those used by Amazon and Netflix. Interactive machine learning has also been used for image segmentation, where users were asked to mark the foreground and background of an image; the system incorporated this feedback and improved its performance. Similarly, interactive music composition not only improves the system but has also been shown to help train students. The authors also present case studies that explore novel interfaces for interactive machine learning: experiments that give end users the ability to modify the input and observe the effect on the output, studies of the efficacy of active vs. passive learning, interfaces that let users query the learner rather than only answer its questions, and interfaces that let users provide active feedback on and critique the learner’s output. In all of these examples, the user and the system are tightly coupled and form a cohesive unit that is difficult to study in isolation.
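
To make the contrast with the conventional workflow concrete, here is a minimal sketch of an interactive learning loop in the spirit described above: the model updates incrementally after each user label instead of being retrained offline in batch. The simulated user, the toy data, and the choice of scikit-learn's SGDClassifier with partial_fit are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of an interactive ML loop: one label per interaction cycle,
# with an immediate incremental model update (no offline retraining).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")   # supports incremental partial_fit updates
classes = np.array([0, 1])

def simulated_user_label(x):
    """Stand-in for a real end user: a quick, focused judgment about one example."""
    return int(x[0] + x[1] > 1.0)

for step in range(200):
    x = rng.uniform(0, 1, size=2)             # system surfaces a new example
    y = simulated_user_label(x)               # user gives rapid, incremental feedback
    model.partial_fit(x.reshape(1, -1), [y], classes=classes)  # model updates right away

# After the interaction session, the model reflects the accumulated feedback.
print(model.predict(np.array([[0.9, 0.8], [0.1, 0.2]])))   # expected: [1 0]
```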

Reflections:
The paper presents several case studies that highlight the differences between machines and humans. One case study I found particularly interesting was one in which researchers used human feedback to train a reinforcement learning agent. In conventional reinforcement learning, the agent acts in a simulated task environment and receives a reward for each of its actions. The agent then tries to find a policy that best completes the task at hand, which it does by maximizing the cumulative reward. Unlike a conventional reward function, which penalizes the agent as readily as it rewards it, humans in the loop gave far more positive feedback than negative feedback; since reaching the goal would end that stream of positive feedback, the reward-maximizing agent learned to actively avoid the goal. This result is fascinating for several reasons: (i) it clearly demonstrates the difference between the way computers learn and the way human psychology operates, and (ii) it shows what can be changed in a system so that human feedback is incorporated more effectively and in a more user-friendly way. Another unexpected insight was that people value transparency. It was surprising to find that knowing more about the “black box” model helped users provide better labels. In order to design effective systems, it is critical to understand what humans expect while interacting with a system.
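
To illustrate the mechanism behind that failure mode, here is a rough, hypothetical sketch (not the study's actual setup): a tabular Q-learning agent in a toy corridor receives an almost-always-positive, human-style reward signal, and because reaching the terminal goal cuts off the stream of praise, its greedy policy ends up steering away from the goal. The environment, reward values, and hyperparameters are all assumptions made for illustration.

```python
# Toy demonstration: a mostly-positive "human praise" reward makes a
# reward-maximizing agent avoid the terminal goal, since terminating
# the episode ends the flow of positive feedback.
import numpy as np

N_STATES, GOAL = 5, 4            # corridor of states 0..4, terminal goal at state 4
ACTIONS = [-1, +1]               # move left, move right
gamma, alpha, eps = 0.95, 0.1, 0.2
rng = np.random.default_rng(1)
Q = np.zeros((N_STATES, len(ACTIONS)))

def human_feedback(state, next_state):
    """Human-style signal that is almost always positive: praise (+1) for any step."""
    return 1.0

for episode in range(2000):
    s = 0
    for t in range(50):                                   # cap episode length
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = human_feedback(s, s_next)
        done = (s_next == GOAL)
        target = r if done else r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (target - Q[s, a])             # standard Q-learning update
        s = s_next
        if done:
            break

# Greedy action in the state right next to the goal: index 0 = move away, 1 = move toward goal.
print("greedy action next to goal:", int(np.argmax(Q[GOAL - 1])))  # typically 0, i.e. avoids the goal
```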

Questions:
1. Which systems do we interact with most on a daily basis? Are they interactive?
2. Can we develop metrics to appropriately evaluate a model’s ability to interact?
3. Apart from reinforcement learning, are there any other specific machine learning algorithms that might benefit from having humans in the loop?
