Summary
The article was an interesting read on interactive machine learning, published by Amershi et al. in AI Magazine in 2014. The authors pointed out the problems with traditional machine learning (ML), in particular the time and effort spent to complete a single task. The traditional process involves slow, iterative exchanges between machine learning practitioners and domain experts. To make this process efficient, the authors argue for interactive approaches in which the model is updated continuously. In interactive strategies, updates are rapid and driven directly by user feedback. Another benefit they pointed out is that users with little or no ML experience can participate, since the process is input-output driven. They presented several case studies of such applications, including the Crayons system. They also reported observations demonstrating how end users' involvement affected the learning process. Finally, the article proposed some novel interfaces for interactive machine learning, such as letting users assess model quality and timing queries to users, among others.
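To make the rapid-update cycle concrete, here is a minimal sketch (not from the article) of an interactive learning loop, assuming a scikit-learn-style incremental classifier and a simulated user who inspects each prediction and supplies the correct label; in a real system like Crayons, the feedback would instead come through a GUI.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()  # supports incremental updates via partial_fit
classes = np.array([0, 1])

# Hypothetical stand-in for a human teacher: this "user" labels points
# by a simple rule (is the point in the upper-right region?).
def user_label(x):
    return int(x[0] + x[1] > 1.0)

# Seed the model with one labeled example so predict() is usable.
x0 = rng.random(2)
model.partial_fit([x0], [user_label(x0)], classes=classes)

for step in range(200):
    x = rng.random(2)             # a new input arrives
    pred = model.predict([x])[0]  # the user inspects this output...
    y = user_label(x)             # ...and supplies a corrective label
    model.partial_fit([x], [y])   # the model updates immediately
```

The point of the sketch is the tight loop: each user correction takes effect right away, in contrast to the batch retraining cycles of the traditional workflow.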
Reflection
I feel that interactive machine learning is a very useful and novel approach to machine learning. Having users involved in the learning process indeed saves time and effort compared to the traditional approach, where the collaboration between the practitioner and the domain experts is not seamless. I enjoyed reading about interactive learning and how users are directly involved in the learning process. Case studies like learning for gesture-based music or image segmentation demonstrate how users provide feedback to the learner immediately after seeing its output. Traditional ML does involve humans during training, but mainly in the form of annotated training labels. However, whenever domain-specific work is involved (e.g., low-level protein clustering problems), labeling by crowd workers becomes tricky. Hence, involving domain experts and end users directly in the learning process is productive.

This is essentially a human-in-the-loop approach, as mentioned in the "Ghost Work" text. However, that form of human involvement is different from the interaction that occurs when humans are actively engaging with the system. The article reported various observations about working with humans, and it was interesting to see how humans behave and exhibit biases, a theme also brought forward by last week's reading about affordances. Humans do tend to have a bias (in this case, a positive bias), whereas machines appear to be unbiased (debatable, since machines are trained by humans on data that is itself prone to bias).
Questions
- How can we deal with human bias effectively?
- How can we evaluate how well the system performs when the input data is not free from human error? (E.g., humans tend to demonstrate how the learner should behave, which may or may not be the correct approach; they tend to have biases too.)
- Most of the case studies mentioned are interactive in nature (teaching concepts to robots, the Crayons system, etc.). How does this extend to domains that are non-interactive, like text analysis?