02/05/2020 – Nurendra Choudhary – Power to the People: The Role of Humans in Interactive Machine Learning (Amershi et al.)

Summary

The authors discuss the relatively new area of interactive machine learning systems. The traditional ML development workflow relied on a laborious cycle: development by ML researchers, critique and feedback from domain experts, and then further fine-tuning and development. Interactive ML enables faster feedback and integrates it directly into the learning architecture, making the whole process much quicker. The paper describes case studies of the effect these systems have from both the human's and the algorithm's perspective.

For the system, instant feedback provides a more robust learning method: the model can fine-tune itself in real time, leading to a much better user experience.
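A minimal sketch of what such an interactive loop might look like in Python, using scikit-learn's SGDClassifier with partial_fit as a stand-in online learner (the model choice and the get_user_label helper are assumptions for illustration, not the systems described in the paper):

```python
# Minimal sketch of an interactive learning loop: the model updates itself
# immediately from each piece of user feedback instead of waiting for a
# full offline retraining cycle. The learner (SGDClassifier) and the
# get_user_label() helper are illustrative assumptions, not the paper's system.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1])              # e.g., "irrelevant" / "relevant"
model = SGDClassifier(loss="log_loss")  # a simple online-capable learner

def get_user_label(x):
    """Hypothetical stand-in for the interactive step: show the item
    (and the current prediction) to the user and return their label."""
    raise NotImplementedError

def interactive_loop(stream_of_items):
    seen_any = False
    for x in stream_of_items:                       # items arrive one at a time
        x = np.asarray(x).reshape(1, -1)
        if seen_any:
            print("model suggests:", model.predict(x)[0])
        y = get_user_label(x)                       # instant human feedback
        model.partial_fit(x, [y], classes=classes)  # real-time fine-tuning
        seen_any = True
```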

For humans, labelling data is a very mundane task; interactivity makes it more engaging, albeit a little more complex. This boosts attention and thought, which makes the entire process more efficient and precise.

Reflection

The part that I liked the most was “humans are not oracles”. This calls into question the reliability of labeled datasets. ML systems treat datasets as ground truth, but this assumption no longer holds. We need to apply statistical measures like confidence intervals even to human annotation. Furthermore, this means ML systems will mimic the limitations and problems that plague human society (discrimination and hate speech are examples). I believe the field of Fairness will rise to significance as more complex ML systems expose the bias they learn from human annotations.
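As a toy illustration of treating human labels statistically rather than as ground truth, one could collect several annotations per item and attach a confidence interval to the agreement on the majority label (a normal-approximation binomial interval; the vote counts and the 95% level below are example assumptions):

```python
# Toy sketch: instead of trusting a single human label, aggregate several
# annotators and report a confidence interval on the agreement rate.
# The example votes and the 95% level are illustrative assumptions.
import math

def agreement_interval(votes, z=1.96):
    """Normal-approximation 95% CI for the proportion of annotators
    that chose the majority label."""
    n = len(votes)
    majority = max(set(votes), key=votes.count)
    p = votes.count(majority) / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return majority, max(0.0, p - half_width), min(1.0, p + half_width)

# Example: 10 annotators label the same item.
votes = ["spam"] * 7 + ["not_spam"] * 3
label, lo, hi = agreement_interval(votes)
print(f"majority label: {label}, agreement 95% CI: [{lo:.2f}, {hi:.2f}]")
```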

Another important aspect is the change in human behaviour due to machines. I think this is not emphasized enough. When we learned the inner mechanism of search, we modified our queries into forms the machine could understand. This signals our inherent tendency to adapt to machines, a pattern that can be observed throughout the development of human civilization (technology changing our routines, politics, entertainment, and even conflicts). Interactive ML builds on this adaptability in the context of AI.

Another interesting point is that “People Tend to Give More Positive Than Negative Feedback to Learners”. People give feedback according to their nature; for example, different people have different ways of teaching and understanding. However, AI does not adjust its interpretation of feedback based on the nature of its trainers. I think we need to study this more closely and design our AI to handle human nature. The interesting question is how trivial or complex it is to model human behaviour in conjunction with the primary problem.
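One simple way to start accounting for this tendency would be to reweight feedback events so that the over-represented positive signals do not dominate training. A hedged sketch, where the inverse-frequency weighting rule and the numbers are my own assumptions rather than anything proposed in the paper:

```python
# Sketch: counteract the human tendency to give more positive than negative
# feedback by weighting each feedback event inversely to how common its
# class is. The weighting rule and counts are illustrative assumptions.
from collections import Counter

def feedback_weights(feedback_labels):
    """Return per-example weights that balance positive and negative feedback."""
    counts = Counter(feedback_labels)   # e.g. {"pos": 80, "neg": 20}
    total = len(feedback_labels)
    # Inverse-frequency weighting: rarer feedback counts for more.
    return [total / (len(counts) * counts[lbl]) for lbl in feedback_labels]

feedback = ["pos"] * 80 + ["neg"] * 20
weights = feedback_weights(feedback)
print(weights[0], weights[-1])   # positive feedback weighted 0.625, negative 2.5
```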

Regarding the transparency of ML systems, the area has seen a recent push towards interpretability, a field of study focused on understanding the architecture and function of models in a deterministic way. I believe transparency will bring more confidence in the field. Popular questions like “Is AI going to end the world?” and “Are robots coming?” tend to arise from the lack of transparency in these non-deterministic architectures.
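As one concrete flavour of interpretability, a common technique is to fit a small, human-readable surrogate model that mimics a black box's predictions. The sketch below uses a shallow decision tree as the surrogate; the black-box model and dataset are assumptions for illustration:

```python
# Sketch of a global-surrogate interpretability technique: train a shallow,
# human-readable decision tree to mimic a black-box model's predictions.
# The black-box choice (random forest) and dataset are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100).fit(X, y)

# Fit the surrogate on the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))
print(export_text(surrogate))   # a readable approximation of the black box
```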

Questions

  1. Can we use existing games/interactive systems to learn more complex data for the machine learning algorithms?
  2. Can we model the attention of humans to understand how it might have affected the previous annotations?
  3. Can we trust datasets if annotators lose attention over time?
  4. From an AI perspective, how can we improve AI systems to account for human error rather than treating human labels as ground truth?

Word Count: 574
