02/05/20 – Runge Yan – Power to the People: The Role of Humans in Interactive Machine Learning

Given a pattern and clear instructions for classification, a machine can learn a given task quickly. The paper presents case studies to illustrate the users’ role in interactive machine learning: how the machine influences the users and vice versa. It then describes several characteristics of people involved in interactive machine learning, as guidelines for understanding the end users’ effect on the learning process:

People are active, tend to give positive rewards, and want to act as a model for the learner. By nature, people also want to provide extra information beyond a simple decision, which leads to another characteristic: people value appropriate transparency in the system, and that transparency in turn helps reduce the labeling error rate.

Several guidelines for interactivity are presented. Instead of a small number of professionals designing the system, people can be more involved in the process and collect the data they want. A novel interactive machine learning system should be flexible in both input and output: users can try inputs with reasonable variation, assess the quality of the model, and even query the model directly; outputs can be evaluated by users rather than “experts,” users can offer possible explanations for error cases, and users are no longer barred from modifying the model.
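This kind of flexible input/output loop can be sketched in a few lines. The example below is a minimal illustration, not the paper’s method: a toy one-dimensional threshold classifier whose state is fully transparent to the user, retrained each round on the point it is least sure about. The function names (`train_threshold`, `query_most_uncertain`) and the stand-in “user label” rule are all hypothetical.

```python
def train_threshold(labeled):
    """Fit a decision threshold halfway between the two class means."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def query_most_uncertain(pool, threshold):
    """Pick the unlabeled point closest to the decision boundary."""
    return min(pool, key=lambda x: abs(x - threshold))

# Seed labels from the user, then iterate: the user can inspect the
# model (the threshold is fully visible) and labels the point the
# model is least certain about.
labeled = [(1.0, 0), (9.0, 1)]
pool = [2.0, 4.5, 5.5, 8.0]

for _ in range(3):
    t = train_threshold(labeled)
    x = query_most_uncertain(pool, t)
    pool.remove(x)
    y = 1 if x > 5.0 else 0      # stand-in for a real user's answer
    labeled.append((x, y))

print(round(train_threshold(labeled), 2))  # → 4.88
```

The point of the sketch is the interaction pattern: the user both supplies labels and can see exactly what the model has learned at every step, which is the transparency the paper argues people value.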

The paper then discusses which methods best fit a more interactive system: a common language, principles and guidelines, techniques and standards, handling of data volume, algorithms, and collaboration between the HCI and machine learning communities. It lays a comprehensive foundation for future research on this topic.

Reflection

I once contributed to a dataset on sense-making and explanation. My job was to write two similar sentences that differed in only one word or phrase – one of them common sense, the other nonsense. In addition, I wrote three sentences attempting to explain why the nonsense sentence does not make sense, only one of which best describes the reason. The model should read the five sentences, pick out the nonsense sentence, and find the best explanation. I was asked to make the contrast somewhat extreme – for example, to write the pair “I put an eggplant in the fridge” and “I put an elephant in the fridge.” A milder difference, such as “I put a TV in the fridge,” was not allowed. A model will learn quickly from extreme comparisons; however, I’d prefer an iterative learning process in which the difference narrows over time (still with one sentence nonsense and the other common sense).
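The task format described above can be written down as a small data record. The sketch below is illustrative only – the field names and the sanity-check function are hypothetical, not the dataset’s actual schema:

```python
# One illustrative record for the sense-making task described above:
# two near-identical sentences (one nonsense) plus three candidate
# explanations, exactly one of which is the best. Field names are
# hypothetical.
example = {
    "sent_common": "I put an eggplant in the fridge",
    "sent_nonsense": "I put an elephant in the fridge",
    "explanations": [
        "An elephant is too large to fit in a fridge",
        "Elephants do not like the cold",
        "Fridges are used to store food",
    ],
    "best_explanation": 0,  # index into the explanations list
}

def check_example(ex):
    """Basic sanity checks a data-collection pipeline might run."""
    assert ex["sent_common"] != ex["sent_nonsense"]
    assert len(ex["explanations"]) == 3
    assert 0 <= ex["best_explanation"] < len(ex["explanations"])
    return True

print(check_example(example))  # → True
```

A curriculum in the spirit of the iterative process I describe would order such records from extreme contrasts (elephant vs. eggplant) toward milder ones (TV vs. eggplant), rather than excluding the mild cases outright.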

When I tried being a contributor on Figure Eight (previously CrowdFlower), the tutorial and intro task were quite friendly. I was asked to examine a LinkedIn account and decide whether the person was still working in the same position at the same company. The decision assistance made me feel comfortable – I knew what my job was and what obstacles might come up along the way, and I could tell the difficulty increased in a reasonable way. When there was information that could not be captured by selecting options, I was able to provide additional notes to the system, which made me feel that my work was valuable.

More interactivity is needed to take a model to the next level, but for a system built on previously restrictive rules with correspondingly restricted output, how open to make the system is a crucial design decision.

Question

  1. More flexibility means more workload for the system and more demands on users. How should user contributions be balanced? For example, if one user wants to submit an experimental input and another is unwilling to, will the system accept input from both, or only from qualified users?
  2. How do we acknowledge and credit the contributions of the users?
