The paper promotes the importance of studying users and having ML systems learn interactively from them. Systems that take their users into account and learn from them are often more effective than traditional systems, and this is illustrated using multiple examples. The authors argue that involving users leads to better user experiences and more robust learning systems. Interactive ML systems offer more rapid, focused, and incremental model updates than traditional ML systems by having the end-user interact with and drive the system towards the intended behavior. In traditional ML systems, this was often restricted to skilled practitioners, which led to delays in incorporating end-user feedback. The benefits of interactive ML systems are two-fold: not only do they help validate the system’s performance with real users, but they also help in gaining insights for future improvement. The paper studies user interaction with interactive ML in detail and presents common themes. Novel interfaces for interactive ML are also discussed that aim to leverage human knowledge more effectively and efficiently. These involve new methods for receiving inputs as well as providing outputs, which in turn give the user more control over the learning system and make it more transparent.
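To make the "rapid, incremental updates" idea concrete, here is a minimal sketch of my own (not from the paper), assuming scikit-learn's `SGDClassifier` and its `partial_fit` API; the `on_user_feedback` helper and the feature values are hypothetical:

```python
# Hypothetical interactive ML loop: the end-user supplies a label and the
# model is updated incrementally, rather than waiting for a practitioner
# to retrain it from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")  # supports incremental updates
classes = np.array([0, 1])              # fixed label set, known up front

def on_user_feedback(model, features, user_label):
    """Apply a single piece of end-user feedback immediately."""
    X = np.asarray(features).reshape(1, -1)
    y = np.asarray([user_label])
    model.partial_fit(X, y, classes=classes)  # rapid, focused update
    return model

# Example: the user corrects one misclassified item.
model = on_user_feedback(model, [0.2, 0.7, 0.1], user_label=1)
```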
Active learning is an ML paradigm in which the learner chooses the examples from which it learns. It was interesting to learn about the negative impacts of this paradigm in the interactive learning setting, where it led to frustration among users: they found the stream of questions annoying. On one hand, users want to get involved in such studies to better understand the ecosystem; on the other hand, certain interaction models receive negative feedback. Another aspect I found interesting was that users were open to learning about the internal workings of the system and how their feedback affected it. The direct impact of their feedback on subsequent iterations of the model motivated them to get more involved. It was also good to note that, given the choice, users were willing to give detailed feedback rather than just help with classification.
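As an illustration of the learner choosing its own examples, here is a small uncertainty-sampling sketch of my own (not from the paper); the synthetic data and the simulated "oracle" standing in for the user are assumptions:

```python
# Minimal active-learning loop with uncertainty sampling: the learner picks
# the unlabeled example it is least sure about and asks the user to label
# it -- exactly the stream of questions users found annoying.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(200, 2))                      # unlabeled pool
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)  # hidden true labels

labeled_idx = [int(np.argmin(y_pool)), int(np.argmax(y_pool))]  # one seed per class
model = LogisticRegression()

for _ in range(10):  # ten rounds of questions to the user
    model.fit(X_pool[labeled_idx], y_pool[labeled_idx])
    probs = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(probs - 0.5)      # 0 means the model is least sure
    uncertainty[labeled_idx] = np.inf      # never re-ask about labeled items
    query = int(np.argmin(uncertainty))    # the learner chooses the example
    labeled_idx.append(query)              # the user answers; the hidden
                                           # label stands in for their reply
```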
Regarding future work, I agree with the authors that standardizing the work done so far on interactive ML across different domains is required in order to avoid duplication of effort by researchers in different domains. Converging on and adopting a common language is the need of the hour to help accelerate research in this space. Also, given the subjective nature of the studies described in this paper, I feel that a comprehensive study and a thorough round of testing involving a diverse group of people are necessary before adopting any new interface, since we do not want the new interface to be counter-productive, as it was in several cases cited here.
- The paper discusses the trade-off between accuracy and speed that arises in research on user interactions with interactive machine learning, due to the requirement for rapid model updates. What are some ways to handle this trade-off?
- While interactive ML systems involve interaction with end-users, how can the expertise of skilled practitioners be leveraged and combined with these systems to make the process more effective?
- What are some innovative methods that can be used to experiment with crowd-powered systems to investigate how crowds of people might collaboratively drive such systems?
Hi Sushmethaa,
I like your perspective on the reading. Even though I read the same paper, my take on it was a bit different. Interesting discussion points. I think that, with regard to the practitioners’ involvement, it is important to have their input in the development phase and to keep development iterative. They are “experts in the field” after all. And even if we value the end user, it’s only fair to have an expert’s opinion in development.
Hello Lulwah!
Thank you for your comment. I completely agree with you regarding the practitioners’ involvement. I also feel that involving practitioners in the iterative development stages is going to be beneficial. The combination of the practitioner’s perspective and the user’s perspective would complete the picture and benefit the system.
“What are some innovative methods that can be used to experiment with crowd-powered systems to investigate how crowds of people might collaboratively drive such systems?”
I think the user interactions are simply going to add training data to the ML system, so the number of people would not matter. An interesting case is that of search engines, where individual user features affect the ML system, so I do not quite understand the effectiveness of crowd-powered systems in such a case. However, there may be a module in search engines that decides on the popularity or UI/UX of a certain page, and interactive systems might help in efficiently training such mechanisms.
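To make the "crowds just add training data" point concrete, here is a hypothetical sketch (my own illustration, not from the paper) of the simplest aggregation, where several users' labels for one item collapse into a single majority-vote training label:

```python
# Hypothetical majority-vote aggregation: several users label the same
# item, and the crowd's answers collapse into one training example.
from collections import Counter

def aggregate_crowd_labels(crowd_labels):
    """Return the majority label and the crowd's agreement ratio."""
    counts = Counter(crowd_labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(crowd_labels)

# Five users rate the same search-result page as relevant (1) or not (0).
label, agreement = aggregate_crowd_labels([1, 1, 0, 1, 1])
print(label, agreement)  # 1 0.8
```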