A lot of work has been done independently along two tangents: improving computers so that humans can use them better, and helping machines do work by themselves. The paper argues that in the quest for automation, research on augmenting humans by improving the intelligence of their tools has fallen by the wayside, leaving a rich area of exploration. It examines three tools in this space that work with users in a specific domain and predict what they might need or want next based on context clues from the user. Two of the tools, Data Wrangler and Voyager, use domain-specific languages to show the user the operations that are possible, thus providing a shared representation of data transformations for the user and the machine. The third tool, for language translation, does not provide a shared representation and instead presents suggestions directly, since there is no obvious DSL for translation beyond exposing the parse tree, which would not make sense to an ordinary end user. The paper also suggests several directions for future work: better monitoring and introspection tools for these human-AI systems, letting the AI design shared representations for a domain rather than having a human pre-design them, and techniques for finding the right balance between human control and automation in a given domain.
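To make the shared-representation idea concrete, here is a minimal sketch (my own hypothetical example, not the actual Wrangler language) of how a single DSL operation can serve both sides: the machine can rank and execute it, while the same object renders as a human-readable suggestion the user can accept or reject. All names here (`SplitOp`, `describe`, `apply`) are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of one operation in a tiny data-transformation DSL,
# in the spirit of the tools the paper describes. The same object is both
# machine-executable and human-readable, so it acts as the shared
# representation between user and AI.

@dataclass
class SplitOp:
    column: str
    delimiter: str

    def describe(self) -> str:
        # Natural-language rendering shown to the user as a suggestion
        return f"Split column '{self.column}' on '{self.delimiter}'"

    def apply(self, rows):
        # Execute exactly the operation the description names
        out = []
        for row in rows:
            parts = row[self.column].split(self.delimiter)
            new_row = dict(row)
            del new_row[self.column]
            for i, part in enumerate(parts):
                new_row[f"{self.column}{i + 1}"] = part
            out.append(new_row)
        return out

rows = [{"name": "Ada,Lovelace"}, {"name": "Alan,Turing"}]
op = SplitOp(column="name", delimiter=",")
print(op.describe())   # the suggestion a user could accept or reject
print(op.apply(rows))  # the effect of accepting it
```

Because the suggestion is a structured operation rather than opaque model output, the user can inspect, edit, or refuse it, which is the kind of control the paper argues for.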
The paper uses these three projects as a framing device to discuss the importance of developing better shared representations for human-AI collaboration. I think it's an interesting take, especially the idea of using DSLs as a medium for communicating ideas between the human user and the AI underneath. The authors backed away from discussing what a DSL would look like for the translation software, since anything beyond autocomplete suggestions doesn't really make sense in that domain, but I would be interested in further exploration there. I also find it interesting, and it makes sense, that people might not like machine predictions being thrust upon them, either because the predictions influence their thinking or are simply annoying. The tools discussed strike a good balance by staying out of the user's way. Yes, the user will be influenced, but that is inevitable: the alternative is to give no predictions at all and lose the benefit entirely.
Although I see the point the article is trying to make about shared representations (at least, I think I do), I don't really see a reason for the article to exist beyond the author saying, "Hey, look at my research; it is very important, and I've done things with it, including founding a startup." The article doesn't contribute any new knowledge. I don't mean for that to sound harsh, and I can understand how reading it is useful from a meta perspective: it saves us the trouble of reading the individual pieces of research it summarizes and trying to connect the dots between them.
- In the translation task, why wouldn't a parse tree work? Are there other kinds of structured representations that would aid a user in the translation task?
- Kind of a meta question, but do you think this paper was useful on its own? Did it provide anything beyond summarizing the three pieces of research the author was involved in?
- Is there any way for the kind of software discussed here, which makes suggestions to the user, to avoid influencing the user and interfering with their thought process?