Summary
In this paper, the authors study system designs that combine different kinds of interaction between human agency and automation. They leverage the complementary strengths of human control and algorithms to build a more robust architecture that draws on both. They share case studies of interactive systems in three different problem domains: data wrangling, exploratory analysis, and natural language translation.
To balance automation and human agency, they propose designing shared representations of augmented tasks together with predictive models of human capabilities and actions. The authors question the AI community's push towards complete automation and argue that the focus should instead be on systems augmented with human intelligence. In their results, they show that such designs are more usable in current settings. They demonstrate how interactive user interfaces can integrate human feedback into AI systems, improving the systems while also producing correct results for the problem instance at hand. The shared representations can be edited by humans to remove inconsistencies, thus integrating human capability into those tasks; a small illustrative sketch of this idea follows below.
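The sketch below is not the paper's actual system; it is a minimal, hypothetical illustration of the shared-representation idea in the data-wrangling setting: automation proposes steps in a human-readable transform script, and the human can audit, edit, or reject each step before it is applied. All names (Transform, suggest_transforms, apply_pipeline) are made up for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Transform:
    """One step in the shared, human-editable representation."""
    description: str                         # shown to the human for auditing
    fn: Callable[[List[dict]], List[dict]]   # how the step rewrites the data

def suggest_transforms(rows: List[dict]) -> List[Transform]:
    """Automation side: propose candidate cleaning steps from simple heuristics."""
    suggestions = []
    if any(v is None for row in rows for v in row.values()):
        suggestions.append(Transform(
            "Drop rows containing missing values",
            lambda rs: [r for r in rs if all(v is not None for v in r.values())],
        ))
    return suggestions

def apply_pipeline(rows: List[dict], pipeline: List[Transform]) -> List[dict]:
    """Apply whichever steps the human kept (or edited) in the shared script."""
    for step in pipeline:
        rows = step.fn(rows)
    return rows

# Usage: the human reviews the suggested steps, keeps or edits some, and could
# append their own steps before the pipeline is applied.
data = [{"name": "a", "score": 1}, {"name": "b", "score": None}]
pipeline = suggest_transforms(data)      # automation proposes
# ... human inspects pipeline[i].description and edits the list here ...
print(apply_pipeline(data, pipeline))    # [{'name': 'a', 'score': 1}]
```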
Reflection
This is a problem we have discussed in class several times, but the outlook of this paper is really interesting: it presents shared representations as a method for integrating human agency. Several papers we have studied use human feedback to augment the learning process; this paper, however, also discusses auditing the output of the AI system. The representation an AI system uses is a critical attribute. Its richness determines the efficiency of the system, and its lack of interpretability is generally the reason many AI applications are considered black-box models. In a broader sense, I think shared representations also suggest a broader understanding of AI, akin to unifying human and AI capabilities in an optimal way.
However, such representations might limit the capability of the AI mechanisms behind them. AI models are optimized with respect to a task, and that metric shapes the representations they learn. The models are effective precisely because they can detect patterns in high-dimensional spaces that humans cannot comprehend. The paper aims to make that space comprehensible, thereby removing the very complexity that makes an AI effective. Hence, I am not sure this is the best idea for long-term development. I believe we should stick with current feedback loops and accept interpretable representations only when the difference in results is statistically insignificant.
Questions
- How do we trade off the quality of shared representations against the quality of the system's results?
- The humans able to optimize shared representations may be far fewer than the people who can simply complete the task. What would be the cost-benefit ratio of shared representations? Do you think the approach will be worth it in the long term?
- Do we want our AI systems to be fully automatic at some point? If so, how does this approach benefit or limit the move towards that long-term goal?
- Should there be separate workflows or research communities working on fully independent AI versus AI systems with human agency? What can these communities learn from each other? How can they integrate and utilize each other's capabilities? Will they remain independent and lead to other sub-areas of research?
Word Count: 545
Hello Nurendra,
To address your third question: I was thinking that in some cases, when AI models perform well without human intervention, we may indeed want those models to be fully automatic. An approach that limits the AI model's agency would not be beneficial in those cases. It's an interesting point for sure; the question is, are we there yet?