04/08/20 – Lulwah AlKulaib-Agency

Summary

The paper considers the design of systems that enable rich and adaptive interaction between people and algorithms. The authors attempt to balance the complementary strengths and weaknesses of humans and algorithms while promoting human control and skillful action. They aim to employ AI methods while ensuring that people remain in control, unconstrained in pursuing complex goals and free to exercise domain expertise. They share case studies of interactive systems they developed in three fields (data wrangling, exploratory analysis, and natural language translation) that integrate proactive computational support into interactive systems. For each case study, they examine the strategy of designing shared representations that augment interactive systems with predictive models of users’ capabilities and potential actions, surfaced via interaction mechanisms that enable user review and revision. These models enable automated reasoning about tasks in a human-centered fashion and can adapt over time by observing and learning from user behavior. To improve outcomes and support learning by both people and machines, the authors describe the use of shared representations of tasks augmented with predictive models of human capabilities and actions. They conclude with a discussion of how we might better construct and deploy systems that integrate agency and automation via shared representations. They also note that they found neither automated suggestions nor direct manipulation plays a strictly dominant role, but that a fluent interleaving of both modalities can enable more productive, yet flexible, work.

Reflection

The paper was very interesting to read, and the case studies presented were thought provoking. They are all based on research papers that I have read and worked through while learning about natural language processing, and thinking of them as suggestive systems makes me wonder about such work and how user-interface toolkits might affect the design and development of models.

I also wonder, as raised in the future work, how to evaluate systems across varied levels of agency and automation. What would the goal of that evaluation process be? Would it differ across machine learning disciplines? The case studies presented in the paper used specific evaluation metrics, and I wonder how those generalize to other models. What other methods could be used for evaluation in the future, and how does one compare two systems when comparing their results is no longer enough?

I believe that this paper sheds some light on how evaluation criteria can be topic specific, yet shared across applications that are relevant to human experience and learning. It is important to pay attention to how these systems promote interpretability, learning, and skill acquisition instead of deskilling workers. It is also essential that we think about appropriate designs that optimize the trade-offs between automated support and human engagement.

Discussion

  • What is your takeaway from this paper?
  • Do you agree that we need better design tools that aid the creation of effective AI-infused interactive systems? Why or why not?
  • What determines a balanced Human-AI interaction?
  • When is AI agency/control harmful? When is it useful?
  • Is ensuring that humans remain in control of AI models important? If models were trained by domain experts with domain expertise, then why do we mistrust them?

3 thoughts on “04/08/20 – Lulwah AlKulaib-Agency”

  1. Hi Lulu, I agree with you on the point that we need the appropriate design of Human-AI interaction to enable productive and flexible work. That relates to your question on balance. From my point of view, balance here depends on the scope of the task itself, and the capabilities of human and AI on the task. Ideally, we can leverage their complementary strengths and assign the members of the Human-AI team the portion of work they are good at. In that way, we also need to be aware that the Human-AI interaction may not be the best work mode in every task. Depending on the objective and expectation of the task, complete automation or pure human efforts can accomplish work effectively and efficiently.

  2. Hi Lulwah, your last question definitely doesn’t have a simple answer. AI models are usually not built by domain experts (as a lot of them are built using novice crowds from MTurk). Building AI models using domain experts will certainly be a costly affair. Also, the folks using AI models (or applications built on top of these models) are subject to a layer of opaqueness, and often do not get to know how the model was built (or who built it). So, there certainly exists a correlation between trust and transparency (or in other words, the basis for the domain of Fairness, Accountability, and Transparency). A counter-argument could be that domain experts do not necessarily provide better accuracy than novice crowds, but I guess that’s why this question doesn’t have a simple answer.

  3. To answer the third question, I believe “with power comes responsibility”. If a human is able to edit shared representations of the AI, they should be able to completely comprehend the semantics of that dimension. AI generally works better than humans due to pattern recognition at a level incomprehensible to humans. Hence, human edits might be counter-productive in some cases. There should be a thorough evaluation before such interventions.
