01/29/20 – Vikram Mohanty – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.

Paper Authors: R. Jordan Crouser and Remco Chang

Summary

This paper surveys some of the popular systems (as of 2012) built around human-computer collaboration. Based on this analysis, the authors identify key patterns in human and machine affordances, and propose an affordance-based framework to help researchers reason about and strategize for problems that can benefit from collaboration. Such an affordance-based framework, according to the authors, would enable easy comparison between systems via common metrics (discussed in the paper). In the age of intelligent user interfaces, the paper gives researchers a foundational direction, or lens, for breaking down problems and mapping the solution space in a meaningful manner.

Reflection

  1. This paper is a great reference resource for setting out foundational questions on human-computer collaboration: How do we tell if a problem would benefit from a collaborative solution? How do we decide which tasks to delegate to which party, and when? How do we compare different systems solving the same problem? At the same time, it also sets some foundational goals and objectives for a system rooted in human-computer collaboration. The paper illustrates all of its concepts through successful example systems, making it easy to visualize the bin into which your (anticipated) research would fit.
  2. This paper makes a great motivating argument for developing systems from the problem space, rather than jumping directly to solutions – an approach that can otherwise lead to significant time and energy being invested in developing inefficient collaboration.
  3. The paper makes the case for evolving from a previously established framework for human-machine systems (i.e., function allocation) into the proposed affordance-based one. Even though the authors proposed this framework in 2012, around when deep learning techniques started becoming popular, I feel that it is dynamic and broad enough to accommodate the ubiquity of current AI and intelligent user interfaces.
  4. Following the paper’s direction of updating theories as technology evolves, I would argue for a “sequel” paper discussing AI affordances as an extension of the machine affordances. This would require an in-depth discussion of the capacities and limitations of state-of-the-art AIs designed for different tasks, some of which currently fall under human affordances, such as visual perception (computer vision) and creativity (language models). While AIs may be far from perfect at these tasks, they still provide imperfect affordances. Inevitably, this also means re-focusing some of the human affordances described in the paper, and it may be part of a bigger question, i.e., “what is the role of humans in the age of AI?” It also pushes the boundaries of what can be achieved with such hybrid interaction, e.g., AI’s last-mile problems [1].
  5. Currently, many different algorithms interact with human users via intelligent user interfaces (IUIs) and form a big part of decision-making processes. Over the years, researchers from different communities have pointed out how different algorithms can result in different forms of bias [2, 3], and have pushed for more fairness, accountability, transparency, and interpretability in these algorithms in an effort to mitigate those biases. The paper, written in 2012, did not account for the biases such algorithms can introduce, and thus considered bias-free analysis a machine affordance. Eight years later, the ability to detect biases still remains more of a human affordance.

Questions

  1. Now, in 2020, how would you expand upon the machine affordances discussed in the paper?
  2. Does AI fit under machine affordances, or does it deserve a separate section – AI affordances? What kinds of affordances does AI provide humans, and vice versa? In other words, how do you envision this paper in current times?
  3. For the folks working on AI or ML systems, can you present the inaccuracies of the algorithms you are working on in descriptive, qualitative terms? Do you see human cognition, whether through novice or expert workers, as competent enough to fill in the gaps?
  4. Does this paper change the way you view your proposed project? If so, how? Is the change more in terms of how you present your work?

Vikram Mohanty

I am a 3rd year PhD student in the Department of Computer Science at Virginia Tech. I work at the Crowd Intelligence Lab, where I am advised by Dr. Kurt Luther. My research focuses on developing novel tools that leverage the complementary strengths of Artificial Intelligence (AI) and collective human intelligence for solving complex, open-ended problems.

6 thoughts on “01/29/20 – Vikram Mohanty – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.”

  1. Hi, I would like to address your third question about leveraging human cognition to improve AI systems. Lack of interpretability is one of the common problems with black-box models. Why is model interpretability important? Do we need to care about how a model arrives at its results, as long as it achieves desirable performance? The fact is, if we neglect the reasoning, the decisions made via an AI or ML system may not be trustworthy. As an example, suppose we train a deep learning model for image recognition on images of dogs and other animals. The model may achieve very high performance on the classification task; however, the features it learns may not be the ones we expect. If the majority of dog images have grass in the background, the features the model learns for the classification task may describe the background environment instead of the dog itself. I think this could be a good spot to involve humans: analyzing the correct vs. incorrect cases and designing mechanisms like controlled experiments to examine whether a model actually works on certain tasks or datasets. A minimal sketch of one such mechanism follows below.
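    To make this concrete, here is a minimal, hypothetical sketch of one way a human reviewer could inspect what a classifier is attending to: a gradient-based saliency map, overlaid on correctly and incorrectly classified images, would reveal whether the model is looking at the dog or at the grass. The model choice (a pretrained ResNet-18) and the input file name are illustrative assumptions, not something from the paper or this discussion.

    ```python
    # Hypothetical sketch: gradient-based saliency for a human reviewer.
    # ResNet-18 and "dog.jpg" are placeholder assumptions for illustration.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open("dog.jpg").convert("RGB")  # placeholder input image
    x = preprocess(img).unsqueeze(0)
    x.requires_grad_(True)

    logits = model(x)
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()  # gradient of the top score w.r.t. pixels

    # Per-pixel saliency: max absolute gradient across the color channels.
    saliency = x.grad.abs().max(dim=1)[0].squeeze()  # shape: (224, 224)

    # Overlaying `saliency` on the image shows where the model "looks":
    # strong responses on the grass rather than the dog would suggest the
    # model has latched onto a spurious background feature.
    ```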

  2. I agree with your position that this paper should be revisited to address the myriad advances in each topic. I thought that the algorithms that “draw” a picture with the help of a human collaborator show that AI can possibly have some creativity, like you mentioned. Similarly, machine algorithms that help write a story or book (or even the crowdsourcing-a-story study) show the power of using machines to address affordances that were traditionally human-based. This line of thought, however, should note that a human is not going to have the affordances that a computer has in this study. This is likely a one-way process.

    I also wanted to answer your question 2: while writing the above paragraph, I realized that I think an AI should be considered the machine – a machine just running a particular algorithm.

    1. I was leaning more towards a separate section, just to delineate how AI affordances might differ from some of the “traditional” machine affordances that the paper talks about. But I totally see your point about why an AI should be considered the machine.

  3. I wonder how and where would be good places to look for answers to the questions you listed in reflection 1. I agree that exploring the problem space is important for addressing difficult issues in this field, before jumping straight to fixes.
    I was interested in question 2. I personally feel that if AI affordances were their own separate category, they would be a merger of machine affordances and those of some other related fields. After all, affordances are mostly based on existing things and how humans feel things should behave.

    1. Apart from the existing literature that explores mixed-initiative systems, I feel the answers to those questions may come with experience. A different, radical way of thinking (and this may not always be correct): newly developed systems that do not conform to some of the standards set by the paper (i.e., incorrectly or inefficiently delegating tasks to the different parties, thinking from a solution space, etc.) may, upon thorough evaluation, fail to yield positive results. This is a very subjective argument.
