1/29/2020 – Jooyoung Whang – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

In this paper, the authors review more than 1200 papers to identify how to best utilize human-machine collaboration. Their field of study was visual analytics, but the paper generalizes well to many other research areas. The paper discusses two foundational factors to consider when designing a human-machine collaborative system: allocation and affordance. Many of the papers the authors reviewed studied systematic methods for appropriately allocating work between the human and the computer in a collaborative setting. A well-known set of guidelines was introduced by Fitts, but it was later found to be outdated due to the increasing computational power of machines. The paper argues that examining affordances, rather than allocation, is a better way to design human-machine collaborative systems. An affordance is best understood as what one agent is relatively better at than others. For example, humans can provide excellent visual processing skills, while computers excel at large-scale data processing. The paper also introduces some case studies in which multiple affordances from each party were utilized.

I greatly enjoyed reading about the affordances that humans and machines can each provide. The list of affordances in the paper will serve as a good resource to come back to when designing a human-machine collaborative system. One machine affordance that I do not agree with is bias-free analysis. In machine learning scenarios, a learning model is very often easily biased. Both humans and machines can be biased in their analysis based on previous experience or data. Of course, it is the responsibility of the system's designer to ensure unbiased models, but since the designer is human, it is often impossible to avoid bias of some kind. The case study on the reCAPTCHA system was an interesting read. I had always thought that CAPTCHAs were used only for security purposes, not machine learning. After learning how the system is actually used, I was impressed by how efficiently and effectively it both secures Internet access and digitizes physical books.

The following are questions that I came up with while reading the paper:

1. The paper does a great job of summarizing what humans and machines are each relatively good at. The designer, therefore, simply needs to select appropriate tasks from the system to assign to each human and machine. Is there a good way to identify which affordances a system's tasks need?

2. There's another thing that humans are really good at compared to machines: adapting. Machines, once programmed, do not change their responses to an event as times change, while humans very much do. Is there a human-machine collaborative system with a task that would require the affordance of "adaptation" from a human collaborator?

3. Many human-machine collaborative systems register the tasks to be processed using an automated machine. For example, the reCAPTCHA system (the machine) samples a question and asks the human user to process it. What if it were the other way around, where a human registers a task and assigns it to either a machine or a human collaborator? Would there be any benefits to doing that?

Vikram Mohanty

I am a 3rd year PhD student in the Department of Computer Science at Virginia Tech. I work at the Crowd Intelligence Lab, where I am advised by Dr. Kurt Luther. My research focuses on developing novel tools that leverage the complementary strengths of Artificial Intelligence (AI) and collective human intelligence for solving complex, open-ended problems.
