01/29/2020 – Bipasha Banerjee – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

Summary

The paper elaborates an affordance-based framework for human computation and human-computer collaboration. It was published in 2012 in IEEE Transactions on Visualization and Computer Graphics. Affordances are defined as “opportunities provided to an organism by an object or environment”. The authors reviewed 1,271 papers in the area and distilled a collection of 49 documents representing state-of-the-art research, which they grouped by the human and machine affordances they leverage.

Under human affordances, they discuss the skills humans have to offer, namely visual perception, visuospatial thinking, audiolinguistic ability, sociocultural awareness, creativity, and domain knowledge. Under machine affordances, they discuss large-scale data manipulation, collecting and storing large amounts of data, efficient data movement, and bias-free analysis. There is also a separate case of systems that make use of multiple affordances, such as the reCAPTCHA and PatViz projects. The authors include some possible extensions, including human adaptability and machine sensing. The paper also describes the challenges in measuring the complexity of visual analytics and in finding the best way to measure work.

Reflection

Affordance is a new concept to me. It was interesting how the authors defined human versus machine affordance-based systems, along with systems that make use of both. Humans have special abilities in which they outperform machines, namely creativity and comprehension. Machines nowadays can classify data, but this requires a lot of training samples; recent neural network-based architectures are “data hungry”, and using such systems is extremely challenging when properly labelled data is lacking. Additionally, humans have strong perceptual abilities: distinguishing audio, images, and video is easy for them. Platforms like Amara take advantage of this and employ crowd workers to caption videos. Humans are also effective when it comes to domain knowledge. Jargon specific to a community, e.g., chemical names or legal and medical terminology, is difficult for machines to comprehend. Named entity recognizers help machines in this respect, but the error rate is still high.

The paper does succeed in highlighting the strengths of both. Humans are good in the various ways mentioned above but are often prone to error; this is where machines outperform humans and can be used effectively by systems. Machines are good at dealing with large quantities of data, and machine-learning algorithms are useful for classifying, clustering, or otherwise processing data as necessary. Additionally, the absence of subjective perception can be a plus, since humans tend to be influenced by opinion; for a task with a political angle, it would be extremely difficult for humans to remain unbiased. Hence, humans and machines each have a unique advantage over the other, and it is the researcher’s task to utilize them effectively.
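To make the domain-jargon point concrete, here is a minimal sketch (the library, model, and example sentence are my own illustrative assumptions, not from the paper) that runs a general-purpose named entity recognizer over a sentence containing a chemical name. A general model will typically catch common entity types such as organizations and dates but miss or mislabel the chemical term, which is exactly the kind of gap that still calls for human domain knowledge or specialized training data.

    import spacy

    # Assumes spaCy and its small general-purpose English model are installed:
    #   pip install spacy && python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    # Hypothetical sentence mixing everyday entities with chemical jargon.
    text = "Pfizer synthesized 2-acetoxybenzoic acid in its Groton lab in 2019."
    doc = nlp(text)

    # Print whatever entities the general-purpose model recognizes.
    for ent in doc.ents:
        print(ent.text, ent.label_)

    # Expected (approximately): "Pfizer" as ORG and "2019" as DATE are found,
    # while the chemical name is usually missed or mislabeled, illustrating
    # why domain-specific terms remain error-prone for off-the-shelf models.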

Questions

  1. How do we effectively decide which affordance, human or machine, is best suited to the task at hand?
  2. How do we evaluate the effectiveness of such a system? Is there a global evaluation metric that can be implemented?
  3. When using both humans and machines, how do we divide the tasks effectively?


01/29/20 – Lulwah AlKulaib – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

Summary:

The authors reviewed literature from top-ranking conferences in visual analytics, human-computer interaction, and visualization. From 1,271 papers, they identified 49 as representative of human-computer collaborative problem solving. In their analysis of those 49 papers, they found design patterns that depend on a set of human and machine-intelligence affordances. The authors believe that these affordances form the basis of a common framework for understanding and discussing the analyzed collection, and they use them to describe the properties of two human-computer collaboration projects, reCAPTCHA and PatViz, explaining how each case study leverages human and machine affordances. They also suggest a list of underexplored affordances and scenarios in which they might be useful. The authors believe that their framework will benefit the field of visual analytics as a whole, and in presenting this preliminary framework they aspire to have laid the foundation for a more rigorous analysis of tools presented in the field.

Reflection:

The paper presents a summary of the state of research in human-computer collaboration and related fields as of 2012. The authors considered most of the advances up to that point to be lacking a cohesive direction, which sets a negative tone in that part of the paper. They emphasized their point of view by posing three questions that they claim cannot be answered systematically:

  • How do we tell if a problem would benefit from a collaborative technique?
  • How do we decide which tasks to delegate to which party, and when?
  • How does one system compare to others trying to solve the same problem?

Another point worth discussing is that the authors answer the second question by saying that researchers using the language of affordances would steer toward matching tasks to the strengths of humans or machines rather than to their deficiencies. I’m not sure I agree. I feel the case studies they provided were not enough to back this claim, nor were they sufficient to support the discussion section.

The authors also raise a point about the importance of developing a common language to describe how much and how well affordances are being leveraged. I agree with their proposal and believe that such measures already exist in other fields, like AI, as they mentioned.

Discussion:

  • What is the value of having the suggested method to evaluate projects?
  • The authors argue against using crowdsourcing for problem solving. Do you agree with them? Why/Why not?
  • Are affordances sufficient for understanding crowdsourcing problems? Why/Why not?
  • What is the best way to measure human work? (other than those mentioned in the paper)
  • How do we account for individual differences in human operators? (other than those mentioned in the paper)
  • Give examples, beyond those the authors propose, for the questions they raise initially:
    • How do we tell if a problem would benefit from a collaborative technique?
    • How do we decide which tasks to delegate to which party, and when?
    • How does one system compare to others trying to solve the same problem?
