Summary:
The authors reviewed literature from top-ranking conferences in visual analytics, human-computer interaction, and visualization. From the 1271 papers, they identified 49 papers representative of human-computer collaborative problem solving. In analyzing these 49 papers, they found design patterns that depend on a set of human and machine-intelligence affordances. The authors believe these affordances form the basis of a common framework for understanding and discussing the analyzed collection. They use the affordances to describe the properties of two human-computer collaboration projects, reCAPTCHA and PatViz, explaining how each case study leverages human and machine affordances. They also suggest a list of under-explored affordances and scenarios in which they might be useful. The authors believe that their framework will benefit the field of visual analytics as a whole, and in presenting this preliminary framework they aspire to lay the foundation for a more rigorous analysis of tools presented in the field.
Reflection:
The paper presents a summary of the state of research in human-computer collaboration and related fields as of 2012. The authors considered most of the advances made up to that point to lack a cohesive direction, which sets a negative tone in that part of the paper. They emphasized their point of view by posing three questions that they claim cannot be answered systematically:
- How do we tell if a problem would benefit from a collaborative technique?
- How do we decide which tasks to delegate to which party, and when?
- How does one system compare to others trying to solve the same problem?
Another point worth discussing is that the authors answer the second question by claiming that researchers who adopt the language of affordances would tend to match tasks to the strengths of humans or machines rather than to their deficiencies. I'm not sure I agree: the case studies they provided were not enough to back this claim, nor were they sufficient support for the discussion section.
The authors also raise the importance of developing a common language to describe how much and how well affordances are being leveraged. I agree with their proposal and believe that such measures already exist in other fields, such as AI, as they mention.
Discussion:
- What is the value of having the suggested framework for evaluating projects?
- The authors argue against using crowdsourcing for problem solving. Do you agree with them? Why/Why not?
- Are affordances sufficient for understanding crowdsourcing problems? Why/Why not?
- What is the best way to measure human work? (other than those mentioned in the paper)
- How do we account for individual differences in human operators? (other than those mentioned in the paper)
- Give examples, beyond those the authors propose, for the questions they raise initially:
- How do we tell if a problem would benefit from a collaborative technique?
- How do we decide which tasks to delegate to which party, and when?
- How does one system compare to others trying to solve the same problem?
I think one possible reason the authors argue against using crowdsourcing for problem solving is timing: writing in 2012, they could not have predicted the rapid development of crowdsourcing and AI technology. Hence the paper's negative tone.