Summary
Crouser and Chang open with the definition of visual analytics as “the science of analytical reasoning facilitated by visual interactive interfaces,” and note that the field is advancing along two main lines of thought: human computation and human-computer collaboration. However, there is no common design language between the two subdisciplines. They therefore surveyed 1,217 papers, whittling them down to 49 representative ones, to identify common threads that can help define the field. They then categorized the research by the affordances it leverages, whether human or machine. Humans are naturally better at visual perception, visuospatial thinking, audiolinguistic ability, sociocultural awareness, creativity, and applying domain knowledge, while machines are better at large-scale data manipulation, data collection and storage, efficient data movement, and bias-free analysis. The authors close by suggesting two underexplored affordances for future research, human adaptability and machine sensing, and by discussing when to apply these strategies.
Personal Reflection
While reading this, I questioned a few of the paper’s claims. For example, in the discussion of bias-free analysis, the authors admit that human bias can be introduced during programming, but they fail to acknowledge the bias that can be present in the input data. Entire books have been written (Weapons of Math Destruction being one) covering how nominally “bias-free” algorithms can be fed input data with clear bias, producing a biased system even when no bias is hard-coded into the algorithm.
Outlining the similarities among various human-computer collaborations helps other researchers scope their projects better, and calling out the deficiencies of certain approaches helps them avoid the same pitfalls.
The section on complexity measures, however, felt a little out of place, since it was the first time the topic was brought up in the paper. Still, it raised strong questions that clearly matter to this area of research: if the “running time” of the human component is long, that may signal room for improvement and places where more computer aid could be introduced.
Questions
- Survey papers like this appear in many different fields. Do you find them useful? Why or why not? Since it has been eight years since this one was published, is it time for another?
- Near the end of the paper, the authors ask what the best way is to measure human work. What are your ideas? What are the tradeoffs among the measures they suggest (input size, information density, human time, space)?
- Section 6 makes it clear that combining multiple affordances requires careful balancing to be effective. Is this an inherent issue with the affordances themselves, or a usability issue in the design of the tools?
- In Section 7, the authors identify two areas for further study: human adaptability and machine sensing. Has either been researched since this paper came out? If not, how would you tackle them?