An affordance-based framework for human computation and human-computer collaboration

Crouser, R. Jordan, and Remco Chang. “An affordance-based framework for human computation and human-computer collaboration.” IEEE Transactions on Visualization and Computer Graphics 18.12 (2012): 2859–2868.

Discussion leader: Javier Tibau

Summary

The authors surveyed the existing literature from conferences in visual analytics, HCI, and visualization, identifying 49 papers representative of human-computer collaboration (HCC) in problem solving. In their analysis, they identify a set of affordances (both human and machine affordances) and use them to describe the properties of two HCC projects (reCAPTCHA and PatViz). They urge the adoption of this vocabulary as an effective way for researchers in the area to study and compare approaches.

Reflection

The paper is a reproach to the (then) current state of research in HCC and related fields, in which the many advances are viewed (by the authors) as lacking a cohesive direction and vision. This is emphasized by the three questions that they claim cannot be answered in a systematic way:
– How do we tell if a problem would benefit from a collaborative technique?
– How do we decide which tasks to delegate to which party, and when?
– How does one system compare to others trying to solve the same problem?
(I believe this to be a criticism of the field.)
One memorable observation is that, by thinking in terms of affordances, researchers may move away from delegating tasks based on the deficiencies of each agent and toward playing to the strengths of both human and machine (see the sketch below).
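To make that shift concrete, here is a toy sketch of affordance-based task routing. It is my own illustration, not code from the paper; the affordance names are drawn from the paper's taxonomy, but the scoring rule is an assumption.

    # Toy illustration (not from the paper): route a task to whichever agent's
    # affordances it plays to, rather than to whichever agent is "less bad" at it.
    HUMAN_AFFORDANCES = {"visual perception", "creativity", "domain knowledge",
                         "sociocultural awareness"}
    MACHINE_AFFORDANCES = {"large-scale data manipulation", "efficient data storage",
                           "bias-free analysis"}

    def delegate(task_requirements):
        # Route the task to whichever agent's affordances cover more of what it needs.
        human_fit = len(task_requirements & HUMAN_AFFORDANCES)
        machine_fit = len(task_requirements & MACHINE_AFFORDANCES)
        return "human" if human_fit >= machine_fit else "machine"

    print(delegate({"visual perception", "creativity"}))    # -> human
    print(delegate({"large-scale data manipulation"}))      # -> machine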
Interestingly, towards the end of the paper, they admit that even their affordance-based framework does not aid in tackling questions regarding the well-being of crowd workers.

Questions

  • Why should we be concerned, in a field this young, about having good methodologies for evaluating projects? Is it not enough to have successful experiences that illuminate ways forward?
  • Why are affordances insufficient for understanding some crowdsourcing problems?
  • What are some of the arguments presented against the use of crowdsourcing for problem solving?

Read More

Micro Perceptual Human Computation for Visual Tasks

Gingold, Yotam, Ariel Shamir, and Daniel Cohen-Or. “Micro Perceptual Human Computation for Visual Tasks.” ACM Trans. Graph. 31, no. 5 (September 2012): 119:1–119:12. doi:10.1145/2231816.2231817.

Discussion leader: Will Ellis

Summary

Human computation (HC) involves using people to solve problems for which pure computational algorithms are unsuited. While previous human computation processes have used people to operate on large batches of problem data or to solve problems through complex interactions, this paper describes a paradigm of decomposing complex, human-solvable problems into very simple, independent, parallelizable micro-tasks. Building on this, the authors devise algorithms that break down large problems, farm out their constituent micro-tasks to unskilled “human processors” (Mechanical Turkers in this case), and reassemble the results within a timeframe suitable for interactive software use. The paper describes applying this approach to three image-processing problems considered difficult: finding depth layers in an image, estimating the surface normal map of a 3-dimensional object in an image, and detecting the bilateral symmetry of an object in an image. The paper further describes various quality-control strategies and which combinations of them produce the best results and economic value.
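As a rough illustration of that pipeline, here is a minimal sketch of the decompose/dispatch/reassemble pattern. It is not the authors' actual system; the task content and the simulated crowd platform are placeholders.

    # Minimal sketch of the micro-task pattern: decompose a hard visual problem
    # into tiny independent questions, dispatch them to "human processors" in
    # parallel, and reassemble the answers.
    from concurrent.futures import ThreadPoolExecutor
    import random

    def decompose(image_regions):
        # One micro-task per region: a single, very simple question a non-expert can answer.
        return [{"region": r, "question": "Is this region in front of its neighbor?"}
                for r in image_regions]

    def queue_to_crowd(task):
        # Placeholder for posting one micro-task to a crowd platform (e.g. MTurk)
        # and waiting for a worker's answer; here we just simulate a response.
        return {"region": task["region"], "answer": random.choice(["yes", "no"])}

    def reassemble(answers):
        # Combine the per-region micro-answers into one result (the paper's
        # algorithms do the equivalent for depth layers, normal maps, and symmetry).
        return {a["region"]: a["answer"] for a in answers}

    def solve(image_regions):
        tasks = decompose(image_regions)
        # Micro-tasks are independent, so they can be dispatched in parallel,
        # which is what keeps end-to-end latency close to interactive.
        with ThreadPoolExecutor(max_workers=8) as pool:
            answers = list(pool.map(queue_to_crowd, tasks))
        return reassemble(answers)

    print(solve(["region_1", "region_2", "region_3"]))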

Reflection

The paper devotes significant coverage to finding the strategies that best maximize the accuracy of results. The authors employ duplication, having different (or even the same) human processor (HP) perform the same task multiple times. They employ “sentinel operations”, making an HP solve a problem with a known answer to verify his or her reliability. They also tried self-refereeing, giving the results of one HP to other HPs for approval. Ultimately, their need for speed dictated a combination of duplication and sentinel strategies for quality control. Of course, employing more HPs on the same problem, as the duplication and self-refereeing strategies do, costs more money per micro-task. However, the authors found they could achieve high accuracy with the least expenditure by setting very high (100%) duplication and sentinel thresholds with only one HP.

I find it interesting, though not all that surprising, that the best outcomes are achieved when highly performant “unskilled labor” is used. In other words, paying one good worker to do a task is more economical than paying multiple mediocre workers to do the same task separately and combining their results. The authors seem to agree, saying, “Identifying accurate HPs with lower sentinel and consistency overhead is an important direction for future work.” This outcome, in my mind, works against a fundamental premise of the work: that tasks can be made simple enough that any unskilled individual can perform them well enough to make this paradigm of human computation an economically viable alternative to pure software solutions or manual problem-solving by experts.
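For concreteness, here is a small sketch of how the sentinel-plus-duplication scheme described above fits together. The details (threshold, data layout, voting rule) are my assumptions, not the paper's exact scheme.

    # Sketch of the two quality-control ideas: sentinel tasks with known answers
    # to screen each human processor, and majority voting over duplicated answers.
    from collections import Counter

    SENTINELS = {"sentinel_1": "yes", "sentinel_2": "no"}  # tasks with known answers

    def passes_sentinels(worker_answers, threshold=1.0):
        # Keep a worker only if they answer at least `threshold` of the sentinels
        # correctly (the best cost/accuracy trade-off reported used a 100% bar).
        checked = [worker_answers.get(t) == truth for t, truth in SENTINELS.items()]
        return sum(checked) / len(checked) >= threshold

    def aggregate(duplicated_answers):
        # Majority vote over the answers that repeated workers gave to the same micro-task.
        return Counter(duplicated_answers).most_common(1)[0][0]

    # Example: two workers, one of whom fails the sentinels and is discarded.
    workers = {
        "worker_a": {"sentinel_1": "yes", "sentinel_2": "no", "task_7": "yes"},
        "worker_b": {"sentinel_1": "no",  "sentinel_2": "no", "task_7": "no"},
    }
    trusted = {w: a for w, a in workers.items() if passes_sentinels(a)}
    print(aggregate([a["task_7"] for a in trusted.values()]))  # -> "yes"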

Questions

  • Do the macro-tasks the authors attempted to solve in this paper seem generalizable to other problems in graphical vision or other fields? What other kinds of problems could be solved by human computation using this divide-and-conquer strategy?
  • From a usability perspective, what are the pitfalls of having other humans “in the loop” in your professional software? How could these be mitigated?
  • Does this system introduce new ethical concerns about HP workers for software end-users who may or may not be aware of their existence?
  • How do you feel about the authors’ MTurker compensation strategy? (They used less strict criteria to decide whether to pay HPs than to decide whether HPs’ answers could be used.)

Bonus!
Crowdsourcing level design in Nintendo’s Super Mario Maker
Super Mario Maker level design contest at Facebook
