04/08/2020 – Vikram Mohanty – CrowdScape: Interactively Visualizing User Behavior and Output

Authors: Jeffrey M Rzeszotarski, Aniket Kittur

Summary

This paper proposes CrowdScape, a system that supports human evaluation of crowd work through interactive visualization of behavioral traces and worker output, combined with mixed-initiative machine learning. Several case studies are discussed to showcase the utility of CrowdScape.

Reflection

The paper addresses quality control, a long-standing problem in crowdsourcing, by combining two existing standalone approaches that researchers currently adopt: a) inference from worker behavior and b) analysis of worker output. Combining these signals is advantageous because it provides a more complete picture: behavioral traces can corroborate output-based judgments of good workers, or, in some cases, provide complementary evidence that helps identify them. Analyzing worker output alone may not be enough, since there is an underlying chance that a correct answer is no better than a random coin toss.

Even though it was only a short parenthetical note, I really liked that the authors explicitly sought permission to record the workers’ interaction logs.

Extrapolating similar or dissimilar behavior using machine learning seems intuitive here, as the data and the features (i.e., the building blocks) of the model are meaningful and directly relevant to the task, rather than coming from a black-box model. As a result, it is not surprising to see it work almost everywhere. In the one case where it did not work, the system made up for it by showing that the complementary case did. This sets a great example for designing predictive models on top of behavioral traces that actually work.

Moreover, the whole system was built to be task-agnostic, and the evaluations justified that design. However, I am not sure whether the system’s best use case is recruiting multiple workers for a single task, or identifying a set of good workers to retain for subsequent tasks in a pipeline. I am guessing it is the latter, as the former seems like an expensive way to get high-quality responses.

On the other hand, I feel the implications of this paper go beyond crowdsourcing quality control. CrowdScape, or a similar system, could assist in studying user behavior and experience in any interface (web-based, for now), which is important for evaluating interfaces in general.

Questions

  1. Does your evaluation include collecting behavioral trace logs? If so, what are some of your hypotheses regarding user behavior?
  2. How do you plan on assessing quality control?
  3. What kind of tasks do you see CrowdScape being best applicable for? (e.g. single task, multiple workers)

Vikram Mohanty

I am a 3rd year PhD student in the Department of Computer Science at Virginia Tech. I work at the Crowd Intelligence Lab, where I am advised by Dr. Kurt Luther. My research focuses on developing novel tools that leverage the complementary strengths of Artificial Intelligence (AI) and collective human intelligence for solving complex, open-ended problems.

One thought on “04/08/2020 – Vikram Mohanty – CrowdScape: Interactively Visualizing User Behavior and Output”

  1. The continued study and behavior analytics CrowdScape provides are best suited for a single task. The metrics CrowdScape tracks allow the distributor to monitor the participants. Assuming the participants opt in and agree to such tracking, quickly removing a participant’s work is efficient for both the participant and the distributor: the distributor identifies a lackluster participant sooner, instead of reviewing metrics across multiple tasks, and the participant works for a shorter time, creating a better frame of focus for the work.
