04/08/2020 – Dylan Finch – CrowdScape: interactively visualizing user behavior and output

Word count: 561

Summary of the Reading

This paper describes a system for dealing with crowdsourced work that needs to be evaluated by humans. For complex or especially creative tasks, it can be hard to evaluate the work of crowd workers, because there is so much of it and most of it needs to be evaluated by another human. If the evaluation takes too long, you lose the benefits of using crowd workers in the first place.

To help with these issues, the researchers have developed a system that helps an evaluator deal with all of the data from the tasks. The system leans heavily on data visualization. Its interface shows the user a wide range of metrics about the crowd work and the workers to help the user determine quality. Specifically, the system lets the user see information about worker output and worker behavior at the same time, giving a better indication of performance.

Reflections and Connections

I think that this paper tackles a very important issue in crowd work: evaluation. Evaluating tasks is not an easy process, and for complicated tasks it can be extremely difficult and, worst of all, hard to automate. If you need humans to review and evaluate work done by crowd workers, and the review takes a significant amount of time, then you are not really saving any effort by using the crowd in the first place.

This paper is important because it provides a way to make it easier for people to evaluate work done by crowd workers, making the use of crowd workers much more efficient on the whole. If evaluation can be done more quickly, the data from the tasks can be used more quickly, and the whole process of using crowd workers becomes much faster than it was before.

I also think this paper is important because it gives reviewers a new way to look at the work done by crowds: it shows the reviewer both worker output and worker behavior. This makes it much easier for reviewers to decide whether a task was completed satisfactorily. If we can see that a worker did not spend much time on a task and that their work differed significantly from that of other workers assigned to the same task, we may be able to conclude that the worker did a poor job and that their data should be thrown out.
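The filtering idea above can be sketched in a few lines of code. This is only a hypothetical illustration of the heuristic, not CrowdScape's actual algorithm: it flags a worker when both signals fire at once, i.e. the time spent is below some threshold (`min_seconds` is an assumed parameter) and the answer disagrees with the majority of the other workers on the same task.

```python
# Hypothetical sketch of the "low effort + disagrees with peers" heuristic
# discussed above. Not the paper's actual method; names and thresholds are
# illustrative assumptions.
from collections import Counter

def flag_suspect_workers(submissions, min_seconds=30):
    """submissions: list of (worker_id, seconds_spent, answer) for one task.

    Returns worker_ids that both rushed the task (behavior signal) and
    disagree with the majority answer (output signal).
    """
    # Majority answer across all submissions serves as a cheap consensus.
    majority_answer, _ = Counter(a for _, _, a in submissions).most_common(1)[0]
    return [
        worker
        for worker, seconds, answer in submissions
        if seconds < min_seconds and answer != majority_answer
    ]
```

For example, a worker who spent ten seconds and gave an answer no one else gave would be flagged, while a fast worker who agrees with the consensus, or a slow worker who disagrees, would not: requiring both signals is what makes combining behavior and output more informative than either alone.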

From the pictures of the system, it does look a little complicated, and I would be concerned that it is hard to use or overly complex. A system that saves time but takes a long time to fully understand can be just as bad as not having the time-saving system at all. So, I do think some effort should be made to make the system look less intimidating and easier to use.

Questions

  1. What are some other possible applications for this type of software, besides the additional one mentioned in the paper?
  2. Do you think there is any way we could fully automate the evaluation of the creative and complex tasks focused on in this research?
  3. Do you think that the large amount of information given to users of the system might overwhelm them?

One thought on “04/08/2020 – Dylan Finch – CrowdScape: interactively visualizing user behavior and output”

  1. I agree that the work presented in this paper is indeed very important, as it helps speed up the entire crowd-working process as a whole by increasing the speed of evaluations.

    I also like the point you raise about the system being complicated and whether this hampers the ability of requesters to understand it. It makes me wonder if a small tutorial might help requesters learn the system's capabilities in a systematic manner, which would perhaps reduce the complexity of its use. I do, however, agree that this would likely increase the time requesters need to invest compared to having a simpler system in place.
