01/28/2020 | Palakh Mignonne Jude | The Future of Crowd Work

SUMMARY

This paper aims to define the future of crowd work in an attempt to ensure that future crowd workers can enjoy the same benefits currently enjoyed by full-time employees. The authors define a framework that accounts for various factors such as workflow, task assignment, and real-time response to tasks. The future that the paper envisions includes worker considerations, such as timely feedback and job motivation, as well as requester considerations, such as quality assurance and control and task decomposition. The research foci mentioned in the paper broadly cover the future of work processes, the integration of crowd work and computation, and support for the crowd workers of the future in terms of job design, reputation and credentials, and motivation and rewards. With respect to the future of crowd computation, the paper suggests hybrid human-computer systems that capitalize on the best of both human intelligence and machine intelligence. The authors mention two such strategies: crowds guiding AIs and AIs guiding crowds. As a set of future steps that can be undertaken to ensure a better environment for crowd workers, the authors describe three design goals: the creation of career ladders, improved task design through better communication, and the facilitation of learning.
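To make the "AIs guiding crowds" strategy concrete, below is a minimal sketch of one common form such a hybrid system can take: a machine model labels the items it is confident about and routes uncertain ones to crowd workers, whose answers are aggregated by majority vote. This is my own illustration rather than code from the paper, and `model_predict` and `ask_crowd` are hypothetical stand-ins for a real model and a real platform API.

```python
# Minimal sketch of an "AI guiding crowds" hybrid pipeline (illustrative only).
# The model handles high-confidence items; uncertain items are deferred to humans.

import random
from collections import Counter

CONFIDENCE_THRESHOLD = 0.9  # anything below this goes to crowd workers

def model_predict(item):
    """Hypothetical ML model: returns a (label, confidence) pair for an item."""
    # Stand-in logic: pretend short items are easy for the machine.
    return ("positive", 0.95) if len(item) < 20 else ("positive", 0.60)

def ask_crowd(item, num_workers=3):
    """Hypothetical platform call: gather one label per worker and return
    the majority vote, a basic quality-control step."""
    votes = [random.choice(["positive", "negative"]) for _ in range(num_workers)]
    return Counter(votes).most_common(1)[0][0]

def hybrid_label(items):
    labels = {}
    for item in items:
        label, confidence = model_predict(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            labels[item] = label            # machine is confident enough
        else:
            labels[item] = ask_crowd(item)  # defer to human judgment
    return labels

print(hybrid_label(["short text", "a much longer, more ambiguous piece of text"]))
```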

REFLECTION

I found it interesting to learn about the framework proposed by the authors to ensure a better working environment for crowd workers in the future. I liked the structure of the paper, wherein the authors gave a brief description of each research focus, followed by some prior work and then some potential research that could be performed in that area.

I particularly liked the set of steps that the authors proposed, such as the creation of a career ladder. I believe that the creation of such a ladder will help workers stay motivated, as they will have the ability to work towards a larger goal; promotions can be a good incentive to foster a better and more efficient working environment. I also found it interesting to learn how, oftentimes, the design of the tasks causes ambiguity that makes it difficult for crowd workers to perform their tasks well. I think that pilot-testing these designs with some of the better-performing workers (as indicated in the paper) is a good idea, as it will allow requesters to gain feedback on their task design; many requesters may not realize that their tasks are not as easy to understand as they believe.

QUESTIONS

  1. While talking about crowd-specific factors, the authors mention how crowd workers can leave tasks incomplete with fewer repercussions compared to workers in traditional organizations. Perhaps a common reputation system that maintains an employment history (associated with some common ID), along with recommendation letters and work histories, might help keep track of all the platforms with which a crowd worker has been associated, as well as their performance? (A rough sketch of such a record appears after this list.)
  2. Since the crowd workers interviewed were from Amazon Mechanical Turk alone, wouldn’t the responses collected as part of this study be biased? The opinions these workers give would be specific to AMT, and workers on other platforms might hold different opinions.
  3. Do any of these platforms perform thorough vetting of requesters? Have any measures been taken towards developing a better system to ensure that the tasks posted by requesters are not harmful or abusive in nature (CAPTCHA solving, reputation manipulation, etc.)?
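As a rough illustration of the cross-platform reputation idea in question 1, the sketch below ties work histories from multiple platforms to one common worker ID and aggregates an overall approval rate. This is my own sketch rather than a system described in the paper, and all class and field names are hypothetical.

```python
# Illustrative sketch of a cross-platform worker reputation record (question 1).
# All class and field names are hypothetical; no such shared system exists today.

from dataclasses import dataclass, field

@dataclass
class PlatformHistory:
    platform: str          # e.g. "Amazon Mechanical Turk"
    tasks_completed: int
    approval_rate: float   # fraction of submitted tasks accepted by requesters

@dataclass
class WorkerReputation:
    worker_id: str                                        # common ID shared across platforms
    histories: list = field(default_factory=list)         # one PlatformHistory per platform
    recommendations: list = field(default_factory=list)   # recommendation letters

    def overall_approval_rate(self):
        """Approval rate across platforms, weighted by tasks completed on each."""
        total = sum(h.tasks_completed for h in self.histories)
        if total == 0:
            return 0.0
        return sum(h.approval_rate * h.tasks_completed for h in self.histories) / total

# Usage: a single record follows the worker from platform to platform.
record = WorkerReputation(worker_id="worker-42")
record.histories.append(PlatformHistory("Amazon Mechanical Turk", 1200, 0.97))
record.histories.append(PlatformHistory("Appen", 300, 0.92))
print(record.overall_approval_rate())  # 0.96
```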

One thought on “01/28/2020 | Palakh Mignonne Jude | The Future of Crowd Work”

  1. I think that crowdsourcing is definitely a different experience from platform to platform, and though the interviews were only with Amazon MTurk workers, I would say the results might still be representative, given that MTurk is the most popular crowdsourcing platform out there.
    As for vetting requesters, we touched upon that lightly during class last week. MTurk doesn’t perform any form of vetting for requesters, as far as I know, but some other platforms, like Alegion and Appen, do; their tasks are mainly data labeling, though.
