While crowd work has the potential to support a flexible workforce and leverage geographically distributed expertise, the current landscape of crowd work is often associated with negative attributes such as meager pay and a lack of benefits. This paper proposes changes to improve the entire experience in this landscape, drawing on organizational behavior, distributed computing, and feedback from workers to create a framework for future crowd work. The aim is a framework that helps build a culture of crowd work that is more attractive to requesters as well as workers, and that can support more complex, creative, and highly valued work. The platform should be capable of decomposing tasks, assigning them appropriately, and motivating workers, and should have a structured workflow that enables a collaborative work environment. Quality assurance also needs to be ensured. Creating career ladders, improving task design for better clarity, and facilitating learning are key themes that emerged from this study; improvements along these themes would help create a work environment conducive to both requesters and workers. Motivating workers, creating communication channels between requesters and workers, and providing feedback to workers are all means to achieve this goal.
Since the authors were requesters themselves, it was nice to see that they sought the perspectives of current workers, taking both parties' viewpoints into account before constructing the framework. An interesting comparison is made between the crowdsourcing market and a loosely coupled distributed computing system; this analogy helped build the framework by drawing on solutions developed for similar problems in the distributed computing space. I liked the importance given to feedback and learning, which are components of the framework. I feel that feedback is extremely important for improving oneself, and it is not prevalent in the current ecosystem. As for learning, personal growth is essential in any working environment, and a focus on learning would facilitate self-improvement, which in turn would help workers perform subsequent tasks better. As a result, requesters benefit as well, since the crowd workers become more proficient at their work. I particularly found the concept of intertwining AIs guiding crowds and crowds guiding AIs extremely interesting. The thought of leveraging the strengths of both AI and humans to strengthen the other is intriguing and has great potential if utilized meaningfully.
- How can we shift the mindset of current requesters who get their work done for meager pay, so that they actually invest in workers by giving valuable feedback and spending time ensuring the requirements are well understood?
- What are some interesting ways that can be employed to leverage AIs guiding crowds?
- How can we prevent the disruption of quality by a handful of malicious users who collude to agree on wrong answers to cheat the system? How can we build a framework of trust that is resistant to malicious workers and requesters who can corrupt the system?
Integrity, some sort of inner voice, or a pledge to always pay fairly can go a long way in making sure requesters pay a fair wage. There has been interesting work done on this. Most recently, https://fairwork.stanford.edu/, and this is the associated paper: https://hci.stanford.edu/publications/2019/fairwork/fairwork-hcomp2019.pdf
It’s just one line of code to ensure Turkers get the minimum wage. However, the issue is not entirely black and white. If it’s the minimum wage, which one: federal ($7.25/hr) or California ($15/hr)? This gets further complicated as you move across countries. Many researchers also circulated a pledge form on Twitter some months back. It’s good to get the conversation started.
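To make the idea concrete, here is a minimal sketch of the kind of calculation behind Fair Work's pay top-up: given a task's base pay and how long it actually took, compute the bonus needed to reach a target hourly wage. The function name `fair_bonus` and the default $15/hr target are my own assumptions for illustration, not the actual Fair Work implementation.

```python
# Hypothetical sketch (not Fair Work's actual code): top up a task's pay
# so the effective hourly rate meets a chosen minimum wage.

def fair_bonus(base_pay: float, work_seconds: float, min_wage: float = 15.00) -> float:
    """Return the bonus (in dollars) so that base_pay + bonus >= min_wage * hours worked."""
    owed = min_wage * (work_seconds / 3600.0)  # total pay owed at the target wage
    return round(max(0.0, owed - base_pay), 2)  # never a negative bonus

# A $0.50 task that took 10 minutes at a $15/hr target owes a $2.00 bonus:
print(fair_bonus(0.50, 600))  # 15 * (600/3600) - 0.50 = 2.0
```

Even this toy version surfaces the ambiguity above: the whole calculation hinges on which `min_wage` you plug in, which is exactly the federal-vs-state-vs-country question.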
Prolific is a platform that ensures a minimum of $6.50/hr: https://www.prolific.co/ Here’s a blog post on the same: https://medium.com/@ekadamer/stop-using-mturk-for-research-4b9c7b4f6f56
I feel the first step for AIs guiding crowds should be to help recommend tasks to crowds. If AI recommendations can be used to propel “pointless” videos on a content-sharing platform (https://www.nytimes.com/2020/01/29/magazine/comedy-written-for-the-machines.html), they certainly can (and should) be used to recommend tasks on a platform that sees so much activity every day. This would help offload the cost of being hypervigilant.
I am not sure what Prolific’s internal mechanism is, but they certainly recommend tasks to their workers based on their interests (and maybe previous tasks).
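As a rough illustration of what such task recommendation could look like (purely my own sketch; I have no idea how Prolific actually does it), a naive content-based approach scores each open task by how many of its tags overlap with a worker's stated interests:

```python
# Naive content-based task recommender sketch (hypothetical, not any platform's
# actual mechanism): rank tasks by tag overlap with a worker's interests.

def recommend(tasks, interests, top_k=3):
    """tasks: list of (task_id, set_of_tags); interests: set of tags.
    Returns up to top_k task ids with at least one matching tag."""
    scored = [(len(tags & interests), task_id) for task_id, tags in tasks]
    scored.sort(key=lambda pair: (-pair[0], pair[1]))  # best overlap first, ties by id
    return [task_id for score, task_id in scored[:top_k] if score > 0]

tasks = [
    ("t1", {"survey", "psychology"}),
    ("t2", {"image-labeling", "vision"}),
    ("t3", {"survey", "economics"}),
]
print(recommend(tasks, {"survey"}, top_k=2))  # → ['t1', 't3']
```

A real system would presumably also fold in signals like past task completions and approval rates, which is where the "and maybe previous tasks" guess comes in.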