SUMMARY
In this work, the authors perform a study that extends to crowd work platforms beyond Amazon's Mechanical Turk (AMT). They state that theirs is the first research to attempt a more thorough study of the various crowd platforms that exist. Given that prior work has focused mainly on Mechanical Turk, many of the issues faced by both requesters and workers stem from the idiosyncrasies of this particular platform. Thus, the authors aim to broaden the horizon for crowd work platforms in general and present a qualitative analysis of platforms such as ClickWorker, CloudFactory, CrowdComputing Systems, CrowdFlower, CrowdSource, MobileWorks, and oDesk. The authors identify key criteria for distinguishing between the various crowd platforms, as well as key assumptions in crowdsourcing that may be caused by a narrow vision of AMT.
The limitations of AMT, as described by the authors, include inadequate quality control (caused by a lack of gold standards and a lack of support for complex tasks), inadequate management tools (caused by missing details about workers' skills and expertise, and by little attention to worker ethics and working conditions), missing support for detecting fraudulent activity, and a lack of automated tools for routing tasks. To compare and assess the seven platforms selected for this study, the authors focus on four broad categories: quality concerns, poor management tools, missing support to detect fraud, and lack of automated tools. These broad categories further map to various criteria, such as:
- the distinguishing features of each platform;
- whether the platform maintains its own workforce or relies on other sources for its workers, and whether it allows for or offers a private workforce;
- the amount and type of demographic information the platform provides;
- platform support for routing tasks;
- support for effective and efficient communication among workers;
- the incentives the platform provides;
- organizational structures and processes for quality assurance;
- automated algorithms that assist human workers; and
- the existence of an ethical environment for workers.
REFLECTION
I found it interesting to learn that prior research in this field was done mainly using AMT. I agree that research performed with AMT as the only crowd platform would have led to conclusions biased by this narrow vision of crowd platforms in general. I believe the qualitative analysis in this paper is an important contribution to the field at large, as a better awareness of the distinguishing features of each platform will help future researchers select the one best suited to their task at hand. I also think the analogy to the BASIC programming language aptly captures the motivation for the study performed in this paper.
I also found the categories selected by the authors to be interesting and relevant for requesters choosing a platform. However, I think it would also have been interesting (as a complementary study) for the authors to include the reasons why a crowd worker might join a certain platform; this would give a more holistic perspective and insight into crowd work platforms beyond Mechanical Turk. For example, the book Ghost Work included information about platforms such as Amara, UHRS, and LeadGenius in addition to AMT. Such a study, coupled with a list of AMT's limitations from a worker's perspective and a similar set of platform-assessment criteria, would have been interesting.
QUESTIONS
- Considering that many of the other crowd platforms, such as CrowdFlower (2007) and ClickWorker (2005), existed before the publication of this work, is there any specific reason that prior crowd work research did not explore any of these platforms? Was there some hindrance to the usage and study of these platforms?
- The authors mention that one of AMT's limitations was its poor reputation system. This made me wonder: why did AMT not take any measures to remedy it?
- Why is it that AMT's workforce is concentrated in the U.S. and India? Do these different platforms have distinguishing factors that cause certain demographics to be more interested in one over another?
- The paper mentions that oDesk provides payroll and health-care benefits to its workers. Does this additional cost make requesting on oDesk more expensive? Are requesters willing to pay a higher fee to ensure such benefits exist for crowd workers?
I do agree that an extension to this study examining workers' reasons for choosing a particular platform would give an interesting perspective. As for question 1, my guess is simply that researchers had enough data from the platforms they studied that adding more platforms would have been too much effort for minimal return, since those platforms seemed to cover the breadth of what crowd work can offer. That is just a guess, though. Perhaps CrowdFlower and ClickWorker had interesting offerings that would have made them worth including after all; I don't know.
For question 3, though, the Ghost Work book mentioned that the likely reason the U.S. and India have the highest numbers is a combination of language skill (English is predominant) and, more importantly, the fact that AMT actually pays workers in those countries money instead of Amazon gift cards. Those are pretty compelling reasons for more people in those areas to work for AMT, compared to other countries where AMT won't pay its workers cold hard cash.