Summary:
“Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms” by Donna Vakharia and Matthew Lease discusses the limitations of Amazon Mechanical Turk (AMT) and presents a qualitative analysis of newer vendors that offer different models for achieving quality crowd work. AMT was one of the first platforms to offer paid crowd work. However, nine years after its launch, it remains in beta, hampered by limitations such as failing to account for the skill, experience, and ratings of workers and employers, and by minimal infrastructure that does not support collecting analytics. Several platforms, including ClickWorker, CloudFactory, CrowdComputing Systems (now WorkFusion), CrowdFlower, CrowdSource, MobileWorks (now LeadGenius), and oDesk, have tried to mitigate these limitations through more advanced workflows that ensure quality work from crowd workers. The authors identify four major limitations of AMT: (i) inadequate quality control, (ii) inadequate management tools, (iii) missing support for fraud prevention, and (iv) lack of automated tools. The authors also list several criteria for qualitatively assessing other platforms: (i) key distinguishing features of the platform, (ii) source of the workforce, (iii) worker demographics, (iv) worker qualifications and reputations, (v) recommendations, (vi) worker collaboration, (vii) rewards and incentives, (viii) quality control, (ix) API offerings, (x) task support, and (xi) ethics and sustainability. These criteria prove useful for a thorough comparison of different platforms.
Reflections:
One of the major limitations of AMT is that there are no pre-defined tests to check the quality of a worker. In contrast, other platforms ensure that they test their workers in some way before assigning them tasks. However, these tests might not always reflect the ability of the workers: they need to be designed with the task in mind, which makes standardization a big challenge. Several platforms also maintain their own workforce. This can have both positive and negative impacts. On the positive side, the platforms can thoroughly vet their workers; on the negative side, this might limit the diversity of the workforce. Another drawback of AMT is that workers appear interchangeable, as there is no way to distinguish one from another. Other platforms use badges to display worker skills and leaderboards to rank their workers. This can lead to an unequal distribution of work, which might be merit-based, but a deeper analysis of the ranking algorithms is needed to ensure there is no unwanted bias in the system. Some platforms employ automated tools to perform repetitive and monotonous tasks, which brings its own set of challenges. As machines become more “intelligent”, humans need to develop better and more specialized skills in order to remain useful. More research needs to be done to better understand the workings and limitations of such “hybrid” systems.
Questions:
1. With crowd platforms employing automated tools, it is interesting to discuss whether these platforms can still be categorised as crowd platforms.
2. This paper offered a qualitative analysis of crowd platforms. Is there a way to quantitatively rank these platforms? Could the same criteria be used?
3. Are there certain minimum standards that every crowd platform should adhere to?