Summary
In this paper, the authors aim to highlight and explore the features of online crowd work platforms other than Amazon Mechanical Turk (AMT), which have not been investigated by many researchers. They recognize AMT as a system that revolutionized data collection and processing, but one that also lacks crucial features such as quality control, automation, and integration.
The paper poses several questions about human computation and presents some answers. The questions relate to the current problems with AMT, the features of other platforms, and a way to assess each system.
The authors discuss the limitations of the AMT system, such as the lack of good quality control, the absence of good management tools, the lack of a process to prevent fraud, and the lack of a way to automate repeated tasks.
The paper defines criteria to evaluate or assess a crowd work platform. The assessment uses categories such as incentive programs, the quality measures used in the system, worker demographics and identity information, and worker skills or qualifications; the authors use these categories to compare and contrast different systems, including AMT.
The authors also review seven AMT alternatives: ClickWorker, CloudFactory, CrowdComputing Systems, CrowdFlower, CrowdSource, MobileWorks, and oDesk. They show the benefits of each system over AMT using the criteria mentioned above, and find that these systems offer significant improvements that make the entire process better and enable workers and requesters to interact more effectively.
The crowd-platform analysis done by the authors was, at the time the paper was written, the only work to compare different systems against a defined set of criteria, and they hope that it offers a good foundation for other researchers to build on.
All platforms are still missing features such as per-worker analytics that would provide visibility into the work done by each worker. There is also a lack of security measures to ensure that the systems are robust and can respond to adversarial attacks.
Reflection
I found the features missing from Amazon Mechanical Turk interesting, given the volume of the system's usage and also given Amazon's work today in cloud computing, its marketplace, and other areas where the quality of its work is well known.
I also found the technical details mentioned in the paper interesting. It seems to me that the authors got a lot of feedback from everyone involved in these systems.
I agree with the authors on the criteria mentioned in the paper for assessing crowdsourcing systems, such as pay and motivation, quality control, automation, and system integration.
The authors did not specify which system is the best in their opinion, or which system meets all of their criteria.
Questions
- Are there any other criteria that we could use to assess crowdsourcing systems?
- The authors didn't mention which system is the best. Is there a system that outperforms the others?
- Is there a reason why Amazon hasn't addressed these findings?