01/29/20 – Runge Yan – Beyond Mechanical Turk

An analysis of Amazon Mechanical Turk and other paid crowdsourcing platforms

Ever since the rise of AMT, research and applications have focused on this prominent crowdsourcing platform. With the development of similar platforms, some concerns about the use of AMT have been addressed in various ways. This paper reviews AMT's limitations and compares solutions across seven other popular platforms: ClickWorker, CloudFactory, CrowdComputing Systems, CrowdFlower, CrowdSource, LeadGenius, and oDesk.

AMT's limitations are presented in four categories: it falls short in quality control, management tools, fraud prevention, and automated tools. These limitations are then mapped to assessment criteria so that each platform's solutions can be examined in detail.

These criteria include contributor identity and skill, extra workload management, complex-task support, and quality control on the requester side, along with generalized qualifications, task collaboration, and task recommendation on the platform side. Comparing each platform against AMT points to areas for future research. The paper's method could also be improved by setting an alternative platform, rather than AMT, as the baseline.

I tried to work on a platform…

I’ve been thinking about this since I tried to work on Amazon Mechanical Turk and CrowdFlower (which I believe has since been renamed Figure Eight). How does a requester post a task? The input and output formats the platform expects may not match the data the requester has. From what I can see, most requesters have to write the transformation code themselves, though the platforms are starting to help here.
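To make that concrete, here is a minimal sketch of the kind of transformation code a requester might write, reshaping a spreadsheet of LinkedIn-verification tasks (like the one I describe below) into a one-task-per-line upload file. The column names, output fields, and file names are my own assumptions for illustration, not any platform's actual schema.

```python
import csv
import json

def rows_to_tasks(csv_path, out_path):
    """Convert a requester's CSV into one JSON task per line (JSONL)."""
    # Hypothetical input columns: name, company, position
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        tasks = [
            {
                "instruction": "Check this person's LinkedIn profile.",
                "data": {
                    "name": row["name"],
                    "company": row["company"],
                    "position": row["position"],
                },
                "questions": [
                    "Still working at the same company?",
                    "If so, changed to another position?",
                ],
            }
            for row in reader
        ]
    # Hypothetical output format: one JSON object per line
    with open(out_path, "w", encoding="utf-8") as f:
        for task in tasks:
            f.write(json.dumps(task) + "\n")

rows_to_tasks("contacts.csv", "tasks.jsonl")
```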

Both platforms require identification through a credit card, and AMT also requires an SSN. I’m able to use Figure Eight now, but AMT rejected my signup; I have a relatively new SSN and credit record, which is probably the reason. Although CrowdFlower has come into people’s sight and is mentioned more than before, the difference in scale and functionality between the two platforms is easy to spot in their websites’ layout and structure.

Figure Eight provided a basic task for me to start: given a person’s name, current company, and position, my job was to go to their LinkedIn profile and verify two things: are they still working at the same company, and if so, have they changed to another position? And how many positions are they currently active in?

This should be a simple task, even for someone who isn’t familiar with LinkedIn. The reward is relatively low, though: for 10 correct answers I got 1 cent (I think I’m still in an evaluation period). More pay is on the way if I work in the recommended manner, i.e., try out several simple tasks in different categories, then take on more complex tasks, and so on.

Still, I found myself quitting after I had made 10 cents. I’m not sure whether it’s because I was too casual on the sample quiz, where I got 8 correct out of 10, and they decided to give me a longer trial. Compared to several fun-oriented tasks I’ve tried, the experience on Figure Eight was not so welcoming, as I see it.

Back to the analysis. One example represents many of the dilemmas in this tripartite workspace: AMT doesn’t track workers’ profiles, while oDesk offers public worker profiles showing their identities, skills, ratings, etc. It’s hard to maintain a platform where workers can switch between these two options, since identification is preferred for some tasks but not for others, and this may deter requesters from posting their needs.

Questions

Do platforms cooperate? How could the good solutions that already exist be combined to improve a platform, or to build a better one?

How has AMT dominated crowdsourcing for so long without any other platform catching up? What have been the most significant improvements to AMT in recent years?
