Soylent: A Word Processor with a Crowd Inside: Reflection

The authors present Soylent, a word processing interface that enables writers to call on crowd workers to shorten, proofread, and edit parts of their documents on demand. The authors hypothesized that crowd workers with a basic knowledge of written English could support both novice and expert writers. More importantly, they discuss architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces.

I believe the major contribution of this paper is not the specific systems it presents. Rather, the paper lays important groundwork for how crowdsourcing can be used to improve the quality of a product or service. Instead of viewing crowd workers as generators of large datasets, the authors break larger goals down into small, specific tasks whose results are recombined into a finished outcome. Their evaluations demonstrated the feasibility of crowdsourced editing and investigated questions of reliability, cost, wait time, and work time.

Through three components, the authors describe important elements of the interaction between crowd workers and requesters: Shortn, a text shortening service that cuts selected text down to 85% of its original length, typically without changing its meaning or introducing errors; Crowdproof, a human-powered spelling and grammar checker that finds problems Word misses, explains them, and suggests fixes; and the Human Macro, an interface for offloading arbitrary word processing tasks such as formatting citations or finding appropriate figures.
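The decomposition underlying these components is the paper's Find-Fix-Verify pattern: one set of workers flags problem spans, a second proposes fixes, and a third votes on them. Below is a minimal sketch of that flow as it might drive something like Shortn; `post_task`, the worker counts, and the 20% agreement threshold are hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
# A minimal sketch of a Find-Fix-Verify pipeline for a Shortn-like task.
# post_task() is a hypothetical stand-in for posting a task to a crowd
# market (e.g., Mechanical Turk) and collecting worker responses; the
# worker counts and agreement threshold here are illustrative.
from collections import Counter

def post_task(prompt, payload, n_workers):
    """Hypothetical: show `payload` with `prompt` to n_workers, return their answers."""
    raise NotImplementedError("stand-in for a real crowd market API")

def find_fix_verify(paragraph, n_find=10, n_fix=5, n_verify=5):
    # Find: independent workers mark spans of text that could be shortened.
    spans = post_task("Mark text that can be shortened.", paragraph, n_find)
    # Keep only spans flagged by at least 20% of workers, filtering out
    # low-effort responses without letting any one worker dominate.
    candidates = [s for s, c in Counter(spans).items() if c >= 0.2 * n_find]

    shortened = paragraph
    for span in candidates:
        # Fix: a second, independent set of workers proposes rewrites.
        rewrites = post_task("Rewrite this span more concisely.", span, n_fix)
        # Verify: a third set votes on the proposed rewrites; the
        # majority choice replaces the original span.
        votes = post_task("Pick the best rewrite.", rewrites, n_verify)
        best, _ = Counter(votes).most_common(1)[0]
        shortened = shortened.replace(span, best)
    return shortened
```

The design point the paper argues for is that separating finding from fixing keeps any single worker's laziness or overzealousness from propagating into the final text.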

An issue I feel is worth discussing here is the cost of crowd work: workers often earn very little relative to their qualifications and effort. With work broken into small chunks to prevent too much or too little interaction with any one subset of MTurk workers, there is a risk that much other work becomes devalued as well.

Moreover, as described in the paper, the use of MTurk workers for document correction and completion seems counterproductive. Businesses would avoid it to protect data privacy, students would not be permitted to use it, freelancers cannot hire freelancers to do their freelancing, specialists need domain-specific knowledge, and regular home users would rather not pay for such a service. Who, then, is this service for? This paper reads more as a proof of concept for breaking tasks down for crowd workers, applied to work that could be automated in the coming years. Grammarly and other similar services come to mind.

Finally, this study makes me question the concept and value of work, as there is evidence that crowd workers' skills and efforts can be devalued and their welfare underrepresented. Kittur et al. highlight interesting challenges and opportunities in the field of crowd work that are worth reading [1].

  1. Kittur, A., Nickerson, J. V., Bernstein, M., Gerber, E., Shaw, A., Zimmerman, J., … & Horton, J. (2013). The future of crowd work. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (CSCW '13) (pp. 1301-1318). ACM.