01/29/20 – The Future of Crowd Work – Subil Abraham

Reading: Aniket Kittur, Jeffrey V. Nickerson, Michael Bernstein, Elizabeth Gerber, Aaron Shaw, John Zimmerman, Matt Lease, and John Horton. 2013. The Future of Crowd Work. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (CSCW ’13), 1301–1318. https://doi.org/10.1145/2441776.2441923

What can we do to make crowd work better than its current state of simple tasks, and allow more complexity and satisfaction for the workers? The paper tries to provide a framework for improving crowd work in that direction. It does this by framing the problem in terms of 12 research directions that need to be studied and built upon. These research foci are envisioned to promote the betterment of the current, less than stellar, sometimes exploitative nature of crowd work and turn it into something “we would want our children to participate” in.

I like their parallels to distributed computing because crowd work really is like that: trying to coordinate a bunch of people to complete some larger task by combining the results of smaller tasks. I work on distributed things, so I appreciate the parallel because it fits my mental framework. I also find it interesting that one of the suggested ways of doing quality control is to observe the worker’s process rather than just evaluating the output. It makes sense: evaluating the process allows the requester to give guidance on what the worker is doing wrong and help improve their process, whereas by looking only at the output you can’t know where things went wrong and can only guess.

I also think their suggestion that crowd workers could move up to be full employees is somewhat dangerous, because it seems to incentivize the wrong things for companies. I’m imagining a scenario where a company is built entirely on high-level crowd work and advertises that you have opportunities to “move up”, “make your own hours”, and that “hustle will get you to the top”, where the reward is job security. I realize I just described what the tenure track may be like for an academic. But that kind of incentive structure seems exploitative and wrong to me. This kind of setup seems normal in academia because it has existed there for a long time, and prospective professors accept it because they are single-mindedly determined (and somewhat insane) enough to see it through. But I would hate for something like that to become the norm everywhere else.

  1. Did anyone feel like there was any avenue that wasn’t addressed? Or did the 12 research foci fully cover every aspect of potential crowd work research?
  2. Do you think the idea of moving up to employee status on crowd work platforms as a reward for doing a lot of good work is a good idea?
  3. What kind of off-beat innovations can we think of for new kinds of crowd platforms? Just as a random example – a platform for crowds to work with other crowds, like one crowd assigns tasks for another crowd and they go back and forth.


01/29/20 – Affordance-Based Framework for Human Computation and Human-Computer Collaboration – Subil Abraham

Reading: R. Jordon Crouser and Remco Chang. 2012. An Affordance-Based Framework for Human Computation and Human-Computer Collaboration. IEEE Transactions on Visualization and Computer Graphics 18, 12: 2859–2868. https://doi.org/10.1109/TVCG.2012.195

This paper creates a summary of data visualization innovations as well as more general human-computer collaboration tools for interpreting data and drawing conclusions from it. The goal of the paper is to create a common language for categorizing these tools, and thereby provide a way of comparing them and understanding exactly what is needed for a particular situation rather than relying on researcher intuition alone. The authors set up a framework in terms of affordances: the opportunities for action that a human or computer is capable of taking up in a given environment. By framing things in terms of affordances, we are able to identify how a human and/or computer can contribute to the goal of a given task, as well as compare a system with other systems in terms of their affordances.

The idea of categorizing human-computer collaborations in terms of affordances is certainly an interesting and intuitive one. Framing the characteristics of the different tools and software we use in these terms is a useful way of looking at things. However, as useful as the framework is, having read a little bit about function allocation, I don’t see how different affordances really are from function allocation. They both seem to be saying the same thing, in my view. The list of affordances is more comprehensive than Fitts’s HABA-MABA list, but they both seem to convey the same information. Perhaps I do not have the necessary breadth of knowledge to see the difference, but the paper doesn’t make a convincing argument that is easy for an outsider to this field to understand.

Questions for discussion:

  1. How effective is the affordance framework as a system of classification? What value does it actually provide besides being one more set of standards? (relevant xkcd: https://m.xkcd.com/927/)
  2. There is a seemingly clear separation between human and machine affordances. But human adaptability seems to be a third kind of affordance, a hybrid affordance where a machine action is used to spark human ingenuity. Does that seem valid, or would you say that adaptability falls clearly into one of the two existing categories?
  3. Now that we have a language to talk about this stuff, can we use these different affordances as building blocks to create new applications? What would that look like? Or are we limited to identifying an application by its affordances only after its creation?


01/29/20 – Human Computation: A Survey and Taxonomy of a Growing Field

Summary of the Reading

This paper is a survey of the research in the field of human computation. The paper aims to classify human computation systems so that the similarities between different projects and the holes in current research can be seen more clearly. The paper also explores related fields like crowdsourcing.

The paper starts by defining human computation as “a paradigm for utilizing human processing power to solve problems that computers cannot yet solve.” The paper then goes on to discuss the differences between human computation and crowdsourcing, social computing, data mining, and collective intelligence.

The paper then classifies human computation systems along six dimensions: motivation, quality control, aggregation, human skill, process order, and task-request cardinality. Each of these dimensions has several discrete values. Putting all of these dimensions together allows for the classification of any arbitrary human computation system. The paper also provides examples of systems that take various values on each dimension.
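To make the taxonomy concrete, here is a minimal sketch of how a system could be encoded along these six dimensions. The value strings (and the reCAPTCHA classification) are my own illustrative assumptions, not the paper’s exact taxonomy labels.

```python
# A minimal sketch of encoding a human computation system along the six
# dimensions summarized above. The values shown are illustrative
# assumptions, not the paper's exact taxonomy values.
from dataclasses import dataclass

@dataclass(frozen=True)
class HumanComputationSystem:
    name: str
    motivation: str                 # e.g., "pay", "altruism", "enjoyment"
    quality_control: str            # e.g., "redundancy", "reputation"
    aggregation: str                # e.g., "collection", "voting"
    human_skill: str                # e.g., "visual perception"
    process_order: str              # e.g., "computer-worker-requester"
    task_request_cardinality: str   # e.g., "many-to-many"

# Hypothetical classification of a familiar system, for illustration only.
recaptcha = HumanComputationSystem(
    name="reCAPTCHA",
    motivation="byproduct of another task",
    quality_control="agreement with known answers",
    aggregation="agreement across workers",
    human_skill="visual perception",
    process_order="computer-worker-requester",
    task_request_cardinality="many-to-many",
)

def shares_dimension(a: HumanComputationSystem,
                     b: HumanComputationSystem, dim: str) -> bool:
    """Compare two systems on a single dimension, e.g. to find related work."""
    return getattr(a, dim) == getattr(b, dim)
```

Comparing systems dimension by dimension like this is also exactly the kind of multi-dimensional comparison I wish the paper had included (see below).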

Reflections and Connections

I think this paper provides a much-needed tool for further human computation and crowdsourcing research. The first step to understanding something is being able to classify it. This tool will allow current and future researchers to classify human computation systems so that they can apply existing research to similar systems, and also see where existing research falls short and where future work should focus.

This research also provides an interesting perspective on current human computation systems: we can see how they compare to each other, what each system does differently, and what they have in common.

I also like the malleability of the classification system. The authors say in the future work section that the system is very easy to add to: researchers who continue this work could add values to each of the dimensions to classify existing systems more precisely, or to accommodate new kinds of human computation systems as they are invented. There are a lot of good opportunities for growth from this project.

One thing I thought this paper was missing is a direct comparison of different human computation systems on more than one dimension. The paper uses human computation systems as examples of the various values for each dimension, but it never puts the dimensions together to compare whole systems against each other. I think this would have added a lot to the paper, and it would also make for a great piece of future work. The idea is actually very similar to the other paper from this week’s bunch, “Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms”, and such a comparison would build on both of these papers.

Questions

  1. Do you think the classification system presented in this paper has enough dimensions? Does it have too many?
  2. What is one application you see for this classification system?
  3. Do you think this classification system will help crowdsourcing platforms deal with some of their issues?


01/28/2020 | Palakh Mignonne Jude | The Future of Crowd Work

SUMMARY

This paper aims to define the future of crowd work in an attempt to ensure that future crowd workers will share the same benefits as those currently enjoyed by full-time employees. The authors define a framework keeping in mind various factors such as workflow, assignment of tasks, and real-time response to tasks. The future that the paper envisions includes worker considerations, such as providing timely feedback and sustaining job motivation, as well as requester considerations, such as quality assurance and control and task decomposition. The research foci mentioned in the paper broadly consider the future of work processes, the integration of crowd work and computation, and support for the crowd workers of the future in terms of job design, reputation and credentials, and motivation and rewards. With respect to the future of crowd computation, the paper suggests hybrid human-computer systems that capitalize on the best of both human and machine intelligence; the authors mention two such strategies – crowds guiding AIs and AIs guiding crowds. As a set of future steps to ensure a better environment for crowd workers, the authors describe three design goals – creating career ladders, improving task design through better communication, and facilitating learning.

REFLECTION

I found it interesting to learn about the framework proposed by the authors to ensure a better working environment for crowd workers in the future. I like the structure of the paper, wherein the authors give a brief description of each research focus, followed by some prior work and then some potential research that could be performed in that focus.

I particularly liked the set of steps that the authors proposed, such as the creation of a career ladder. I believe that the creation of such a ladder will help workers stay motivated, as they will have the ability to work towards a larger goal, and promotions can be a good incentive to foster a better and more efficient working environment. I also found it interesting to learn how often the design of the tasks causes ambiguity, which makes it difficult for the crowd workers to perform their tasks well. I think that running sample tests of these designs with some of the better-performing workers (as indicated in the paper) is a good idea, as it will allow the requesters to get feedback on their task design; many requesters may not realize that their tasks are not as easy to understand as they believe.

QUESTIONS

  1. While talking about crowd-specific factors, the authors mention how crowd workers can leave tasks incomplete with fewer repercussions than in traditional organizations. Would a common reputation system (tied to some common ID) that maintains employment histories, recommendation letters, and performance records across all the platforms a crowd worker has been associated with help address this?
  2. Since the crowd workers interviewed were from Amazon Mechanical Turk alone, wouldn’t the responses collected from the workers as part of this study be biased? The opinions these workers give would be specific to AMT alone and might differ from those of workers on other platforms.
  3. Do any of these platforms perform a thorough vetting of requesters? Have any measures been taken to move towards a better system for ensuring that the tasks posted by requesters are not harmful/abusive in nature (CAPTCHA solving, reputation manipulation, etc.)?


01/29/2020 – Bipasha Banerjee – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

Summary

The paper elaborates on an affordance-based framework for human computation and human-computer collaboration. It was published in 2012 in IEEE Transactions on Visualization and Computer Graphics. Affordances are defined as “opportunities provided to an organism by an object or environment”. The authors reviewed 1,271 papers in the area and formed a collection of 49 documents representing state-of-the-art research work, which they grouped into machine- and human-based affordances.

Under human affordances, they talk about the skills that humans have to offer, namely visual perception, visuospatial thinking, audiolinguistic ability, sociocultural awareness, creativity, and domain knowledge. Under machine affordances, they discuss large-scale data manipulation, collecting and storing large amounts of data, efficient data movement, and bias-free analysis. There is also a separate case where a system makes use of multiple affordances, like the reCAPTCHA and PatViz projects. They include some possible extensions, such as human adaptability and machine sensing. The paper also describes the challenges in measuring the complexity of visual analytics and the best way to measure work.

Reflection

Affordance is a new concept to me. It was interesting how the authors defined human versus machine affordance-based systems, along with systems that make use of both. Humans have special abilities that outperform machines, namely creativity and comprehension. Nowadays, machines have the capability to classify data, but this requires a lot of training samples; recent neural network-based architectures are “data hungry”, and using such systems is extremely challenging when properly labelled data is lacking. Additionally, humans have a strong capability for perception: distinguishing audio, images, and video is easy for them. Platforms like Amara take advantage of this and employ crowd workers to caption videos. Humans are also effective when it comes to domain knowledge. Jargon specific to a community, e.g., chemical names or legal and medical terms, is difficult for machines to comprehend. Named entity recognizers help machines in this respect, but their error rates are still high.

The paper does succeed in highlighting the positives of both kinds of systems. Humans are good at many things, as mentioned before, but are often prone to error. This is where machines outperform humans and can be used effectively. Machines are good when dealing with large quantities of data, and machine-learning algorithms are useful for classifying or clustering data and for other services as necessary. Additionally, the machine’s lack of personal perception can be a plus, since humans tend to be influenced by their own opinions; for a task with a political angle, it would be extremely difficult for a human to remain unbiased. Hence, humans and machines each have a unique advantage over the other, and it is the task of the researcher to utilize them effectively.
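As a small illustration of the named-entity-recognition point, the sketch below runs a general-purpose NER model over a sentence containing domain jargon. It assumes the spaCy library and its en_core_web_sm model are installed; a real chemical or legal pipeline would need a custom-trained model, and the gaps the general model leaves are exactly where human domain knowledge comes in.

```python
# Minimal sketch: a general-purpose NER pass as an example of a machine
# affordance (fast, large-scale text processing). Domain jargon such as
# chemical or legal terms usually needs a custom model; the errors here
# are where human domain knowledge would still be required.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

doc = nlp("Pfizer reported results for its acetaminophen trial in New York.")
for ent in doc.ents:
    print(ent.text, ent.label_)

# A drug name like "acetaminophen" may well be missed or mislabeled by the
# general model, which is the gap a human reviewer (or crowd worker) fills.
```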

Questions

  1. How to effectively decide which affordance is the best for the task at hand? Human or machine?
  2. How to evaluate the effectiveness of the system? Is there any global evaluation metric that can be implemented?
  3. When using both humans and machines, how do we divide the task between them effectively?


01/28/2020 | Palakh Mignonne Jude | Beyond Mechanical Turk: An Analysis Of Paid Crowd Work Platforms

SUMMARY

In this work, the authors perform a study that extends to crowd work platforms beyond Amazon’s Mechanical Turk. They state that this is the first research that has attempted a more thorough study of the various crowd platforms that exist. Given that prior work has mainly focused on Mechanical Turk, a large number of the issues faced by both requesters and workers have been due to the idiosyncrasies of this platform in particular. Thus, the authors aim to broaden the horizon for crowd work platforms in general and present a qualitative analysis of platforms such as ClickWorker, CloudFactory, CrowdComputing Systems, CrowdFlower, CrowdSource, MobileWorks, and oDesk. The authors identify key criteria to distinguish between the various crowd platforms, as well as key assumptions in crowdsourcing that may be caused by a narrow vision of AMT.

The limitations of AMT, as described by the authors, include inadequate quality control (caused by a lack of gold standards and a lack of support for complex tasks), inadequate management tools (caused by a lack of detail about workers’ skills and expertise, and a lack of focus on worker ethics and conditions), missing support to detect fraudulent activities, and a lack of automated tools for routing tasks. In order to compare and assess the seven platforms selected for this study, the authors focus on four broad categories – quality concerns, poor management tools, missing support to detect fraud, and lack of automated tools. These broad categories further map to various criteria: the distinguishing features of each platform; whether the platform maintains its own workforce or relies on other sources for its workers, and whether it allows for or offers a private workforce; the amount and type of demographic information provided by the platform; platform support for routing of tasks; support for effective and efficient communication among workers; the incentives provided by the platform; organizational structures and processes for quality assurance; the existence of automated algorithms to help human workers; and the existence of an ethical environment for workers.

REFLECTION

I found it interesting to learn that prior research in this field was done mainly using AMT. I agree that research performed with only AMT as the crowd platform would have led to conclusions biased by this narrow vision of crowd platforms in general. I believe that the qualitative analysis performed by this paper is an important contribution to the field at large, as it will help future researchers select the platform best suited for their task thanks to a better awareness of the distinguishing features of each of the platforms considered. I also think that the analogy to the BASIC programming language aptly describes the motivation for the study performed in this paper.

I also found the categories selected by the authors to be interesting and relevant for requesters who are considering which platform to choose. However, I think it may have also been interesting (as a complementary study) for the authors to include information about the reasons why a crowd worker might join a certain platform – this would give a more holistic perspective and an insight into these crowd working platforms beyond Mechanical Turk. For example, the book Ghost Work included information about platforms such as Amara, UHRS, and LeadGenius in addition to AMT. Such a study, coupled with a list of limitations of AMT from the workers’ perspective as well as a similar set of criteria for platform assessment, would have been interesting.

QUESTIONS

  1. Considering that many of the other crowd platforms, such as CrowdFlower (2007) and ClickWorker (2005), existed before the date of publication of this work, is there any specific reason that prior research on crowd work did not explore any of these platforms? Was there some hindrance to the usage and study of these platforms?
  2. The authors mention that one of the limitations of AMT was its poor reputation system – this made me wonder why AMT did not take any measures to remedy it.
  3. Why is it that AMT’s workforce is concentrated in the U.S. and India? Do the different platforms have certain distinguishing factors that cause certain demographics of people to be more interested in one over the other?
  4. The paper mentions that oDesk provides payroll and health-care benefits to its workers. Does this make requesting on oDesk more expensive due to this additional cost? Are requesters willing to pay a higher fee to ensure such benefits exist for crowd workers?


01/29/2020 – Bipasha Banerjee – The Future of Crowd Work

Summary

The paper discusses crowd work and was presented at CSCW (Computer-Supported Cooperative Work) in 2013. It proposes a framework that takes ideas from organizational behavior and distributed computing, along with workers’ feedback. The authors view the crowdsourcing platform as a distributed system in which each worker is analogous to a node; this framing helps in partitioning tasks the way parallel computing does. The paper also discusses how shared resources can be managed and allocated. It provides a deep analysis of the kind of work crowd workers end up doing, along with the positives and the negatives of such work.
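A toy sketch of that distributed-computing analogy, with a thread pool standing in for the crowd: a large labeling job is split into micro-tasks, completed in parallel, and the partial results are aggregated. This is purely illustrative and not code from the paper; on a real platform each micro-task would be posted as a separate HIT.

```python
# Illustrative sketch of the worker-as-node analogy: split a large task
# into micro-tasks, "assign" them to workers in parallel, and aggregate
# the partial results. Threads simulate the worker pool here.
from concurrent.futures import ThreadPoolExecutor

def label_image(image_id: str) -> dict:
    """Stand-in for a single micro-task a crowd worker would complete."""
    return {"image": image_id, "label": "cat"}  # placeholder result

def aggregate(results: list[dict]) -> dict:
    """Combine partial results, much as a reducer combines mapper outputs."""
    return {r["image"]: r["label"] for r in results}

images = [f"img_{i}.jpg" for i in range(10)]

with ThreadPoolExecutor(max_workers=4) as pool:  # four "workers" in the crowd
    results = list(pool.map(label_image, images))

print(aggregate(results))
```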

The paper outlines 12 research areas that form the authors’ model. These broadly cover the future of crowd work processes, crowd computation, and crowd workers. Each of the broad topics addresses various subtopics, from quality control to collaboration between workers. The paper also talks about how to create leaders in such systems, the importance of better communication, and the need for learning and assessment to be an integral part of such systems.

Reflection

It was an interesting read on the future of crowd work. The approach of defining the system as a distributed system was fascinating and a novel way to look at the problem. Workers do have the capability to act as “parallel processors”, which makes the system more efficient and enables intensive tasks (like application development) to be done effectively. Implementing theories from organizational behavior is interesting in that it allows the system to better manage and allocate resources. The authors address the various subtopics in depth, and it was a very informative read since they incorporated background work on each of the research areas. I will discuss some of the topics and problems that stood out to me.

Firstly, they spoke about processes. Assignment of work and its management turn out to be challenging tasks. In my opinion, a universal structure or hierarchy is not the way to go. For certain kinds of work, it is necessary to have a structure where hierarchy proves useful. Work like software development would benefit from a structure where the code is reviewed and the quality is assessed by a separate person. Such work also needs to be synchronous, as people might have tasks that depend on each other.

Secondly, the paper discussed the future of crowd computation. This included a discussion of AIs and how they can be used in the future to guide crowd work. AI has proved to be an important tool in recent years. Automatic text summarization could be used to help create “gold standards”. Similarly, other NLP techniques could be used to extract information, annotate, summarize, and provide other automatic services that integrate with the current human framework. This would create a human-in-the-loop system.
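A minimal sketch of what such a human-in-the-loop pipeline could look like: an automated component proposes a candidate, crowd judgments vote on it, and only well-supported candidates are promoted to the gold-standard set. The summarizer and the crowd responses here are mocked placeholders, not any real API.

```python
# Minimal human-in-the-loop sketch: a model proposes a candidate answer,
# crowd workers vote on it, and only well-supported candidates are
# promoted to the gold standard. The model and votes are mocked.
from collections import Counter

def model_propose(document: str) -> str:
    """Stand-in for an automatic summarizer/annotator."""
    return document.split(".")[0] + "."  # naive "summary": first sentence

def crowd_votes(candidate: str, n_workers: int = 5) -> list[bool]:
    """Stand-in for asking n workers whether the candidate is acceptable."""
    return [True, True, True, False, True]  # mocked judgments

def promote_to_gold(document: str, threshold: float = 0.8) -> str | None:
    candidate = model_propose(document)
    votes = crowd_votes(candidate)
    approval = Counter(votes)[True] / len(votes)
    return candidate if approval >= threshold else None

gold = promote_to_gold("Crowd work can scale annotation. It also raises fairness questions.")
print(gold)  # None if the crowd rejected the machine's proposal
```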

Lastly, the future of crowd workers is also an important topic to ponder. Crowd workers are often not compensated well, and requesters, in turn, are often delivered sub-par work. The paper mentions that background verification is not always done as thoroughly for such “on-demand workers” as it is for full-time employees, from transcripts to interviews. This is a challenge. However, on-demand workers could be validated the way Coursera validates its students: they could be asked to upload documents for tasks that require specialization. This verification is itself a task that could be carried out by contractors, or even posted as a Turk job.

Overall, this was an interesting read, and research should be conducted in each of these areas to see how the system and the work improve. It has the potential to create more jobs in the future, with recruiters being able to hire people instantaneously.

Questions

  1. The authors only considered AMT and oDesk to define the framework. Would other platforms (like Amara or LeadGenius) have greater or lesser issues that differ from the needs identified here?
  2. They mention the “oDesk Worker Diary”, which takes snapshots of workers’ computer screens. How are privacy and security addressed?
  3. Can’t credentials be verified digitally for specialized tasks?


01/29/20 – Lee Lisle – Human Computation: A Survey and Taxonomy of a Growing Field

Summary

In the paper, Quinn and Bederson reflect on the current state of human computation research and define a framework for current and future research in the field. They make sure to impart to the reader that human computation is not crowdsourcing, nor collective intelligence – rather, it is a space where human effort is applied to problems that computers may be able to solve in the future. They then define several dimensions along which to classify a human computation study: motivation (which can include pay or altruism, among others), quality control (how the study ensures reliable results), how the study aggregates the data, what human skill is used (visual perception, etc.), process order (how the tasks are deployed), and task-request cardinality (how many tasks are deployed for how many requests). Using these dimension definitions, the authors identify new research areas for growth, by pointing out uncombined dimensions or by creating new dimensions to explore.

Personal Reflection

I read this paper after reading the affordance survey on human computation and human-computer collaboration, and it was interesting to compare and contrast how the two papers approached very similar problems in different ways. This paper did a good job of defining dimensions rather than research areas. It was much easier to understand how one can change the dimensions of research, as a sort of toggle, to tackle the issues a project purports to solve.

Also, the beginning seemed to be on a tangent about what human computation is really defined as, but I thought this section helped considerably narrow the scope of what they wanted to define. I had thought of human computation and crowdsourcing as synonyms, so getting them separated early on was a good way of setting the scene for the rest of the paper.

This paper also opened my eyes to how wide the dimensions could be. For example, while I had known of a few methods for quality control, I hadn’t realized there were so many different options.

Lastly, I am very happy they addressed, in the conclusion, the social issues that are (in my opinion) plaguing this field of research. Treating these workers as faceless mercenaries is dehumanizing at best. I wish there were a little more interaction between the two parties than there currently is, but it is at least being thought about in these survey studies.

Questions

  1. What dimension do you think has the most promising potential for new growth, and why?
  2. Do you think you can start a new research project by just choosing a set of 6 choices (1 for each dimension) and then design a project?
  3. If a project has the same collection of dimensions as another proven study, is there merit in researching it? Or should it just work?
  4. Can you think of any study that might fit under two different discrete values of the same dimension? I.E., is there a many (studies) to one dimensional value relationship, or is it many to many?


01/29/20 – NAN LI – Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms

Summary:

Since AMT has long been the focus of research on crowdsourcing, the authors compared a series of platforms and their diverse capabilities to enrich future research directions. The aim of this work is to encourage more investigation and research using different sources, instead of focusing only on AMT. The authors review the related work, point out the AMT limitations and problems that have been criticized, and then define the criteria used to assess the platforms. The bulk of the paper is a detailed analysis and comparison of seven alternative crowd work platforms. Through this analysis and comparison with AMT, the authors identify approaches that multiple platforms have implemented, such as peer assessment, qualification tests, and leaderboards. They also find differences, such as the use of automated methods, task availability on mobile devices, and ethical worker treatment.

Reflections:

I think the most serious problem in this research area is the limited scope of investigation. Only by using a variety of resources can research results become credible and broadly applicable. Thus, I think it is very meaningful for the authors to make this comparison and investigation. This is what research should do: always explore, always compare, and stay critical. Besides, with the increase in popularity and in the number of users, AMT should pay more attention to improving its own platform, rather than staying at a level that merely meets current needs. Especially now that platforms of the same type are gradually growing and developing, they are attracting more users by offering better management services or developing reasonable regulation and protection mechanisms. Although AMT is still the biggest and most developed platform, it needs to learn from other platforms’ advantages.

In spite of that, the other platforms should also keep updating and try to create novel features to attract more users. A good way for them to improve their platforms is to always consider what the users’ requirements are. Hot topics, including ethical issues and labor protection, should be considered. Besides, how requesters can make good use of these platforms to improve their product quality is also worth considering.

A short path towards improvement is through discussion. This discussion should include the company, the client, the product development team, and even researchers. As for the companies, they should always ask for feedback from their users; this is a baseline for improving not only their platform and user experience but also their product. Companies should also discuss with each other, and even with researchers, to think about solutions to the current problems in their platforms. We cannot predict how things will go: will these platforms last long? How many of these jobs will still be available as technology develops? Even so, I believe it is worth working hard to make these platforms better.

Questions

  • Why are researchers more willing to do research on AMT?
  • Is there any way to persuade researchers to do research on other platforms?
  • For the platforms besides AMT, what is the main reason their users choose them instead of AMT?


01/29/20 – Sushmethaa Muhundan – Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms

In 2005, Amazon launched an online crowd work platform named Mechanical Turk, which was one of the first of its kind, and it gained momentum in this space. However, numerous other platforms offering the same service have come up since then. A vast majority of researchers in the field of crowd work have concentrated their efforts on Mechanical Turk and often ignore the alternative feature sets and workflow models provided by these other platforms. This paper deviates from that pattern and gives a qualitative comparison of 7 other crowd work platforms. The intent is to help enrich research diversity and accelerate progress by moving beyond MTurk. Since a lot of work has been inspired by the shortcomings of MTurk, the broader perspective is often lost and the alternative platforms are often ignored. This paper covers the following platforms: ClickWorker, CloudFactory, CrowdComputing Systems, CrowdFlower, CrowdSource, MobileWorks, and oDesk.

I feel that this paper encompasses different types of crowd work platforms and provides a holistic view, as opposed to just focusing on one platform. The different dimensions used to compare the platforms give us an overview of the differentiating features each platform provides. I agree with the authors that research on crowd work would benefit from diversifying its lens, and this paper would be a good starting point from that perspective.

Having only been exposed to MTurk and its limitations thus far, I was pleasantly surprised to note that many platforms offer peer reviews, plagiarism checks, and feedback. This not only helps ensure a high quality of work but also provides a means for workers to validate their work and improve. Opportunities are provided to enhance workers’ skill sets through a variety of training resources, like certifications and training modules. Badges are used to display a worker’s skill set, which helps promote the worker’s profile as well as helps the worker grow professionally. Many platforms display work histories, test scores, and areas of interest that guide requesters in choosing workers who match their selection criteria. A few platforms maintain payroll and provide bonuses for high-performing workers, which keeps the workers motivated to deliver high-quality results.

I really liked the fact that a few platforms are using automation to complete mundane tasks, thereby eliminating the need for human workers to do them. These platforms identify tasks that can be handled by automated algorithms, assign machine workers to those tasks, and use human judgment for the rest. This increases productivity and enables faster completion times.
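Here is a minimal sketch of that kind of hybrid routing, assuming a mocked classifier that returns a confidence score: confident predictions are completed automatically, and everything else is queued for the human workforce. The 0.9 threshold is an arbitrary illustrative choice, not something prescribed by any of the platforms discussed.

```python
# Illustrative sketch of hybrid routing: an automated model handles the
# items it is confident about, and the rest are queued for crowd workers.
# The classifier is mocked; the threshold is an arbitrary assumption.
def model_classify(item: str) -> tuple[str, float]:
    """Stand-in for an automated classifier returning (label, confidence)."""
    return ("spam", 0.95) if "free money" in item else ("unknown", 0.40)

def route(items: list[str], threshold: float = 0.9):
    auto_done, human_queue = [], []
    for item in items:
        label, confidence = model_classify(item)
        if confidence >= threshold:
            auto_done.append((item, label))   # machine-completed task
        else:
            human_queue.append(item)          # routed to crowd workers
    return auto_done, human_queue

done, queued = route(["free money now!!!", "meeting notes from Tuesday"])
print(done)    # handled automatically
print(queued)  # sent to the human workforce
```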

  • How can the platforms learn from each other and incorporate best practices, so that they motivate workers to perform well and also help requesters efficiently find workers with the necessary skill set?
  • What are some ways to permit access to full identities for greater credibility while hiding identities when that is not desired? Is there a middle ground that can be achieved?
  • Since learning is an extremely important factor that would benefit both the workers (professional growth) and requesters (workers are better equipped to handle work), how can we ensure that due importance is given to this aspect by all platforms?
