01/29/20 – Lee Lisle – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

Summary

Crouser and Chang note that visual analytics, defined as “the science of analytical reasoning facilitated by visual interactive interfaces,” is being pushed forward by two main directions of thought: human computation and human-computer collaboration. However, there is no common design language between the two subdisciplines. They therefore surveyed 1,271 papers, whittling them down to 49 representative papers, to find common threads that can help define the fields. They then categorized the research by the affordances each study leverages, whether human or machine. Humans are naturally better at visual perception, visuospatial thinking, audiolinguistic ability, sociocultural awareness, creativity, and domain knowledge, while machines are better at large-scale data manipulation, data storage and collection, efficient data movement, and bias-free analysis. The authors then suggest that researchers explore human adaptability and machine sensing, and they discuss when to use these strategies.

Personal Reflection

When reading this, I did question a few things about the studies. For example, regarding bias-free analysis: while the authors admit that human bias can be introduced during programming, they fail to acknowledge the bias that can be present in the input data. Entire books have been written (Weapons of Math Destruction being one) covering how “bias-free” algorithms can be fed input data with clear bias, resulting in a biased system regardless of whether any bias is hard-coded into the algorithm.

Outlining these similarities across various human-computer collaborations helps other researchers scope their projects better, and pointing out the deficiencies of certain approaches helps them avoid the same pitfalls.

The complexity measure questions section, however, felt a little out of place, considering it was the first time the topic was brought up in the paper. Still, it asked strong questions that definitely impact this area of research. If the “running time” for a human is long, this could mean there are improvements to be made and areas where we can introduce more computer aid.

Questions

  1. This kind of paper is often present in many different fields. Do you find these summary papers useful? Why or why not? Since it’s been 8 years since this was published, is it time for another one?
  2. Near the end of the paper, they ask what the best way is to measure human work.  What are your ideas? What are the tradeoffs for the types they suggested (input size, information density, human time, space)?
  3. Section 6 makes it clear that using multiple affordances at once needs to be balanced in order to be effectively used. Is this an issue with the affordances or an issue with the usability design of the tools?
  4. The authors mention two areas of further study in section 7: Human adaptability and machine sensing. Have these been researched since this paper came out? If not, how would you tackle these issues?


01/29/20 – Akshita Jha – Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms

Summary:
“Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms” by Donna Vakharia and Matthew Lease discusses the limitations of Amazon Mechanical Turk (AMT) and presents a qualitative analysis of newer vendors who offer different models for achieving quality crowd work. AMT was one of the first platforms to offer paid crowd work. However, nine years after its launch, it is still in its beta stage because of limitations that fail to take into account the skill, experience, and ratings of the worker and the employer, and because of its minimal infrastructure, which does not support collecting analytics. Several platforms, such as ClickWorker, CloudFactory, CrowdComputing Systems (now WorkFusion), CrowdFlower, CrowdSource, MobileWorks (now LeadGenius), and oDesk, have tried to mitigate these limitations by offering more advanced workflows that ensure quality work from crowd workers. The authors identify four major limitations of AMT: (i) inadequate quality control, (ii) inadequate management tools, (iii) missing support for fraud prevention, and (iv) lack of automated tools. The authors also list several metrics to qualitatively assess other platforms: (i) key distinguishing features of the platform, (ii) source of the workforce, (iii) worker demographics, (iv) worker qualifications and reputations, (v) recommendations, (vi) worker collaboration, (vii) rewards and incentives, (viii) quality control, (ix) API offerings, (x) task support, and (xi) ethics and sustainability. These metrics prove useful for a thorough comparison of different platforms.

Reflections:
One of the major limitations of AMT is that there are no pre-defined tests to check the quality of a worker. In contrast, other platforms ensure that they test their workers in one way or another before assigning them tasks. However, these tests might not always reflect the ability of the workers: the tests need to be designed with the task in mind, and this makes standardization a big challenge. Several platforms also offer their own workforce. This can have both positive and negative effects: the positive is that the platforms can perform thorough vetting, while the negative is that this might limit the diversity of the workforce. Another drawback of AMT is that workers seem interchangeable, as there is no way to distinguish one from another. Other platforms use badges to display worker skills and leaderboards to rank their workers. This can lead to an unequal distribution of work, which might be merit-based, but there is a need for a deeper analysis of the ranking algorithms to ensure that there is no unwanted bias in the system. Some platforms employ automated tools to perform tasks that are repetitive and monotonous. This comes with its own set of challenges: as machines become more “intelligent”, humans need to develop better and more specialized skills in order to remain useful. More research needs to be done to better understand the workings and limitations of such “hybrid” systems.
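The “pre-defined tests” mentioned above usually take the form of gold-standard questions with known answers. As a rough, hypothetical sketch (the task names, answers, and 0.8 threshold are my own illustration, not any specific platform’s algorithm), a requester might score workers against gold questions and rank those who pass a threshold like this:

```python
# Hypothetical sketch of gold-standard quality control and leaderboard ranking.
# The task names, answers, and threshold below are made up for illustration;
# this is not modeled on any specific platform's API or algorithm.

GOLD_ANSWERS = {"task_1": "cat", "task_2": "dog", "task_3": "bird"}


def score_worker(responses):
    """Return the fraction of gold-standard tasks this worker answered correctly."""
    graded = [task for task in responses if task in GOLD_ANSWERS]
    if not graded:
        return 0.0
    correct = sum(responses[task] == GOLD_ANSWERS[task] for task in graded)
    return correct / len(graded)


def rank_workers(all_responses, threshold=0.8):
    """Keep workers whose gold accuracy meets the threshold, ranked for a leaderboard."""
    scores = {worker: score_worker(resp) for worker, resp in all_responses.items()}
    qualified = {worker: s for worker, s in scores.items() if s >= threshold}
    return sorted(qualified.items(), key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    responses = {
        "worker_a": {"task_1": "cat", "task_2": "dog", "task_3": "bird"},
        "worker_b": {"task_1": "cat", "task_2": "cat", "task_3": "bird"},
    }
    print(rank_workers(responses))  # [('worker_a', 1.0)] with the default 0.8 threshold
```

A leaderboard built this way is only as fair as the gold set itself, which is exactly where the unwanted bias mentioned above could creep in, since a small set of gold questions may not represent the real distribution of tasks.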

Questions:
1. With crowd platforms employing automated tools, it is interesting to discuss whether these platforms can still be categorised as crowd platforms.
2. This was more of a qualitative analysis of crowd platforms. Is there a way to quantitatively rank these platforms? Can we use the same metrics?
3. Are there certain minimum standards that every crowd platform should adhere to?


01/29/20 – Ziyao Wang – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.

In this paper, the authors conducted a literature review of papers and publications that represent the state of the art in the areas of human-computer collaboration and human computation. From this review, they sorted affordances into two groups: human intelligence and machine intelligence. Although affordances can be split into these two groups, there are also systems, like reCAPTCHA and PatViz, that benefit from combining the two kinds of intelligence. Finally, they provided examples of how to utilize this framework and some advice on future research. They identified human adaptability and machine sensing as two extensions of the current work. They also noted that future work (finding ways to measure human work, assessing human work in practice, and accounting for individual differences in human operators) will need a combination of experts in theoretical computer science as well as in psychology and neuroscience.

Reflections:

Primarily, I felt that both human affordances and machine affordances contribute to the success of current systems. It is very important to allocate tasks so that humans and machines can support each other. Current systems may suffer from poor human-computer collaboration; for example, a system may not assign appropriate work to human workers, or its user interface may be difficult to use. To avoid such situations, policies and guidance are needed: there should be commonly used evaluation criteria and restrictions on industry practice.

Secondly, researchers can benefit from an overview of the related research areas. In most cases, solving a problem requires help from experts in different areas, so the category a problem belongs to may become ambiguous. As a result, researchers in different areas may waste effort on similar studies and may not be able to draw on previous research from another area. For this reason, it is important to categorize research with similar goals or related techniques. Current and future research will benefit from such a categorization, and discussions between experts will become much easier. As a result, more ideas can be proposed, and researchers can find fields they had not considered before.

Additionally, in human computation and human-computer collaborative systems, problems are solved using both human intelligence and machine intelligence. For such a broad area, it is important to reflect on the field regularly. With such reflection, researchers can consider the problems they are going to solve more comprehensively. With the table in the paper, the affordances of human intelligence and machine intelligence can be seen at a glance. Additionally, we can find out which areas have already received a lot of research and which areas deserve more attention. With this common framework, understanding and discussing previous work becomes much easier, and novel ideas will emerge. This kind of reflection can be applied in other areas too, which will result in rapid development in each industry.

Questions:

Why are there no updates to systems that are considered hard to use?

How can we assess human work and machine work in practice?

For a user interface, is it more important to make it easy for new workers to use, even if customization is limited, or to let experienced workers customize it and reach high efficiency, even if new users may face some difficulty?


01/29/20 – Ziyao Wang – Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms

The authors found that most current research on crowd work focuses on AMT, with little exploration of other platforms. To diversify crowd work research and accelerate the development of crowd work, the authors contrast the key problems they found in AMT with the features of other platforms. Through this comparison, they found that different crowd platforms provide solutions to different problems in AMT; however, none of the platforms provides sufficient support for well-designed worker analytics. Additionally, they conclude that AMT still has a number of limitations and that research on crowd work will benefit from diversifying. With these contributions, future research on alternative platforms can avoid treating AMT as the default baseline.

Reflection:

AMT was one of the first platforms for crowd work, and many people have benefited from using it. However, if all companies and workers use only AMT, the lack of diversity will slow the development of crowd work platforms.

From AMT’s point of view, if there were no other crowd work platforms, it would be hard for AMT to effectively find solutions to its limitations and problems. In this paper, for example, the lack of automated tools is identified as one of AMT’s limitations, while the platform WorkFusion allows the use of automated machine workers, which is an improvement over AMT. If there is only one platform, it is hard for that platform to find its limitations by itself. But with competitor platforms, companies have to do a lot of research to keep their platforms up to date in order to provide a better user experience and beat their rivals. As a result, research on crowd work will be pushed to develop rapidly.

Other platforms should not just copy the pattern of AMT. Though AMT is the most popular crowd work platform, it still has limitations. Other platforms can copy AMT’s advantages, but they should avoid its limitations and find solutions to these drawbacks. A good baseline can help other platforms get started, but if they simply follow the baseline, all the platforms will suffer from the same drawbacks and no one will propose any solution to them. To avoid this situation, companies should develop their own platforms, avoid the known drawbacks, and learn from the advantages of other platforms.

Researchers should not focus only on AMT. Of course, research on AMT will receive more attention because most users use AMT, but this is harmful to the development of crowd work platforms. Even when some platforms have developed solutions to certain problems, if researchers ignore those platforms, these solutions will not spread, and researchers and other companies will spend unnecessary effort on similar solutions.

Question:

What is the main reason most companies select AMT as their crowd work service provider?

What is the most significant advantage of each platform? Is there any chance of developing a platform that combines all the other platforms’ advantages?

Why have most researchers focused only on AMT? Is it possible to encourage more researchers to do cross-platform analysis?


01/29/20 – Vikram Mohanty – Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms.

Paper Authors: Donna Vakharia and Matthew Lease.

Summary

This paper gives a general overview of different crowdsourcing platforms and their key feature offerings, while centering on the limitations of Amazon Mechanical Turk (AMT), the most popular of the platforms. The factors that make requesters resort to AMT are briefly discussed, but the paper points out that these factors are not exclusive to AMT. Other platforms also offer most of these advantages while offsetting some of AMT’s limitations in quality control, automated task routing, worker analytics, etc. The authors qualitatively assess these platforms by comparing and contrasting them on the basis of key criteria categories. By providing exposure to lesser-known crowdsourcing platforms, the paper hopes to mitigate one plausible consequence of researchers’ over-reliance on AMT, i.e., that the platform’s limitations can subconsciously shape research questions and directions.

Reflection

  1. Having designed and posted a lot of tasks (or HITs) on AMT, I concur with the paper’s assessment of AMT’s limitations, especially the lack of built-in gold standard tests and of support for complex tasks, task routing, and real-time work. The platform’s limitations are essentially offloaded onto the researcher’s time, effort, and creativity, which are then consumed by working around these limitations instead of more pressing work.
  2. This paper provides nice exposure to platforms that offer specialized and complex task support (e.g., CrowdSource supporting writing and text creation tasks). As platforms expand support for different complex tasks, this would a) reduce the workload on requesters for designing tasks, and b) reduce the quality control tensions arising from poor task design.
  3. Real-time crowd work, despite being an essential research commodity, still remains a challenge for crowdsourcing platforms. This gap has resulted in toolkits like LegionTools [1], which facilitate real-time recruiting and routing of crowd workers on AMT, but these toolkits are not the final solution. Even though many real-time crowd-powered systems have been built using this toolkit, they remain prone to being bottlenecked by the toolkit’s limitations. These limitations may arise from a lack of resources for maintaining and updating the software, which may have originated as a student-developed research project. Crowd platforms adopting such research toolkits into their workflow may solve some of these problems.
  4. Sometimes, projects or new interfaces may require testing the learning curve of their users. That does not seem straightforward to achieve on AMT, since it lacks support for maintaining a trusted worker pool. However, it seems possible on other platforms like ClickWorker and oDesk, which allow worker profiles and identities.
  5. A new platform called Prolific was launched publicly in 2019 and addresses some of the shortcomings of AMT by offering fair pay assurance (a minimum of $6.50/hour), worker task recommendation based on experience, initial filters, and quality control assurance. The platform also provides functionalities for longitudinal/multi-part studies, which may seem difficult to achieve using the functionalities offered by AMT. The ability to run longitudinal studies was not addressed for the other platforms, either.
  6. The paper was published in 2015 and highlighted the lack of automated tools. Since then, numerous services have emerged that offer human-in-the-loop functionalities, including offerings from Amazon and Figure Eight (formerly CrowdFlower).

Questions

  1. The authors raise an important point that the most popularly used platform’s limitations can shape research questions and directions. If you were to use AMT for your research, can you think of how its shortcomings would affect your RQs and research directions? What would be the most ideal platform feature for you?
  2. The paper advocates algorithms for task recommendation and routing, as has been pointed out in other papers [2]. What are some other deficiencies that can be supported by algorithms? (reputations, quality control, maybe?)
  3. If you had a magic tool to build a crowdsourcing platform to support your research, along with bringing a crowd workforce, what would your platform look like (the minimum viable product)? And who’s your ideal crowd? Why would these features help your research?


1/29/2020 – Jooyoung Whang – Human Computation: A Survey and Taxonomy of a Growing Field

This paper attempts to define the region where human computation belongs, including its definition and related ideas. According to the paper’s quote from Von Ahn’s dissertation on human computation, it is defined as a way of solving a computational problem that a machine cannot yet handle. The paper compares human computation with crowdsourcing, social computing, and data mining and explains how they are similar yet different. The paper continues by studying the dimensions related to human computation, starting with motivation, which includes factors such as pay, altruism, and joy. The next dimension the paper discusses is quality control, the method of ensuring above-threshold accuracy of human computation results; approaches include multi-response agreement, expert review, and automatic checks. Then, the paper introduces how the computations gathered from many humans can be aggregated to solve the ultimate problem, through collection, statistical processing, improvement, and search. Finally, the paper discusses a few smaller dimensions such as process order and task-request cardinality.
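To make the multi-response agreement and statistical-processing ideas concrete, here is a minimal, hypothetical sketch (my own illustration, not code from the paper) that aggregates redundant human labels by majority vote and routes low-agreement items to expert review:

```python
from collections import Counter

# Minimal sketch of "multi-response agreement" as quality control plus
# majority-vote aggregation; illustrative only, not the paper's own code.


def aggregate(labels, min_agreement=0.6):
    """Return (majority_label, agreement); majority_label is None when agreement is too low."""
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(labels)
    if agreement < min_agreement:
        return None, agreement  # low agreement: route the item to expert review instead
    return label, agreement


if __name__ == "__main__":
    # Three redundant human responses to the same task
    print(aggregate(["spam", "spam", "not spam"]))    # high agreement: accepted
    print(aggregate(["spam", "not spam", "unsure"]))  # low agreement: expert review
```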

I enjoyed the paper’s attempt to build a taxonomy for human computation, which can easily be ill-defined. I think the paper did a good job by starting with the definition and breaking it down into major components. In the paper’s discussion of aggregation, it was interesting to me that they included “none”, which means that the individual human computations by themselves are what the requester wants solved, and there is no need to aggregate the results. Another thing I found fascinating was the discussion of motivation for the humans performing the computation. Even though it is natural that people will not perform tasks for nothing, it had not occurred to me that this would be a major factor to consider when utilizing human computation. Of the list of possible motivations, I found altruism to be a humorous and unexpected category.

I was also reminded of a project that used human computation, called “Place”, held in the Reddit community, where a user could place a colored pixel on a shared canvas once every few minutes. The aggregation of human computation in “Place” would probably be considered iterative improvement.

These are the questions that I could come up with while reading the paper:

1. The aggregation category “none” is very interesting, but I cannot come up with an immediate example. What would be a good case of utilizing human computation that doesn’t require aggregation of the results?

2. In the Venn diagram figure of the paper showing relationships between human computation, crowdsourcing, and social computing, what kind of problems would go into the region where all three overlap? This would be a problem where many people on the Internet with no explicit relation to each other socially interact and cooperate to perform computation that machines cannot yet do. The collected results may be aggregated to solve a larger problem.

3. Data mining was not considered human computation because it is about an algorithm trying to discover information in data collected from humans. If humans sat together trying to discover information in data generated by a computer, would this be considered human computation?


01/29/2020 – Donghan Hu – Human Computation: A Survey and Taxonomy of a Growing Field

In this paper, the authors focus on the problem that, due to the rapid growth of computing technology, current methods are not supported by a single framework that helps us understand each new system in the context of older ones. Based on this research question, the authors categorized multiple human computation systems, aiming to identify parallels between different systems, classify systems along different dimensions, and reveal gaps in current systems and work. The authors then compared human computation with other related ideas, terms, and areas, for example, differentiating human computation from social computing and crowdsourcing. For the classification, the authors divided systems along six dimensions: motivation, quality control, aggregation, human skill, process order, and task-request cardinality. For each dimension, the authors explained the sample values and listed an example. As human computation develops, new systems can be categorized within the current dimensions, or new dimensions and sample values can be created in the future.

From this paper, I learned that human computation is a broad topic that is hard to define clearly. Two main conditions constitute human computation: 1) the problems fit the general paradigm of computation, and 2) the human participation is directed by a computational system or process. Human computation binds human activities and computers tightly. For the six dimensions, I am somewhat confused about how the authors categorized these systems into the six dimensions; I think the authors need to say more about how and why. From the table, I can see that one system can be categorized under multiple dimensions due to its complex features, for example, Mechanical Turk. I think this is one possible reason that human computation systems are hard to classify: one system may solve many human computation problems and implement multiple features, increasing the difficulty of understanding its context. What’s more, I am quite interested in the “process order” dimension. It helps me understand how people interact with computers. Different process orders raise different questions that need to be solved, and it is impossible to come up with a panacea that works well for every process order. We should consider questions like feedback, interaction, learning effects, curiosity, and so on.

What’s more, I am interested in the idea that focusing on only one style of human computation may become a tendency that causes us to miss more suitable solutions to a problem. Thinking in multiple different ways would help us solve research questions more quickly. We should not limit ourselves to one narrow topic or a single area.

Question 1: How can we use this classification of human computation systems?

Question 2: How and why did the authors come up with these six dimensions? I think more explanation is needed.

Question 3: If one system is classified under multiple dimensions and sample values, can I treat these values equally? Or is there one dominant value and dimension?


01/29/2020 – Donghan Hu – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

Many researchers’ primary goals are to develop tools and methodologies that facilitate human-machine collaborative problem solving and to understand and maximize the benefits of the partnership as problems grow in size and complexity. The first problem is: how do we tell if a problem would benefit from a collaborative technique? This paper mentions that even though deploying various collaborative systems has led to many novel approaches to difficult problems, it has also led to the investment of significant time, expense, and energy, when some of these problems might be solved better by relying on human or machine techniques alone. The second problem is: how do we decide which tasks to delegate to which party, and when? The authors state that we still lack a language for describing the skills and capacity of the collaborating team. The third question is: how does one system compare to others trying to solve the same problem? The lack of a common language or of measures by which to describe new systems is one important reason this is hard. As for the research contributions, the authors picked out 49 publications from 1,271 papers that represent the state of the art in the study of human-computer collaboration and human computation. They then identified groupings based on human- and machine-intelligence affordances, which form the basis of a common framework for understanding and discussing collaborative work. Last, the authors discussed unexplored areas for future work. Each of the current frameworks is specific to a subclass of collaborative systems, which makes it hard to extend them to a broader class of human-computer collaborative systems.

Based on the definition of “affordance”, I understand that both humans and machines bring opportunities for action to the partnership, and each must be able to perceive and access these opportunities in order for them to be effectively leveraged. It is not surprising to me that the bandwidth of information presentation is potentially higher for visual perception than for any of the other senses. I consider visual perception the most important information-processing channel for humans in most cases, which is why a plethora of research studies leverage human visual processing to solve various problems. I am quite interested in the concept of sociocultural awareness: individuals understand their actions in relation to others and to the social, cultural, and historical context in which they are carried out. I think this is a paramount view in the study of HCI. Individuals in different environments and with different cultural backgrounds will interact differently with the same computers. In the future, I believe cultural background should become an important factor in HCI studies.

I found that various applications are categorized under multiple affordances. If so, how can the authors answer the third question? For example, if two systems are trying to solve the same problem but each of them relies on different human or machine affordances, how can I say which is better? Do different affordances have different weights, or should we treat them equally?

Fewer tools are designed for the creativity, social, and bias-free affordances. What does this mean? Does it mean that these affordances are less important, or that researchers are still working on these areas?


01/29/20 – Lulwah AlKulaib – Human Computation: A Survey & Taxonomy of a Growing Field

Summary:

The paper briefly covers the history of human computation: the first dissertation (2005), the first workshop (2009), and the different backgrounds of scholars in human computation. The authors agree with Von Ahn’s definition of human computation as “… a paradigm for utilizing human processing power to solve problems that computers cannot yet solve,” and they mention multiple definitions from other papers and scholars. They believe that two conditions need to be satisfied to constitute human computation:

  • The problems fit the general paradigm of computation, and so might someday be solvable by computers.
  • The human participation is directed by a computational system or process.

They present a classification of human computation systems made up of six main factors, divided into two groups:

  • Motivation, human skill, aggregation.
  • Quality control, process, task-request cardinality.

The authors also explain how to find new research problems based on the proposed classification system:

  • Combining different dimensions to discover new applications.
  • Creating new values for a given dimension.

Reflection:

The interesting issue I found the authors discussing was their belief that the Wikipedia model does not belong to human computation, because current Wikipedia articles are created through a dynamic social process of discussion about the facts and presentation of each topic among a network of authors and editors. I never thought of Wikipedia as human computation, although there are tasks there that I believe could be classified as such, especially when looking at non-English articles. As we all know, the NLP field has created great solutions for the English language, yet some languages, even widely spoken ones, are playing catch-up. This brings me to disagree with the authors’ opinion about Wikipedia. I agree that some parts of Wikipedia are related to social computing, like collaborative writing, but Wikipedia also has human computation aspects, like linked-data identification for the infobox in Arabic articles. Even though NLP techniques might work for English articles on Wikipedia, Arabic is still behind when it comes to such tasks, and the machine is unable to complete them correctly.

On another note, I like the way the authors broke up their classification and explained each section. It clarified their point of view, and they provided an example for each part. I think the distinctions were addressed in detail, and they left enough room to consider the classification of future work. I believe this is why other scientists have adopted the classification. Seeing that the paper has been cited more than 900 times makes me believe that there is some agreement in the field.

Discussion:

  1. Give examples of human computation tasks.
  2. Do you agree/disagree with the authors’ opinion about Wikipedia’s articles being excluded from the human computation classification?
  3. How is human computation different from crowdsourcing, social computing, data mining, and collective intelligence?
  4. Can you think of a new human computation system that the authors didn’t discuss? Classify it according to the dimensions mentioned in the paper.
  5. Do you agree with the authors’ classification system? Why/Why not?
  6. What is something new that you learned from this paper?


1/29/2020 – Jooyoung Whang – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

In this paper, the authors review more than 1,200 papers to identify how best to utilize human-machine collaboration. Their field of study is visual analytics, but the paper is well generalized to fit many other research areas. The paper discusses two foundational factors to consider when designing a human-machine collaborative system: allocation and affordance. Many of the papers the authors reviewed studied systematic methods for appropriately allocating work between the human and the computer in a collaborative setting. A well-known set of allocation rules was introduced by Fitts, but it later became outdated due to the increasing computational power of machines. The paper argues that examining affordances rather than allocation is a better way to design human-machine collaborative systems. An affordance can best be understood as something one agent is better at providing than others; for example, humans can provide excellent visual processing skills while computers excel at large-scale data processing. The paper also introduces some case studies in which multiple affordances from each party were utilized.

I greatly enjoyed reading about the affordances that humans and machines can each provide. The list of affordances that the paper provides will serve as a good resource to come back to when designing a human-machine collaborative system. One machine affordance that I do not agree with is bias-free analysis. In machine learning scenarios, a learning model is very often easily biased. Both humans and machines can be biased in analyzing something based on previous experience or data. Of course, it is the responsibility of the designer of the system to ensure unbiased models, but as the designer is a human, it is often impossible to avoid bias of some kind. The case study regarding the reCAPTCHA system was an interesting read. I always thought that CAPTCHAs were used only for security purposes, not machine learning. After learning how it is actually used, I was impressed by how efficient and effective the system is at both securing Internet access and digitizing physical books.
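For reference, the digitization trick works roughly as follows (a hypothetical sketch of the general idea, with made-up identifiers and a made-up agreement threshold, not the actual reCAPTCHA implementation): each challenge pairs a word with a known answer against a scanned word the OCR could not read, and the unknown word is only accepted once enough users who passed the control word agree on its transcription.

```python
from collections import Counter, defaultdict

# Hypothetical sketch of the reCAPTCHA idea described above: pair a known
# "control" word with an unknown scanned word, trust the unknown transcription
# only when the control word is answered correctly, and accept the unknown
# word once enough independent answers agree. The identifiers and threshold
# are made up; this is not the actual reCAPTCHA implementation.

CONTROL_ANSWERS = {"ctrl_17": "harbor"}   # distorted words with known ground truth
pending_votes = defaultdict(list)         # unknown word id -> submitted transcriptions


def submit(control_id, control_text, unknown_id, unknown_text, required_votes=3):
    """Record one user's answer; return the accepted transcription once agreement is reached."""
    if control_text.strip().lower() != CONTROL_ANSWERS[control_id]:
        return None  # failed the security check, so discard the transcription
    pending_votes[unknown_id].append(unknown_text.strip().lower())
    word, votes = Counter(pending_votes[unknown_id]).most_common(1)[0]
    return word if votes >= required_votes else None


if __name__ == "__main__":
    result = None
    for answer in ["lighthouse", "lighthouse", "lighthovse", "lighthouse"]:
        result = submit("ctrl_17", "harbor", "scan_42", answer)
    print(result)  # 'lighthouse', accepted after three matching transcriptions
```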

The following are the questions that I came up with while reading the paper:

1. The paper does a great job of summarizing what a human and a machine are each relatively good at. The designer, therefore, simply needs to select appropriate tasks from the system to assign to each human and machine. Is there a good way to identify which affordances a system’s task needs?

2. There’s another thing that humans are really good at compared to a machine: adapting. Machines, once programmed, do not change their response to an event as times change, while humans very much do. Is there a human-machine collaborative system with a task that would require the affordance of “adaptation” from a human collaborator?

3. Many human-machine collaborative systems register the tasks that need to be processed using an automated machine. For example, the reCAPTCHA system (the machine) samples a question and asks the human user to process it. What if it were the other way around, where a human registered a task and assigned it to either a machine or a human collaborator? Would there be any benefits to doing that?
