1/28/20 – Nurendra Choudhary – Beyond Mechanical Turk

Summary:

In this paper, the authors analyze and compare different crowd work platforms. They note that research on such platforms has largely been limited to Mechanical Turk, and their study aims to cover a broader range of them.

They compare seven AMT alternatives, namely ClickWorker, CloudFactory, CrowdComputing Systems, CrowdFlower, CrowdSource, MobileWorks, and oDesk. They evaluate the platforms on 12 different metrics to address high-level concerns: quality control, poor worker management tools, missing fraud prevention measures, and a lack of automated tools. The paper also highlights requesters' need to employ their own specialized workers through such platforms and to apply their own management systems and workflows.

The analysis shows the diversity of these platforms and identifies some commonalities, such as “peer review, qualification tests, leaderboards, etc.,” as well as some differentiating features, such as “automated methods, task availability on mobiles, ethical worker treatment, etc.”

Reflection:

The paper provides useful evaluation metrics for judging different aspects of a crowd work platform. The suggested workflow interfaces and tools could greatly streamline the process for requesters and workers. However, we should not forget that these crowd work platforms are businesses; hence, an incentive is required for them to invest in such additional processes. In the case of MT, the competitors do not have enough market share to make such additional streamlining viable. I think that as the processes become more complex, requesters will be limited by the current framework and a market opportunity will force the platforms to evolve by integrating the processes mentioned in the paper. This will be a natural progression following traditional development cycles.

A large company like Amazon certainly has the resources and technical skills to lead such a maneuver for MT, and other platforms would follow suit. But the most important driver of change would be a market stimulus born of necessity rather than mere desire. Currently, the responsibility falls on the requester because the need for these processes is still rare.

Also, the paper analyzes the platforms only from a requester's perspective. Currently, the worker is just a de-humanized number, but adding such workflows may lead to discrimination between geographical regions or distrust in a worker's declared skill sets. This would bring real-world challenges into the “virtual workplace” and more often lead to challenging working conditions for remote workers. It might also lead to a worrisome exclusivity, which the current platforms avoid quite well. However, I believe user verification and the detection of fraud networks are areas the platforms should focus on to improve the experience for requesters.

I think a different version of the service should be provided to corporations that need workflow management and expert help. For quality control, I believe the research community should investigate efficient, globally applicable processes for these areas.

Questions:

  1. How big is the market share of Mechanical Turk compared to other competitors?
  2. Does Mechanical Turk need to take a lead in crowd work reforms?
  3. Is the difference between platforms due to the kind of crowd work they support? If so, which type of work has better worker conditions?
  4. How difficult would it be for MT to integrate the quality controls and address the other challenges mentioned in the paper?


01/29/20 – Rohit Kumar Chandaluri – Human Computation: A Survey and Taxonomy of a Growing Field

Summary

The authors begin by explaining human computation, crowdsourcing, social computing, data mining, and collective intelligence, and the differences between these terms. They also explain where these areas intersect and where they are disjoint. After that, they describe the different dimensions of human computation work, such as how work quality is controlled, where each dimension has many possible values, and they explain each value and its contribution to the system. Finally, they propose possible future research directions in this area.

Reflections

  1. It was interesting to learn how quality control is handled in human computation, since people try to exploit the system when money is involved.
  2. Money is not the only factor that motivates people to work in this area; there are many other incentives that motivate people to complete these tasks.
  3. Different methods exist to control the quality of work, such as review, voting, economic models, input-output agreement, and automatic checks (a small sketch of one such method follows this list).
  4. The process order of a task also carries significant meaning for each task.
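
To make the redundancy-and-voting idea above concrete, here is a minimal sketch (my own illustration, not code from the paper) of assigning the same task to several workers and aggregating their answers by majority vote, flagging low-agreement tasks for manual review:

```python
# Minimal illustration of redundancy + majority voting for quality control.
# This is a sketch of the general idea, not an implementation from the paper.
from collections import Counter

def aggregate_by_majority(answers, min_agreement=0.6):
    """answers: labels submitted by different workers for the same task."""
    counts = Counter(answers)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(answers)
    needs_review = agreement < min_agreement  # weak consensus -> manual review
    return label, agreement, needs_review

# Example: three workers label the same image.
print(aggregate_by_majority(["cat", "cat", "dog"]))   # high agreement, accepted
print(aggregate_by_majority(["cat", "dog", "bird"]))  # low agreement, flagged
```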

Questions

  1. Are the present quality-check mechanisms enough to ensure quality? There might be tasks that require expertise that none of the available workers possess.
  2. Can data collection be considered crowdsourcing, for example when we need images of different people for an image recognition model?
  3. Do you think the 90% of jobs that pay less than $0.10 will die out after some time due to the need for expertise?


1/29/20 – Sukrit Venkatagiri – Affordance-Based Framework for Human Computation

Paper: An Affordance-Based Framework for Human Computation and Human-Computer Collaboration by R. Jordan Crouser and Remco Chang

Summary: 

This paper provides a survey of 49 papers on human-computer collaboration systems and interfaces. The authors highlight affordances that arise from these collaborative systems and propose an affordance-based framework as a common language for understanding seemingly disparate branches of research, as well as for indicating unexplored avenues for future work. They discuss various systems and suggest extensions covering human adaptability and machine sensing. Finally, they conclude with a discussion of the utility of their framework in an increasingly collaborative world and some complexity measures for visual analytics.

Reflection:

This paper focuses on some fundamental questions in mixed-initiative collaboration, such as how to tell whether a problem even benefits from a collaborative technique and, if so, to whom the work should be delegated. The paper also provides ways to evaluate complexity in different visual analytic setups, but raises more questions, such as what the best way to evaluate work is and how we can account for individual differences. These suggestions and questions, however, only beget more questions. The nature of work is increasingly complex, requiring more unique, application-specific ways to measure success. The paper tries to come up with a one-size-fits-all solution for this, but the solution ends up being rather generic.

The paper also highlights the need for a more holistic evaluation approach. Typically, ML and AI research is focused solely on the performance of the model. However, this paper highlights the need to evaluate the performance of both the human and the system that they are collaborating with. 

The paper talks about human-computer collaboration, mostly focused on visual analytics. There is still more work to be done in studying how applicable this framework is to physical human-computer interfaces, for example, an exoskeleton or a robot that assembles cars. Here, humans and robots have different abilities that are not covered in the paper. Perhaps humans' visual skills could be combined with a robot's accuracy.

Questions:

  1. How might one apply this framework in the course of their class project?
  2. What about this framework is still/no longer applicable in the age of deep learning?
  3. Will AI ever surpass human creativity, audio linguistic abilities, and visuospatial thinking abilities? What does it mean to surpass human abilities?
  4. Is this framework applicable for cyber-physical systems? How does it differ?


01/29/20 – Yuhang Liu – Human computation: a survey and taxonomy of a growing field

In this article, Alexander J. Quinn and Benjamin B. Bederson first introduce the background, then discuss the definition of human computation, distinguish it from and compare it with related technologies, put forward a classification system for human computation systems, and explain how to find new research directions based on the proposed classification system. The article first proposes a definition of human computation. The authors believe that human computation should satisfy two conditions:

  1. The problems fit the general paradigm of computation, and as such might someday be solvable by computers.
  2. The human participation is directed by the computational system or process.

The authors then compare human computation with other concepts, mainly crowdsourcing, social computing, data mining, and collective intelligence.

The main differences from these concepts lie in the presence of computers and the direction of application. The authors then propose new classification dimensions. According to the proposed dimensions, problems can be considered from the following aspects:

  1. Combine different dimensions to discover new applications.
  2. Create new values for a given dimension.
  3. When encountering a new human computation system, classify it according to the current dimensions to discover new insights.

I think this article is similar to another article, “An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.” This article is all about new directions for human computation: helping people find new methods through a new classification system, and find new applications by combining different dimensions. The other article is about a new research lens, “affordance,” which achieves better research results based on the relationship between humans and machines. I think the arguments of the two articles coincide. The classification system mentioned in this article has six dimensions: motivation, quality control, aggregation, human skill, process order, and task request cardinality. Among them, human skill corresponds to human advantages, that is, the part of “affordance” that humans can contribute to human computation. Motivation, quality control, and aggregation reflect, as described in the other article, that humans cannot behave like computers; people cannot completely give up subjective thinking and achieve unbiased analysis. The process order reflects different interaction methods and different interaction orders in human computation. Task request cardinality can correspond to other “affordance” methods: when the number of participants is large, different methods apply. So I think the two articles are complementary in some ways.

At the same time, in this article the authors also mention the differences between human computation and other concepts. I think this is very important for future research. There will be more and more interdisciplinary crossings, so it is important to distinguish these disciplines, determine their boundaries, and lay a solid foundation for each. Such a foundation, with universal methods and efficient solutions, is not only good for the development of each discipline but also has a very important impact on interdisciplinary work.

What is the significance of distinguishing human computation from other definitions?

What are the characteristics of human computation corresponding to the six dimensions mentioned in the article?

Is there a new dimension and, if it were combined with the dimensions mentioned in the article, what new applications might it enable?


01/29/20 – Sukrit Venkatagiri – The Future of Crowd Work

Summary:

This paper surveys existing literature in crowdsourcing and human computation and outlines a framework consisting of 12 major areas of future work. The paper focuses on paid crowd work, as opposed to volunteer crowd work. Envisioning a future where crowd work is attractive to both requesters and workers requires considering work processes, crowd computation, and what crowd workers want. Work processes involve the various workflows, quality control, and task assignment techniques, as well as the synchronicity involved in doing the work itself. Crowd computation can involve crowds guiding AIs, or vice versa. Crowd workers themselves may have different motivations, require additional job support through tools, want ways to maintain a reputation as a “good worker,” and want ways to build a career out of doing crowd work. Improving crowd work requires re-establishing career ladders for workers, improving task quality and design, and facilitating learning opportunities. The paper ends with a call for more research on several fronts to shape the future of crowd work: observational, experimental, design, and systems-related.

Reflection:

The distributed nature of crowd work theoretically allows anyone to do work from anywhere, at any time, and there are clear benefits to this freedom. On the other hand, this distributed nature also enforces existing power structures and facilitates the abstraction of human labor. This paper addresses some of these concerns with crowd work, and highlights the need for enabling on-the-job training and re-establishing career ladders. However, recent work has highlighted the long-term physical and psychological effects of doing crowd work [1,2]. For example, content moderators are often traumatized by the work that they do. Gray and Suri [3] also point out the need for a “commons” that provides a pool of shared resources for workers, along with a retainer model that values workers’ 24/7 availability. Yet, very few platforms do so, mostly due to weak labor laws. More work needs to be done investigating the broader, long-term and secondary effects of doing crowd work. 

Second, the paper highlights the need for human creativity and thought in guiding AI, but states that crowd work is analogous to a processor. This is not entirely correct, since a processor always produces the same output for a given input. On the other hand, the same (or different) human may not. This poses the potential for human biases to be introduced into the work that they do. For example, Thebault-Spieker et al. found that crowd workers are biased in some regards [5], but not others [4]. More work needs to be done to understand the impact of introducing creative, insightful, and—most importantly—unique human thought “in the loop.”

Finally, there is a tension between how society values those who do complex work (such as engineers, plumbers, artists, etc.), and the constant push towards the taskification, or “Uberization” of complex work (Uber drivers, contractors on Thumbtack and UpWork, crowd workers, etc.), where work is broken down into the smallest possible unit to increase efficiency and decrease costs. What does it mean for work to be taskified? Who benefits, and who loses? How do we value microwork? Can we value microwork the same as “skilled” work?

Questions:

  1. Seven years later, is this the type of work you would want your children to do?
  2. How do we incorporate human creativity into ML systems, without also incorporating human biases?
  3. How has crowd work changed since this paper first came out?

References:

[1] Roberts, Sarah T. Behind the screen: Content moderation in the shadows of social media. Yale University Press, 2019.

[2] Newton, Casey. Bodies in Seats: At Facebook’s Worst-Performing Content Moderation Site in North America, one contractor has died, and others say they fear for their lives. The Verge. June 19, 2019. 

[3] Mary L. Gray and Siddharth Suri. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Houghton Mifflin Harcourt, 2019.

[4] Jacob Thebault-Spieker, Daniel Kluver, Maximilian A. Klein, Aaron Halfaker, Brent Hecht, Loren Terveen, and Joseph A. Konstan 2017. Simulation Experiments on (the Absence of) Ratings Bias in Reputation Systems. Proceedings of the ACM on Human-Computer Interaction 1, CSCW: 101:1–101:25. https://doi.org/10.1145/3134736

[5] Jacob Thebault-Spieker, Loren G. Terveen, and Brent Hecht 2015. Avoiding the South Side and the Suburbs: The Geography of Mobile Crowdsourcing Markets. In Proceedings of the 18th Acm Conference on Computer Supported Cooperative Work & Social Computing (CSCW ’15), 265–275. https://doi.org/10.1145/2675133.2675278


01/29/20 – Yuhang Liu – Affordance-based framework for human-computer collaboration

In 1993, researchers from different backgrounds jointly discussed the challenges and benefits in the field of human-computer collaboration. They define collaboration as the process of two or more agents working together to achieve a common goal, while human-machine collaboration is defined as a collaboration involving at least one person and one computing agent. The field of visual analytics is deeply rooted in human-machine collaboration; that is, visual analytics attempts to leverage analyst intelligence and machine computing power in collaborations that analyze complex problems. The authors studied a large number of papers from top venues in visual analytics, human-computer interaction, and visualization, provide a comprehensive overview of the latest technologies, and propose a general framework based on their research. They call this framework “affordance”-based, pointing out that humans and machines offer each other opportunities for action, and that these affordances exist in the relationship between the two rather than in either one alone.

From reading the article, we learn that the word “affordance” was first proposed by the American psychologist J.J. Gibson. It means that an object and its environment provide opportunities for action. When the word is used in human-computer collaboration, it means that both human and machine provide their partner with opportunities for action. In this two-way relationship, these affordances must be effectively perceived and used in order to achieve better human-machine collaboration, and people and computers bring different abilities to it.

First of all, these affordances manifest differently for different human abilities. These abilities mainly include visual perception, visuospatial thinking, audiolinguistic ability, sociocultural awareness, creativity, and domain knowledge. Among them, the first three are human strengths, especially visual ability. Humans have powerful visual perception and can easily distinguish color, shape, and even the texture and motion of images; humans have an unparalleled advantage over machines in this respect, so it is very reasonable for people to do this work instead of computers. The latter three abilities require years of systematic learning and are difficult to fully embed in a computer, so using manual analysis and personal experience as part of the collaboration can greatly improve efficiency.

Conversely, machines have abilities that humans do not: large-scale data manipulation, collecting and storing large amounts of data, and efficient data movement. People cannot complete this series of tasks, nor can they completely give up subjective thinking to achieve bias-free analysis. There are other machine affordances as well.

All in all, the authors analyzed a large number of papers and arrived at a general model that can lay the foundation for future work. It addresses problems encountered in previous research: judging whether a problem can be solved with collaborative techniques based on the “affordance” idea, deciding when tasks should be assigned to one party, and developing a common language. I also think the framework itself is very reasonable, as it enables mutual cooperation and inspiration between human and machine. The integration of problems and the convergence of their solutions should also be a direction of future development.

What are the disadvantages of the previous framework?

What characteristics of people and computers correspond to human and machine affordances?

How can humans and machines be made to work together well for the extensions mentioned in the article?


01/29/20 – Akshita Jha – Human Computation: A Survey and Taxonomy of a Growing Field

Summary:
“Human Computation: A Survey and Taxonomy of a Growing Field” by Quinn and Bederson classifies human computation systems along different dimensions. They also point out the subtle differences between human computation, crowdsourcing, social computing, data mining, and collective intelligence. Traditionally, human computation is defined as “a paradigm for utilizing human processing power to solve problems that computers cannot yet solve.” Although there is some overlap between human computation and crowdsourcing, the major idea behind crowdsourcing is that it works by employing members of the public in place of traditional human workers. Social computing, on the other hand, differs from human computation in that it studies natural human behavior mediated by technology. The third term, data mining, focuses on using technology to analyze the data generated by humans. All of these terms partly fall in the category of collective intelligence, and well-developed human computation systems are also examples of collective intelligence. For example, to create a gold-standard dataset for machine translation, several humans have to provide translations for the same sentence. This is an example of both the collective intelligence of the crowd and human expertise being used to solve computationally challenging tasks. The authors present a classification system based on six different dimensions: (i) Motivation, (ii) Quality Control, (iii) Aggregation, (iv) Human Skill, (v) Process Order, and (vi) Task Request Cardinality. The paper further talks about the various subcategories in these dimensions and how they influence the output. This work presents a useful framework that can help researchers categorize their work along these dimensions.
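
As a concrete illustration of the six dimensions, here is a small, hypothetical sketch (the class, field names, and example values are my own, not taken from the paper) that records how one might classify the machine-translation example above:

```python
# Hypothetical sketch of classifying a system along the paper's six dimensions.
# Field values are illustrative examples, not the paper's own categorization.
from dataclasses import dataclass

@dataclass
class HumanComputationSystem:
    name: str
    motivation: str                # e.g., pay, altruism, enjoyment
    quality_control: str           # e.g., redundancy, multilevel review
    aggregation: str               # e.g., collection, statistical processing
    human_skill: str               # e.g., language ability, visual perception
    process_order: str             # e.g., "Requester -> Worker -> Computer"
    task_request_cardinality: str  # e.g., one-to-many, many-to-many

translation_gold_standard = HumanComputationSystem(
    name="Gold-standard translation collection",
    motivation="pay",
    quality_control="redundancy with review",
    aggregation="collect multiple translations per sentence",
    human_skill="language ability",
    process_order="Requester -> Worker -> Computer",
    task_request_cardinality="many-to-many",
)
print(translation_gold_standard)
```

A structure like this also makes the reflection below easier to probe: one can ask whether a value such as “pay” really belongs only under motivation or also influences quality control.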

Reflections:
This was an interesting read, as it presented an overview of the field of human computation. The paper does a commendable job of highlighting the subtle differences between the different sub-fields of HCI. The authors classify human computation systems along six dimensions, each with a set of sample values. However, I think there might be some overlap between these sample values and subcategories. For example, the amount of ‘pay’ a person receives definitely determines the motivation for the task, but it may also determine the quality of work: if workers feel they are adequately compensated, that might be an incentive to produce quality work. Similarly, ‘redundancy’ and ‘multilevel review’ are part of the ‘quality control’ dimension, but they could also fall in the ‘task request cardinality’ dimension, as multiple users are required to perform similar tasks. Another point to consider is that although the authors differentiate between crowdsourcing and human computation, several parallels can be drawn between them using the dimensions presented. For example, it would be interesting to observe whether the features employed by different crowd platforms can be categorized into the dimensions highlighted in this paper, and whether they fall under the same subclasses or vary depending on the kind of task at hand.

Questions:
1. Can this work extend to related fields like social computing and crowd-sourcing?
2. How can we categorize ethics and labor standards based on these dimensions?
3. Is it possible to add new dimensions to human computation?


01/29/20 – Sushmethaa Muhundan – The Future of Crowd Work

While crowd work has the potential to support a flexible workforce and leverage expertise distributed geographically, the current landscape of crowd work is often associated with negative attributes such as meager pay and lack of benefits. This paper proposes potential changes to improve the entire experience in this landscape. The paper draws input from organizational behavior, distributed computing, and feedback from workers to create a framework for future crowd work. The aim is to provide a framework that would help build a culture of crowd work that is more attractive to requesters as well as workers and that can support more complex, creative, and highly valued work. The platform should be capable of decomposing tasks, assigning them appropriately, and motivating workers, and it should have a structured workflow that enables a collaborative work environment. Quality assurance is another factor that needs to be ensured. Creating career ladders, improving task design for better clarity, and facilitating learning are key themes that emerged from this study. Improvements along these themes would help create a work environment conducive to both requesters and workers. Motivating workers, creating communication channels between requesters and workers, and providing feedback to workers are all means to achieve this goal.

Since the authors were requesters themselves, it was nice to see that they sought the perspectives of current workers in order to take both parties' viewpoints into account before constructing the framework. An interesting comparison is made between the crowdsourcing market and a loosely coupled distributed computing system, and this helped build the framework by drawing an analogy to solutions developed for similar problems in the distributed computing space. I liked the importance given to feedback and learning, which are components of the framework. I feel that feedback is extremely important when it comes to improving oneself, and this is not prevalent in the current ecosystem. As for learning, I feel that personal growth is essential in any working environment, and a focus on learning would facilitate self-improvement, which in turn would help workers perform subsequent tasks better. As a result, requesters benefit since the crowd workers become more proficient in their work. I particularly found the concept of intertwining AIs guiding crowds and crowds guiding AIs extremely interesting. The thought of leveraging the strengths of both AI and humans to strengthen the other is intriguing and has great potential if utilized meaningfully.

  • How can we shift the mindset of current requesters who get their work done for meager pay, so that they invest in workers by giving valuable feedback and spending time ensuring the requirements are well understood?
  • What are some interesting ways that can be employed to leverage AIs guiding crowds?
  • How can we prevent the disruption of quality by a handful of malicious users who collude to agree on wrong answers to cheat the system? How can we build a framework of trust that is resistant to malicious workers and requesters who can corrupt the system?


01/29/20 – Nan LI – Human Computation: A Survey and Taxonomy of a Growing Field

Summary:

The key motivation of this paper is to distinguish the definition of human computation from other terms such as “crowdsourcing.” The authors explore an accurate definition of human computation and develop a classification system that provides directions for research on human computation. They also analyze human computation and demonstrate the key ideas with a graph, which covers the motivations of the people who do this work, how quality is controlled, how the work is aggregated, what kinds of skills are required, and the typical process order. There are several overlaps between human computation and other terms. The authors summarize the definition of human computation from various sources into two main points: first, the problems fit the general computing paradigm and so could be solved by computers one day; second, human participants are guided by computing systems or processes. The authors compare human computation with related ideas and present a classification system based on the six most significant distinguishing factors. This paper is mainly about the taxonomy of human computation; however, the authors also indicate future uses of these classifications and future research directions, such as issues related to ethics and labor standards.

Reflection:

This article provides a clear definition of human computation, which I think is the most important step before trying to explore the topic further or expressing any opinion on it. I prefer the definition “…a technique to let humans solve tasks which cannot be solved by computers,” although we know these problems could be solved one day as technology develops. Just looking at the motivations indicated by the authors, I would consider human computation an inevitable trend, both because of the revealed deficiencies of artificial intelligence and because of the needs of network users. An interesting contradiction occurred to me while reading the paper: when I checked the overview graph of the classification system for human computation systems, I found that the authors list altruism as one of the motivations. I was skeptical and did not believe it until I saw the example, “thousands of online volunteers combed through over 560,000 satellite images hoping to determine Gray's location, who was missing during a sailing trip in early 2007,” which I think is the best evidence. The work described in the book Ghost Work could be one type of work included in the paper's definition of human computation. The motivation for that work is payment, and different jobs have different process orders: tagging a picture could be considered “Worker -> Requester -> Computer (WRC),” while the Uber driver's case might be “Computer -> Worker -> Requester.” This paper is a summary and classification of the present state of human computation, without proposing innovative ideas of its own. However, the follow-up work that the authors put forward at the end is worth discussing, especially the issues related to ethics and labor standards. We do not have any kind of regulation for this mode of work. So how do we protect the workers? How do we prevent the product from intentional destruction? How will human computation develop in the future?

Questions:

  • Do you think the field of human computation will exist for a long time, or will it soon be replaced by highly developed AI?
  • What aspects of human computation do you think will involve ethical problems?
  • Which description in this paper is most in line with ghost work?
  • Can we find any other examples of motivation or quality control in human computation?


01/29/20 – Vikram Mohanty – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.

Paper Authors: R. Jordan Crouser and Remco Chang

Summary

This paper provides an overview of some of the popular systems (as of 2012) built around human-computer collaboration. Based on this analysis, the authors uncover key patterns in human and machine affordances and propose an affordance-based framework that helps researchers think and strategize better about problems that can benefit from collaboration. Such an affordance-based framework, according to the authors, would enable easy comparison between systems via common metrics (discussed in the paper). In the age of intelligent user interfaces, the paper gives researchers a foundational direction, or lens, for breaking down problems and mapping the solution space in a meaningful manner.

Reflection

  1. This paper is a great reference resource for setting some foundational questions on human-computer collaboration – How do we tell if a problem would benefit from a collaborative solution? How do we decide which tasks to delegate to which party, and when? How do we compare different systems solving the same problem? At the same time, it also sets some foundational goals and objectives for a system rooted in human-computer collaboration. The paper illustrates all the concepts through different successful examples of systems, making it easy to visualize the bin in which your (anticipated) research would fit. 
  2. This paper makes a great motivating argument about developing systems from the problem space, rather than jumping directly to solutions, which may often lead to investment of significant time and energy into developing inefficient collaboration.
  3. The paper makes the case for evolving from a prior established framework (i.e. function allocation) for human-machine systems into the proposed affordance-based one. Even though they proposed this framework in 2012, which is also when deep learning techniques started becoming popular, I feel that this framework is dynamic and broad enough to accommodate the ubiquity of current AI and intelligent user interfaces.
  4. Following the paper’s direction of updating theories with technology’s evolution, I would argue for a “sequel” paper to discuss AI affordances as an extension of the machine affordances. This would require an in-depth discussion of the capacities and limitations of state-of-the-art AIs designed for different tasks, some of which currently fall under human affordances, such as visual perception (computer vision), creativity (language models), etc. While AIs may be far from perfect at these tasks, they still provide imperfect affordances. Inevitably, this also means re-focusing some of the human affordances described in the paper, and it may be part of a bigger question, i.e., “what is the role of humans in the age of AI?” This also pushes the boundaries of what can be achieved with such hybrid interaction, e.g., AI’s last-mile problems [1].
  5. Currently, many different algorithms interact with human users via intelligent user interfaces (IUIs) and form a big part of decision-making processes. Over the years, researchers from different communities have pointed out how different algorithms can result in different forms of bias [2, 3] and have pushed for more fairness, accountability, transparency, and interpretability of these algorithms in an effort to mitigate these biases. The paper, written in 2012, did not account for algorithmic bias within machine affordances, and thus considered bias-free analysis a machine affordance. Eight years later, the ability to detect biases still remains somewhat more of a human affordance.

Questions

  1. Now, in 2020, how would you expand upon the machine affordances discussed in the paper?
  2. Does AI fit under machine affordances, or does it deserve a separate section – AI affordances? What kinds of affordances does AI provide humans, and vice versa? In other words, how do you envision this paper in current times?
  3. For the folks working on AI or ML systems, is it possible for you to present the inaccuracies of the algorithms you are working on in descriptive, qualitative terms? Do you see human cognition, be it through novice or expert workers, as competent enough to fill in the gaps?
  4. Does this paper change the way you view your proposed project? If so, how does it change from before? Is it more in terms of how you present your paper?
