01/29/20 – Vikram Mohanty – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.

Paper Authors: R. Jordon Crouser and Remco Chang

Summary

This paper surveys some of the popular systems (as of 2012) built around human-computer collaboration. Based on this analysis, the authors identify key patterns in human and machine affordances, and propose an affordance-based framework to help researchers think and strategize more effectively about problems that can benefit from collaboration. Such an affordance-based framework, according to the authors, would enable easy comparison between systems via common metrics (discussed in the paper). In the age of intelligent user interfaces, the paper gives researchers a foundational direction or lens to break down problems and map the solution space in a meaningful manner.

Reflection

  1. This paper is a great reference resource for setting some foundational questions on human-computer collaboration: How do we tell if a problem would benefit from a collaborative solution? How do we decide which tasks to delegate to which party, and when? How do we compare different systems solving the same problem? At the same time, it sets foundational goals and objectives for a system rooted in human-computer collaboration. The paper illustrates all the concepts through different successful examples of systems, making it easy to visualize the bin in which your (anticipated) research would fit.
  2. This paper makes a great motivating argument for developing systems from the problem space, rather than jumping directly to solutions, which can lead to investing significant time and energy in inefficient collaboration.
  3. The paper makes the case for evolving from a previously established framework for human-machine systems (function allocation) to the proposed affordance-based one. Even though the framework was proposed in 2012, which is also when deep learning techniques started becoming popular, I feel it is dynamic and broad enough to accommodate the ubiquity of current AI and intelligent user interfaces.
  4. Following the paper’s direction of updating theories as technology evolves, I would argue for a “sequel” paper discussing AI affordances as an extension of the machine affordances. This would require an in-depth discussion of the capacities and limitations of state-of-the-art AIs designed for different tasks, some of which currently fall under human affordances, such as visual perception (computer vision), creativity (language models), etc. While AIs may be far from perfect at these tasks, they still provide imperfect affordances. Inevitably, this also means re-focusing some of the human affordances described in the paper, and may be part of a bigger question, i.e. “what is the role of humans in the age of AI?”. This also pushes the boundaries of what can be achieved with such hybrid interaction, e.g. AI’s last-mile problems [1].
  5. Currently, many different algorithms interact with human users via intelligent user interfaces (IUIs) and form a big part of decision-making processes. Over the years, researchers from different communities have pointed out how different algorithms can result in different forms of bias [2, 3] and have pushed for more fairness, accountability, transparency, and interpretability of these algorithms in an effort to mitigate these biases. The paper, written in 2012, did not account for such algorithms within machine affordances, and thus considered bias-free analysis a machine affordance. Eight years later, detecting bias still remains more of a human affordance.

Questions

  1. Now, in 2020, how would you expand upon the machine affordances discussed in the paper?
  2. Does AI fit under machine affordances, or does it deserve a separate section, AI affordances? What kind of affordances does AI provide humans, and vice versa? In other words, how do you envision this paper in current times?
  3. For the folks working on AI or ML systems, is it possible for you to present the inaccuracies of the algorithms you are working on in descriptive, qualitative terms? Do you see human cognition, be it through novice or expert workers, as competent enough to fill in the gaps?
  4. Does this paper change the way you view your proposed project? If so, how? Is the change more in terms of how you present your work?


01/29/20 – Lee Lisle – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

Summary

Crouser and Chang note that visual analytics is defined as “the science of analytical reasoning facilitated by visual interactive interfaces,” and is being pushed forward by two main directions of thought: human computation and human-computer collaboration. However, there is no common design language between the two subdisciplines. Therefore, they took it upon themselves to survey 1217 papers, whittling them down to 49 representative papers, to find common threads that can help define the fields. They then categorized the research by which affordances, human or machine, each system leverages. Humans are naturally better at visual perception, visuospatial thinking, audiolinguistic ability, sociocultural awareness, creativity, and domain knowledge, while machines are better at large-scale data manipulation, data storage and collection, efficient data movement, and bias-free analysis. The authors then suggest that research explore human adaptability and machine sensing, and discuss when to use these strategies.

Personal Reflection

When reading this I did question a few things about the studies. For example, regarding bias-free analysis, while the authors do admit that human bias can be introduced during programming, they fail to acknowledge the bias that can be present in the input data. Entire books have been written (Weapons of Math Destruction being one) covering how “bias-free” algorithms can be fed input data with clear bias, resulting in a biased system even when no bias is hard-coded into the algorithm itself.

Outlining these similarities between various human-computer collaborations allows other researchers to scope projects better, and bringing up the deficiencies of certain approaches helps them avoid the same pitfalls.

The section on complexity measures, however, felt a little out of place, considering it was the first time the topic was brought up in the paper. Still, it asked strong questions that definitely impact this area of research. If ‘running time’ for a human is long, this could mean there are improvements to be made and areas where we can introduce more computer aid.

Questions

  1. This kind of paper is common in many different fields. Do you find these summary papers useful? Why or why not? Since it’s been 8 years since this was published, is it time for another one?
  2. Near the end of the paper, they ask what the best way is to measure human work.  What are your ideas? What are the tradeoffs for the types they suggested (input size, information density, human time, space)?
  3. Section 6 makes it clear that using multiple affordances at once needs to be balanced in order to be effectively used. Is this an issue with the affordances or an issue with the usability design of the tools?
  4. The authors mention two areas of further study in section 7: Human adaptability and machine sensing. Have these been researched since this paper came out? If not, how would you tackle these issues?


01/29/20 – Akshita Jha – Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms

Summary:
“Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms” by Donna Vakharia and Matthew Lease discusses the limitations of Amazon Mechanical Turk (AMT) and presents a qualitative analysis of newer vendors who offer different models for achieving quality crowd work. AMT was one of the first platforms to offer paid crowd work. However, nine years after its launch, it is still in its beta stage because of limitations: it fails to take into account the skill, experience, and ratings of the worker and the employer, and its minimal infrastructure does not support collecting analytics. Several platforms, like ClickWorker, CloudFactory, CrowdComputing Systems (now WorkFusion), CrowdFlower, CrowdSource, MobileWorks (now LeadGenius), and oDesk, have tried to mitigate these limitations with more advanced workflows that ensure quality work from crowd workers. The authors identify four major limitations of AMT: (i) inadequate quality control, (ii) inadequate management tools, (iii) missing support for fraud prevention, and (iv) lack of automated tools. The authors also list several metrics to qualitatively assess other platforms: (i) key distinguishing features of the platform, (ii) source of the workforce, (iii) worker demographics, (iv) worker qualifications and reputations, (v) recommendations, (vi) worker collaboration, (vii) rewards and incentives, (viii) quality control, (ix) API offerings, (x) task support, and (xi) ethics and sustainability. These metrics prove useful for a thorough comparison of different platforms.

Reflections:
One of the major limitations of AMT is that there are no pre-defined tests to check the quality of a worker. In contrast, other platforms ensure that they test their workers in one way or another before assigning them tasks. However, these tests might not always reflect the ability of the workers: tests need to be designed with the task in mind, which makes standardization a big challenge. Several platforms also offer their own workforce. This has both positive and negative impacts: the platforms can perform thorough vetting, but this might limit the diversity of the workforce. Another drawback of AMT is that workers seem interchangeable, as there is no way to distinguish one from another. Other platforms use badges to display worker skills and leaderboards to rank their workers. This can lead to an unequal distribution of work; it might be merit-based, but a deeper analysis of the ranking algorithms is needed to ensure there is no unwanted bias in the system. Some platforms employ automated tools to perform tasks that are repetitive and monotonous. This comes with its own set of challenges: as machines become more “intelligent”, humans need to develop better and more specialized skills in order to remain useful. More research needs to be done to better understand the workings and limitations of such “hybrid” systems.

Questions:
1. With crowd platforms employing automated tools, it is interesting to discuss whether they can still be categorised as crowd platforms.
2. This was more of a qualitative analysis of crowd platforms. Is there a way to quantitatively rank these platforms? Can we use the same metrics?
3. Are there certain minimum standards that every crowd platform should adhere to?


01/22/20 – Akshita Jha – Ghost Work

Summary:
Ghost Work by Mary L. Gray and Siddharth Suri discusses the invisible human labor powering seemingly ‘automated’ systems. It describes the opaque world of employment in which this shadow workforce powers so-called intelligent systems and is largely responsible for their seamless operation. Hundreds of millions of invisible people work online through Amazon Mechanical Turk, CrowdFlower, and other crowdsourcing platforms for a meager sum. Most of these workers have a Bachelor’s or a Master’s degree and might or might not be employed full time. They join these platforms hoping to make some additional money while also gaining a sense of community. Most of these workers are from the US and India, as it’s easier to get cheap labor in these countries. These crowdsourcing platforms offer labor as a service, where the laborers are hired through an API, which makes it extremely convenient to filter crowd workers according to their ‘qualifications’, evaluate their work, collect their answers, and pay them, all in a very short amount of time. A major implication of the API is that workers are stripped of their identity and are only identifiable by a unique identifier. Although this workforce is essential to solving the problem of the “last mile”, the use of APIs creates a distance between the parties involved, rendering the workers in between as ghost workers.
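The “labor as a service” workflow described above can be made concrete with a short sketch against Amazon’s MTurk requester API via Python’s boto3. This is only an illustration of the pattern: the task, reward, qualification threshold, and question file are assumptions for illustration, not details from the book.

```python
import boto3

# Connect to Mechanical Turk (sandbox endpoint shown; the production
# endpoint would post tasks to real workers).
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Filter workers by "qualifications" -- here, an approval rate of at
# least 95% (an illustrative threshold). The long ID is MTurk's
# built-in "percent assignments approved" qualification.
qualifications = [{
    "QualificationTypeId": "000000000000000000L0",
    "Comparator": "GreaterThanOrEqualTo",
    "IntegerValues": [95],
}]

# Post a task; the requester never meets the people who will do it.
hit = mturk.create_hit(
    Title="Does this image contain adult content?",
    Description="Flag images that violate content policy.",
    Reward="0.05",  # in USD, per assignment
    MaxAssignments=3,
    AssignmentDurationInSeconds=300,
    LifetimeInSeconds=86400,
    QualificationRequirements=qualifications,
    Question=open("question.xml").read(),  # hypothetical QuestionForm XML
)

# Later: collect answers and pay. Each worker is visible only as an ID.
result = mturk.list_assignments_for_hit(HITId=hit["HIT"]["HITId"])
for assignment in result["Assignments"]:
    print(assignment["WorkerId"], assignment["Answer"])
    mturk.approve_assignment(AssignmentId=assignment["AssignmentId"])
```

Note how every step, from filtering by ‘qualifications’ to payment, addresses the worker only through an opaque WorkerId, which is exactly the distance-creating effect the authors describe.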

Reflections:
It was an interesting read because it talks about a major workforce that we hardly get a chance to interact with. As computer scientists, we work with machine learning models without appreciating the ‘ghost work’ that goes into building a “gold standard” dataset. For example, the much-used resource ImageNet, developed by Fei-Fei Li of the Stanford Human-Centered AI Institute, involved 49,000 workers from 167 countries accurately labeling 3.2 million images. ImageNet has been utilized by computer vision researchers to build state-of-the-art image recognition algorithms, but hardly any of those works have acknowledged the labor that went into labeling the images. Companies like LeadGenius and Amara are attempting to change how these ghost workers are treated. They deviate from traditional business strategies by hiring workers only after a rigorous interview round and additional tests conducted by senior workers. They offer a paid video orientation session, and after a 90-day trial period, workers may become eligible for an 8 percent hike in their hourly pay, subject to certain minimum requirements. Amara gives its workers the option to opt out of projects they find repetitive and to choose from a variety of projects, unlike other platforms where the content is pre-decided and workers have no autonomy. These companies should be appreciated for attempting to bridge the gap between mindless ghost work and the kind of work that takes into account the creativity and interests of an individual worker.

Questions:
All this makes me think about the steps we can take to better acknowledge the essential work put in by crowd workers. Does this work count as employment or volunteerism? How do we categorize it, and should we consider it formal employment? These are essential questions that need to be discussed.


01/29/20 – Ziyao Wang – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.

In this paper, the authors conducted a literature review of papers and publications representing the state of the art in human-computer collaboration and human computation. From this review, they classified affordances into two groups: human intelligence and machine intelligence. Although affordances can be split into these two groups, there are also systems, like reCAPTCHA and PatViz, that benefit from combining the two. Finally, the authors provided examples of how to utilize this framework and some advice for future research. They identified human adaptability and machine sensing as two extensions of current work. Also, future work (finding ways to measure human work, assessing human work in practice, and accounting for individual differences in human operators) will need a combination of expertise in theoretical computer science as well as psychology and neuroscience.

Reflections:

Primarily, I felt that both human affordances and machine affordances contribute to the success of current systems. It is very important to allocate tasks so that humans and machines can support each other. Current systems may suffer from poor human-computer collaboration; for example, a system may not assign appropriate work to human workers, or its user interface may be difficult to use. To avoid such situations, policies and guidance are needed: there should be commonly used evaluation criteria and restrictions on industry.

Secondly, researchers can benefit from an overview of related research areas. In most cases, solving a problem requires help from experts in different areas, so the categorization of the problem may become ambiguous. As a result, researchers from different fields may waste effort on similar research and may be unable to draw on previous work from another area. For this reason, it is important to categorize research with similar goals or related techniques. Current and future research will benefit from such categorization, and discussion between experts will become much easier. As a result, more ideas can be proposed, and researchers can discover fields they had not considered before.

Additionally, in human-computation and human-computer collaborative systems, problems are solved using both human intelligence and machine intelligence. For such a comprehensive area, it is important to reflect regularly. With such reflection, researchers can consider the problems they are going to solve more comprehensively. With the table in the paper, the affordances of human intelligence and machine intelligence can be surveyed at a glance; we can also find out which areas have seen a lot of research and which areas deserve more attention. With this common framework, understanding and discussing previous work becomes much easier, and novel ideas will emerge. This kind of reflection can be applied in other areas too, which will result in rapid development in each field.

Questions:

Why are there no updates to systems that are considered hard to use?

How can human work and machine work be assessed in practice?

For user interfaces, which is more important: making them easy for new workers to use, at the cost of limited customization, or letting experienced workers customize them for high efficiency, even though new users may face some difficulty?


01/29/20 – Ziyao Wang – Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms

The authors found that most current research on crowd work focuses on AMT, with little exploration of other platforms. To enrich the diversity of crowd work research and accelerate the field’s development, the authors contrast the key problems they found in AMT with the features of other platforms. Through this comparison, they found that different crowd platforms provide solutions to different problems in AMT; however, none of the platforms provide sufficient support for well-designed worker analytics. Additionally, they conclude that AMT still has a number of limitations and that research on crowd work will benefit from diversifying. Thanks to the authors’ contributions, future research on alternative platforms can avoid treating AMT as the default baseline.

Reflection:

AMT was one of the first platforms for crowd work, and many people have benefited from using it. However, if all companies and workers use only AMT, the lack of diversity will slow the development of crowd work platforms.

From AMT’s point of view, if there were no other crowd work platforms, it would be hard for AMT to effectively find solutions to its limitations and problems. In this paper, for example, the lack of automated tools is one of AMT’s limitations, while the platform WorkFusion allows the use of machine-automated workers, which is an improvement over AMT. If there is only one platform, it is hard for that platform to find its limitations by itself. But with competitors, platforms have to do a lot of research to provide a better user experience and stay up to date. As a result, research on crowd work will be pushed to develop rapidly.

Other platforms should not just copy the pattern of AMT. Though AMT is the most popular crowd work platform, it still has limitations. Other platforms can copy AMT’s advantages, but they should avoid its limitations and find solutions to those drawbacks. A good baseline can help other platforms get started, but if they simply follow the baseline, all platforms will suffer from the same drawbacks and none will propose solutions. To avoid this situation, companies should develop their own platforms, avoid the known drawbacks, and learn from the advantages of other platforms.

Researchers should not focus only on AMT. Of course, research on AMT will receive more attention, as most users use AMT, but this is harmful to the development of crowd work platforms. Even when some platforms have developed solutions to certain problems, if researchers ignore these platforms, the solutions will not spread, and researchers and other companies will spend unnecessary effort on similar solutions.

Questions:

What is the main reason most companies select AMT as their crowd work service provider?

What is the most significant advantage of each platform? Is there any chance of developing a platform that combines all the other platforms’ advantages?

Why have most researchers focused on AMT only? Is it possible to encourage more researchers to do cross-platform analysis?


01/22/2020 – Ziyao Wang – Ghost Work

The introduction explains what ghost work is: people hired through APIs to do work that cannot yet be solved by artificial intelligence. They may determine whether posts contain adult content or whether the person logging in to an account is the account holder. These people are like ghosts, since neither app users nor programmers ever see them. Chapter 1 mainly discusses the emergence and development of ghost work. MTurk was built when Amazon faced the problem of correcting e-book information; afterwards, the API was used to hire students to do the job, and more companies paid for similar services. After years of development, the API lets workers do macro-tasks under the direction of full-time employees. However, there are also problems with such ghost work: the hired workers can hardly protect their own interests, and companies can hardly identify the culprit when an issue occurs.

Reflection:

This kind of ghost work is beneficial for both the companies and the workers. Workers from poor areas can make a profit doing such jobs. In the example, a skilled worker can earn $40 per day, which is relatively high in some areas, for example, small towns in China. Also, this kind of job can be completed without time or location constraints, which means housewives or retired people can do it.

Meanwhile, companies can profit too. Because there are no time or location limitations, companies can always find cheap labor, which means they can save expenses on hiring and make more profit. Because they can hire people all over the world, they have workers available at different hours of the day. It is as if they were hiring 24-hour workers, which means more profit and a better user experience.

However, this kind of ghost work carries risks too. When problems occur, companies can hardly find the culprits, as they hire too many workers without knowing who they are. Also, humans make mistakes, so these hired workers are likely to make mistakes when doing tasks. One more point is that companies do not know the background of the people they hire. Sometimes the hired workers may perform very poorly; in the worst case, they can hardly understand the words on the screen, and the results will not be trustworthy.

For the workers, no one guarantees their rights. When companies refuse to pay, they have nowhere to turn to get their wages back. Also, after struggling with a problem, they may find that someone else has already solved it, and they receive no pay. One more point is that there is no guarantee that tasks will always be available; there may be few tasks during some workers’ working hours, leaving those workers with limited income.

Questions:

Are there any policies currently in place to protect the users of these APIs, both companies and hired workers?

How can the workers protect themselves when the companies refuse to pay their salary?

How should mistakes made by workers be handled? Are there remedies or penalties?


01/29/20 – Vikram Mohanty – Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms.

Paper Authors: Donna Vakharia and Matthew Lease.

Summary

This paper gives a general overview of different crowdsourcing platforms and their key feature offerings, while centering on the limitations of Amazon Mechanical Turk (AMT), the most popular of the platforms. The factors that make requesters resort to AMT are briefly discussed, but the paper points out that these factors are not exclusive to AMT. Other platforms offer most of these advantages while offsetting some of AMT’s limitations, such as quality control, automated task routing, and worker analytics. The authors qualitatively assess these platforms by comparing and contrasting them on the basis of key criteria categories. The paper, by providing exposure to lesser-known crowdsourcing platforms, hopes to mitigate one plausible consequence of researchers’ over-reliance on AMT, i.e., that the platform’s limitations can subconsciously shape research questions and directions.

Reflection

  1. Having designed and posted a lot of tasks (or HITs) on AMT, I concur with the paper’s assessment of AMT’s limitations, especially the lack of built-in gold standard tests and of support for complex tasks, task routing, and real-time work. The platform’s limitations are essentially offloaded onto the researcher’s time, effort, and creativity, which are then consumed working around those limitations instead of on more pressing matters (a minimal sketch of one such workaround appears after this list).
  2. This paper provides nice exposure to platforms that offer specialized and complex task support (e.g., CrowdSource supporting writing and text creation tasks). As platforms expand support for different complex tasks, this would a) reduce the workload on requesters for designing tasks, and b) reduce the quality control tensions arising from poor task design.
  3. Real-time crowd work, despite being an essential research commodity, still remains a challenge for crowdsourcing platforms. This inability has resulted in toolkits like LegionTools [1], which facilitate real-time recruiting and routing of crowd workers on AMT, but such toolkits are not the final solution. Even though many real-time crowd-powered systems have been built with this toolkit, they remain prone to being bottlenecked by the toolkit’s limitations, which may arise from a lack of resources for maintaining and updating software that originated as a student-developed research project. Crowd platforms adopting such research toolkits into their workflows may solve some of these problems.
  4. Sometimes, projects or new interfaces may require testing the learning curve of their users. That does not seem straightforward to achieve on AMT, since it lacks support for maintaining a trusted worker pool. However, it seems possible on other platforms like ClickWorker and oDesk, which support worker profiles and identities.
  5. A new platform called Prolific, launched publicly in 2019, alleviates some of the shortcomings of AMT by offering fair pay assurance (a minimum of $6.50/hour), worker task recommendation based on experience, initial filters, and quality control assurance. The platform also provides functionality for longitudinal/multi-part studies, which seems difficult to achieve with the functionality offered by AMT. The ability to run longitudinal studies was not addressed for the other platforms, either.
  6. The paper was published in 2015 and highlighted the lack of automated tools. Since then, numerous services have come up that offer human-in-the-loop functionality, including Amazon and Figure Eight (formerly CrowdFlower).
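As promised in point 1, here is a minimal sketch of a requester-side workaround for AMT’s missing built-in gold standard tests: embed questions with known answers in each HIT and approve or reject assignments based on accuracy. The gold answers, threshold, and answer-dict format are illustrative assumptions, not an official AMT feature.

```python
import boto3

# Hypothetical gold-standard answers keyed by question ID; AMT has no
# built-in notion of gold questions, so the requester maintains them.
GOLD = {"q17": "cat", "q42": "dog"}
PASS_THRESHOLD = 0.8  # illustrative cutoff

mturk = boto3.client("mturk", region_name="us-east-1")

def gold_accuracy(answers):
    """Fraction of embedded gold questions answered correctly.

    `answers` is a {question_id: answer} dict, assumed to have been
    parsed from the assignment's answer XML by the requester's code.
    """
    checked = [qid for qid in GOLD if qid in answers]
    if not checked:
        return 1.0  # this assignment contained no gold questions
    return sum(answers[qid] == GOLD[qid] for qid in checked) / len(checked)

def review(assignment_id, answers):
    """Approve or reject one assignment based on gold-question accuracy."""
    if gold_accuracy(answers) >= PASS_THRESHOLD:
        mturk.approve_assignment(AssignmentId=assignment_id)
    else:
        mturk.reject_assignment(
            AssignmentId=assignment_id,
            RequesterFeedback="Failed embedded quality-control questions.",
        )
```

This is exactly the kind of scaffolding that, on platforms with built-in quality control, the requester would not have to write and maintain.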

Questions

  1. The authors raise an important point that the most popularly used platform’s limitations can shape research questions and directions. If you were to use AMT for your research, can you think of how its shortcomings would affect your RQs and research directions? What would be the most ideal platform feature for you?
  2. The paper advocates algorithms for task recommendation and routing, as has been pointed out in other papers [2]. What are some other deficiencies that can be supported by algorithms? (reputations, quality control, maybe?)
  3. If you had a magic tool to build a crowdsourcing platform to support your research, along with bringing a crowd workforce, what would your platform look like (the minimum viable product)? And who’s your ideal crowd? Why would these features help your research?


01/22/20 – Rohit Kumar Chandaluri – Ghost Work

A lot of work goes on in today’s technologies without us knowing about it, such as identifying hate speech or abusive speech. All of this work is attributed to computer software that we call Artificial Intelligence. It is hard for software alone to classify every possible scenario in the world as hate speech or not, so we train the software by providing it with sample data and labels indicating which sentences are hate speech and which are not. The data required by the software is provided by humans; for software to evolve human-like capabilities, human intervention is required. The chapters explain how this data labeling is achieved and how crowdsourcing jobs and platforms like MTurk developed over the course of ten years. They also explain how these jobs help software systems evolve and how they are paid on a contract basis. There are no laws that require these jobs to provide basic necessities like health insurance or food; the pay is decided by the supply of labor for the job, and in some cases it falls below the minimum wage. The chapters also explain how macro- and micro-tasks get crowdsourced for very low payments, expanding such jobs into all kinds of areas.
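The train-by-example loop described above can be sketched in a few lines of Python. This toy classifier, with an invented four-sentence dataset, stands in for real systems trained on millions of crowd-labeled examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Crowd workers supply (text, label) pairs like these; this tiny
# dataset is invented purely for illustration.
texts = [
    "you are wonderful",
    "have a great day",
    "I hate you, get out",
    "you people are disgusting",
]
labels = [0, 0, 1, 1]  # 0 = not hate speech, 1 = hate speech

# The "software" learns a mapping from text features to the
# human-provided labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["you are disgusting"]))  # likely [1]
```

Every labeled example in a real pipeline like this represents a judgment made by a human worker, which is precisely the hidden labor the chapters describe.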

The chapters raised the interesting topic of how crowdsourcing can be used at very low prices, and it is also interesting to see people making a living out of these kinds of jobs. The work can indeed be engaging rather than mundane, since you change tasks for each different job. The chapters didn’t explain how crowdsourced work is validated, which I am interested to learn about. It is interesting to learn that we can break a big task into small micro- or macro-tasks and get them done much more cheaply using crowdsourcing, and that internal crowdsourcing jobs exist in companies like Microsoft and Google.

  1. Will jobs like these grow and put full-time jobs at stake, given that they can be applied to many areas, such as education (projects, assignments) and even macro-tasks within big projects?
  2. How do people who rely solely on these jobs make a living while being paid less than minimum wage?
  3. How is this work validated? How can you make sure that a task completed by a crowd worker was done well?
  4. Crowdsourced work helps create automated software that will eventually eliminate the jobs of the people doing that work now, and it will also remove the simpler crowdsourcing tasks themselves; once the software is developed, uncovering corner cases will require experts, for which only a few people will be eligible. Is crowdsourcing creating jobs or removing them?


01/22/20 – Nan LI – Ghost Work

There is a group of people, coming from different states and even different time zones, doing repetitive but important tasks that make apps more intelligent: for example, blocking inappropriate photos from websites, manually comparing photos of Uber drivers, and so on. Their jobs are not full time, pay a low salary, and offer unstable work opportunities. The authors define this type of work as ghost work. According to incomplete statistics, the number of these workers is still increasing. However, this type of work has no guarantees, no bonuses, no promotions, and a limited number of jobs. Based on the investigation, there are various reasons people choose to be ghost workers: for example, they don’t want to leave their family, they don’t want to be tied down by a full-time job, or they need good experience to show on their resume. This book mainly presents research on this booming form of work and the standard of living of these ghost workers. The authors also indicate that even though Artificial Intelligence is becoming prevalent, the last mile between what humans can do and what machines can do is still wide.

This chapter reminds me of a news story I read that revealed a scam: some “technology companies” claim to be able to undo ransomware while actually just negotiating a ransom with the hackers, then charging their customers far more than the ransom. The reason I thought of this news is that the scam deceived people in the name of technology. This story may not have much to do with ghost work, but there are many reports of AI companies hiring cheap labor to perform manual operations that make their products look smart, and I think this is no different from what the scam above did. Nevertheless, I am only discussing a very extreme case, just because the news came to mind. Compared with these events, what ghost workers do is far more positive. I would say their work makes up the last mile between humans and AI. In the Uber driver case, ghost workers only add manual recognition when the driver’s appearance changes significantly and the machine cannot recognize them. We can dismiss this as immature, even “semi-AI” technology, but we can also treat this kind of work as part of AI work once we acknowledge the insurmountable last-mile problem. Besides, considering the job opportunities provided to these people, and the convenience and efficiency provided by ghost workers, I would rather consider this a win-win strategy. Yet this win-win situation rests on the premise that AI technology is not mature enough, the unemployment rate is high, and society has sufficient demand for this type of work.

There are more negative effects that we have talked about in class; however, I would prefer to discuss the perspective of the people who need these jobs. There must be reasons these jobs exist; the authors already introduced the origin of this work and the benefits of this working model. However, with the progress of society and the development of science and technology, how this working model will change is still unknown. We shouldn’t just see the immediate benefits without considering long-term development. Based on this, I would like to raise the following topics for discussion:

  • How would you predict the future development of this working model?
  • One’s attitude depends on one’s perspective and position. What is your perspective?
  • Based on your perspective, how do you evaluate the pros and cons of ghost work?
