03/04/20 – Nan LI – Combining Crowdsourcing and Google Street View to Identify Street-level Accessibility Problems

Summary:

The main objective of this paper is to investigate the feasibility of using crowd workers to locate and assess sidewalk accessibility problems in Google Street View imagery. To achieve this goal, the author conducted two studies examining the feasibility of finding and labeling sidewalk accessibility problems. The paper uses the results of the first study to demonstrate the feasibility of the labeling task, define what good labeling performance looks like, and provide verified ground truth labels that can be used to assess the performance of crowd workers. Then, the paper evaluates annotation correctness at two discrete levels of granularity: the image level and the pixel level. The former checks for the absence or presence of a label, while the latter examines accuracy more precisely, in a way related to image segmentation work in computer vision. Finally, the paper discusses quality control mechanisms, which include statistical filtering, an approach for revealing effective performance thresholds to eliminate poor-quality turkers, and a verification interface, a subjective approach to validating labels.

Reflection:

The most impressive point in this paper is the feasibility study, Study 1. This study not only investigates the feasibility of the labeling work but also provides a standard of good labeling performance and validated ground truth labels, which can be used to evaluate the crowd workers’ performance. This pre-study provides the clues, directions, and even the evaluation metrics for the later experiment. It provides the most valuable information for the early stage of the research with very low workload and effort. I think a common research problem is that we put a lot of effort into driving the project forward instead of preparing and investigating feasibility. As a result, we get stuck on problems that we could have foreseen by conducting a pre-study.

However, I don’t think the pixel-level assessment is a good idea for this project, because the labeling task does not require such high accuracy for the inaccessible area, and it is too precise to mark the inaccessible area at the unit of a pixel. As the table of pixel-level agreement results in the paper indicates, the area overlaps for both binary classification and multiclass classification are no more than 50%. Also, although the author thinks even a 10-15% overlap agreement at the pixel level would be sufficient to localize problems in images, this makes me more confused about whether the author wants to make an accurate evaluation or not.
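To make the pixel-level agreement idea concrete, here is a minimal sketch of area overlap (intersection over union) between two binary label masks. The masks, the 8-pixel strip, and the worker labels are invented for illustration, not the paper's data:

```python
def pixel_overlap(mask_a, mask_b):
    """Return intersection-over-union of two same-sized binary masks."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return intersection / union if union else 1.0

# Two workers labeling the same 8-pixel strip (1 = "inaccessible").
worker1 = [0, 1, 1, 1, 0, 0, 1, 0]
worker2 = [0, 0, 1, 1, 1, 0, 1, 0]

print(pixel_overlap(worker1, worker2))  # 0.6
```

Even these two largely agreeing masks overlap only 60%, which illustrates why pixel-level overlap numbers look low relative to image-level agreement.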

Finally, considering our final project, it is worth thinking about the number of crowd workers that we need for the task. We need to think about the accuracy of turkers per job. The paper made the point that performance improves with turker count, but these gains diminish in magnitude as group size grows. Thus, we might want to figure out the trade-off between accuracy and cost so that we can make a better-informed choice when hiring workers.
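The diminishing-returns point can be sketched with a simple model: if each turker is independently correct with probability p, majority vote over n workers is correct with the binomial tail probability below. The value p = 0.8 is an assumption for illustration, not a number from the paper:

```python
from math import comb

def majority_accuracy(n, p):
    """P(majority of n independent workers is correct), for odd n."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Accuracy grows with group size, but each added pair of workers helps less.
for n in (1, 3, 5, 7, 9):
    print(n, round(majority_accuracy(n, 0.8), 3))
```

Each print line pairs a group size with its majority-vote accuracy; the gap between successive lines shrinks, which is exactly the accuracy-vs-cost trade-off worth weighing before hiring.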

Questions:

  • What do you think about the approach of this paper? Do you believe a pre-study is valuable? Will you apply this in your research?
  • What do you think about the metrics the author used for evaluating labeling performance? What other metrics would you apply to assess the rate of overlap area?
  • Have you ever considered how many turkers you would need to hire to meet your accuracy requirement for a task? How would you evaluate this number?

Word Count: 578


03/04/20 – Nan LI – Real-Time Captioning by Groups of Non-Experts

Summary:

In this paper, the author focuses on the main limitations of real-time captioning. The author makes the point that captioning with high accuracy and low latency requires expensive stenographers who must be booked in advance and are trained to use specialized keyboards. The less expensive option is automatic speech recognition; however, its low accuracy and high error rate greatly affect the user experience and cause many inconveniences for deaf people. To alleviate these problems, the author introduces an end-to-end system called LEGION: SCRIBE, which enables multiple workers to caption simultaneously in real time and combines their input into a final answer with high precision, high coverage, and low latency. The author experimented with crowd workers and other local participants and compared the results with CART, ASR, and individual workers. The results indicate that this end-to-end system with a group of workers can outperform both individuals and ASR in coverage, precision, and latency.

Reflection:

First, I think the author made a good point about the limitations of real-time captioning, especially the inconvenience they bring to deaf and hard-of-hearing people. Thus, the greatest contribution this end-to-end system provides is access to a cheap and reliable real-time captioning channel. However, I have several concerns about it.

First, this end-to-end system requires a group of workers; even if each person is paid a low wage, as the captioning time increases, the pay for all the workers is still a significant overhead.

Second, to satisfy the coverage requirement, a high-precision, high-coverage, low-latency caption requires at least five or more workers working together. As mentioned in the experiment, the MTurk workers need to watch a 40-second video to understand how to use the system. Therefore, the system may not be able to find the required number of workers in time.

Third, since the system only combines the work of the workers, there is a coverage problem: if all of the workers miss a piece of information, the system output will be incomplete. Based on my experience, if one person misses part of the information, most other people usually miss it as well. As the example in the paper shows, no worker typed “non-aqueous” in a clip about chemistry.
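This coverage ceiling is easy to see in a toy sketch: the combined caption can only contain words that at least one worker typed. The ground-truth sentence and the worker inputs below are invented for illustration (word-set coverage here is a simplification of how SCRIBE actually merges streams):

```python
def coverage(captured_words, truth_words):
    """Fraction of distinct ground-truth words that were captured."""
    return len(set(captured_words) & set(truth_words)) / len(set(truth_words))

truth = "the non-aqueous solvent dissolves the compound quickly".split()
workers = [
    "the solvent dissolves the compound".split(),
    "solvent dissolves compound quickly".split(),
    "the solvent the compound quickly".split(),
]

combined = set().union(*workers)
print(coverage(combined, truth))  # no worker typed "non-aqueous", so coverage < 1.0
```

The union beats any single worker, but the word nobody typed is lost no matter how the inputs are merged.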

Finally, I am considering combining human correction with ASR captions. Humans have the strength of remembering previously mentioned knowledge, for example an abbreviation, yet they cannot type fast enough to cover all the content. ASR, on the other hand, usually does not miss any portion of the speech, yet it makes some unreasonable mistakes. Thus, it might be a good idea to let humans correct the inaccurate captions of ASR instead of trying to type all the speech content.

Question:

  • What do you think of this end-to-end system? Can you evaluate it from different perspectives, such as expense and accuracy?
  • How would you solve the problem of inadequate speech coverage?
  • What do you think of the idea of combining human and ASR work? Do you think it would be more efficient or less efficient?

Word Count: 517


02/26/20 – Nan LI – Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment

Summary:

The main objective of this paper is to investigate how people make fairness judgments of ML systems and how explanations impact those judgments. In particular, they explored the difference between global explanations, which describe how the model works, and local explanations, which are sensitive and case-based. Besides, the author also demonstrates how individual differences in cognitive style and prior position on algorithmic fairness impact fairness judgments under different explanations. To achieve this goal, the author conducted an online survey-style study with Amazon Mechanical Turk workers meeting specific criteria. The experimental results indicate that which explanation is effective varies with the kind of fairness issue and the user profile. However, a hybrid explanation that uses global explanations to comprehend and evaluate the model and local explanations to examine individual cases may be essential for accurate fairness judgment. Furthermore, they also demonstrated that individuals’ prior positions on the fairness of algorithms affect their responses to different types of explanations.

Reflection:

First, I think this paper addresses a very critical and imminent topic. Since the exploration and implementation of machine learning and AI systems, ML predictions have been widely deployed to make decisions in high-stakes fields such as healthcare and criminal prediction. However, society has great doubts about how these systems make decisions. People cannot accept, or even understand, why these important decisions should be left to a piece of algorithm. The community’s call for algorithmic transparency is therefore getting louder and louder. At this point, an effective, unbiased, and user-friendly interpretation of an ML system that enables the public to identify fairness problems would not only help ensure the fairness of the ML system but also increase public trust in its output.

However, it is also tricky that there is no one-size-fits-all solution for an effective explanation. I do understand that different people react differently to explanations; nevertheless, I was somewhat surprised that people have very different opinions on the judgment of fairness. Even though this is understandable considering their prior positions on the algorithm, their cognitive styles, and their different backgrounds, it makes ensuring the fairness of a machine learning system more complex, since the system may need to take into account individual differences in fairness positions, which may require different corrective or adaptive actions.

Finally, this paper reminds me of another similar topic: when we explain how a model works, how much information should we provide? What kind of information should be withheld so that it will not be abused? In this paper, the author only mentions providing two types of explanations: global explanations that describe how the model works, and local explanations that attempt to justify the decision in a specific case. However, they did not examine the extent of model information provided in the explanation. I think this is an interesting topic since we are investigating the impact of explanations on fairness judgment.

Question:

  1. Which type of explanation mentioned in this article would you prefer to see when you judge the fairness of an ML system?
  2. How does the way users perceive machine learning fairness influence the fairness-ensuring process when designing the system?
  3. This experiment was conducted as an online survey with crowd workers instead of judges; do you think this influences the experimental results?

Word Count: 564


02/26/20 – Nan LI – Will You Accept an Imperfect AI? Exploring Designs for Adjusting End-user Expectations of AI Systems

Summary:

The key motivation of this paper is to investigate the factors influencing user satisfaction with and acceptance of an imperfect AI-powered system; the example used in this paper is an email scheduling assistant. To achieve this goal, the author conducted a number of experiments based on three techniques for setting expectations: an Accuracy Indicator, Example-based Explanations, and Performance Control. Before the experiments, the author poses three main research questions, which concern the impact of High Precision (low false positives) versus High Recall on perceived accuracy and acceptance, effective design techniques for setting appropriate end-user expectations of AI systems, and the impact of expectation-setting intervention techniques. A series of hypotheses was also made before the experiments. Finally, the experimental results indicate that the expectation-adjustment techniques demonstrated in the paper affected the intended aspects of expectations and were able to enhance user satisfaction with and acceptance of an imperfect AI system. Unexpectedly, the conclusion is that a High Recall system can increase user satisfaction and acceptance more than High Precision.

Reflection:

I think this paper addresses a critical concern about AI-powered systems from an interesting and practical direction. The whole approach of this paper reminds me of a previous paper that presented a summary of guidelines for Human-AI interaction. The first principle is to make clear to the user what the AI can do, and the second is to make clear how well the AI can do what it can do. Thus, I think the three expectation-adjustment techniques are designed to give the user clear clues about these two guidelines. However, instead of using text only to inform the user, the author designed three interfaces based on the principles of combining visualization and text and striving for simplicity.

These designs inform the user of the system’s accuracy very intuitively. Besides, they also allow the user to control the detection accuracy, so that users can apply their own requirements. Thus, through several adjustments of the control and the feedback experience, the user would finally settle on an appropriate expectation. I believe this is the main reason these techniques could successfully increase user satisfaction and acceptance of an imperfect AI system.

However, as the authors mention in the paper, the conclusion that users are more satisfied with and accepting of a system with High Recall instead of High Precision is based on the fact that, in their experimental platform, users can recover from a false positive more easily than from a false negative. In my view, the preference between High Recall and High Precision should differ across AI systems. Nowadays, AI systems are widely applied in high-stakes domains such as health care or criminal prediction. For these systems, we might want to tune different systems to optimize for different goals.
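The High Precision / High Recall distinction the paper turns on can be made concrete with a short sketch. The counts below are invented for a hypothetical meeting-request detector, not taken from the paper:

```python
def precision(tp, fp):
    """Of the items the system flagged, what fraction were real?"""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of the real items, what fraction did the system flag?"""
    return tp / (tp + fn)

# High-precision setting: few false alarms, but more requests missed.
print(precision(40, 2), recall(40, 20))

# High-recall setting: catches almost every request, at the cost of false alarms.
print(precision(58, 20), recall(58, 2))
```

Which trade-off feels "accurate" to the user depends on which error is cheaper to recover from, which is exactly the paper's point.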

Questions:

  1. Can you see any other related guidelines applied in the expectation-adjustment techniques designed in the paper?
  2. Is there any other way we can adjust user expectations of an imperfect AI system?
  3. What do you think are the key factors able to decrease user expectations? Do you have a similar experience?

Word Count: 525


02/19/2020 – Nan LI – In Search of the Dream Team: Temporally Constrained Multi-Armed Bandits for Identifying Effective Team Structures

Summary:

The paper points out that there is no universally ideal structure for effective teamwork; the best structure is determined by the team members, the task, the surroundings, etc. Thus, this paper presents a system that searches for the optimal team structure by adapting different team structures to teams and evaluating their efficiency based on team performance and teamwork feedback. However, the combination of the diverse dimensions of team structure and the arms (the different values of each dimension) forms a large set. To avoid overwhelming group testers with these values, the paper also leverages a model called multi-armed bandits with temporal constraints, which limits the number of arm selections based on several factors. The paper tested the platform with AMT workers and evaluated the performance of the system with a designed task and performance evaluation. The system confirmed that no two teams had the same optimal team structure, and this structure even differed for the same group when completing different tasks. The results also indicate that the DreamTeam platform can promote highly efficient teamwork.

Reflection:

First, I highly agree with the opinion that there is no universal ideal structure for effective teamwork. Besides, searching for the optimal structure by adapting different dimensions and evaluating each fit also seems reasonable. However, I think the experiment could have gathered more valuable information if the platform had been tested with a real group instead of a randomly formed one, because I think the premise of becoming a group and completing a task together is that the group members are familiar with each other. Thus, this platform should be most effective in the early stages of team formation: before team members are familiar with each other, they can use this system to find a temporary optimal team structure, so that they can quickly cooperate and work as a team even though they do not know each other. Nevertheless, as familiarity among the group members rises, using this method to determine the optimal structure may become inefficient, because they may have already found the best structure for some of the dimensions as they get along and gain more experience working together.

I also considered another situation that is well suited to this system: when a long-established team is assigned a new type of task. The team’s working mode may need to be switched so that they can complete the new task most efficiently, and at that point the system’s support is needed to find the new optimal structure.

Finally, I think the constraints method mentioned in the article is also very inspirational. Maybe we can improve the effectiveness of the DreamTeam platform by allowing users to pre-delete some dimensions that they would not like to change, for example the hierarchy or the interaction pattern. In this case, the reduced set of combinations is more conducive to exhaustive testing, and the adapted structure should fit the teamwork better.
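The bandit mechanism behind the platform can be illustrated with a small epsilon-greedy sketch. DreamTeam's actual algorithm adds temporal constraints on arm switching; the arm names, reward probabilities, and epsilon value below are my own invented assumptions:

```python
import random

def epsilon_greedy(counts, totals, epsilon=0.1):
    """Pick an arm: try each once, then explore with prob. epsilon, else exploit."""
    arms = list(counts)
    untried = [a for a in arms if counts[a] == 0]
    if untried:
        return random.choice(untried)
    if random.random() < epsilon:
        return random.choice(arms)
    return max(arms, key=lambda a: totals[a] / counts[a])

random.seed(0)
arms = ["flat hierarchy", "strong hierarchy", "rotating leader"]
true_mean = {"flat hierarchy": 0.5, "strong hierarchy": 0.7, "rotating leader": 0.4}
counts = {a: 0 for a in arms}
totals = {a: 0.0 for a in arms}

for _ in range(1000):
    arm = epsilon_greedy(counts, totals)
    reward = 1.0 if random.random() < true_mean[arm] else 0.0  # noisy performance signal
    counts[arm] += 1
    totals[arm] += reward

print(counts)  # see which structure ended up selected most often
```

Pre-deleting a dimension, as suggested above, simply removes its arms from `counts` before the loop, shrinking the space the bandit must explore.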

Question:

  1. What do you think of using computational power to decide the optimal structure for teamwork?
  2. In this paper, the author recruits random testers to form groups and complete the task; do you think this influences the results?
  3. Under what conditions do you think this platform would be most beneficial?

Word Count: 544


02/19/2020 – Nan LI – Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff

Summary:

In this paper, the author observes that it is now prevalent for humans and AI to form a team to make decisions, in such a way that the AI provides recommendations and the human decides whether to trust them. In these cases, successful team collaboration is mainly based on the human’s knowledge of the AI system’s previous performance. The author proposes that although an update to the AI system may enhance its predictive precision, it can hurt team performance, since the updated version is usually not compatible with the mental model the human developed with the previous AI system. To address this problem, the author introduces the concept of the compatibility of an AI update with prior user experience. To examine the role of this compatibility in human-AI teams, the author proposed methods and designed a platform called CAJA to measure the impact of updates on team performance. The outcomes show that team performance can be harmed even when the updated system’s predictive accuracy improves. Finally, the paper proposes a re-training objective that can improve the compatibility of updates. In conclusion, to avoid diminished team performance, developers should build more compatible updates without sacrificing performance.
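The compatibility idea can be sketched in a few lines. This is my own toy formulation, not the paper's exact measure: among the cases the old model classified correctly (where the user's mental model says "trust the AI"), count how many the updated model still gets right. The data is invented:

```python
def compatibility(old_correct, new_correct):
    """Fraction of previously-correct cases the update still gets right."""
    trusted = [i for i, ok in enumerate(old_correct) if ok]
    if not trusted:
        return 1.0
    return sum(new_correct[i] for i in trusted) / len(trusted)

old = [1, 1, 1, 0, 1, 0, 1, 0]   # old model right on 5 of 8 cases
new = [1, 0, 1, 1, 1, 1, 0, 1]   # update right on 6 of 8, but on different cases

print(sum(new) / len(new) > sum(old) / len(old))  # accuracy improved: True
print(compatibility(old, new))  # yet only 0.6 compatible with prior trust
```

This is the paper's central tension in miniature: overall accuracy rises while the update breaks cases the user had learned to rely on.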

Reflection:

In this paper, the author discusses a specific interaction, AI-advised human decision making, as in the patient readmission example presented in the paper. In such cases, an incompatible update to the AI system would indeed harm team performance. However, I think the extent of the impact largely depends on the correlation between the human and the AI system.

If the system and the user have a high degree of interdependence, and neither is a specialist on the task, then the system’s prediction accuracy and the user’s knowledge have equal impact on the decision, and an incompatible update of the AI system will weaken team performance. Even though this effect can be eliminated later as the user and the system settle in with each other, the cost of decisions in high-stakes domains will be very large.

However, if the system interacts with users frequently, but its predictions are only one of the factors humans consider and cannot directly determine the decision, then the impact of incompatible updates on team performance will be limited.

Besides, if humans have more expertise on the task and can promptly validate the correctness of a recommendation, then neither the improvement in system performance nor the new errors caused by the update will have much impact on the results. On the other hand, if the errors caused by the update do not affect team performance, then when updating the system we do not need to consider compatibility, only the improvement of system performance. In conclusion, if there is not enough interaction between the user and the system, the degree of interdependence is not high, or the system serves only as an aid or a double-check, then a system update will not have a great impact on team performance.

A compatible update is indeed helpful for users to quickly adapt to the new system, but I think the impact of an update largely depends on the correlation between the user and the system, or the proportion of the system’s role in the teamwork.

Besides, designing a compatible update also requires extra cost. Therefore, I think we should consider minimizing the impact of system errors on the decision-making process when designing the system and establishing the human-AI interaction.

Question:

  1. What do you think about the concept of compatibility of AI updates?
  2. Do you have any examples of human-AI systems to which the author’s theory applies?
  3. Under what circumstances do you think the author’s theory is most useful, and when is it not applicable?
  4. When we need to update the system frequently, do you think it is better to build a compatible update or to use an alternative method to address the human adaptation cost?
  5. In my opinion, humans’ degree of adaptability is very high, and the cost required for humans to adapt is much smaller than the cost of developing a compatible update. What do you think?

Word Count: 682


02/05/20 – Nan LI – Power to the People: The Role of Humans in Interactive Machine Learning

Summary:

The author indicates that interactive machine learning can promote the democratization of applied machine learning, enabling users to make use of machine-learning-based systems to satisfy their own requirements. However, achieving effective end-user interaction through interactive machine learning brings new challenges. To address these challenges and highlight the role and importance of users in the interactive machine learning process, the author presents case studies and a discussion based on their results. The first section of case studies indicates that end users always expect richer involvement in the interactive machine learning process than just labeling instances as an oracle. Besides, transparency about how the system works can improve the user experience and the accuracy of the resulting models. The case studies in the next sections indicate that richer user interactions are beneficial within limited bounds and may not be appropriate for all scenarios. Finally, the author discusses challenges and opportunities for interactive machine learning systems, such as the need to develop a common language across diverse fields.

Reflection:

Personally, I am not very familiar with machine learning. However, after reading this paper, I think interactive machine learning systems could amplify the effects of machine learning on our daily life to a great extent. In particular, involving users with little or no machine learning knowledge in the learning process could not only improve the accuracy of the learning outcomes but also enrich the interaction between users and products.

One typical example of interactive machine learning I have experienced is a feature of the NetEase Cloud Music player: Private Radio. The private radio recommends music you may like based on your playlist and then asks for your feedback, namely like or dislike. The more feedback you provide, the more likely you are to like the next recommendation. Thus, the finding in the paper that end users would like richer interaction is reasonable. I would also like to tag the recommended music with more than just like or dislike; the tag might also include the reason, such as liking a song for its melody or lyrics.
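A feedback loop like the one described above can be sketched very simply: each like or dislike on a recommended song nudges per-feature preference weights, so later recommendations lean toward liked features. The feature names, songs, and update rule are invented for illustration, not NetEase's actual algorithm:

```python
def update_weights(weights, song_features, liked, step=1.0):
    """Nudge each feature's weight up on a like, down on a dislike."""
    for f in song_features:
        weights[f] = weights.get(f, 0.0) + (step if liked else -step)
    return weights

def score(weights, song_features):
    """Score a candidate song by summing its features' learned weights."""
    return sum(weights.get(f, 0.0) for f in song_features)

weights = {}
update_weights(weights, ["soft melody", "piano"], liked=True)
update_weights(weights, ["heavy bass"], liked=False)

candidates = {"ballad": ["soft melody", "piano"], "club mix": ["heavy bass"]}
best = max(candidates, key=lambda s: score(weights, candidates[s]))
print(best)  # "ballad" now outranks "club mix"
```

Tagging a reason ("melody", "lyrics") would correspond to updating only the named feature instead of all of a song's features, which is exactly why richer feedback helps the model.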

I also agree with the scenario that transparency can help people provide better labels. In my opinion, transparency about how the system works has the same effect as giving users feedback on how their operations have influenced the system. A good understanding of the impact of their actions would allow users to proactively give more accurate feedback. Returning to the music player example, if my private radio always recommends music I like, then in order to hear more good music, I will be more willing to provide feedback. Conversely, if my feedback has no influence on the radio’s recommendations, I will just give up on the feature.

Questions:

  • Do you have a similar experience with interactive machine learning systems?
  • What are your expectations of these systems?
  • What do you think of the trade-off between machine learning and human-computer interaction in these interactive learning systems?
  • Discuss any of the challenges faced by interactive learning systems that are presented at the end of the paper.


02/05/20 – Nan LI – Guidelines for Human-AI Interaction

Summary

In this paper, the author proposes and evaluates 18 guidelines for Human-AI Interaction. These guidelines were summarized and distilled through four main stages; the author explains these four phases and presents partial results by listing several representative examples. First, the author made an exhaustive survey of AI design guidelines from different companies, industries, public articles, and papers. Then, they conducted a modified heuristic evaluation of these guidelines and reflected on the results. In the third phase, the author conducted a user study with 49 HCI practitioners to evaluate the guidelines on two main aspects: 1) the broad applicability of the guidelines, and 2) the semantic intelligibility of the guidelines. Finally, the author evaluated and revised the guidelines with experts who have work experience in UX/HCI and are familiar with discount usability methods such as heuristic evaluation. The guidelines were analyzed, adjusted, and summarized after each stage based on that stage’s results, and the paper presents the results of each stage through tables and figures. Finally, the author discusses the scope of these guidelines, as well as issues found during the evaluation phases.

Reflection:

The main content of this article is the evaluation of the author’s summary. The evaluation process is divided into three phases. There are many times when we need to evaluate our own hypotheses or conclusions in daily study and research; thus, the evaluation process presented by the author in this paper has many valuable points that are worth learning.

In the first phase, the author’s original version of the guidelines was collected from various sources. The collection is very comprehensive: it is not limited to published papers or journals but also draws on existing products and applications.

In the next three phases, each stage of the assessment is very detailed and comprehensive. For example, when the author wanted to evaluate whether the guidelines are applicable to AI-infused products, only 13 products were inspected; the number is not large, but the functions of these products are very representative.

In addition, the personnel involved in the inspection in each phase are professionals with experience in the HCI area, which also ensures the professionalism of the evaluation.

During the evaluation, the author not only focused on the applicability and accuracy of the guidelines but also emphasized the quality of semantic expression. This has a great positive effect on the use and dissemination of the guidelines.

In the final discussion of the article, the author also points out that the development of AI-infused products should always consider ethical issues instead of just adhering to the design guidelines. I don’t have much comment on this; I just suddenly realized that no matter the area, and no matter what kind of product is being designed, it is always linked to ethical issues and bias. This is always the most complicated topic.

Questions:

  • This paper gives a very detailed user study process and results. Have you ever conducted a standard HCI user study? What can you learn from the user study in this paper?
  • The original version of the guidelines proposed in this article is based on existing papers and product design summaries. However, this summary is more about AI design than HCI design. What do you think about this? Do you think they should have collected more information about HCI design principles, or was the information collected by the author adequate?
  • Do you think the inspection process should include more ordinary AI product users?


01/29/20 – Nan LI – Human Computation: A Survey and Taxonomy of a Growing Field

Summary:

The key motivation of this paper is to distinguish the definition of human computation from other terms such as “crowdsourcing.” The author explores an accurate definition of human computation and develops a classification system to provide directions for research on human computation. The author also analyzes human computation and demonstrates the key ideas with a graph, which covers the motivations of the people who perform the computation, how to control quality, how to aggregate the work, what kinds of skills are required for human computation, and the typical process order. There are several overlaps between human computation and other terms. The author summarizes the definition of human computation from various sources into two main points: first, the problems fit the general computing paradigm and so might someday be solvable by computers; second, the human participants are directed by the computing system or process. The author compares human computation with related ideas and presents a classification system based on the six most significant distinguishing factors. This paper is mainly a taxonomy of human computation; however, the author indicates future uses of these classifications and future directions of research, such as issues related to ethics and labor standards.

Reflection:

This article provides a clear definition of human computation, which I think is the most important step before exploring more information or expressing any opinion on this topic. I prefer the definition “…a technique to let humans solve tasks, which cannot be solved by computers,” although we know these problems could be solved one day with the development of technology. Looking at the motivations indicated by the author, I consider human computation an inevitable trend, given the revealed deficiencies of artificial intelligence and the needs of network users.

An interesting contradiction occurred to me while reading the paper. When I checked the graph giving an overview of the classification system for human computation systems, I found that the author lists altruism as one of the motivations. I was skeptical and did not believe it until I saw the example: “thousands of online volunteers combed through over 560,000 satellite images hoping to determine Gray’s location, who was missing during a sailing trip in early 2007.” I think that is the best possible reason.

The work described in the book Ghost Work could be one type of work included in the paper’s definition of human computation. The motivation for that work is payment, and different jobs have different process orders. Tagging pictures could be considered “Worker->Requester->Computer (WRC),” while the Uber driver case might be “Computer->Worker->Requester.”

This paper is a summary and classification of the present state of human computation, without innovative ideas of its own. However, the follow-up work the author puts forward at the end is worth discussing, especially the issues related to ethics and labor standards. We do not have any regulation for this mode of work. How do we protect the workers? How do we prevent the product from intentional destruction? How will human computation develop in the future?

Questions:

  • Do you think the field of human computation will exist for a long time, or will it soon be replaced by highly developed AI?
  • What aspects of human computation do you think will involve ethical problems?
  • Which description in this paper is most in line with ghost work?
  • Can we find other examples of the motivations or quality control of human computation?


01/22/20 – Nan LI – Ghost Work

There is a group of people who come from different states, even different time zones, doing repetitive but important tasks that make apps more intelligent — for example, blocking inappropriate photos from a website, or manually comparing photos of Uber drivers. Their jobs are not full time, the pay is low, and even the work opportunities are unstable. The authors define this type of work as ghost work. According to incomplete statistics, the number of these workers is still increasing. However, this type of work offers no guarantees, no bonuses, no promotions, and the number of jobs is limited. Based on the authors’ investigation, there are various reasons people choose to become ghost workers: they do not want to leave their families, they do not want to be tied to a full-time job, or they need good experience on their resumes. The book mainly presents research on this booming form of work and the living standards of ghost workers. The authors also point out that even though artificial intelligence is prevalent now, the last mile between what humans can do and what machines can do is still large.

This chapter reminds me of a news story I read before that revealed a scam. Some “technology companies” claimed to be able to defeat ransomware while actually just negotiating a ransom with the hackers, then charging their customers far more than the ransom itself. The reason I thought of this story is that the scam was carried out in the name of technology. This news may not have much to do with ghost work, but there are also many reports of AI companies hiring cheap staff to perform manual operations to make their products look smart, and I think that is no different from what the scam above did. Nevertheless, I am only discussing a very extreme case, simply because it reminded me of the news I saw. Compared with these events, what ghost workers do is far more positive. I would say their work makes up the last mile between humans and AI. In the Uber driver case, ghost workers only add manual recognition when the driver’s appearance changes significantly and the machine cannot recognize it. We can dismiss this as immature, even “semi-AI,” technology, but we can also treat this kind of work as part of AI once we acknowledge the insurmountable last-mile problem. Besides, consider the job opportunities provided for these people, and the convenience and efficiency ghost workers provide. I would rather consider this a win-win strategy. Yet this win-win situation rests on the premise that AI technology is not yet mature, the unemployment rate is high, and society has sufficient demand for this type of work.

There is a more negative effect that we talked about during class; however, I would prefer to discuss it from the perspective of the people who need these jobs. There must be a reason these jobs exist, and the authors have already introduced the origins of this work and the benefits of this working model. However, as society progresses and science and technology develop, how this working model will change is still unknown. We should not focus only on the immediate benefits without considering long-term development. Based on this, I would like to raise the following topics for discussion:

  • How would you predict the future development of this working model?
  • Opinions depend on different perspectives and positions. What is your perspective?
  • Based on your perspective, how do you evaluate the pros and cons of ghost work?
