04/15/20 – Akshita Jha – Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations

Summary:
“Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations” by Starbird et al. examines strategic information operations such as disinformation, political propaganda, and conspiracy theories. The authors gather valuable insights about how these operations function by studying the surrounding online discourse in depth, both qualitatively and quantitatively. They present three case studies: (i) Trolling Operations by the Internet Research Agency Targeting U.S. Political Discourse (2015-2016), (ii) The Disinformation Campaign Targeting the White Helmets, and (iii) The Online Ecosystem Supporting Conspiracy Theorizing about Crisis Events. These case studies highlight the coordinated effort of several organizations to spread misinformation and influence the political discourse of a nation. Through them, the authors attempt to go beyond understanding online bots and trolls and move toward a more nuanced, descriptive perspective on these coordinated destructive online operations. This work also successfully highlights a challenging problem for “researchers, platform designers, and policy-makers — distinguishing between orchestrated, explicitly coordinated, information operations and the emergent, organic behaviors of an online crowd.”

Reflections:
This is an interesting work about misinformation and the orchestrated effort that goes into spreading it. I found the researchers' overall methodology particularly interesting. The authors use qualitative, quantitative, and visual techniques to effectively demonstrate the spread of misinformation from the actors (the Twitter accounts and websites that initiate the discussion) to the target audience (the accounts that retweet and are connected to these actors either directly or indirectly). For example, the case study on the Internet Research Agency's targeting of U.S. political discourse, which greatly influenced the 2016 elections, used network analysis and visual techniques to highlight the pervasiveness of the Russian IRA agents. The authors noted that the “fake” accounts influenced both sides: left-leaning accounts criticized and demotivated support for the U.S. presidential candidate Hillary Clinton, while accounts on the right promoted the now-president, Donald Trump. Similarly, these fake Russian accounts were active on both sides of the discourse around the #BlackLivesMatter movement. It is commendable that the authors were able to uncover the hidden objective of these misinformation campaigns and observe how these accounts presented themselves as both people and organizations in order to embed themselves in the narrative. The authors also mention that they use trace ethnography to track the activities of the fake accounts. I was reminded of another work, “The Work of Sustaining Order in Wikipedia: The Banning of a Vandal,” which also made use of trace ethnography to narrow down a rogue user. It would be interesting to read a work in which trace ethnography was used to track down a “good” user. I would have liked the paper to go into more detail about the quantitative analysis and the exact methodology adopted for the network analysis. I am also curious whether the accounts were cherry-picked to show the ones with the most destructive influence, or whether the resulting graph in the paper covers all the relevant accounts. It would have helped if the authors had discussed the limitations of their work and their own biases that might have influenced the results.
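To make the network-analysis idea concrete, here is a minimal sketch of the kind of retweet-network measurement described above. The account names, edges, and the use of networkx are my own illustrative assumptions, not the authors' actual data or pipeline.

```python
import networkx as nx

# Directed edge (u, v) means hypothetical account u retweeted account v.
retweets = [
    ("audience_1", "ira_troll_left"),
    ("audience_2", "ira_troll_left"),
    ("audience_2", "ira_troll_right"),
    ("audience_3", "ira_troll_right"),
]

G = nx.DiGraph()
G.add_edges_from(retweets)

# In-degree approximates how widely an account's content is amplified;
# suspected "actor" accounts would show up as highly retweeted hubs.
amplification = sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True)
print(amplification)  # e.g. [('ira_troll_left', 2), ('ira_troll_right', 2), ...]
```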

Questions:
1. What are your general thoughts on the paper?
2. Do you think machine learning algorithms can help in such a scenario? If yes, what role will they play?
3. Have you ever interacted with an online social media bot? What has that been like?


04/15/20 – Lee Lisle – Algorithmic Accountability: Journalistic Investigation of Computational Power Structures

Summary

Diakopoulos's paper argues that algorithms exert power over users that is rarely made clear to them, even when those algorithms shape significant parts of users' lives. The author identifies four ways algorithms exercise this power: prioritization, classification, association, and filtering. After a brief description of each, the author argues that transparency is key to balancing these powers.

The paper then discusses a series of algorithmic systems and shows how each exerts some amount of power without informing the user, using autocompletion on Google and Bing, autocorrection on the iPhone, political emails, price discrimination, and stock trading as examples. The author then draws on interviews with journalists to gain insight into how they investigate algorithms and write stories about them. This reporting is a form of accountability, and journalists use it to help users understand the technology around them.

Personal Reflection

I thought this paper brought up a good point that was also seen in other readings this week: even if the user is given agency over the final decision, the AI biases them toward a particular set of actions. Even when the weaknesses of the AI are understood, as in the Bansal et al. paper on updates, the participant is still biased by the AI's actions and recommendations. This power, combined with the effect it can have on people's lives, can greatly change their course.

The author also makes the point that interviewing designers is a form of reverse engineering. I had not thought of it that way before, so it was an interesting insight into journalism. Furthermore, the idea that AIs are black boxes whose inputs and outputs can be manipulated so that their interior workings can be better understood was another thing I hadn't considered.
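The black-box probing idea is easy to illustrate. Below is a toy sketch, not taken from the paper: the pricing function, its inputs, and the premiums are all hypothetical, but it shows how varying one input at a time and comparing outputs can reveal what an opaque algorithm responds to.

```python
def black_box_price(user_profile: dict) -> float:
    """Stand-in for an opaque pricing algorithm we do not control."""
    base = 100.0
    if user_profile.get("device") == "mac":
        base *= 1.15                # hypothetical premium for Mac users
    if user_profile.get("returning_visitor"):
        base *= 0.95                # hypothetical loyalty discount
    return round(base, 2)

baseline = {"device": "windows", "returning_visitor": False}
probes = {
    "device=mac": {**baseline, "device": "mac"},
    "returning_visitor": {**baseline, "returning_visitor": True},
}

print("baseline price:", black_box_price(baseline))
for label, profile in probes.items():
    delta = black_box_price(profile) - black_box_price(baseline)
    print(f"{label}: price change {delta:+.2f}")
```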

I was actually aware of most of the cases the author presented as examples of algorithms exerting power. For instance, I have used different computers and private browsing modes in the past to make sure I was getting the best deal on travel or hotels.

Lastly, I thought the idea of journalists having to uncover these (potential) AI malpractices raised an interesting quandary. Once they do, they must publish a story, but most people will likely never hear about it. There is an interesting problem here of how to warn people about potential red flags in algorithms, and I felt the paper didn't discuss it well enough.

Questions

  1. Are there any specific algorithms that have biased you in the past? How did they? Was it a net positive, or net negative result? What type of algorithmic power did it exert?
  2. Which of the four types of algorithmic power is the most serious, in your opinion? Which is the least?
  3. Did any of the cases surprise you? Do they change how you may use technology in the future?
  4. In what ways can users abuse these AI systems?


04/15/20 – Lulwah AlKulaib – Risk Perceptions

Summary

People's choice to use a technology is associated with many factors, one of which is the perception of associated risk. The authors wanted to study how perceived risk influences technology use, so they adapted a survey instrument from the risk perception literature to assess the mental models of users and technologists around the risks of emerging, data-driven technologies, for example identity theft and personalized filter bubbles. The authors surveyed 175 individuals on MTurk for comparative and individual assessments of risk, including characterizations using psychological factors. They report findings on how experts (tech employees) and non-experts (MTurk workers) differ in how they assess risk and what factors may contribute to their conceptions of technological harm. They conclude that technologists see these risks as posing a bigger threat to society than non-experts do. Moreover, across groups, participants did not see technological risks as voluntarily assumed. The differences in how participants characterize risk have implications for design, decision making, and public communication, which the authors discuss under the heading of risk-sensitive design.
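As a concrete illustration of the expert vs. non-expert comparison, here is a minimal sketch of how such group differences might be tested. The ratings below are made up for illustration; they are not the authors' survey data, and the paper's actual analysis may differ.

```python
from scipy.stats import mannwhitneyu

# Hypothetical perceived-risk ratings (1 = negligible, 7 = severe) for one
# technology risk, e.g. "personalized filter bubbles".
experts     = [6, 5, 7, 6, 6, 5, 7]   # technologists
non_experts = [3, 4, 2, 5, 3, 4, 3]   # MTurk workers

stat, p_value = mannwhitneyu(experts, non_experts, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")  # a small p suggests the groups assess this risk differently
```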

Reflection

This was an interesting paper. Being a computer science student has always been one of the reasons I question technology: why a service is being offered for free, what's in it for the company, and what it gains from my use.

It is interesting to see that the authors' findings are close to my real-life experience. When I mention these risks to friends who do not care about risk and are more interested in a service that makes something easier for them, they usually haven't thought about the risks, so they don't consider them when making those decisions. Some of those risks are important for them to understand, since a lot of available technology (apps, at least) can be used maliciously against its users.

I believe that risk is viewed differently by experts and non-experts, and that difference should be highlighted. It explains how problems like the filter bubble mentioned in the paper have become so concerning. It is very important to know how to respond when there is such a huge gap in how experts and the public think about risk. There should be a conversation to bridge the gap and educate the public in ways that are easy to perceive and accept.

I also think the way designers are applying risk-sensitive design techniques to new technologies is important. It helps introduce technology in a more comforting, socially responsible way. Adoption feels gradual rather than sudden, which makes users more receptive to using it.

Discussion

  • What are your thoughts about the paper?
  • How do you define technology risk?
  • What are the top 5 risks that you can think of in technology from your point of view? How do you think that would differ when asking someone who does not have your background knowledge?
  • What are your recommendations for bridging the gap between experts and non-experts when it comes to risk?


04/15/20 – Lulwah AlKulaib – Believe It or Not

Summary

Fact checking needs to be done in a timely manner, especially nowadays when it is used on live TV shows. While existing work presents many automated fact-checking systems, the human in the loop is often neglected. This paper presents the design and evaluation of a mixed-initiative approach to fact-checking. The authors combine human knowledge and experience with the efficiency and scalability of automated information retrieval and machine learning. They present a user study in which participants used the proposed system to help with their own assessment of claims. The results suggest that individuals tend to trust the system: participant accuracy in assessing claims improved when they were exposed to correct model predictions. Yet participants over-trusted the system when the model was wrong, and exposure to its incorrect predictions often reduced human accuracy. Participants who were given the option to interact with these incorrect predictions were often able to improve their own performance. This suggests that models need to be transparent, especially in human-computer interaction, since AI models can fail and humans could be the key factor in correcting them.

Reflection

I enjoyed reading this paper. It was very informative about the importance of transparent models in AI and machine learning, and about how transparent models can improve performance when we include the human in the loop.

In their limitations, the authors discuss important points about relying on crowdworkers. They explain that MTurk participants should not all be given the same weight when analyzing their responses, since different participant demographics or incentives may influence findings. For example, non-US MTurk workers may not be representative of American news consumers or familiar with the latest U.S. news, and that could affect their responses. The authors also acknowledge that MTurk workers are paid by the task, which could cause some of them to simply agree with the model's response, whether or not they actually do, just to complete the HIT and get paid. They found only a minority of such responses, but it made me think about ways to mitigate this. As in the papers from last week, studying an MTurk worker's behavior while completing the task might indicate whether the worker actually agrees with the model or is just trying to get paid.

The authors also mention the negative impact that could potentially stem from their work: as we saw in their experiment, the model made mistakes but the humans over-trusted it. Dependence on AI and technology makes users give these systems more credit than they should, and such errors could affect users' perception of the truth. Addressing these limitations should be an essential requirement for further work.

Discussion

  • Where would you use a system like this most?
  • How would you suggest to mitigate errors produced by the system?
  • As humans, we trust AI and technology more than we should. How would you redesign the experiment to ensure that the crowdworkers actually check the presented claims?


04/15/20 – Fanglan Chen – Algorithmic Accountability

Summary

Diakopoulos's paper “Algorithmic Accountability” explores the broad question of how algorithms exert power and why they are worthy of scrutiny by journalists, and it studies the role of computational approaches like reverse engineering in articulating algorithmic transparency. Automated decision-making algorithms are now used throughout businesses and governments. Given that such algorithmically informed decisions have the potential for significant societal impact, the goal of this paper is to present algorithmic accountability reporting as a mechanism for articulating and elucidating the power structures, biases, and impacts that automated algorithms exercise in our society. Using reverse engineering methods, the researcher conducted five case studies of algorithmic accountability reporting, covering autocompletion, autocorrection, political email targeting, price discrimination, and executive stock trading plans. The applicability of transparency policies for algorithms is also discussed, along with the challenges of conducting algorithmic accountability reporting as a broadly viable investigative method.

Reflection

I think this paper touches upon an important research question about the accountability of computational artifacts. Our society currently relies on automated decision-making algorithms in many different domains, ranging from dynamic pricing to employment practices to criminal sentencing. It is important that developers, product managers, and company/government decision-makers are aware of the possible negative social impacts and the necessity of public accountability when they design or implement algorithmic systems.

This research also makes me think about whether we need to be that strict with every algorithmic system. To answer this question, I think we need to consider different application scenarios, which are not fully discussed in the paper. Take object detection in computer vision as an example, with two application scenarios: one is detecting whether there is a car in an image for automatic labeling, the other is checking whether there is a tumor in a computed tomography scan for disease diagnosis. Clearly, a much higher level of algorithmic accountability is required in the second scenario. Hence, in my opinion, the accountability of algorithms needs to be discussed with respect to the application scenario, the user's expectations, and the potential consequences when the algorithms go wrong.

The topic of this research is algorithmic accountability. As far as I am concerned, accountability is a broad concept, including but not limited to an obligation to report, explain, and justify algorithmic decision-making as well as to mitigate any potential harms. However, I feel this paper mainly focuses on the transparency aspect of the problem, with little discussion of other aspects. There is no denying that transparency is one way algorithms can be made accountable, but just as the paper puts it, “[t]ransparency is far from a complete solution to balancing algorithmic power.” I think other aspects such as responsibility, fairness, and accuracy are worthy of further exploration as well. Considering these aspects throughout the design, implementation, and release cycles of algorithmic system development would lead to a more socially responsible deployment of algorithms.

Discussion

I think the following questions are worthy of further discussion.

  • What aspects other than transparency do you think would be important in the big picture of algorithmic accountability?
  • Can you think of some application domains in which we would hardly let automated algorithms make decisions for humans?
  • Do you think transparency potentially leaves the algorithm open to manipulation and vulnerable to adversarial attacks? Why or why not?
  • Who should be responsible if algorithmic systems make mistakes or have undesired consequences?


04/15/20 – Fanglan Chen – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Summary

Nguyen et al.'s paper “Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking” explores the use of automatic fact-checking, the task of assessing the veracity of claims, as an assistive technology to augment human decision making. Many previous papers propose automated fact-checking systems, but few of them consider having humans as part of a human-AI partnership for the same task. By involving humans in fact-checking, the authors study how people understand, interact with, and establish trust with an AI fact-checking system. The authors introduce the design and evaluation of a mixed-initiative approach to fact-checking that combines people's judgment and experience with the efficiency and scalability of machine learning and automated information retrieval. Their user study shows that crowd workers tend to trust the proposed system: participant accuracy in assessing claims improved when they were exposed to correct model predictions. But sometimes the trust is so strong that exposure to the model's incorrect predictions reduces their accuracy on the task.

Reflection

Overall, I think this paper conducted an interesting study of how the proposed system influences humans' assessment of the factuality of claims in the fact-checking task. However, the model transparency studied in this research is different from what I expected. When talking about model transparency, I expect an explanation of how the training data was collected, what variables are used to train the model, and how the model works in a stepwise process. In this paper, the approach to increasing the transparency of the proposed system is to show the source articles on which the model bases its true-or-false judgment of the given claim. The next step is letting the crowd workers in the system group go through each source article, see if it makes sense, and decide whether they agree or disagree with the system's judgment. For this task, I feel a more important transparency problem is how the model retrieves the articles and how it ranks them in the presented order. Noise in the training data may introduce bias into the model, but there is little we can tell merely by checking the retrieved results. That makes me think there might be different levels of transparency: at one level, we can check the input and output at each step, and at another level, we may get exposure to what attributes the model actually uses to make the prediction.

The authors conducted three experiments with a participant survey on how users understand, interact with, and establish trust with a fact-checking system, and on how the proposed system actually influences users' assessment of the factuality of claims. The experiments are conducted as a comparative study between a control group and a system group to show that the proposed system actually works. Firstly, I would like to know whether the randomly recruited workers in the two groups differ in demographics in ways that may have an impact on the final results. Is there a better way to conduct such experiments? Secondly, the performance difference between the two groups with regard to human error is small, and there is no additional evidence that the difference is statistically significant. Thirdly, the paper reports experimental results on only five claims, including one claim with incorrectly supportive articles (claim 3), which does not seem representative, and the task is somewhat misleading. Would it be better to add quality control over the claims in the task design?
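On the statistical-significance point, here is a quick sketch of one way such a check could be done. The counts are invented for illustration and are not the paper's results.

```python
from scipy.stats import chi2_contingency

#                 correct  incorrect   (hypothetical per-group claim assessments)
control_group = [    62,      38    ]
system_group  = [    70,      30    ]

chi2, p_value, dof, expected = chi2_contingency([control_group, system_group])
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")  # p >= 0.05 would mean the gap could just be noise
```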

Discussion

I think the following questions are worthy of further discussion.

  • Do you think that seeing the article sources presented by the system leads users to develop more trust in the system?
  • What are the reasons that the retrieval results for some claims degrade human performance in the fact-checking task?
  • Do you think there is any flaw in the experimentation design? Can you think of a way to improve it?
  • Do you think we need personalized results in this kind of task where the ground truth is provided? Why or why not?


04/15/2020 – Vikram Mohanty – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Authors: An T. Nguyen, Aditya Kharosekar, Saumyaa Krishnan, Siddhesh Krishnan, Elizabeth Tate, Byron C. Wallace, and Matthew Lease

Summary

This paper proposes a mixed-initiative approach to fact-checking that combines human and machine intelligence. The system automatically finds and retrieves relevant articles from a variety of sources. It then infers the degree to which each article supports or refutes the claim, as well as the reputation of each source. Finally, the system aggregates this body of evidence to predict the veracity of the claim. Users can adjust the source reputation and stance of each retrieved article to reflect their own beliefs and/or correct any errors they perceive, which in turn updates the AI model. The paper evaluates this approach through a user study on Mechanical Turk.
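To make the aggregation step concrete, here is a minimal sketch of a reputation-weighted stance average with a user override. The class, field names, and weighting scheme are my own simplifying assumptions; the paper's actual model is more sophisticated.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Evidence:
    source: str
    stance: float       # -1.0 (refutes the claim) .. +1.0 (supports the claim)
    reputation: float   #  0.0 (untrusted source)  .. 1.0 (highly reputable)

def predict_veracity(evidence: List[Evidence]) -> float:
    """Reputation-weighted average of article stances; > 0 leans true, < 0 leans false."""
    total_weight = sum(e.reputation for e in evidence)
    if total_weight == 0:
        return 0.0  # no usable evidence, abstain
    return sum(e.stance * e.reputation for e in evidence) / total_weight

# Hypothetical retrieved articles for a claim.
articles = [
    Evidence("reuters.com", stance=0.8, reputation=0.9),
    Evidence("randomblog.net", stance=-0.6, reputation=0.3),
]
print(f"initial prediction:    {predict_veracity(articles):+.2f}")

# The "mixed-initiative" part: a user who distrusts the blog lowers its reputation,
# and the aggregate prediction updates immediately.
articles[1].reputation = 0.05
print(f"after user adjustment: {predict_veracity(articles):+.2f}")
```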

Reflection

This paper, in my opinion, succeeds as a nice implementation of all the design ideas we have been discussing in the class for mixed-initiative systems. It factors in user input, combined with an AI model output, and shows users a layer of transparency in terms of how the AI makes the decision. However, fact-checking, as a topic, is complex enough not to warrant a solution in the form of a simplistic single-user prototype. So, I view this paper as opening up doors for building future mixed-initiative systems that can rely on similar design principles, but also factor in the complexities of fact-checking (which may require multiple opinions, user-user collaboration, etc).

Therefore, for me, this paper contributes an interesting concept in the form of a mixed-initiative prototype, but beyond that, I think the paper falls short of making it clear who the intended users are (end-users or journalists) or the intended scenario it is designed for. The evaluation with Turkers seemed to indicate that anyone can use it, which opens up the possibility of creating individual echo-chambers very easily and essentially, making the current news consumption landscape worse. 

The results also showed that the AI can bias users when it is wrong, and a future design would have to factor that in. One of the users felt overwhelmed because there was a lot going on in the interface, so a future system also needs to address information overload.

The authors, however, did a great job discussing these points in detail about the potential misuse and some of the limitations. Going forward, I would love to see this work forming the basis for a more complex socio-technical system, that allows for nuanced inputs from multiple users, interaction with a fact-checking AI model that can improve over time, and a longitudinal evaluation with journalists and end-users on actual dynamic data. The paper, despite the flaws arising due to the topic, succeeds in demonstrating human-AI interaction design principles.

Questions

  1. What are some of the positive takeaways from the paper?
  2. Did you feel that fact-checking, as a topic, was addressed in a very simple manner, and deserves more complex approaches?
  3. How would you build a future system on top of this approach?
  4. Can a similar idea be extended for social media posts (instead of news articles)? How would this work (or not work)?


04/15/2020 – Vikram Mohanty – Algorithmic Accountability: Journalistic Investigation of Computational Power Structures

Authors: Nicholas Diakopoulos

Summary

This paper discusses the challenges involved in algorithmic accountability reporting and the reverse engineering approaches used to frame a story. The author interviewed four journalists who have reported on algorithms, and discusses five different case studies to present the methods and challenges involved. Finally, the paper outlines the need for transparency and potential ethical issues.

Reflection

This paper offers great insights into the decision-making process behind the reporting on different algorithms and applications. It is particularly interesting to see the lengths journalists go to in order to figure out the story and the value in reporting it. The paper is a great read even for non-technical folks, as it introduces the concepts of association, filtering, classification, and prioritization with examples that can be understood universally. While discussing the different case studies, the paper paints a picture of the challenges the journalists encountered in a very easy-to-understand manner (e.g., incorrectly determining that Obama's campaign targeted emails by age) and therefore succeeds in showing why reporting on algorithmic accountability is hard!

In most cases, the space of potential inputs is too large to be explored easily, which makes the field more challenging. This often necessitates the skills of computational social scientists to conduct additional studies, collect additional data, and draw inferences. The paper makes a great point that reverse engineering offers more insight than directly asking the algorithm developers, since unintended consequences would never surface without investigating the algorithms in operation. Another case of “we need more longitudinal studies with ecological validity”!

It was very interesting to see the discussion around last-mile interventions at the user interface stages (in case of the autocomplete case). It shows the fact that (some of the) developers are self-aware and therefore, ensure that the user experience is an ethical experience. Even though they may fall short, it’s a good starting point. This also demonstrates why augmenting an existing pipeline (be it data/AI APIs or models) to make it work for the end-user is desirable (something that some of the papers discussed in the class have shown).

The questions around ethics, as usual, do not have an easy answer, for example whether such reporting might enable developers to make their algorithms harder to investigate in the future. However, regulations around transparency can go a long way in holding algorithms accountable. The paper does a great job synthesizing the challenges across all the case studies and outlines four high-level points for how algorithms can become transparent.

Questions

  1. Would you add anything more to the reverse engineering approaches discussed for the different case studies in the paper? Would you have done anything differently?
  2. If you were to investigate the power structures of an algorithm, which application/algorithm would you choose? What methods would you follow?
  3. Any interesting case studies that this paper misses out on?


04/15/2020 – Yuhang Liu – Algorithmic Accountability: Journalistic Investigation of Computational Power Structures

Summary: In this paper, the author notes that automated algorithms have become more and more important in modern society and gradually regulate many aspects of our lives, yet the outlines of their functions may still be difficult to grasp. It is therefore necessary to elucidate and articulate algorithms' power. The author proposes a new notion, “algorithmic accountability reporting,” which can reveal how algorithms work and is well worth pursuing by computational journalists. The author explores methods such as transparency and reverse engineering and how they can be useful in elucidating algorithmic power, analyzes five case studies of journalists investigating algorithms, and describes the challenges and opportunities the journalists face when working on algorithmic accountability. The main contributions are: (1) a theoretical lens of atomic algorithmic decisions, which raises major issues that can guide algorithm research and the development of algorithmic transparency policy; (2) a preliminary evaluation and analysis of algorithms through accountability reporting, including its various limitations. The author also discusses the challenges of adopting this reporting method, including human resources, legitimacy, and ethics, and looks ahead to how journalists themselves might use transparency when they rely on algorithms.

Reflection: I think the author has put forward a very innovative idea. This is also the first question that comes to my mind when I encounter or use a new algorithm: what are its boundaries, and to what scope can it be applied? Take the pricing algorithm of an insurance company, for example: we all know that the premium is generated from a series of attributes, but people are often uncertain about the weight of each attribute in the algorithm, so they have doubts about the results and may even consider some results immoral. Therefore, it is very important to study the capabilities and boundaries of an algorithm.

The article also mentions the concept of reverse engineering, that is, studying algorithms by studying their inputs and outputs. However, some websites have mechanisms that make the algorithm dynamic, so we need other methods to handle that kind of problem. Still, once the input-output relationship of the black box is determined, the challenge becomes a data-driven search for news stories. I therefore think this kind of investigation is mainly about understanding whether there is something unreasonable in an algorithm, and whether the root cause is human intent, negligence, or people's deep-rooted ideas. So, in some respects, exploring the borders of algorithms is exploring the morality of algorithms. I think this article provides a framework for reviewing the morality of an algorithm: the method can effectively expose where an algorithm is unreasonable, and for news reporters, it can be used to discover meaningful stories.

In addition, I think the framework described in this article is a special form of human-computer interaction, in which people study the machine itself and come to understand how the algorithm operates through the machine's feedback. This also broadened my understanding of human-computer interaction.

Questions:

  1. Do you think the framework mentioned in the paper can be used to detect the ethical issues of an algorithm?
  2. Can this approach be built into an automated system for elucidating and articulating algorithms' power?
  3. Is there any value in investigating algorithms' power other than news value?


04/15/2020 – Yuhang Liu – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Summary:

This article discusses a system for fact checking. First, the article proposes that fact checking is very important, challenging, and time-sensitive. In this type of system, the human influence is usually ignored, even though it is very important. Therefore, this article builds a mixed-initiative system for fact checking that enables users to interact with ML predictions to complete challenging fact checks. The authors designed an interface through which the user can see the sources behind a prediction. When users see a prediction they are not satisfied with, the system also allows them to use their own beliefs or inferences to override it. Through this system, the authors conclude that when the model's results are correct, the predictions have a very positive impact on people. However, people should not trust the model's predictions too much; when users think a prediction is wrong, the result can be improved through interaction. This also reflects the importance of a transparent, interactive system for fact checking.

Reflection:

When I saw the title of this article, I thought it might share a topic with my project, using crowd workers to distinguish fake news, but as I read further I found that this is not the case. Still, I think it affirmed my thinking in some respects. First, fact checking is a very challenging task, especially when real-time results are needed, so it is necessary to rely on human effort. Due to the lack of labeled data, directly completing the task through machine learning can in some cases produce predictions that point in a completely opposite direction. For example, in my project, both rumors and posts refuting rumors may be classified as rumors, so we need crowd workers to distinguish them.

Second, regarding the project described in the article, I think its method is a very good direction. Human judgment is particularly important in this kind of system, and improving accuracy through humans is the main idea behind many human-computer interaction systems. I think the method in the article is a good start: in a transparent system, people decide whether to override the predicted results. The system does not force people to participate, yet it gives people's own judgments very important weight.

At the same time, I think the system also has some of the limitations described in the article. For example, crowd workers' motivations and the system's own concerns may affect the final results, so while I think the article proposes a good direction, more careful research is needed.

Questions:

  1. Do you think users can usually tell that a prediction is incorrect and override it when the system is wrong?
  2. What role does the transparency of the system play in the interaction?
  3. How can we prevent users from trusting predictions too much in other human-computer interaction systems?
