04/15/2020 – Yuhang Liu – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Summary:

This article presents a mixed-initiative system for fact-checking. The authors first argue that fact-checking is important, challenging, and time-sensitive. Systems of this type usually ignore the human's influence, even though that influence is very important. The authors therefore build a mixed-initiative fact-checking system that enables users to interact with machine-learning predictions to complete challenging fact-checking tasks. They design an interface through which users can see the sources behind each prediction, and in some applications, when users see a prediction but are not satisfied with it, they can override it with their own beliefs or inferences. Through this system, the authors conclude that when the model's predictions are correct, they have a very positive effect on people. However, people should not over-trust the model's predictions; when users believe a prediction is wrong, the result can be improved through interaction. This also indirectly reflects the importance of a transparent, interactive system for fact-checking.

Reflection:

When I saw the title of this article, I thought it might cover the same topic as my project, which uses crowdsourced workers to distinguish fake news, but as I read further I found that this is not the case. Still, I think it affirmed my thinking in some respects. First, fact-checking is a very challenging task, especially when real-time results are needed, so relying on human power is necessary. Moreover, due to the lack of labeled data, trying to complete the task purely through machine learning can, in some cases, produce predictions that point in a completely opposite direction. For example, in my project, both a rumor and a post refuting that rumor may be classified as rumors, so we need crowd workers to distinguish them.

Secondly, regarding the project described in the article, I think its method points in a very good direction. Human judgment is particularly important in this kind of system, and improving accuracy through human input is the main idea behind many human-computer interaction systems. I think the approach in the article is a good start: in a transparent system, people decide whether to override the predicted results. It does not force people to participate in the system, yet it gives human judgment significant weight in the final predictions.

At the same time, I think the system has some of the limitations described in the article. For example, the motives and personal concerns of crowd workers may affect the final results of the system, so while the article proposes a good direction, more careful research is needed.

Questions:

  1. Do you think users can usually detect that a prediction is incorrect and override it when the system is wrong?
  2. What role does the transparency of the system play in the interaction?
  3. How can we prevent users from over-trusting predictions in other human-computer interaction systems?

2 thoughts on “04/15/2020 – Yuhang Liu – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking”

  1. I believe transparency in the system greatly encourages better use by the human in the loop. Not only does it allow a more granular pairing between a human and an algorithm, but it also creates a stronger foundation of trust. I believe this trust is built through the ability to manually tweak the algorithm, which lets the user discover whether a perfect combination exists and recognize that the algorithm is inherently imperfect.

  2. I think transparency plays an important role in mixed-initiative systems. This kind of transparency helps users understand how to use the system and may raise their trust in it. Meanwhile, it also tells users how the system operates and what its limitations are. As a result, users will not trust the system too much and can respond appropriately when it gives wrong predictions.
