4/15/2020 – Nurendra Choudhary – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Summary

In this paper, the authors study the human side of automated fact-checking systems, specifically human trust in model predictions. In their experiments, they show that human beings improve their accuracy when shown correct model predictions. However, human judgement also degrades when the model's predictions are incorrect. This demonstrates the trust relationship between humans and their fact-checking models. Additionally, the authors find that humans who interact with the AI system improve their predictions significantly, suggesting that model transparency is a key aspect of human-AI interaction.

The authors provide a novel mixed-initiative framework to integrate human intelligence with fact-checking models, and they analyze the benefits and drawbacks of such integrated systems.

The authors also point out several limitations of their approach, such as the lack of non-American representation among MTurk workers and a bias towards AI predictions. Furthermore, they point out the system's potential effectiveness in mediating debates and conveying real-time fact-checks in an argument setting. Interaction with the tool could also serve as a platform for identifying the reasons behind differences of opinion.

Reflection

The paper is very timely in the sense that fake news has become a widely used tool for political and social gain. People unfamiliar with the workings of the internet tend to believe unreliable sources and form strong opinions based on them. Such a tool could be extremely powerful in curbing this kind of misinformation. The idea of analyzing the human role in AI fact-checkers is also extremely important. AI fact-checkers lack perfect accuracy, and given the nature of the problem, perfect accuracy is a requirement. Hence, the role of human beings in the system cannot be overlooked. However, human mental models tend to trust the system after correct predictions and do not efficiently correct themselves after incorrect predictions. This becomes an inherent limitation of these AI systems. Thus, the paper's idea of introducing transparency is extremely appropriate and necessary. Given more insight into the mechanism of fact-checkers, human beings would be able to better calibrate their mental models, thus improving the performance of the collaborative human-AI team.

AI systems can analyze huge repositories of information, while humans can perform more detailed analysis. In that sense, a fact-checking human-AI team utilizes the most important capabilities of both. However, as pointed out in the paper, humans tend to overlook their own capabilities and rely on the model's predictions, possibly because of trust built up after a run of correct predictions. Given the plethora of existing information, it would be impractical for humans to assess it all themselves. Hence, I believe these initial trials are extremely important for building the right amount of trust and setting the right expectations.
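
To make the mixed-initiative idea concrete, here is a minimal, hypothetical sketch of such an interaction loop: the model offers a verdict along with its confidence and evidence, and the human retains final veto authority. The function names, placeholder verdict, and input/output flow are my own assumptions for illustration, not the paper's actual system.

```python
# Hypothetical sketch of a mixed-initiative fact-checking loop.
# The model stub, confidence value, and I/O flow are illustrative
# assumptions, not the system described in the paper.

from dataclasses import dataclass

@dataclass
class ModelVerdict:
    label: str         # e.g. "true" or "false"
    confidence: float  # model's confidence in the label, 0.0-1.0
    evidence: list     # snippets retrieved to support the label

def model_check(claim: str) -> ModelVerdict:
    """Stand-in for an automated fact-checker (retrieval + classification)."""
    # A real system would retrieve documents and score the claim;
    # here we return a fixed placeholder verdict for illustration.
    return ModelVerdict(label="false", confidence=0.72,
                        evidence=["Snippet A contradicting the claim"])

def mixed_initiative_check(claim: str) -> str:
    """Show the model's verdict and evidence, then let the human veto.

    The human always has final authority; exposing confidence and
    evidence (transparency) is what helps the user decide when to
    trust the model and when to override it.
    """
    verdict = model_check(claim)
    print(f"Claim: {claim}")
    print(f"Model says: {verdict.label} (confidence {verdict.confidence:.0%})")
    for snippet in verdict.evidence:
        print(f"  evidence: {snippet}")
    answer = input("Accept the model's verdict? [y/n] ").strip().lower()
    if answer == "y":
        return verdict.label
    return input("Your verdict (true/false): ").strip().lower()

if __name__ == "__main__":
    final = mixed_initiative_check("Drinking hot water cures the flu.")
    print(f"Final (human-approved) verdict: {final}")
```

The key design choice here is that the model exposes its confidence and supporting evidence rather than just a label, which is precisely the kind of transparency the paper argues helps humans calibrate their trust.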

Questions

  1. Can a fact-checker learn from its mistakes pointed out by humans? How would that work? Would that make the fact-checker dynamic? If so, what is the extent of this change and how would the human mental models adapt effectively to such changes?
  2. Can you suggest a better way for humans and models to interact? Also, for what other tasks could such interaction be effective?
  3. As pointed out in the paper, humans tend to overlook their own capabilities and rely on model predictions. What is the reason for this? Is there a way to make the collaboration more effective?
  4. Here, the assumption is that human beings are the veto authority. Can there be a case when this is not true? Is it always right to trust the judgement of humans (in this case underpaid crowd workers)?

Word Count: 588
