04/15/2020 – Subil Abraham – Nguyen et al., “Believe it or not”

In today’s era of fake news, where new information constantly spawns everywhere, the importance of fact checking cannot be overstated. The public has a right to stay informed and to obtain true information from accurate, reputable sources. But all too often, people are inundated with information, and the cognitive load of fact checking all of it would be overwhelming. Automated fact checking has made strides, but previous work has focused primarily on model accuracy rather than on the people who need to use these tools. This paper is the first to study an interface through which humans can work with a fact checking tool. The tool is trained on the Emergent dataset of annotated articles and sources and uses two models: one that predicts an article’s stance on a claim, and another that estimates the accuracy of the claim based on the reputation of the sources. The application takes a claim, retrieves articles that discuss it, uses the stance model to classify whether each article is for or against the claim, and then predicts the claim’s accuracy from the collective reputation of the sources. It conveys that its models are imperfect and provides confidence levels for its accuracy predictions. It also provides sliders that let human verifiers adjust the predicted stance of the articles and the source reputations according to their own beliefs or new information. The authors run three experiments to test the efficacy of the tool for human fact checkers. They find that users tend to trust the system, which can be problematic when the system is inaccurate.
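To make the aggregation step concrete, here is a minimal sketch of how stance predictions and source reputations could be combined into a single veracity score. This is my own illustration under assumed names and weighting, not the paper’s actual model; the point is just that adjusting a stance or a reputation (as the sliders allow) immediately changes the predicted outcome.

```python
# Hypothetical sketch of the aggregation step described above: each retrieved
# article gets a predicted stance (for/against the claim) with a confidence,
# and the claim's veracity is a reputation-weighted vote over those stances.
# All names and numbers here are illustrative assumptions, not the paper's model.
from dataclasses import dataclass

@dataclass
class ArticlePrediction:
    source: str          # e.g. "reputable-news.example"
    stance: float        # +1.0 = supports the claim, -1.0 = refutes it
    confidence: float    # stance classifier's confidence, in [0, 1]

def predict_veracity(articles, source_reputation):
    """Return a score in [-1, 1]: positive leans 'true', negative leans 'false'."""
    weighted_sum, total_weight = 0.0, 0.0
    for a in articles:
        reputation = source_reputation.get(a.source, 0.5)  # unknown sources get a neutral weight
        weight = reputation * a.confidence
        weighted_sum += weight * a.stance
        total_weight += weight
    return weighted_sum / total_weight if total_weight else 0.0

# Example: nudging a reputation or stance value, the way the paper's sliders do,
# immediately moves the predicted veracity.
articles = [
    ArticlePrediction("reputable-news.example", stance=+1.0, confidence=0.8),
    ArticlePrediction("tabloid.example", stance=-1.0, confidence=0.9),
]
reputation = {"reputable-news.example": 0.9, "tabloid.example": 0.2}
print(predict_veracity(articles, reputation))  # positive => leans true
```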

I find it interesting that, in the first experiment, the System group’s error rate roughly tracks the stance classifier’s error rate. The crowd workers are probably not independently verifying the stance of the articles and are simply trusting the predicted stance they are shown. This could potentially be mitigated by adding incentives (like extra reward) to get them to actually read the articles in full. On the flip side, their accuracy (supposedly) improves when they are given the sliders to modify the stances and reputations. Maybe that interactivity was the cue they needed to understand that the predicted values aren’t set in stone and could be inaccurate. Though I find it strange that the Slider group in the second experiment did not adjust the sliders if they were questioning the sources. What I find even stranger is that the authors decided to keep the claim that allowing users to adjust the sliders made them more accurate. This claim is what most readers would take away unless they read the experiments and their caveats carefully. I also don’t like that they kept the second experiment’s results despite those results not showing any useful signal. Ultimately, I don’t buy their pitch that this tool, as it stands, is useful for the general user. Nor do I really see how it could serve as a technological mediator for people with opposing views, at least not the way they describe it. It could serve as a useful automation tool for expert fact checkers as part of their work, but not for the ordinary user, which is who they model by using crowdworkers. I like the ideas the paper is going for, of automated fact checking that helps the ordinary user, and I’m glad the authors acknowledge the drawbacks. But there are too many drawbacks for me to fully buy into the claims of this paper. It’s poetic that I have my doubts about the claims of a paper describing a system that asks you to question claims.

  1. Do you think this tool would actually be useful in the hands of an ordinary user? Or would it serve better in the hands of an expert fact checker?
  2. What would you like to see added to the interface, in addition to what they already have?
  3. This is a larger question, but is there value in having the transparency of the machine learning models in the way they have done (by having sliders that we can manipulate to see the final value change)? How much detail is too much? And for more complex models where you can’t have that instantaneous feedback (like style transfer), how do you provide explainability?
  4. Do you find the experiments rigorous enough and conclusions significant enough to back up the claims they are making?
