04/15/2020 – Vikram Mohanty – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Authors: An T. Nguyen, Aditya Kharosekar, Saumyaa Krishnan, Siddhesh Krishnan, Elizabeth Tate, Byron C. Wallace, and Matthew Lease

Summary

This paper proposes a mixed-initiative approach to fact-checking that combines human and machine intelligence. The system automatically finds and retrieves relevant articles from a variety of sources. It then infers the degree to which each article supports or refutes the claim, as well as the reputation of each source. Finally, the system aggregates this body of evidence to predict the veracity of the claim. Users can adjust the stance of each retrieved article and the reputation of its source to reflect their own beliefs or to correct errors they spot, and these adjustments, in turn, update the AI model's prediction. The paper evaluates this approach through a user study on Mechanical Turk.
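To make the aggregation step concrete, here is a minimal sketch of how reputation-weighted stance evidence could be combined into a veracity score. This is my own illustrative reconstruction, not the authors' actual model; the `Article` class, the `predict_veracity` function, and the linear weighting scheme are all assumptions.

```python
# Illustrative sketch (NOT the paper's model): aggregate per-article stances,
# weighting each article by the reputation of its source. User overrides of
# stance or reputation flow straight into this aggregate, which is what makes
# the loop mixed-initiative.

from dataclasses import dataclass

@dataclass
class Article:
    stance: float      # -1.0 (refutes) .. +1.0 (supports); model- or user-set
    reputation: float  # 0.0 (untrusted) .. 1.0 (trusted); model- or user-set

def predict_veracity(articles: list[Article]) -> float:
    """Return a score in [-1, 1]: negative leans 'false', positive leans 'true'."""
    total_weight = sum(a.reputation for a in articles)
    if total_weight == 0:
        return 0.0  # no trusted evidence: abstain
    return sum(a.stance * a.reputation for a in articles) / total_weight

# Example: two supporting articles from reputable sources outweigh one
# refuting article from a low-reputation source.
evidence = [Article(stance=0.8, reputation=0.9),
            Article(stance=0.6, reputation=0.7),
            Article(stance=-0.9, reputation=0.2)]
print(predict_veracity(evidence))  # ~0.53 -> leans "true"
```

A linear weighted average is the simplest choice that exposes both knobs the paper gives users (stance and reputation); the actual system learns these jointly rather than treating them as fixed inputs.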

Reflection

This paper, in my opinion, succeeds as a nice implementation of all the design ideas we have been discussing in class for mixed-initiative systems. It factors in user input alongside the AI model's output, and it gives users a layer of transparency into how the AI makes its decision. However, fact-checking as a topic is too complex to be served by a simplistic single-user prototype. So, I view this paper as opening doors for future mixed-initiative systems that rely on similar design principles but also factor in the complexities of fact-checking (which may require multiple opinions, user-user collaboration, etc.).

Therefore, for me, this paper contributes an interesting concept in the form of a mixed-initiative prototype, but beyond that, I think it falls short of making clear who the intended users are (end-users or journalists) or what scenario it is designed for. The evaluation with Turkers seemed to indicate that anyone can use it, which makes it easy for users to create individual echo chambers and, essentially, makes the current news consumption landscape worse.

The results also showed that the AI can bias users when it is wrong, so a future design would have to factor that in. One of the users felt overwhelmed because there was a lot going on in the interface, so a future system also needs to address the issue of information overload.

The authors, however, did a great job discussing potential misuse and some of the limitations in detail. Going forward, I would love to see this work form the basis for a more complex socio-technical system that allows for nuanced input from multiple users, interaction with a fact-checking AI model that can improve over time, and a longitudinal evaluation with journalists and end-users on real, dynamic data. Despite the flaws arising from the topic's complexity, the paper succeeds in demonstrating human-AI interaction design principles.

Questions

  1. What are some of the positive takeaways from the paper?
  2. Did you feel that fact-checking, as a topic, was addressed in an overly simple manner and deserves a more complex approach?
  3. How would you build a future system on top of this approach?
  4. Can a similar idea be extended for social media posts (instead of news articles)? How would this work (or not work)?

Vikram Mohanty

I am a 3rd year PhD student in the Department of Computer Science at Virginia Tech. I work at the Crowd Intelligence Lab, where I am advised by Dr. Kurt Luther. My research focuses on developing novel tools that leverage the complementary strengths of Artificial Intelligence (AI) and collective human intelligence for solving complex, open-ended problems.

4 thoughts on “04/15/2020 – Vikram Mohanty – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking”

  1. I agree with your comment that this paper is a starting point for future development of the mixed-initiative model. Knowing the end-user is also important, as the system could then cater to particular user needs. To answer your second question, yes, I do feel that the system is too simple and needs to be more complex. As mentioned in the paper and in your reflection, the proposed method has flaws where the user is influenced by AI biases, resulting in false negatives. Hence, although the paper does a good job of developing a fact-checking model that is simple, transparent, and user-oriented, the model needs to be more complex, especially when it matters. For a simple AMT experiment, it may be okay to have a false positive or negative; however, if the decision is critical, any false result would have tremendous negative impacts.

  2. Hi Vikram,
    Interesting question about how to implement this for social media. I think it needs a more automated solution and a more robust process, as content spreads much faster on social media these days than through news articles. For it to work, it needs instant checking, with workers available around the clock who consent to behavior monitoring so we can verify that they actually fact-check the claims against valid sources.

  3. To answer the second question, I do think fact-checking was addressed very simply. Before the internet revolution, such techniques would probably have been effective. But in the current scenario, false information is propagated by large institutions through networks of channels indistinguishable from true sources.

  4. To answer your fourth question, I think the approach would be very useful on social media for filtering out fake news: natural language processing can flag suspected posts, and humans can validate the content. The challenge with this approach is scalability, since these platforms have millions, perhaps over a billion, users, and fact-checking is very time-consuming and challenging.
