04/22/2020 – Vikram Mohanty – SOLVENT: A Mixed Initiative System for Finding Analogies between Research Papers

Authors: Joel Chan, Joseph Chee Chang, Tom Hope, Dafna Shahaf, Aniket Kittur.

Summary

This paper addresses the problem of finding analogies between research problems across similar or different domains by providing computational support. It proposes SOLVENT, a mixed-initiative system in which humans annotate aspects of research papers that denote their background (the high-level problem being addressed), purpose (the specific problem being addressed), mechanism (how the purpose was achieved), and findings (what was learned or achieved), and a computational model constructs a semantic representation from these annotations that can be used to find analogies among the papers. The authors evaluated the system against baseline information-retrieval approaches and also with potential target users, i.e., researchers. The findings showed that SOLVENT performed significantly better than the baseline approaches, and that the retrieved analogies were useful to the users. The paper also discusses implications for scaling up.
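
To make the pipeline concrete, here is a minimal sketch of the annotation-to-retrieval idea, assuming a simple per-aspect TF-IDF bag-of-words representation with cosine similarity (the paper explores richer representations; the example papers and query below are invented for illustration, not from the paper):

```python
# Minimal sketch (not the paper's exact pipeline): build TF-IDF vectors
# over the human-annotated "purpose" spans and rank candidate papers by
# purpose similarity to a query paper. All data here is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each paper carries human annotations for its purpose and mechanism.
papers = [
    {"title": "A", "purpose": "reduce noise in crowd worker labels",
     "mechanism": "aggregate votes with expectation maximization"},
    {"title": "B", "purpose": "reduce noise in sensor readings",
     "mechanism": "apply a kalman filter to the signal"},
    {"title": "C", "purpose": "speed up graph search",
     "mechanism": "prune the frontier with a learned heuristic"},
]

query = {"purpose": "reduce noise in human annotations",
         "mechanism": "weight annotators by estimated reliability"}

# Vectorize the purpose aspect across the corpus plus the query.
vectorizer = TfidfVectorizer()
purpose_matrix = vectorizer.fit_transform(
    [p["purpose"] for p in papers] + [query["purpose"]])

# Cosine similarity of the query's purpose against every paper's purpose.
scores = cosine_similarity(purpose_matrix[-1], purpose_matrix[:-1]).ravel()
for paper, score in sorted(zip(papers, scores), key=lambda x: -x[1]):
    print(f"{paper['title']}: purpose similarity = {score:.2f}")
```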

Reflection

This paper demonstrates how human-interpretable feature engineering can improve existing information-retrieval approaches. SOLVENT addresses an important problem faced by researchers, i.e., drawing analogies to other research papers. Drawing from my own experience, this problem has presented itself at multiple stages: while conceptualizing a new problem, figuring out how to implement a solution, trying to validate a new idea, or eventually, writing the Related Work section of a paper. It goes without saying that SOLVENT, if commercialized, would be a boon for the thousands of researchers out there. It was nice to see the evaluation include real graduate students, as their validation seemed the most applicable for such a system.

SOLVENT effectively demonstrates the principles of mixed-initiative interfaces by leveraging the complementary strengths of humans and AI. Humans are better at understanding context, in this case the context of a research paper, while AI can quickly scan a database to find other articles with similar "context". I really like the simple intuition behind SOLVENT: how would we, as humans, find analogical ideas? We would look for a similar purpose and/or a similar or different mechanism, so why not have the system do just that? It is a great case of a human-interpretable intuition translating into intelligent system design. As I have reflected on previous papers, it always helps to begin from the problem and understand it better, and that is reflected in what SOLVENT ultimately achieves, i.e., outperforming end-to-end automated approaches (a rough sketch of this intuition follows).
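
One hedged way to encode the "similar purpose, different mechanism" intuition is to rank candidates by purpose similarity minus mechanism similarity; the subtraction is my own illustrative weighting, not the paper's exact scoring formula. This builds on the sketch above and assumes `papers` and `query` are defined as there:

```python
# Illustrative analogy scoring: favor papers that match the query's
# purpose but use a different mechanism. Assumes `papers` and `query`
# from the earlier sketch; the subtraction weighting is invented here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def analogy_scores(query, papers):
    def aspect_sims(aspect):
        # TF-IDF similarity of the query to every paper, for one aspect.
        matrix = TfidfVectorizer().fit_transform(
            [p[aspect] for p in papers] + [query[aspect]])
        return cosine_similarity(matrix[-1], matrix[:-1]).ravel()

    # High purpose match plus low mechanism match suggests an analogy
    # worth borrowing ideas from.
    return aspect_sims("purpose") - aspect_sims("mechanism")

for paper, score in sorted(zip(papers, analogy_scores(query, papers)),
                           key=lambda x: -x[1]):
    print(f"{paper['title']}: analogy score = {score:.2f}")
```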

The findings are definitely interesting, particularly the discussion around scaling up. Turkers certainly provided an improvement over the baseline, even though their annotations fared worse than those of the experts and the Upwork crowd. I am not sure what the longer-term implications are here, though. Should Turkers be used to annotate larger datasets? Should the researchers figure out a way to improve Turker annotations, or train the annotators? These are all interesting questions. One long-term possibility is to re-format abstracts into a background + purpose + mechanism + findings structure right at submission time, though that still would not cover the thousands of prior papers. Overall, this paper certainly opens doors for future analogy-mining approaches.

Questions

  1. Should conferences and journals re-format the abstract template into a background + purpose + mechanism + findings structure to support richer interaction between domains and, eventually, accelerate scientific progress?
  2. How would you address annotating larger datasets?
  3. How did you find the feature engineering approach used in the paper? Was it intuitive? What would you have done differently?

Vikram Mohanty

I am a 3rd year PhD student in the Department of Computer Science at Virginia Tech. I work at the Crowd Intelligence Lab, where I am advised by Dr. Kurt Luther. My research focuses on developing novel tools that leverage the complementary strengths of Artificial Intelligence (AI) and collective human intelligence for solving complex, open-ended problems.
