4/29/2020 – Akshita Jha – Accelerating Innovation Through Analogy Mining

Summary:
“Accelerating Innovation Through Analogy Mining” by Hope et al. addresses the problem of mining analogies from messy, chaotic real-world datasets. Hand-created databases have rich relational structure but are sparse in nature. On the other hand, machine learning and information-retrieval techniques can be applied at scale, but they lack the understanding of the underlying structure that is crucial to analogy-related tasks. The authors leverage the strengths of both crowdsourcing and machine learning to learn analogies from these real-world datasets, combining the creativity of crowds with the cheap computing power of recurrent neural networks to extract meaningful vector representations from product descriptions. They observe that this methodology achieves greater precision and recall than traditional information-retrieval methods, and they demonstrate that their models helped generate significantly more creative ideas than analogies retrieved by traditional methods.

Reflections:
This is a really interesting paper that describes a scalable approach to finding analogies in large, messy, real-world datasets. The authors use a bi-directional Recurrent Neural Network (RNN) with Gated Recurrent Units (GRUs) to learn purpose and mechanism vectors from product descriptions. However, since the paper came out there have been great advances in natural language processing because of BERT (Bidirectional Encoder Representations from Transformers). BERT has achieved state-of-the-art results on many natural language tasks such as question answering, natural language understanding, search, and retrieval. I’m curious to know how BERT would affect the results of this system. Would we still need crowd workers for analogy detection, or would using BERT alone for analogy computation suffice? One limitation of an RNN is that it reads text sequentially: left to right, right to left, or (in the bi-directional case) a concatenation of both passes. BERT, by contrast, takes all the words as input at once and can therefore model complex non-linear relationships between them, which would likely prove helpful for detecting analogies.

The authors’ use of TF-IDF sampling did result in diversity but did not take relevance into account. Also, the purpose and mechanism vectors they learn do not distinguish between high-level and low-level features. These learned vectors also do not capture the intra-dependencies among different purposes (or among different mechanisms), or the inter-dependencies between purposes and mechanisms. It would be interesting to see how these dependencies could be encoded and whether they would benefit the final task of analogy computation. Another aspect worth examining is the trade-off between generating useful vectors and the creativity of the crowd workers: does creativity increase, decrease, or remain the same?
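To make the core idea concrete, here is a minimal sketch of how purpose and mechanism vectors could drive analogy retrieval: an analogy in the paper’s sense is a product with a similar purpose but a different mechanism, so one simple scoring rule is purpose similarity minus mechanism similarity. The vectors, product names, and the exact scoring rule below are illustrative assumptions of mine, not the authors’ implementation (they use crowd annotations and learned RNN embeddings).

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def analogy_score(query, candidate):
    """Score 'near-purpose, far-mechanism' analogies: reward high
    purpose similarity, penalize high mechanism similarity."""
    p_sim = cosine(query["purpose"], candidate["purpose"])
    m_sim = cosine(query["mechanism"], candidate["mechanism"])
    return p_sim - m_sim

# Toy 2-d purpose/mechanism vectors (made up for illustration).
query = {"name": "spray bottle",
         "purpose": [1.0, 0.1], "mechanism": [0.9, 0.2]}
candidates = [
    # Similar purpose AND similar mechanism: not a useful analogy.
    {"name": "pump dispenser",
     "purpose": [0.9, 0.2], "mechanism": [0.8, 0.3]},
    # Similar purpose but very different mechanism: a useful analogy.
    {"name": "ultrasonic mister",
     "purpose": [0.95, 0.15], "mechanism": [0.1, 1.0]},
]

best = max(candidates, key=lambda c: analogy_score(query, c))
print(best["name"])  # picks the same-purpose, different-mechanism product
```

This toy version makes it easy to see where my reflection’s concerns would enter: the score treats all purpose dimensions as equally important (no high- vs low-level distinction) and treats purpose and mechanism as independent.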

Questions:
1. What other creative tasks can benefit from human-AI interaction?
2. Why is the task of analogy computation important?
3. How are you incorporating and leveraging the strengths of crowd workers and machine learning in your project?
