Yuan Li, Reflection 1

Summary

It is not uncommon to solve a problem using knowledge from a different domain. The paper presents a crowdsourcing-based technique to support this analogy-search process. As the authors note, such a process is challenging for both humans and machines. The insight behind the proposed approach is to represent a problem as the set of structural relations between the objects involved, rather than as the characteristics or attributes of the objects themselves. Because analogous problems often share little textual similarity, conventional natural language processing techniques fall short. While the task is also difficult for individuals, distributing the structure-identification problem across a larger scale, the crowd, allows people to find deeper structural analogies than they could on their own. The crowdsourcing approach is evaluated in two experiments. Experiment one examines whether people's ability to find analogical ideas improves when a structural schema of the problem is provided; experiment two confirms that having analogical ideas leads to better solutions than alternative methods. Overall, both experiments are successful and demonstrate the potential of the proposed approach, given good schemas. Yet the paper leaves open the question of how to reliably obtain good schemas from crowd workers in the first place.

Reflection

In my own experience, I often find analogical ideas useful for solving the problems I encounter. Therefore, I highly appreciate the contribution and practical value of this paper. I especially appreciate the authors' idea of how to computationally analyze analogies. In terms of cognition, analogical thinking may happen too quickly for even the thinker to notice. By recasting analogical search as a search for similarity in the relational structures of the source problem and a candidate problem, rather than in the attributes or characteristics of the objects involved, the authors turn it into something that can be quantified and analyzed computationally, which would otherwise be much harder.

The paper's focus lies in the two experiments the authors conducted to evaluate and validate their proposed crowdsourcing approach. Both seem to have been conducted thoroughly. However, as in many HCI experiments, some of the design choices lack a fully convincing basis, unlike evaluations in other areas of computer science that rest on objective quantitative measurements. For example, while the schema of "detaching one object from another" is clear in both the cat problem and the dough problem, there is no argument that the differences between these two problems will not affect participants' mental processes. I believe the authors thought through these smaller questions, but documenting such decisions is itself a hard problem.

I especially like that the authors noted the limitations at the end, because I was thinking about these limitations as I read the paper. The authors stay focused on their research questions rather than trying to cover the entire process, which I find valuable and practical for HCI studies.

Questions

When recruiting participants from the crowd, how can the authors ensure the quality of the gathered data, especially when some participants are non-native English speakers? And how can they be sure that people report their backgrounds honestly?