Encouraging Abstract Thinking

Reflection of Yu, L., Kittur, A., & Kraut, R. E. (2014, April). Searching for analogical ideas with crowds. In Proceedings of the 32nd annual ACM conference on Human factors in computing systems (pp. 1225-1234). ACM.

Summary:

The paper investigates using crowds with a real-world dataset to search for analogical ideas. Workers were presented with a problem, and an experimental group additionally received an abstraction of the problem (a “schema”) to guide their search for relevant information. Those who received schemas performed better at finding valid analogs that differed at the surface level. In Experiment 2, the authors examine the influence of the solutions generated under the conditions of Experiment 1. They find that participants generated better and more useful solutions when the sample solutions provided to them came from workers who had received schemas.

My reflection:

I was enthralled reading the paper. Several elements stood out for me. First, I felt that the experiments were well designed, and the results show a strong influence of the schemas in both experiments.

Observing 7 participants in different experimental conditions as they searched Quirky.com was something I liked a lot. As expected, participants who had received the surface features of the problem, or had only encountered the problem itself, searched for just that! Abstracting at a higher level requires practice. The mere fact that a significant volume of research engages with the introductory object-oriented programming paradigm to help students learn about abstraction (classes and methods) suggests how difficult the skill is to acquire. Providing the schema would prime participants to think about the structural elements of the problem and thereby filter out surface features. The participants who received the schema were primed to think at a more abstract level, and that priming helped.

This brings me to the problem that I had in mind while reading the paper (and which the authors list in Future Work): schemas (those abstractions) are not readily available as they were in the experiment! How do we nudge people to abstract the structural features of any kind of problem without giving them a schema?

Borrowing partly from Teevan & Yu (2017), I wondered whether it would be possible for us to enable reflective self-sourcing: i.e., we pose general questions such as “What other problems can you think of that would be similar in structure to this one?”, “What other domains might encounter a similarly structured problem?”, or “Can you think of a similar problem that a doctor/lawyer/teacher would come across in their profession?” so as to prime participants to think abstractly. I hypothesize that such participants would be more likely to come up with analogs that are structurally similar but differ in surface features. It could be something worth trying. It would be one of those (very) rare occasions of future work actually being done by someone else!