Improving Crowd Innovation with Expert Facilitation

Chan et al., “Improving Crowd Innovation with Expert Facilitation,” CSCW ’16

Discussion Leader (con): Nai-Ching

Summary

Although crowdsourcing has been shown to be useful for creativity tasks, the quality of the resulting ideas is still an issue. This paper demonstrates that the quality of crowdsourced creativity tasks can be improved by introducing experienced facilitators in a real-time work setting. The facilitators produce inspirations that are expected to guide ideation. Quality is measured by divergence (fluency and breadth of search), convergence (depth of search), and creative outcomes (rated creativity of ideas). The first experiment shows that with the help of experienced facilitators, both the number of generated ideas and the max creativity of the output increase. The second experiment reveals that with novice/inexperienced facilitators, the creativity of the output is reduced. To analyze the causes of this difference, the authors code the strategies used to generate inspirations into categories including “Examples”, “Simulations”, and “Inquiries”. While “Examples” and “Inquiries” do not have significant effects on the output, “Simulations” are highly associated with higher max creativity of ideas. The authors also point out that the differing intentions of experienced and novice facilitators might account for the different results of facilitation: the experienced facilitators tend to actually facilitate, while the inexperienced facilitators are more inclined to ideate themselves.

 

Reflections

It seems contradictory that the paper first notes that popularity and “rich get richer” effects might not reflect actual innovative potential, yet on the facilitation dashboard the keywords are sized by frequency, which seems to be just another form of popularity.

It is not clear how ideators interact with the “inspire me” function before the facilitator enters any inspiration. If there is no inspiration available, is the button disabled? How do ideators know when a new inspiration arrives? And do facilitators know when ideators request inspiration? I would expect the “inspire me” function to help retain workers and lower the attrition rate, but based on the results, there is no significant difference between the facilitated and unfacilitated conditions.

In addition, the increase in creativity appears only in max creativity, not in mean creativity. On the one hand, this makes sense: as the authors argue, what innovators really care about is increasing the number of exceptional ideas, and since proper facilitation increases the likelihood of obtaining highly creative ideas, it is a good technique. On the other hand, it also shows the technique might not be reliable enough to avoid the manual effort of going through all the generated ideas to pick out the good ones (those at max creativity). This paper also reminds me of an earlier paper we discussed, “Distributed analogical idea generation: inventing with crowds”, which mainly increases mean creativity and does not report changes in max creativity. It might be possible to combine both techniques to increase both the mean and max creativity of ideas.

It also seems to me that, in addition to soliciting more ideas, keeping a good balance between divergence and convergence is very important. However, the future work section does not mention showing the facilitator information about the current breadth and depth of the idea/solution space, which could help him/her redirect the inspirations.

It is interesting that one of the themes in ideators’ comments is that inspirations provoke new frames of thinking about the problem, yet there is actually no significant difference in breadth between the facilitated and unfacilitated conditions. So I wonder how general the theme is.

Questions

  • What reasons do you think cause the discrepancy between user perception and the actual measurement of breadth of search in the solution space?
  • What is the analogy between the technique from this paper and the technique from “Distributed analogical idea generation: inventing with crowds”?
  • Can most people appreciate creativity? That is, if many people say something is creative, is it creative? Conversely, if something is creative, do most people recognize it as creative?
