Nuo Ma – Reflection 9


Active listening is an important form of feedback in face-to-face communication: verbal cues like ‘uh huh’ or body language show that the listener is following. Online discussions, however, lack these backchannels that demonstrate evidence of understanding. Feedback buttons like a ‘thumbs up’ conflate two functions: signaling understanding and judging the statement. In this paper, the author addresses the problem by implementing an interface, Reflect, which uses restatement as a substitute for active listening: listeners summarize the statement just made during the session and offer that summary as feedback. By doing this, both sides can make sure no misunderstanding arises in the process, and the summaries can also prompt new ideas.

If we follow the author’s argument, this study is quite effective at promoting understanding in online discussions, and the idea is simple. However, I think there are also constraints that make the study weak in some scenarios. First, it is limited to strongly linear, document-like discussions. And since speakers are privileged to verify summaries, its use is constrained to one-to-many, teaching-style online sessions. What if the process involves some level of brainstorming or requires extensive discussion? If we changed the system so that, instead of supplying only restatements, it also mixed in comments and idea expansion, we can imagine that when new ideas pop up the UI would expand like a tree structure, and the original linear discussion could quickly derail once it reaches a heated point. This is partly reflected in Table 2. Since Slashdot mainly consists of male professional engineers, we can clearly see that the content analysis there is not strictly neutral restatement without elaboration, but includes various other categories of replies, compared to the other deployments, which look more like seminar-style scenarios. Also, from the feedback in the Wikimedia strategy case, it seems that people are using Reflect more as a tool for summarizing posts in a participatory way, which I believe was not the author’s main intention in developing the tool. Can we assume that this tool, while it definitely increases engagement during discussion, may also discourage new ideas from being generated and indirectly decrease the efficiency of such discussions?

That said, my second concern is that, while I understand that deploying this tool at all is already hard to achieve, the study lacks a controlled, quantitative evaluation of its effectiveness; I only see verbal feedback from field installations. Setting aside the first point, if we stick to the original goal of promoting mutual understanding in a discussion session, I am curious how this would compare to a post-session quiz. If we had to design a user study, a post-session survey would be a good way to evaluate whether participants understood things correctly, although the main discussant would then not receive as much feedback in real time.

Another point I did not fully understand is that, because the specific deployment steps are not described in the paper, one might ask how this tool would work in a time-critical online discussion session, and what happens when there is a large number of users. It could be disastrous if the main discussant had to pause and read through all the restatements. With face-to-face active listening, the main discussant only has to make sure the majority of participants get the point by watching for action cues or verbal feedback. So if the system is to work in this scenario, there should be a way for the main discussant to quickly skim the restatements, perhaps by computing semantic distances; a rough sketch of what I have in mind follows.
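To make that last suggestion concrete, here is a minimal sketch, entirely my own assumption and not part of the Reflect paper, of how restatements could be ranked by semantic distance from the speaker’s original statement so the discussant can skim the most divergent ones first. The embedding model and the helper function rank_restatements are hypothetical choices, using the sentence-transformers library.

# Minimal sketch (my assumption, not the Reflect system): rank listener
# restatements by how far they drift from the original statement, so the
# main discussant can skim the most divergent summaries first.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice

def rank_restatements(original: str, restatements: list[str]) -> list[tuple[str, float]]:
    """Return restatements sorted by cosine distance from the original statement."""
    emb_orig = model.encode(original, convert_to_tensor=True)
    emb_rest = model.encode(restatements, convert_to_tensor=True)
    sims = util.cos_sim(emb_orig, emb_rest)[0]  # one similarity score per restatement
    distances = [(text, 1.0 - float(s)) for text, s in zip(restatements, sims)]
    return sorted(distances, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    original = "We should cache query results to reduce database load."
    restatements = [
        "You are saying caching query results will lower the load on the database.",
        "So the plan is to add more database servers.",
    ]
    for text, dist in rank_restatements(original, restatements):
        print(f"{dist:.2f}  {text}")

In this sketch, a high distance flags a restatement that probably misses the speaker’s point, which is exactly the case the discussant would want to read first under time pressure.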