Summary
Hahn et al.’s paper “The Knowledge Accelerator: Big Picture Thinking in Small Pieces” uses a distributed information synthesis task as a probe to explore the opportunities and limitations of accomplishing big-picture thinking by breaking it down into small pieces. Most traditional crowdsourcing work targets simple, independent tasks, but real-world tasks are usually complex and interdependent and may require big-picture thinking. A few current crowdsourcing approaches support breaking down complex tasks, but they depend on a small group of people to manage the big-picture view and steer the ultimate objective. This paper proposes that a computational system can automatically support big-picture thinking entirely through the small pieces of work conducted by individuals. The researchers implement distributed information synthesis in a prototype system and evaluate the system’s output on different topics to validate the viability, strengths, and weaknesses of their proposed approach.
Reflection
I think this paper introduces an innovative approach to knowledge collection that could potentially replace a group of intermediate moderators/reviewers with an automated system. The example task explored in the paper is answering a given question by collecting information in parallel, which raises the question of how much the proposed system improves answer quality by compiling the collected pieces of information into a structured article. For similar question-answer tasks, we already have a variety of online communities and platforms. Take Stack Overflow, for example: it is a site where enthusiast programmers learn and share programming knowledge. A large number of professional programmers answer questions on a voluntary basis, and a question usually receives several answers detailing different approaches, with the best solution at the top marked with a green check. You can consult the other answers as well in case the one you tried does not work for you. I think the variety of answers from different people sometimes increases the likelihood that the problem can be solved, and the proposed system somewhat reduces that kind of diversity in the answers. Also, since a single informative article is the system’s final output for a given question, its quality matters, but it seems hard for the vote-then-edit pattern, without any reviewers, to ensure the quality of that final answer.
In addition, we need to be aware that much real-world work can hardly be conducted via crowdsourcing because of the difficulty of decomposing tasks into small, independent units, and, more importantly, because the objective goes beyond accelerating computation or collecting complete information. For creative work such as writing a song, editing a film, or designing a product, the goal is more about encouraging creativity and diversity. In those scenarios, even with a clear big picture in mind, it is very difficult for a group of recruited crowd workers to assemble small pieces of work into a good final product. As a result, I think the proposed approach is limited to comparatively less creative tasks, where each piece can be decomposed and processed independently.
Discussion
I think the following questions are worthy of further discussion.
- Do you think the proposed system can completely replace the role of moderators/reviewers in maintaining the big picture? What are the advantages and disadvantages?
- This paper discusses the proposed system in the context of a question-answering task. In what other applications could the system be helpful?
- Can you think of any ways to improve the system so that it scales to other domains, or even non-AI domains?
- Would you consider the breaking-down approach in your course project? If so, how would you apply it?