With the recent emergence of Human Computation, there have been many advancements that have pushed the field out of academia and into industry. This growth has been so rapid that much of the terminology and nomenclature has not been well defined within the scientific community. Although these ideas are all gathered under the umbrella term Human Computation, the term itself is not strictly defined and has been applied loosely across papers and ideas. This work defines Human Computation as the coupling of problems that could eventually be migrated to computers with situations where "human participation is directed by the computational system." The study then goes on to define related terms that are equally loosely used; under the collective idea of Human Computation, these include the common technological terms Crowdsourcing, Social Computing, Data Mining, and Collective Intelligence.

Building on these definitions, various crowdsourcing platforms are compared within a more inclusive classification system. In this system, aspects of each platform are categorized using labels drawn from common usage in industry and the literature. These labels, which apply to some extent to every crowdsourcing platform, are: motivation, human skill, aggregation, quality control, process order, and task-request cardinality. Beneath each of these top-level categories are sub-categories that further characterize each platform; for example, Motivation has the sub-labels Pay, Altruism (people's inherent will to do good), Enjoyment, Reputation (e.g., working with a well-known company), and Implicit Work (work that happens as a by-product of using the system). By tying this vocabulary down to clearer definitions, the authors hope to better understand each platform and to better recognize how to make each system beneficial to the humans involved.
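To make the shape of this classification concrete, below is a minimal sketch assuming a simple dictionary representation in Python. Only the six dimension names and the Motivation sub-labels come from the paper as summarized above; every other sub-label, the example platform entry, and the `validate` helper are hypothetical placeholders added purely for illustration.

```python
# Minimal sketch of the classification system, assuming a dictionary encoding.
# The six dimensions and the Motivation sub-labels follow the summary above;
# the remaining sub-labels are hypothetical placeholders, not the paper's.
TAXONOMY = {
    "motivation": ["pay", "altruism", "enjoyment", "reputation", "implicit work"],
    "human skill": ["placeholder skill A", "placeholder skill B"],
    "aggregation": ["placeholder aggregation A", "placeholder aggregation B"],
    "quality control": ["placeholder control A", "placeholder control B"],
    "process order": ["placeholder order A", "placeholder order B"],
    "task-request cardinality": ["placeholder cardinality A", "placeholder cardinality B"],
}


def validate(labels: dict) -> list:
    """Return (dimension, value) pairs that fall outside the defined sub-labels."""
    return [(dim, val) for dim, val in labels.items()
            if val not in TAXONOMY.get(dim, [])]


# Hypothetical platform entry: invented to show the shape of the data only.
example_platform = {
    "motivation": "pay",
    "human skill": "placeholder skill A",
    "quality control": "placeholder control B",
}

print(validate(example_platform))  # [] -> every label fits the defined set
```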
I disagree with how the labeling is constructed in this system. A fundamental issue with any classification is that some of the platforms placed under a given label will inevitably fall into a "gray area." In addition, the labels may stifle new creative ideas, since they become the broad buckets people use to standardize their thinking. This is analogous to a standardized test that enforces learning a single path while missing the broader learning.
While I tentatively agree with the upper-level labeling system itself, I believe the secondary labels should be left more open-ended. Otherwise a new discovery risks being limited, or even overshadowed, by the pressure to relate it to the commonly accepted categories rather than recognizing it as a distinct approach.
- I would like to see how many of the crowdsourcing examples are cross-listed within any of the dimensions. Under the current system, the examples the authors list seem relatively easy to classify, while unlisted platforms might only fit into categories that would otherwise be dropped from the table. (A rough sketch of how such a cross-listing check could be prototyped appears after these questions.)
- Since this is intended as a common classification system, I would like to know whether a user survey (among people actively using these technologies) has been conducted to see whether the labels accurately represent the research area.
- My final question about this system is how actively it has been adopted in industry, for example in how new platforms advertise themselves or in the core design of those platforms.
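The cross-listing check raised in the first question above could be prototyped roughly as follows. This is only a sketch under the assumption that each platform may carry a set of sub-labels per dimension; the platform names and label assignments are invented stand-ins rather than entries from the paper's actual table.

```python
# Rough sketch of the cross-listing check: for each dimension, count how many
# platforms carry more than one sub-label. Platform names and label sets are
# hypothetical stand-ins, not taken from the paper's table.
from collections import Counter

platform_labels = {
    "PlatformA": {"motivation": {"pay"}},
    "PlatformB": {"motivation": {"pay", "enjoyment"}},
    "PlatformC": {"motivation": {"altruism", "reputation"},
                  "quality control": {"redundancy", "expert review"}},
}

cross_listed = Counter()
for labels in platform_labels.values():
    for dimension, sublabels in labels.items():
        if len(sublabels) > 1:
            cross_listed[dimension] += 1

print(dict(cross_listed))  # e.g. {'motivation': 2, 'quality control': 1}
```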