Summary
The paper presents an affordance-based framework for human computation and human-computer collaboration. It was published in 2012 in IEEE Transactions on Visualization and Computer Graphics. Affordances are defined as “opportunities provided to an organism by an object or environment”. The authors reviewed 1271 papers in the area and distilled them into a collection of 49 documents representing state-of-the-art research, which they grouped by human and machine affordances.
Under human affordances, they discuss the skills that humans have to offer, namely visual perception, visuospatial thinking, audiolinguistic ability, sociocultural awareness, creativity, and domain knowledge. Under machine affordances, they discuss large-scale data manipulation, collection and storage of large amounts of data, efficient data movement, and bias-free analysis. They also cover systems that make use of multiple affordances, such as the reCAPTCHA and PatViz projects, and they propose possible extensions to the framework, including human adaptability and machine sensing. The paper also describes the challenges of measuring the complexity of visual analytics systems and of determining the best way to measure work.
Reflection
Affordance is a new concept to me. It was interesting how the authors defined human and machine affordance-based systems, along with systems that make use of both. Humans have special abilities that outperform machines, namely creativity and comprehension. Machines today can classify data, but this requires a lot of training samples; recent neural network-based architectures are “data hungry”, and using such systems is extremely challenging when properly labelled data is lacking. Additionally, humans have strong perceptual abilities: distinguishing audio, images, and video is easy for them. Platforms like Amara take advantage of this and employ crowd workers to caption videos. Humans are also effective when it comes to domain knowledge. Jargon specific to a community, e.g., chemical names or legal and medical terminology, is difficult for machines to comprehend. Named entity recognizers help machines in this respect, but their error rates are still high.

The paper does succeed in highlighting the strengths of both kinds of systems. Humans are good at the various tasks mentioned above but are often prone to error. This is where machines outperform humans and can be used effectively. Machines are good at dealing with large quantities of data, and machine-learning algorithms can classify or cluster data, or provide other services as needed. Additionally, the lack of human-like perception can be an advantage, since humans tend to be influenced by their opinions; for a task with a political angle, it would be extremely difficult for a human to remain unbiased. Hence, humans and machines each have a unique advantage over the other, and it is the researcher's task to utilize them effectively.
Questions
- How does one effectively decide which affordance, human or machine, is best for the task at hand?
- How should the effectiveness of such a system be evaluated? Is there a global evaluation metric that can be applied?
- When using both humans and machines, how can tasks be divided between them effectively?