01/29/20 – Myles Frantz – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

Within the field of visual analytics research, there has been much work to solve problems that require a close and interactive relationship between humans and machines. As is standard practice in research, each paper usually creates a new standard that improves the fundamentals of the area or improves upon a previously created standard. To this extent there have been many projects that excel in their particular areas of expertise; however, this paper endeavors to create a framework that enables comparability between the various features of those projects in order to further the research. Previous frameworks each created models to the best of their authors’ abilities, including features such as the maturity of the crowdsourcing platform, the model presentation, or the integration types. While these are acknowledged as furthering the field, they are limited to their subsections, “cornering” themselves relative to the framework presented in this paper. While the relationship between humans and computers was first described and discussed in the early 1950s, it was stabilized in the late 1970s by J.J. Gibson, for whom “an organism and its environment complement each other”. These affordances serve as core concepts linking humans and machines, since in visual analytics this relationship is at its core. The multitude of papers covered in this survey include some of the following human affordances (human “features” required by machines): visual perception, visuospatial thinking, creativity, and domain knowledge. The machine affordances (machine attributes used or further “exploited” for research purposes) include large-scale data manipulation, efficient data movement, and bias-free analysis.
Through these features, there can also be hybrid relationships that combine both human and machine affordances.
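The affordance taxonomy described above can be sketched as a small data structure. This is my own illustration, not code from the paper; the `classify` helper and the example task requirements are hypothetical:

```python
# Illustrative sketch of the paper's human/machine affordance taxonomy.
# The sets below list the affordances named in the survey; the classify()
# helper and the example inputs are hypothetical, for illustration only.
HUMAN_AFFORDANCES = {
    "visual perception", "visuospatial thinking", "audiolinguistic ability",
    "sociocultural awareness", "creativity", "domain knowledge",
}
MACHINE_AFFORDANCES = {
    "large-scale data manipulation", "data collection and storage",
    "efficient data movement", "bias-free analysis",
}

def classify(required):
    """Partition a task's required affordances by which party supplies them."""
    required = set(required)
    return {
        "human": required & HUMAN_AFFORDANCES,
        "machine": required & MACHINE_AFFORDANCES,
        "unmatched": required - HUMAN_AFFORDANCES - MACHINE_AFFORDANCES,
    }

# A task needing both parties suggests a hybrid (collaborative) design.
print(classify({"creativity", "efficient data movement"}))
```

Under this sketch, a task whose requirements span both sets would be a candidate for the hybrid human-machine relationships the paper discusses.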

In comparison to the other reading for the week, I agree with and like the framework created to relate crowdsourcing tools and humans. Not only does it emphasize more human aspects (suggesting a better future relationship), it also describes the current co-dependency with a relatively bigger emphasis on human-centric interactions.

I also agree that this framework seems to be a good representation of the standard applications of visual analytics. While acknowledging the merging of both human and machine affordances, the human affordances seem sufficient for the framework. The machine affordances also seem sufficient, though this may be due to the direction of research in the area.

  • Like the other reading of the week, I would like to see a user study (of researchers or industry practitioners in the area) to see how this comparison lines up with practical usage.
  • In the future, could there be a more granular weighting of which affordances are used by which platform? This is a more practical application, though it may serve as a better guide by which companies (or researchers) could choose the platform that best fits their target audience.
  • Comparing the affordances (or qualities) of projects at a high level may not be fair to each respective project in the eyes of potential consumers. Though potentially game-able (inflating the numbers through malicious means) and prone to exaggeration, impact score and depth could help compare each project.


01/29/20 – Rohit Kumar Chandaluri – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

Summary:

The author explains the research going on in the visual analytics area on collaboration between humans and computers. While there have been multiple promising examples of human-computer collaboration, there are no proper solutions to the following questions:

  1. How can we tell if a problem will benefit from collaboration?
  2. How do we decide which tasks to delegate to which party and when?
  3. How can one system compare to others trying to solve the same problem?

The author tries to answer the above questions in the paper, while explaining the uses of visual analytics that exist in the present world. The author then explains the different kinds of affordances that exist in visual-analytics human-computer collaboration and describes each affordance in detail.

Reflections:

It was interesting to learn that visual analytics can be seen as a human-computer collaboration. For analytics on large data, we need greater computational power to visualize the analytical data. It was interesting to learn about white-box and black-box human-computer collaboration. Visual analytics helps people with no expertise in the area provide inputs to the problem.

Questions:

  1. How can we be sure that one visualization is the correct solution for a particular problem?
  2. The people who are developing the visualization tools are the humans, will the area of expertise of the people developing the tool affect the results?
  3. Is a visual analytics solution better than a normal machine learning solution for solving the problem?


1/29/20 – Sukrit Venkatagiri – Affordance-Based Framework for Human Computation

Paper: An Affordance-Based Framework for Human Computation and Human-Computer Collaboration by R. Jordan Crouser and Remco Chang

Summary: 

This paper provides a survey of 49 papers on human-computer collaboration systems and interfaces. The authors highlight some affordances that arise from these collaborative systems and propose an affordance-based framework as a common language for understanding seemingly disparate branches of research and for indicating unexplored avenues for future work. They discuss various systems and propose extensions to them involving human adaptability and machine sensing. Finally, they conclude with a discussion of the utility of their framework in an increasingly collaborative world, and some complexity measures for visual analytics.

Reflection:

This paper focuses on some fundamental questions in mixed-initiative collaborations, such as how does one tell if a problem even benefits from a collaborative technique, and if so, who is the work delegated to? The paper also provides ways to evaluate complexity in different visual analytic setups, but raises more questions, such as what is the best way to evaluate work, and how can we account for individual differences? These suggestions and questions, however, only beget more questions. The nature of work is increasingly complex, requiring more unique ways to measure success that are application-specific. The paper tries to come up with a one-size-fits-all solution for this, but the solution ends up being more generic.

The paper also highlights the need for a more holistic evaluation approach. Typically, ML and AI research is focused solely on the performance of the model. However, this paper highlights the need to evaluate the performance of both the human and the system that they are collaborating with. 

The paper talks about human-computer collaboration, mostly focused on visual analytics. There is still more work to be done in studying how applicable this framework is to physical human-computer interfaces, for example, an exoskeleton or a robot that assembles cars. Here, humans and robots have different abilities, which are not covered in the paper. Perhaps humans’ visual skills may be combined with a robot’s accuracy.

Questions:

  1. How might one apply this framework in the course of their class project?
  2. What about this framework is still/no longer applicable in the age of deep learning?
  3. Will AI ever surpass human creativity, audio linguistic abilities, and visuospatial thinking abilities? What does it mean to surpass human abilities?
  4. Is this framework applicable for cyber-physical systems? How does it differ?


01/29/20 – Yuhang Liu – Affordance-based framework for human-computer collaboration

In 1993, researchers from different backgrounds jointly discussed the challenges and benefits in the field of human-computer collaboration. They define collaboration as the process of two or more agents working together to achieve a common goal, while human-machine collaboration is defined as a collaboration involving at least one person and a computing agent. The field of visual analytics is deeply rooted in human-machine collaboration. That is, visual analytics attempts to leverage analyst intelligence and machine computing power in collaborations that analyze complex problems. The authors studied over a thousand papers from many top conferences in visual analytics, human-computer interaction, and visualization, providing a comprehensive overview of the latest technologies along with a general framework based on their research. The authors call this framework “affordance-based,” pointing out that humans and machines offer each other opportunities for action, which exist whether or not they are perceived.

From reading the article, we learn that the word “affordance” was first proposed by the American psychologist J.J. Gibson. It means that an object and its environment provide opportunities for action. When the word is used in human-computer collaboration, it means that both human and machine provide their partner with opportunities for action. In this two-way relationship, an affordance must be effectively perceived before it can be used; by using it, we can achieve better human-machine collaboration. In this two-way relationship, people and computers have different abilities.

First of all, this kind of affordance manifests differently for different human abilities. These abilities mainly include visual perception, visuospatial thinking, audiolinguistic ability, sociocultural awareness, creativity, and domain knowledge. Among them, the first three belong to human strengths, especially the visual abilities. Humans have powerful visual perception and can easily distinguish color, shape, and even the texture and motion of images; humans have unparalleled advantages over machines in this respect, so it is very reasonable for people to do this work instead of computers. The latter three abilities require years of systematic learning and are difficult to fully embed in a computer, so using manual analysis and personnel experience as part of the collaboration can greatly improve efficiency.

On the contrary, machines have abilities that humans do not: large-scale data manipulation, collecting and storing large amounts of data, and efficient data movement. These capabilities are not possessed by human beings; people cannot complete this series of tasks, nor can people completely give up subjective thinking and realize unbiased analysis. Machines thus offer other avenues of insight.

All in all, the author analyzed a large number of papers and arrived at a general model that can lay the foundation for future work. Based on the “affordance” idea, it addresses the problems encountered in previous research: judging whether collaborative technologies can solve a problem, deciding when tasks should be assigned to one party, and developing a common language. I think the framework itself is also very reasonable, as it can achieve mutual cooperation and inspiration between human and machine. The integration of problems and the convergence of their solutions should also be the direction of development.

What are the disadvantages of the previous frameworks?

What characteristics of people and computers correspond to the “affordance” of human and computers?

How can humans and machines be made to play nicely together for the extensions mentioned in the article?


01/29/20 – Vikram Mohanty – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.

Paper Authors: R. Jordan Crouser and Remco Chang

Summary

This paper provides an overview summary of some of the popular systems (back in 2012), which were built around human-computer collaboration. Based on this analysis, the authors uncover different key patterns in human and machine affordances, and propose an affordance-based framework that will help researchers think and strategize better about problems that can benefit from collaboration. Such an affordance-based framework, according to the authors, would enable easy comparison between systems via common metrics (discussed in the paper). In the age of intelligent user interfaces, the paper gives researchers a foundational direction or lens to break down problems and map the solution space in a meaningful manner.

Reflection

  1. This paper is a great reference resource for setting some foundational questions on human-computer collaboration – How do we tell if a problem would benefit from a collaborative solution? How do we decide which tasks to delegate to which party, and when? How do we compare different systems solving the same problem? At the same time, it also sets some foundational goals and objectives for a system rooted in human-computer collaboration. The paper illustrates all the concepts through different successful examples of systems, making it easy to visualize the bin in which your (anticipated) research would fit. 
  2. This paper makes a great motivating argument about developing systems from the problem space, rather than jumping directly to solutions, which may often lead to investment of significant time and energy into developing inefficient collaboration.
  3. The paper makes the case for evolving from a prior established framework (i.e. function allocation) for human-machine systems into the proposed affordance-based one. Even though they proposed this framework in 2012, which is also when deep learning techniques started becoming popular, I feel that this framework is dynamic and broad enough to accommodate the ubiquity of current AI and intelligent user interfaces.
  4. Following the paper’s direction of updating theories with technology’s evolution, I would argue for a “sequel” paper to discuss AI affordances as an extension to the machine affordances. This would require an in-depth discussion of the capacities and limitations of state-of-the-art AIs designed for different tasks, some of which currently fall under human affordances, such as visual perception (computer vision), creativity (language models), etc. While AIs may be far from perfect at these tasks, they still provide imperfect affordances. Inevitably, this also means re-focusing some of the human affordances described in the paper, and it may be part of a bigger question, i.e. “what is the role of humans in the age of AI?”. This also pushes the boundaries for what can be achieved with such hybrid interaction, e.g. AI’s last-mile problems [1].
  5. Currently, many different algorithms interact with human users via intelligent user interfaces (IUIs) and form a big part of decision-making processes. Over the years, researchers from different communities have pointed out how different algorithms can result in different forms of bias [2, 3] and have pushed for more fairness, accountability, transparency, and interpretability of these algorithms in an effort to mitigate these biases. The paper, written in 2012, did not account for algorithms within machine affordances, and thus considered bias-free analysis a machine affordance. Eight years later, the ability to detect biases still remains somewhat more of a human affordance.

Questions

  1. Now, in 2020, how would you expand upon the machine affordances discussed in the paper?
  2. Does AI fit under machine affordances, or deserves a separate section – AI affordances? What kind of affordances does AI provide humans, and vice-a-versa? In other words, how do you envision this paper in current times? 
  3. For the folks working on AI or ML systems, is it possible for you to present the inaccuracies of the algorithms you are working on in descriptive, qualitative terms? Do you see human cognition, be it through novice or expert workers, as competent enough to fill in the gaps?
  4. Does this paper change the way you view your proposed project? If so, how does it change from before? Is it more in terms of how you present your paper?


01/29/20 – Lee Lisle – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

Summary

Crouser and Chang make the argument that visual analytics, defined as “the science of analytical reasoning facilitated by visual interactive interfaces,” is being pushed by two main directions of thought: human computation and human-computer collaboration. However, there is no common design language between the two subdisciplines. Therefore, they took it upon themselves to survey 1,271 papers, whittling them down to 49 representative papers to find common threads that can help define the fields. They then categorized the research by which affordances it studies, either for users or for machines. Humans are naturally better at visual perception, visuospatial thinking, audiolinguistic ability, sociocultural awareness, creativity, and domain knowledge, while machines are better at large-scale data manipulation, data storage and collection, efficient data movement, and bias-free analysis. The authors then suggest that research explore human adaptability and machine sensing, and discuss when to use these strategies.

Personal Reflection

When reading this I did question a few things about the studies. For example, in bias-free analysis, while they do admit that human bias can be introduced during programming, they fail to acknowledge the bias that can be present in the input data. Entire books have been written (Weapons of Math Destruction being one) that cover how “bias-free” algorithms can be fed input data with clear bias, resulting in a biased system regardless of whether it is hard-coded in the algorithm.

Outlining these similarities between various human-computer collaborations allows other researchers to scope projects better. Bringing up the deficiencies of certain approaches allows for avoidance of the same pitfalls.

The complexity-measure questions section, however, felt a little out of place, considering it was the first time the topic was brought up in the paper. Still, it asked strong questions that definitely impact this area of research. If “running time” for a human is long, this could mean there are improvements to be made and areas where we can introduce more computer aid.

Questions

  1. This kind of paper is often present in many different fields. Do you find these summary papers useful?  Why or Why not? Since it’s been 8 years since this was published, is it time for another one?
  2. Near the end of the paper, they ask what the best way is to measure human work.  What are your ideas? What are the tradeoffs for the types they suggested (input size, information density, human time, space)?
  3. Section 6 makes it clear that using multiple affordances at once needs to be balanced in order to be effectively used. Is this an issue with the affordances or an issue with the usability design of the tools?
  4. The authors mention two areas of further study in section 7: Human adaptability and machine sensing. Have these been researched since this paper came out? If not, how would you tackle these issues?


01/29/20 – Ziyao Wang – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.

In this paper, the authors conducted a literature review of papers and publications that represent the state of the art in the human-computer collaboration and human computation areas. From this review, they classified the affordances into two groups: human intelligence and machine intelligence. Though the affordances can be split into two groups, there are also systems like reCAPTCHA and PatViz which benefit from the combination of the two intelligences. Finally, they provided examples of how to utilize this framework and some advice on future research. They announced that human adaptability and machine sensing are two extensions of the current work. Also, future work (finding ways to measure human work, assessing human work in practice, and accounting for individual differences in human operators) needs a combination of experts in theoretical computer science as well as in psychology and neuroscience.

Reflections:

Primarily, I felt that both human affordances and machine affordances contribute to the success of current systems. It is greatly important to allocate tasks so that human and machine can support each other. Current systems may suffer from poor human-computer collaboration; for example, a system may not assign proper work to human workers, or its user interface may be difficult to use. To avoid this kind of situation, policies and guidance are needed. There should be commonly used evaluation criteria and restrictions on the industry.

Secondly, researchers can benefit from an overview of related research areas. In most cases, solving a problem may require help from experts in different areas. As a result, the category of the problem may become ambiguous: researchers from different fields may waste effort on similar research, and they may not be able to get help from previous research in another area. For this reason, it is important to categorize research with similar goals or related techniques. Current and future research will benefit from such a categorization, and discussion between experts will become much easier. As a result, more ideas can be proposed, and researchers can find fields they had not considered before.

Additionally, in human-computation and human-computer collaborative systems, problems are solved using both human intelligence and machine intelligence. For such a comprehensive area, it is important to reflect regularly. With these reflections, researchers can give comprehensive consideration to the problems they are going to solve. With the table in the paper, the affordances of human intelligence and machine intelligence can be viewed at a glance. Additionally, we can find out in which areas there has already been a lot of research and to which areas we should pay more attention. With this common framework, understanding and discussing previous work becomes much easier, and novel ideas will emerge. This kind of reflection can be applied in other areas too, which would result in rapid development in each industry.

Questions:

Why are there no updates to systems that are considered hard to use?

How can human work and machine work be assessed in practice?

For a user interface, which is more important: letting new workers use it easily, even if customization is limited, or letting experienced workers customize it and reach high efficiency, even if new users may face some difficulty?


01/29/2020-Donghan Hu-An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

Many researchers’ primary goals are to develop tools and methodologies that facilitate human-machine collaborative problem solving and to understand and maximize the benefits of the partnership as it grows in size and complexity. The first problem is: how do we tell if a problem would benefit from a collaborative technique? This paper mentions that even though deploying various collaborative systems has led to many novel approaches to difficult problems, it has also led to the investment of significant time, expense, and energy where the problems might have been solved better by relying on human or machine techniques alone. The second problem is: how do we decide which tasks to delegate to which party, and when? The authors state that we still lack a language for describing the skills and capacity of the collaborating team. For the third question, how does one system compare to others trying to solve the same problem, the lack of a common language or measures by which to describe new systems is one important reason. Regarding the research contributions, the authors picked out 49 publications from 1,271 papers which represent the state of the art in the study of human-computer collaboration and human computation. They then identified groupings based on human- and machine-intelligence affordances, which form the basis of a common framework for understanding and discussing collaborative works. Last, the authors discussed unexplored areas for future work. Each of the current frameworks is specific to a subclass of collaborative systems, which makes it hard to extend them to a broader class of human-computer collaborative systems.

Based on the definition of “affordance”, I understand that both humans and machines bring opportunities for action to the partnership, and each must be able to perceive and access these opportunities in order for them to be effectively leveraged. It is not surprising to me that the bandwidth of information presentation is potentially higher in visual perception than in any of the other senses. I consider visual perception the most important information-processing channel for humans in most cases, which is why there are a plethora of research studies that use human visual processing to solve various problems. I am quite interested in the concept of sociocultural awareness: individuals understand their actions in relation to others and to the social, cultural, and historical context in which they are carried out. I think this is a paramount view in the study of HCI. Individuals in different environments with different cultural backgrounds would interact differently with the same computers. In the future, I consider that cultural background should become an important factor in HCI studies.

I found that various applications are categorized under multiple affordances. If so, how can the authors answer the third question? For example, if two systems are trying to solve the same problem, but each has different human or machine affordances, how can I say which is better? Do different affordances have different weights, or should we treat them equally?

Fewer tools are designed for the creativity, social, and bias-free affordances; what does this mean? Does it mean that these affordances are less important, or that researchers are still working on these areas?


1/29/2020 – Jooyoung Whang – An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

In this paper, the authors review more than 1200 papers to identify how to best utilize human-machine collaboration. Their field of study was visual analytics, but the paper is well generalized to fit many other research areas. The paper discusses two foundational factors to consider when designing a human-machine collaborative system: allocation and affordance. Many of the papers the authors reviewed studied systematic methods of appropriately allocating work between human and computer in a collaborative setting. A good rule was introduced by Fitts, but it was later found to be outdated due to the increasing computational power of machines. The paper concludes that inspecting affordance rather than allocation is a better way to utilize human-machine collaborative systems. Affordance can be best understood as what an agent is better at than others. For example, humans provide excellent visual processing skills while computers excel at large-data processing. The paper also introduces some case studies where multiple affordances from each party were utilized.

I greatly enjoyed reading about each of the affordances that humans and machines can provide. The list of affordances that the paper provides will serve as a good resource to come back to when trying to design a human-machine collaborative system. One machine affordance that I do not agree with is bias-free analysis. In machine learning scenarios, a learning model is very often easily biased. Both humans and machines can be biased in analyzing something based on previous experience or data. Of course, it is the responsibility of the designer of the system to ensure unbiased models, but as the designer is a human, it is often impossible to avoid bias of some kind. The case study regarding the reCAPTCHA system was an interesting read. I always thought that CAPTCHAs were only used for security purposes, not machine learning. After learning how it is actually used, I was impressed by how efficient and effective the system is at both securing Internet access and digitizing physical books.
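The reCAPTCHA mechanism discussed above can be sketched roughly as follows. This is my own simplified illustration of the control-word idea, not reCAPTCHA’s actual implementation; the function names, the words, and the majority-vote scheme are assumptions:

```python
# Rough sketch of the reCAPTCHA control-word mechanism: pair one word the
# OCR already knows (the control) with one it could not read (the unknown).
# A user who types the control word correctly is treated as human, and
# their reading of the unknown word becomes a vote toward digitizing it.
from collections import Counter

votes = Counter()  # accumulated transcriptions of the unknown word

def submit(control_expected, control_typed, unknown_typed):
    """Return True (human verified) only if the control word matches;
    on success, record the user's reading of the unknown word as a vote."""
    if control_typed.strip().lower() != control_expected.lower():
        return False
    votes[unknown_typed.strip().lower()] += 1
    return True

# Hypothetical submissions: one user misreads the scanned word.
submit("overlooks", "overlooks", "inqvisition")
submit("overlooks", "overlooks", "inquisition")
submit("overlooks", "overlooks", "inquisition")
print(votes.most_common(1)[0][0])  # prints: inquisition
```

The majority vote is where the human affordance (visual perception) does the work the machine cannot, while the machine affordance (large-scale data collection) aggregates millions of such votes.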

The following are the questions that I came up with while reading the paper:

1. The paper does a great job of summarizing what humans and machines are each relatively good at. The designer, therefore, simply needs to select appropriate tasks from the system to assign to each human and machine. Is there a good way to identify which affordances a system’s task needs?

2. There’s another thing that humans are really good at compared to machines: adapting. Machines, given their initial programming, do not change their response to an event with time and era, while humans very much do. Is there a human-machine collaborative system whose task would require the affordance of “adaptation” from a human collaborator?

3. Many human-machine collaborative systems register the tasks that need to be processed using an automated machine. For example, the reCAPTCHA system (the machine) samples a question and asks the human user to process it. What if it were the other way around, where a human registers a task and assigns it to either a machine or a human collaborator? Would there be any benefits to doing that?


01/29/20 – Affordance-Based Framework for Human Computation and Human-Computer Collaboration – Subil Abraham

Reading: R. Jordan Crouser and Remco Chang. 2012. An Affordance-Based Framework for Human Computation and Human-Computer Collaboration. IEEE Transactions on Visualization and Computer Graphics 18, 12: 2859–2868. https://doi.org/10.1109/TVCG.2012.195

This paper creates a summary of data visualization innovations as well as more general human-computer collaboration tools for interpreting and drawing conclusions from data. The goal of the paper is to create a common language by which to categorize these tools, thereby providing a way of comparing them and understanding exactly what is needed for a particular situation rather than relying on researcher intuition alone. The authors set up a framework in terms of affordances: what a human or computer finds the opportunity for, and is capable of doing, given the environment. By framing things in terms of affordances, we are able to identify how a human and/or computer can contribute to the goal of a given task, as well as frame a system in comparison to other systems in terms of their affordances.

The idea of categorizing human-computer collaborations in terms of affordances is certainly an interesting and intuitive one. Framing the characteristics of the different tools and software we use in these terms is a useful way of looking at things. However, as useful as the framework is, having read a little bit about function allocation, I don’t see how hugely different affordances are from function allocation. They both seem to be saying the same thing, in my view. The list of affordances is a bit more comprehensive than Fitts’s HABA-MABA list, but they both seem to convey the same information. Perhaps I do not have the necessary breadth of knowledge to see the difference, but the paper doesn’t make any convincing argument that is easy for an outsider to this field to understand.

Questions for discussion:

  1. How effective a framework is the affordance-based one? What use is it actually able to provide besides being one more set of standards? (relevant xkcd: https://m.xkcd.com/927/)
  2. There is a seemingly clear separation between human and machine affordances. But human adaptability seems to be a third kind of affordance, a hybrid affordance where a machine action is used to spark human ingenuity. Does that seem valid, or would you say that adaptability falls clearly into one of the two existing categories?
  3. Now that we have a language to talk about this, can we use these different affordances, combining them to create new applications? What would that look like? Or are we limited to identifying an application by its affordances only after its creation?
