4/8/20 – Akshita Jha – Agency plus automation: Designing artificial intelligence into interactive systems

Summary:
“Agency plus automation: Designing artificial intelligence into interactive systems” by Heer discusses the drawbacks of using artificial intelligence techniques to automate tasks, especially those considered repetitive and monotonous. However, framing such tasks as fully ‘automated’ presents a monumentally optimistic point of view that completely ignores the ghost work, or invisible labor, that goes into automating them. This gap between crowd work and machine automation highlights the need for design and engineering interventions. The authors of this paper try to make use of the complementary strengths and weaknesses of the two – the creativity, intelligence, and world knowledge of crowd workers, and the low cost and lack of cognitive overhead of automated systems. The authors describe in detail case studies of interactive systems in three different areas – data wrangling, exploratory analysis, and natural language translation – that combine computational support with interactive interfaces. The authors also talk about shared representations of tasks that include both human intelligence and automated support in the design itself. The authors conclude that “neither automated suggestions nor direct manipulation plays a strictly dominant role” and that “a fluent interleaving of both modalities can enable more productive, yet flexible, work.”

Reflections:
There is a lot of invisible work that goes into automating a task. Most automated tasks require hundreds, if not thousands, of annotations. Machine learning researchers turn a blind eye to all the effort that goes into these annotations by calling their systems ‘fully automated’. This view is exclusionary and does not do justice to the vital but seemingly trivial work done by crowd workers. One area to focus on is the open question of shared representation – is it possible to integrate data representation with human intelligence? If yes, is that useful? Data representation often involves the construction of a latent space that reduces the dimensionality of the input data to obtain concise and meaningful information. Such representations may or may not exist for human intelligence; borrowing from social psychology might help in that scenario. There are other ways to approach this as well. For example, the authors focus on building interactive systems with ‘collaborative’ interfaces. The three interactive systems – Wrangler, Voyager, and PTM – do not distribute tasks equally between humans and automated systems. The automated methods prompt users with different suggestions, which the end user reviews; the final decision-making power lies with the end user. It would be interesting to see what the results would look like if the roles were reversed and the system was turned on its head. An interesting case study could be one where the suggestions were given by the end user and the ultimate decision-making capability rested with the system. Would the system still be as collaborative? What would the drawbacks of such a system be?
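As a purely illustrative aside on what a data-side latent representation can look like, here is a minimal sketch using scikit-learn's PCA on a made-up feature matrix; whether an analogous construct exists for human intelligence is exactly the open question raised above.

# Illustrative sketch only: a low-dimensional "shared" data representation.
# Assumes scikit-learn is available; the features below are randomly generated.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
raw_features = rng.normal(size=(100, 20))   # 100 items, 20 raw input dimensions

pca = PCA(n_components=2)
latent = pca.fit_transform(raw_features)    # 2-D latent space a human could plot and inspect

print(latent.shape)                          # (100, 2)
print(pca.explained_variance_ratio_)         # how much structure the compact view retains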

Questions:

1. What are your general thoughts on the paper?
2. What did you think about the case studies? Which other case studies would you include?
3. What are your thoughts on evaluating systems with shared representations? Which evaluation criteria can we use?


4/8/20 – Akshita Jha – CrowdScape: Interactively Visualizing User Behavior and Output

Summary:
“CrowdScape: Interactively Visualizing User Behavior and Output” by Rzeszotarski and Kittur talks about crowdsourcing and the importance of interactive visualization that leverages the complementary strengths and weaknesses of crowd workers and machine intelligence. Crowdsourcing helps distribute work, but quality control approaches for it are often not scalable. Crowd-organizing algorithms like Partition-Map-Reduce, Find-Fix-Verify, and Price-Divide-Solve are used for easy distribution, merging, and checking of work in crowdsourcing. However, they are not very accurate or useful for complex, subjective tasks. CrowdScape combines worker behavior with worker output using interaction, visualization, and machine learning, which supports the human evaluation of crowd work. CrowdScape enables the user to develop and test hypotheses about the crowd and refine their selections through a sensemaking loop. The paper proposes novel techniques for exploring crowd workers’ products and visualizations of crowd worker behavior. It also provides tools for classifying crowd workers and an interface for interactive exploration of these results using mixed-initiative machine learning.

Reflections:
Prior work has examined worker behaviour or worker output in isolation, but combining the two is very fruitful for generating mental models of the workers and building a feedback loop. Visualising the workers’ process helps us understand their cognitive process and thus perceive the end product better. CrowdScape can only be used on web pages that allow the injection of JavaScript; it is not useful when this is blocked or for non-web, offline interfaces. The set of aggregate features used might not always provide useful feedback. Existing quality control measures are not very different from CrowdScape when a clear, consensus ground truth exists, such as identifying a spelling error. In such cases, the effort of learning and using CrowdScape may not pay off. In some cases, the behavioral traces of the worker may not be very indicative, such as when they work in a different editor and paste the finished work into the task at the end. Tasks that are heavily cognitive or entirely offline are also not well supported by the general methods CrowdScape relies on. The system depends heavily on detailed behavioral traces such as mouse movements, scrolling, keypresses, focus events, and clicks. This intrusiveness, and the implied decrease in efficiency, should be justified by the accuracy of the behavioral measurements. An interesting point to note here is that the tool can become privacy-intrusive if care is not taken. We should ensure that the tool evolves as crowd work becomes increasingly relevant and the tool becomes vital for understanding the underlying data and crowd behaviour. Apart from these reflections, I would just like to point out that the graphs the authors use in the paper convey their results really well. I feel this is one detail that is vital but easily overlooked in most papers.
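To make the notion of behavioral traces more concrete, here is a small illustrative sketch (not CrowdScape's actual code; the event names and fields are assumptions) of how a raw trace of clicks, scrolls, and keypresses could be reduced to aggregate features for review.

# Illustrative sketch: turning a raw behavioral trace into aggregate features.
# The event schema below is invented for demonstration purposes.
from collections import Counter

trace = [
    {"type": "focus",    "t": 0.0},
    {"type": "scroll",   "t": 1.2},
    {"type": "keypress", "t": 2.5},
    {"type": "keypress", "t": 2.9},
    {"type": "click",    "t": 10.4},
]

def aggregate_features(events):
    counts = Counter(e["type"] for e in events)
    total_time = events[-1]["t"] - events[0]["t"] if events else 0.0
    return {
        "total_time_s": total_time,
        "keypresses": counts["keypress"],
        "scrolls": counts["scroll"],
        "clicks": counts["click"],
    }

print(aggregate_features(trace))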

Questions:
1. What are your general thoughts about this paper?
2. Do you agree with the methodology followed?
3. Do you approve of the interface? Would you make any changes to the interface?


04/08/2020 – Bipasha Banerjee – CrowdScape: Interactively Visualizing User Behavior and Output

Summary

The paper focuses on tackling the problem of quality control for work done by crowdworkers. The authors created a system named CrowdScape to evaluate work done by humans through mixed-initiative machine learning and interactive visualization. They provide details of quality control in crowdsourcing, mentioning various methods that help evaluate the content, such as post-hoc output evaluation, behavioral traces, and integrated quality control. CrowdScape captures worker behavior and presents it in the form of interactive data visualizations. The system incorporates various techniques to monitor user behavior and helps determine whether the work was done diligently or in a rush. The output of the work is indeed a good indicator of its quality; however, an in-depth review of user behavior is needed to understand the manner in which the worker completed the task.

Reflection

To be very honest, I found this paper fascinating and extremely important for research work in this domain. Ensuring the work submitted is of good quality not only helps legitimize the output of the experiment but also increases trust in the platform as a whole. I was astonished to read that about one-third of all submissions are of low quality. The stats suggest that we are wasting a significant amount of resources. 

The paper mentions that the tool uses two sources of data: output and worker behavior. I was intrigued by how they took the worker’s behavior into account, such as the time taken to complete the task and the way the work was completed, including scrolling, key presses, and other activities. I was curious to know whether the workers’ consent was explicitly obtained. It would also be an interesting study to see if knowing that their behavior is being recorded affects performance. Additionally, dynamic feedback could be incorporated: if the worker is supposed to take “x” minutes, alert them when the time spent on the task is too low. This would prompt them to take the work more seriously and avoid unnecessary rejection of the task.
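A rough sketch of this dynamic-feedback idea could look like the following; the expected duration and the warning threshold are made-up values, not anything from the paper.

# Hypothetical sketch of the dynamic-feedback idea suggested above.
def time_feedback(elapsed_minutes: float, expected_minutes: float, ratio: float = 0.5) -> str:
    """Return a gentle warning if the worker finished suspiciously fast."""
    if elapsed_minutes < expected_minutes * ratio:
        return ("You finished much faster than most workers. "
                "Please double-check your answer before submitting.")
    return "Thanks, your submission looks complete."

print(time_feedback(elapsed_minutes=2.0, expected_minutes=10.0))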

I have a comment on the collection of YouTube video tutorials. One of the features taken into account was ‘Total Time’, which signified whether the worker had watched the video completely before summarizing the content. However, I would like to point out that videos can be watched at an increased playback speed; I sometimes end up watching most tutorial-related videos at 1.5x speed. Hence, if the total time taken is less than expected, it might simply mean that the worker watched the video at a higher speed. A simple check could solve the problem: YouTube generally offers a fixed set of playback speeds, and taking them into account when calculating the expected total time might be a viable option.
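The check I have in mind might look something like this sketch; the list of playback speeds and the numbers used are assumptions on my part, not values from the paper.

# Sketch of the playback-speed check proposed above.
PLAYBACK_SPEEDS = [0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0]  # YouTube's usual options

def plausibly_watched(video_length_s: float, time_on_task_s: float) -> bool:
    """True if the time spent is consistent with watching the whole video at some speed."""
    min_watch_time = video_length_s / max(PLAYBACK_SPEEDS)  # fastest allowed playback
    return time_on_task_s >= min_watch_time

print(plausibly_watched(video_length_s=600, time_on_task_s=320))  # True: could be 2x speed
print(plausibly_watched(video_length_s=600, time_on_task_s=120))  # False: too fast even at 2x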

Questions

  1. How are you ensuring the quality of the work completed by crowdworkers for your course project?
  2. Were the workers informed that their behavior was “watched”? Would the behavior and, subsequently, the performance change if they are aware of the situation?
  3. Workers might use different playback speeds to watch videos. How is that situation handled here?


04/08/2020 – Bipasha Banerjee – Agency plus automation: Designing artificial intelligence into interactive systems

Summary

The paper discusses the fact that computer-aided products should be considered an enhancement of human work rather than a replacement for it. The paper emphasizes that technology, on its own, is not always foolproof and that humans, at times, tend to rely completely on technology. In fact, AI in itself can yield faulty results due to biases in the training data and a lack of sufficient data, among other factors. The authors point out how the coupling of human and machine efforts can be done successfully, through examples such as Google search autocomplete and grammar/spelling correction. The paper aims to use AI techniques in a manner that ensures humans remain the primary controller. The authors considered three case studies, namely data wrangling, data visualization for exploratory analysis, and natural language translation, to demonstrate how shared representations perform. In each case, the models were designed to be human-centric while having automated reasoning enabled.

Reflection

I agree with the authors’ statement about data wrangling that more time is spent cleaning and preparing the data than actually interpreting it or applying the skills one specializes in. I was amused by the idea that the users’ work of transforming the data is cut short and aided by a system that suggests the proper actions to take. I believe this would indeed help the users of the system if the desired options are recommended to them directly; if not, it will help improve the machine further. I particularly found it interesting that users preferred to maintain control. This makes sense because, as humans, we have an intense desire to be in control.

The paper never clearly explains who the participants of the studies were. It would be essential to know exactly who the users were and how specialized they were in the fields they were working in. It would also give an in-depth idea of the experience they had interacting with the system, and thus, I feel, make the evaluation complete.

The paper’s overall concept is sound. It is indeed necessary to have a seamless interaction between humans and machines. The authors mention three case studies; however, all of them are data-oriented. It would be interesting to see how the work can be extended to other forms – videos and images. Facebook picture tagging, for example, does this task to some extent: it suggests the “probable” name(s) of the person in the picture to users. This work could also be used to help detect fake vs. real images or whether a video has been tampered with.

Questions

  1. How are you incorporating the notion of intelligent augmentation in your class project?
  2. The case studies are varied but mainly data-oriented. How would this work differ if it were applied to images?
  3. The paper mentions “participants” and how they provided feedback, etc. However, I am curious to know how they were selected – particularly, the criteria that were used to select users to test the system.


04/08/2020 – Palakh Mignonne Jude – CrowdScape: Interactively Visualizing User Behavior and Output

SUMMARY

Multiple challenges exist in ensuring the quality control of crowdworkers’ output that are not always easily resolved by simple methods such as the use of gold standards or worker agreement. Thus, the authors of this paper propose a new technique to ensure quality control in crowdsourcing for more complex tasks. By utilizing features from worker behavioral traces as well as worker outputs, they help researchers better understand the crowd. As part of this research, the authors propose novel visualizations to illustrate user behavior, new techniques to explore crowdworker products, tools to group as well as classify workers, and mixed-initiative machine learning models that build on a user’s intuition about the crowd. They created CrowdScape, built on top of MTurk, which captures data from the MTurk API as well as a task fingerprinting system in order to obtain worker behavioral traces. The authors discuss various case studies such as translation, picking a favorite color, writing about a favorite place, and tagging a video, and describe the benefits of CrowdScape in each case.

REFLECTION

I found CrowdScape to be a very good system, especially considering the difficulty of ensuring quality control among crowdworkers for more complex tasks. For example, in the case of a summarization task, particularly for larger documents, there is no single gold standard that can be used, and it would be rare for the answers of multiple workers to match closely enough for us to use majority vote as a quality control strategy. Thus, for applications such as this, I think it is very good that the authors proposed a methodology that combines behavioral traces with worker output, and I agree that it provides more insight than using either alone. I found the example of the requester intending to have summaries written for YouTube physics tutorials to be an apt one.
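To make the idea of combining the two signals concrete, here is a toy sketch (not from the paper; the features and numbers are invented) that clusters workers using both trace-derived and output-derived features, as one might for the summarization example.

# Illustrative sketch: combining trace features with simple output features so that
# neither signal is used in isolation. All values below are made up.
import numpy as np
from sklearn.cluster import KMeans

# One row per worker: [time_on_task_s, keypresses, summary_word_count]
workers = np.array([
    [540, 820, 140],   # long engagement, substantial summary
    [60,   15,  12],   # likely low effort
    [480, 760, 120],
    [45,    8,  10],
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(workers)
print(clusters)  # separates likely-diligent from likely-low-effort workers for manual review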

I also liked the visualization design that the authors proposed. They aimed to combine multiple views and made the interface easy for requesters to use. I especially found the 1-D and 2-D matrix scatter plots, which show the distribution of features over the group of workers and enable dynamic exploration, to be well thought out.

I found the case study on translation to be especially well thought out, given that the authors structured the study such that it included a sentence that did not parse well in computer-generated translations. I feel that such a strategy can be used in multiple translation-related activities in order to more easily discard submissions by lazy workers. I also liked the case study on ‘Writing about a Favorite Place’, as it indicated the performance of the CrowdScape system in a situation where no two workers would provide the same response and traditional quality control techniques would not be applicable.

QUESTIONS

  1. The CrowdScape system was built on top of Mechanical Turk. How well does it extend to other crowdsourcing platforms? Is there any difference in the performance?
  2. The authors mention that workers who may possibly work on their task in a separate text editor and paste the text in the end would have little trace information. Considering that this is a drawback of the system, what is the best way to overcome this limitation?
  3. The authors use the case study on ‘Translation’ to demonstrate the power of CrowdScape to identify outliers. Could an anomaly detection machine learning model be trained to identify such outliers and aid researchers further?


04/09/2020 – Mohannad Al Ameedi – Agency plus automation: Designing artificial intelligence into interactive systems

Summary

In this paper, the author proposes multiple systems that combine the power of both artificial intelligence and human computation and overcome each one’s weaknesses. The author argues that automating all tasks can lead to poor results, as a human component is needed to review and revise the output to get the best results. The author uses autocomplete and spell checkers as examples to show that artificial intelligence can offer suggestions which humans can then review, revise, or dismiss. The author proposes different systems that use predictive interaction to partially automate users’ tasks, helping them focus more on the things they care about. One of these systems, Data Wrangler, can be used by data analysts during data preprocessing to help them clean up the data and save more than 80% of their work. The users need to set up some data mappings and can accept or reject the suggestions. The author also proposes a project called Voyager that helps with data visualization for exploratory analysis by suggesting visualization elements. The author suggests using AI to automate repeated tasks and offer the best suggestions and recommendations, letting the human decide whether to accept or reject them. This kind of interaction can improve both machine learning results and human interaction.
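As a purely illustrative sketch (not the paper's actual system or DSL), the predictive-interaction loop described above might be approximated as follows, with invented transforms, data, and simulated user decisions.

# Toy sketch of a predictive-interaction loop: the system proposes transforms,
# and the human accepts or rejects each one after seeing a preview.
raw = ["  Alice ", "BOB", "charlie", ""]

candidate_transforms = [
    ("strip surrounding whitespace", lambda rows: [v.strip() for v in rows]),
    ("title-case names",             lambda rows: [v.title() for v in rows]),
    ("drop empty rows",              lambda rows: [v for v in rows if v]),
]

# Stand-in for the user's accept/reject decisions in the interactive loop.
decisions = {"strip surrounding whitespace": True,
             "title-case names": True,
             "drop empty rows": True}

data = raw
for name, transform in candidate_transforms:
    preview = transform(data)          # show the user what the suggestion would do
    if decisions[name]:                # user accepts or rejects the suggestion
        data = preview

print(data)  # ['Alice', 'Bob', 'Charlie']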

Reflection

I found the material presented in the paper to be very interesting. Many discussions about whether machines can replace humans are addressed in this paper. The author notes that machines can do well with the help of humans, and that a human in the loop will always be necessary.

I also like the idea of the Data Wrangler system, as many data analysts and developers spend considerable time cleaning up data, and most of the steps are repeated regardless of the type of data. Automating these steps will help a lot of people do more effective work and focus more on the problem they are trying to solve rather than spending time on things that are not directly related to it.

I agree with the author that humans will always be in the loop, especially in systems that will be used by humans. Advances in AI need humans to annotate or label the data in order to work effectively, and also to measure and evaluate the results.

Questions

  • The author mentioned that the Data Wrangler system can be used by data analysts to help with data preprocessing. Do you think that this system can also be used by data scientists, since most machine learning and deep learning projects require data cleanup?
  • Can you give other examples of AI-infused interactive systems that can help in different domains, can be deployed into a production environment to be used by a large number of users, and can scale well with increased load and demand?


Subil Abraham – 04/08/2020 – Rzeszotarski and Kittur, “CrowdScape”

Quality control in crowdwork is straightforward for straightforward tasks. Tasks like transcribing text in an image are fairly easy to evaluate the quality of because there is only one right answer. Requesters can use things like gold standard tests to evaluate the output of the crowdworkers directly in order to determine if they have done a good job, or use task fingerprinting to determine if the worker behavior indicates that they are making an effort. The authors propose CrowdScape as a way to combine both types of quality analysis, worker output and behavior, through a mix of machine learning and innovative visualization methods. CrowdScape includes a dashboard that provides a birds-eye view of the different aspects of worker behavior in the form of graphs. These graphs showcase both aggregate behaviors of all the crowdworkers as well as the timeline of the individual actions a crowd worker takes on a particular task (scrolling, clicking, typing, and so on). They conduct multiple case studies on different kinds of tasks to show that their visualizations are beneficial in separating out the workers who make an effort to produce quality output from those who are just phoning it in. Behavioral traces identify where the crowdworker spends their time by looking at their actions and how long they spend on each action.

CrowdScape provides an interesting visual solution to the problem of “how to evaluate if the workers are being sincere in the completion of complex tasks”. Creative work especially, where you ask the crowd worker to write something on their own, is notoriously hard to evaluate because there is no gold standard test that you can apply. So I find the inclusion of the behavior tracking visualizer, where different colored lines along a timeline represent different actions, useful. Someone who makes an effort in typing things out will show long blocks of typing with pauses for thinking. I can see how different behavioral heuristics can be applied for different tasks in order to determine if the workers are actually doing the work. I have to admit, though, that I find the scatter plots kind of obtuse and hard to parse. I’m not entirely sure how we’re supposed to read them and what information they are conveying, so I feel like the interface itself could do better in communicating exactly what the graphs are showing. There is promise for releasing this as a commercial or open source product (if it isn’t already one) once the interface has been polished. One last thing is the ability for the requester to group “good” submissions, after which CrowdScape uses machine learning to find other similar “good” submissions. However, the paper only makes mention of this and does not describe how it fits in with the interface as a whole. I felt this was another shortcoming of the design.
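A rough sketch of how such a “find more like these” feature might work, using invented behavioral and output features and a simple nearest-neighbor query (this is my guess, not the paper's implementation), is shown below.

# Hypothetical sketch: the requester marks a few submissions as good, and a
# nearest-neighbor query over feature vectors surfaces similar submissions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# One row per submission: [time_on_task_s, keypresses, response_length]
features = np.array([
    [520, 780, 130],
    [40,   10,  15],
    [505, 800, 125],
    [60,   20,  18],
    [490, 750, 140],
])
good_idx = [0]  # the requester marked submission 0 as good

nn = NearestNeighbors(n_neighbors=3).fit(features)
_, neighbors = nn.kneighbors(features[good_idx])
print(neighbors)  # indices of submissions most similar to the marked-good one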

  1. What would a good interface for the grouping of the “good” output and subsequent listing of other related “good” output look like?
  2. In what kind of crowd work would CrowdScape not be useful (assuming you were able to get all the data that CrowdScape needs)?
  3. Did you find all the elements of the interface intuitive and understandable? Were there parts of it that were hard to parse?


Subil Abraham – 04/08/2020 – Heer, “Agency plus automation”

A lot of work has been done independently along two tangents: improving computers to allow humans to use them better, and helping machines do work by themselves. The paper makes the case that in the quest for automation, research on augmenting humans by improving the intelligence of their tools has fallen by the wayside, and that this provides a rich area of exploration. The paper explores three tools in this space that work with users in a specific domain and predict what they might need or want next, based on a combination of context clues from the user. Two of the three tools, Data Wrangler and Voyager, use domain-specific languages to present to the user the operations that are possible, thus providing a shared representation of data transformations for the user and the machine. The last tool, for language translation, does not provide a shared representation but presents suggestions directly, because there is no real way of using a DSL here short of exposing the parse tree, which doesn’t really make sense for an ordinary end user. The paper also makes several suggestions for future work. These include better monitoring and introspection tools in these human-AI systems, allowing shared representations to be designed by AI based on the domain instead of being pre-designed by a human, and finding techniques that would help identify the right balance between human control and automation for a given domain.
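As a hedged illustration of the shared-representation idea (this is a tiny invented transformation “DSL”, not Wrangler's actual language), each statement can pair a human-readable description with an executable meaning, so both the user and the system operate on the same program.

# Illustrative sketch: transformations as small, readable records that both the
# system and the user can inspect, reorder, and edit.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transform:
    name: str          # human-readable description shown in the UI
    fn: Callable       # executable meaning of the same statement

program = [
    Transform("split column 'name' on ','", lambda rows: [r.split(",") for r in rows]),
    Transform("drop empty rows", lambda rows: [r for r in rows if any(c.strip() for c in r)]),
]

rows = ["Doe, Jane", " , ", "Roe, Richard"]
for step in program:
    rows = step.fn(rows)
    print(step.name, "->", rows)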

The paper uses these three projects as a framing device to discuss the idea of developing better shared representations and their importance in human-AI collaboration. I think it’s an interesting take, especially the idea of using DSLs as a means of communicating ideas between the human user and the AI underneath. The authors backed away from discussing what a DSL would look like for the translation software, since anything beyond autocomplete suggestions doesn’t really make sense in that domain, but I would be interested in further exploration of that question. I also find it interesting, and it makes sense, that people might not like machine predictions being thrust upon them, either because it influences their thinking or because it is just annoying. I think the tools discussed strike a good balance by staying out of the user’s way. Yes, the user will be influenced, but that is inevitable, because the other option is to not give predictions at all, and then you get no benefit.

Although I see the point that the article is trying to make about shared representations (at least, I think I do), I really don’t see the reason for the article existing besides just the author saying “Hey look at my research, this research is very important and I’ve done things with it including making a startup”. The article doesn’t contribute any new knowledge. I don’t mean for that to sound harsh, and I can understand how reading this article is useful from a meta perspective (saves us the trouble of reading the individual pieces of research that are summarized in this article and trying to connect the dots between them).

  1. In the translation task, why wouldn’t a parse tree work? Are there other kinds of structured representations that would aid a user in the translation task?
  2. Kind of a meta question, but do you think this paper was useful on its own? Did it provide anything outside of summarizing the three pieces of research the author was involved in?
  3. Is there any way for the kind of software discussed here, where it makes suggestions to the user, to avoid influencing the user and interfering with their thought process?


04/08/2020 – Mohannad Al Ameedi – CrowdScape: Interactively Visualizing User Behavior and Output

Summary

In this paper, the authors propose a system that can evaluate complex tasks based on both workers’ output and their behavior. Other available systems focus on one aspect of evaluation, either the worker output or the behavior, which can give poor results, especially for complex or creative work. The proposed system, CrowdScape, combines the two through interactive visualization and mixed-initiative machine learning. It offers visualizations that allow users to filter out poor output and focus on a limited number of responses, and it uses machine learning to measure the similarity of responses to the best submissions; that way, the requester can get the best output and the best behavior at the same time. The system provides time-series data for user actions like mouse moves and scrolling to generate a visual timeline for tracing user behavior. The system works only with web pages and has some limitations, but the value it can give to the requester is high, and it enables users to navigate through workers’ results easily and efficiently.

Reflection

I found the method used by the authors to be very interesting. Requesters receive a great deal of information about the workers; visualizing that data can help requesters learn more from it, and the use of machine learning can help a lot with classifying or clustering the best worker outputs and behaviors. Other approaches mentioned in the paper are also interesting, especially for simple tasks that don’t need complex evaluation.

I also did not know that we could get such detailed information about workers’ output and behavior, and I found the YouTube example mentioned in the paper to be very interesting. The example shows that, with the help of JavaScript, everything related to the user’s actions while working on the YouTube video can be returned through MTurk, which can be useful in many scenarios. I agree with the authors’ approach, which combines the best of the two evaluation styles. I think it would be interesting to know how many worker responses are filtered out in the first phase of the process, because that can tell us whether posting the request was even worthwhile. If too many responses are discarded, then the task itself may need to be re-evaluated.

Questions

  • The authors mentioned that their proposed system can help filter out poor outputs in the first phase. Do you think that if too many responses are filtered out, the guidelines or the selection criteria need to be re-evaluated?
  • The authors depend on JavaScript to track information about the workers’ behavior. Do you think MTurk needs to approve that, or is it not necessary? And do you think the workers also need to be notified before accepting the task?
  • The authors mention that CrowdScape can be used to evaluate complex and creative tasks. Do you think they need to add some process to make sure that a task really needs to be evaluated by their system, or do you think the system can also work with simple tasks?


04/08/2020 – Sushmethaa Muhundan – Agency plus automation: Designing artificial intelligence into interactive systems

This work explores strategies for balancing agency and automation by designing user interfaces that enable shared representations between AI and humans. The goal is to productively employ AI methods while also ensuring that humans remain in control. Three case studies are discussed: data wrangling, data visualization for exploratory analysis, and natural language translation. Across each, strategies for integrating agency and automation by incorporating predictive models and feedback into interactive applications are explored. In the first case study, an interactive system is proposed that aims at reducing human effort by recommending potential transformations, gathering feedback from the user, and performing the transformations as necessary. This enables the user to focus on tasks that require the application of their domain knowledge and expertise rather than spending time and effort manually performing transformations. A similar interactive system was developed to aid visualization efforts; the aim was to encourage more systematic consideration of the data and also reveal potential quality issues. In the case of natural language translation, a mixed-initiative translation approach was explored.
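To make the visualization case study more tangible, here is a toy sketch of what a suggestion step might look like; the rules and function below are my own invention (not the paper's system), and they simply propose chart types from column types while leaving the choice to the analyst.

# Hypothetical sketch: propose chart types from column dtypes; the analyst decides.
import pandas as pd

def recommend_charts(df: pd.DataFrame):
    suggestions = []
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            suggestions.append((col, "histogram"))
        else:
            suggestions.append((col, "bar chart of value counts"))
    return suggestions

df = pd.DataFrame({"age": [23, 31, 44], "country": ["US", "IN", "FR"]})
print(recommend_charts(df))  # the analyst reviews the suggestions and keeps the useful ones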

The paper takes a pragmatic view of current AI systems and makes the realistic observation that they are not capable of completely replacing humans. Throughout the paper there is an emphasis on leveraging the complementary strengths of both the human and the AI, which is practical.

Interesting observations were made in the Data Wrangler project with respect to proactive suggestions. If these were presented initially, before the user had a chance to interact with the system, the feature received negative feedback and was ignored. But if the same suggestions were presented while the user was engaging with the system, even though the suggestions were not related to the user’s current task, they were received positively. Users viewed themselves as initiators in the latter scenario and hence felt that they were controlling the system. This observation was fascinating, since it shows that when designing such user interfaces, designers should ensure that their users feel in control and do not feel insecure while using AI systems.

With respect to the second case study, it was reassuring to learn that the inclusion of automated support from the interactive system was able to shift user behavior for the better and helped broaden their understanding of the data. Another positive effect was that the system helped humans combat confirmation bias. This shows that if the interface is designed well, the benefits of AI amplify the results gained when humans apply their domain expertise.

  • The paper deals with designing interactive systems where the complementary strengths of agents and automation systems are leveraged. What could be the potential drawbacks of such systems, if any?
  • How would the findings of this paper be translated in the context of your class project? Is there potential to develop similar interactive systems to improve the user experience of the end-users?
  • Apart from the three case studies presented, what are some other domains where such systems can be developed and deployed?
