04/22/2020 – Palakh Mignonne Jude – Opportunities for Automating Email Processing: A Need-Finding Study

SUMMARY

In this paper, the authors conduct a mixed-methods investigation to identify users’ expectations around automated email handling, as well as the information and computation required to support it. They divided their study into three probes – ‘Wishful Thinking’, ‘Existing Automation Software’, and ‘Field Deployment of Simple Inbox Scripting’. The first probe was conducted in two stages. The first stage was a formative design workshop in which the researchers enlisted 13 computer science students, all well-versed in programming, to create email-handling rules. The second stage was a survey of 77 participants from a private university, 48% of whom did not have technical backgrounds. The authors identified a need for automated systems to have richer data models, use internal and external context, manage attention, and alter the presentation of the inbox. In the second probe, the authors mined GitHub repositories to identify needs that programmers had already implemented. Some of the additional needs they identified included processing, organizing, and archiving content; altering the default presentation of email clients; email analytics; and productivity tools. As part of the third probe, the authors deployed their ‘YouPS’ system, which enables users to write email-processing rules in Python. For this probe, they enlisted 12 email users (all of whom could code in Python). Common themes across the rules generated included the creation of email modes, the use of interaction history, and the non-use of existing email client features. The authors found that users did indeed desire more automation in their email management, especially in terms of richer data models, internal and time-varying external context, and automated content processing.
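To make this concrete, below is a minimal sketch (my own illustration, not the actual YouPS API) of what a mode-based rule could look like in plain Python; the route_message helper, the advisor address, and the folder names are all assumptions made for the example.

```python
# A hypothetical mode-based email rule, written against Python's standard
# email.message objects. This is an illustrative sketch, not YouPS code.
from email.message import EmailMessage

ADVISOR = "advisor@university.edu"   # placeholder address (assumption)

def route_message(msg: EmailMessage, focus_mode: bool) -> str:
    """Return the folder a message should be filed into under the current mode."""
    sender = msg.get("From", "").lower()
    subject = msg.get("Subject", "").lower()
    if focus_mode and ADVISOR not in sender:
        return "Later"          # defer everything except the advisor's mail
    if "newsletter" in subject or "unsubscribe" in subject:
        return "Newsletters"    # file list mail out of the inbox
    return "INBOX"

# Tiny usage example
m = EmailMessage()
m["From"] = "news@lists.example.com"
m["Subject"] = "Weekly newsletter"
print(route_message(m, focus_mode=True))   # -> "Later"
```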

REFLECTION

I liked the overall motivation of the study and especially resonated with the need for automated content processing, as I would definitely benefit from having mail attachments downloaded and stored appropriately. The subjects who mentioned wanting a reaction to signal that a message had been viewed reminded me of Slack’s interface, which allows you to ‘Add reaction’. I also believe that a tagging feature would help ensure that key respondents are alerted to tasks they must perform (especially in the case of longer emails).

I liked the setup of Probe 3 and found it to be an interesting study. However, I wonder about the adoptability of such a system, and, as the authors mention in their future work, I would be very interested to know how non-programmers would make use of such rules through a drag-and-drop GUI.

The authors found that the subjects (10 out of 12) preferred to write rules in Python rather than use the mail client’s interface. This reminded me of our prior class discussion of the paper ‘Agency plus automation: Designing artificial intelligence into interactive systems’, in which we discussed how humans prefer to remain in control of a system and, in a broader context, the level of automation that users desire.

QUESTIONS

  1. The studies included participants whose average age was under 30, most of whom were affiliated with a university. Would the needs of business professionals vary in any way from the ones identified in this study?
  2. Would business organizations be welcoming of a platform such as the YouPS system? Would this raise any security concerns considering that the system is able to access the data stored in the emails?
  3. How would you rate the design of the YouPS interface? Do you see yourself using such a system to develop rules for your email?
  4. Are there any needs, in addition to the ones mentioned in this paper, that you feel should be added?
  5. The authors state that even though two of the three studies focused on programmers, the needs identified were similar for programmers and non-programmers. Do you agree with this justification? Could any bias have crept in as part of this experimental setup?


04/22/2020 – Palakh Mignonne Jude – SOLVENT: A Mixed Initiative System for Finding Analogies between Research Papers

SUMMARY

The authors attempt to assist researchers in finding analogies in other domains in an effort to aid interdisciplinary research. They propose a modified annotation scheme that extends the work described by Hope et al. [1] and contains four elements – Background, Purpose, Mechanism, and Findings. The authors conduct three studies – the first involving the sourcing of annotations from domain-expert researchers, the second using SOLVENT to find analogies with real-world value, and the third scaling up SOLVENT through crowdsourcing. In each study, semantic vector representations were created from the annotations. In the first study, the dataset focused on papers from the CSCW conference and was annotated by members of the research team. In the second study, the researchers worked with an interdisciplinary team spanning bioengineering and mechanical engineering to identify whether SOLVENT can aid in finding analogies not easily surfaced through keyword or citation-tree searches. In the third study, the authors used crowd workers from Upwork and AMT to perform the annotations. The authors found that these crowd annotations did have substantial agreement with researcher annotations, but the workers struggled with purpose and mechanism annotations. Overall, the authors found that SOLVENT helped researchers find analogies more effectively.

REFLECTION

I liked the motivation for this paper – especially Study 3, which used crowdworkers for the annotations – and was glad to learn that the authors found substantial agreement between crowdworker and researcher annotations. This was an especially welcome finding, as the corpus that I work with also contains scientific text, and scaling its annotation has been a concern in the past.

As part of the second study, the authors mention that they trained a word2vec model on 3,000 papers in a dataset curated from the three domains under consideration. This made me wonder about the generalizability of their approach. Would it be possible to generate scientific word vectors that span multiple domains? I think it would be interesting to see how the performance of such a system would measure against the existing one. In addition, word2vec is known to have issues with out-of-vocabulary words, so I wondered whether the authors had made any provisions to deal with them.
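As a rough illustration of the kind of representation being discussed (my own sketch, not the authors’ code), word vectors can be averaged over an annotated section, with out-of-vocabulary words simply skipped; the toy corpus below is a made-up assumption.

```python
# Sketch: averaging word2vec vectors over an annotated section, skipping
# out-of-vocabulary tokens. Toy corpus and sentences are assumptions.
import numpy as np
from gensim.models import Word2Vec

corpus = [
    "we propose a crowdsourcing method for mining analogies".split(),
    "we train word embeddings on scientific abstracts".split(),
]
model = Word2Vec(corpus, vector_size=50, min_count=1, epochs=50)

def section_vector(tokens, model):
    """Average the vectors of in-vocabulary tokens; OOV tokens are dropped."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

purpose_a = section_vector("mining analogies from abstracts".split(), model)
purpose_b = section_vector("finding analogies in scientific papers".split(), model)
cosine = np.dot(purpose_a, purpose_b) / (
    np.linalg.norm(purpose_a) * np.linalg.norm(purpose_b) + 1e-9
)
print(f"purpose similarity: {cosine:.2f}")
```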

QUESTIONS

  1. In addition to the domains mentioned by the authors in the discussion section, what other domains can SOLVENT be applied to and how useful do you think it would be in those domains?
  2. The authors used majority vote as the quality control mechanism for Study 3. What more sophisticated measures could be used instead of majority vote? Would any of the methods proposed in the paper ‘CrowdScape: Interactively Visualizing User Behavior and Output’ be applicable in this setting?
  3. How well would SOLVENT extend to the abstracts of Electronic Theses and Dissertations, which contain a mix of STEM as well as non-STEM research? Would any modifications be required to the annotation scheme presented in this paper?

REFERENCES

  1. Tom Hope, Joel Chan, Aniket Kittur, and Dafna Shahaf. 2017. Accelerating Innovation Through Analogy Mining. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 235–243.


04/22/2020 – Bipasha Banerjee – The Knowledge Accelerator: Big Picture Thinking in Small Pieces

Summary 

The paper talks about breaking larger tasks into smaller sub-tasks and then evaluating the performance of such systems. Here, the authors’ approach is to divide a large piece of work, mainly online work, into smaller chunks that crowdworkers then complete. The authors created a prototype system called “Knowledge Accelerator”. Its main goal is to use crowdworkers to find answers to open-ended, complex questions. However, each worker sees only part of the entire problem and works on a small portion of the task. It is mentioned that the maximum payment for any one task was $1, which gives an idea of how granular and simple the authors wanted each crowdworker’s task to be. The experiment was divided into two phases. In the first phase, workers labeled categories that were later used in the classification task. The second phase, on the other hand, required workers to clean the output the classifier produced; this task involved the workers looking at the existing clusters and then tagging new clips into an existing or a new cluster.

Reflection

I liked the way the authors approached the problem by dividing a huge problem into smaller, manageable parts, which in turn become easy for workers to annotate. For our course project, we initially wanted the workers to read an entire chapter from an electronic thesis or dissertation and then label the department to which they think the document belongs. We were not considering the fact that such a task is huge and would take a person around 15-30 minutes to complete. Dr. Luther pointed us in the right direction by asking us to break the chapter into parts and then present those to the workers. The paper also mentioned that too much context could prove confusing for workers; this can help us decide how best to divide the chapters so that we provide just the right amount of context.
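For what it is worth, a rough sketch of the kind of chunking we have in mind is below; the 250-word limit and the input file name are arbitrary assumptions, not values from the paper.

```python
# Sketch: greedily group paragraphs of a chapter into worker-sized chunks.
# The 250-word limit and "chapter1.txt" are placeholder assumptions.
def chunk_text(text: str, max_words: int = 250) -> list[str]:
    """Greedily group paragraphs into chunks of roughly max_words words each."""
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

chapter = open("chapter1.txt").read()   # hypothetical input file
for i, chunk in enumerate(chunk_text(chapter)):
    print(f"Task {i}: {len(chunk.split())} words")
```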

I liked how the paper described their ways of finding the sources and their filtering and clustering techniques. It was interesting to see the challenges they encountered while designing the task. This portion helps future researchers in the field understand the mistakes made and the decisions the authors took. I would view this paper as a guideline on how best to break a task into pieces so that it is easy, yet detailed enough, for Amazon Mechanical Turk workers.

Finally, I would like to point out that the paper mentions that only workers from the US were considered. The reason, given in a footnote, is that because of currency conversion, the value of a dollar is relative. I thought this was a very thoughtful point to bring to light, and it helps maintain the quality of the work involved. That said, I think a live currency-conversion API could have been incorporated to compensate workers accordingly. Since the paper deals with searching for relevant answers to complex questions, involving workers from other countries might help improve the final answer.

Questions

  1. How are you breaking a task into sub-tasks for the course project? (We had to modify our task design for our course project and divide a larger piece of text into smaller chunks)
  2. Do you think that including workers from other countries would help improve the answers (after accounting for the currency difference and compensating based on the current exchange rate)?
  3. How can we improve the travel-related questions? Would utilizing workers who are “travel-enthusiasts or bloggers” improve the situation?

Note: This is an extra submission for this week’s reading.


4/22/20 – Lee Lisle – Opportunities for Automating Email Processing: A Need-Finding Study

Summary

Park et al.’s paper covers the thankless task of email management. They discuss how people spend too much time reading and responding to emails, and how it might be nice to get some sort of automation going for dealing with the deluge of electronic ASCII flooding our days. In their process, they interviewed 13 people in a design workshop setting, where the participants came up with 42 different rules for dealing with emails; from these rules, they identified five overarching categories. Using this data, the authors then sent out a survey and received 77 responses on how people would use a “smart robot” to handle their emails, from which they identified six categories of possible automation. The authors then took to GitHub to find existing automation that coders have come up with to deal with email, searching for codebases that messed with the IMAP standard; this yielded eight categories. They then took all of the data thus far, created an email automation tool they called YouPS (cute), and identified how today’s email clients would need to change to fully support the desired automation.

Personal Reflection

I have to admit, when I first saw that they specified they gathered 13 “email users,” I laughed. Isn’t that just “people”? Furthermore, a “smart robot” is just a machine learning algorithm. And then there’s the entire premise of calling their mail handler “YouPS.” This paper was full of funny little expressions and puns that I aspire to create one day.

While I liked that they found that senders wanted recipients to have an easier time dealing with their email, I wasn’t terribly surprised by it. If I wanted a reply to an email, I’d rather the recipient get the email and be able to deal with it immediately rather than risk them forgetting about my request altogether. That’s the best of both worlds, where all parties involved have the right amount of time to devote to pressing concerns.

I also appreciated that they were able to get responses from people not affiliated with a university, as research is often too narrowly focused on college students.

Lastly, I enjoyed the abstraction they created with their YouPS system. While it was essentially just an API that allowed users to use standard Python with an email library, it seemed genuinely useful for many different tasks.
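To give a feel for what “standard Python with an email library” amounts to (my own sketch, not code from the paper), a rule can be written directly against the standard imaplib module; the server, credentials, sender address, and folder name below are placeholder assumptions.

```python
# Sketch: file unread mailing-list mail into a folder and mark it read,
# using only the standard library. All identifiers below are placeholders.
import imaplib
import email

conn = imaplib.IMAP4_SSL("imap.example.com")
conn.login("user@example.com", "app-password")
conn.select("INBOX")

status, data = conn.search(None, '(UNSEEN FROM "announce@lists.example.com")')
for num in data[0].split():
    _, msg_data = conn.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    print("Filing:", msg["Subject"])
    conn.copy(num, "Lists")                          # assumes a "Lists" folder exists
    conn.store(num, "+FLAGS", "(\\Seen \\Deleted)")  # mark read, remove from inbox
conn.expunge()
conn.logout()
```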

Questions

  1. What is your biggest pet peeve about the way email is typically handled? How might automation solve that issue?
  2. Grounded theory is a method that pulls a ton of data out of written or verbal responses, but it requires significant effort. Did the team here use grounded theory effectively, and was it appropriate for this format? Why or why not?
  3. How might you solve sender issues in email? Is it a worthwhile goal, or is dealing with those emails trivial?
  4. What puns can you create based on your own research? Would you use them in your papers? Would you go so far as to include them in the titles of your works?


04/22/20 – Fanglan Chen – The Knowledge Accelerator: Big Picture Thinking in Small Pieces

Summary

Hahn’s paper “The Knowledge Accelerator: Big Picture Thinking in Small Pieces” utilizes a distributed information synthesis task as a probe to explore the opportunities and limitations of accomplishing big-picture thinking by breaking it down into small pieces. Most traditional crowdsourcing work targets simple and independent tasks, but real-world tasks are usually complex and interdependent, which may require big-picture thinking. A few current crowdsourcing approaches support the breaking-down of complex tasks by depending on a small group of people to manage the big-picture view and control the ultimate objective. This paper proposes the idea that a computational system can automatically support big-picture thinking entirely through the small pieces of work conducted by individuals. The researchers implement distributed information synthesis in a prototype system and evaluate its output on different topics to validate the viability, strengths, and weaknesses of their proposed approach.

Reflection

I think this paper introduces an innovative approach to knowledge collection which can potentially replace a group of intermediate moderators/reviewers with an automated system. The example task explored in the paper is to answer a given question by collecting information in a parallel way. That relates to the question of how the proposed system enhances answer quality by compiling the collected pieces of information into a structured article. For similar question-answering tasks, we already have a variety of online communities and platforms. Take Stack Overflow, for example: it is a site for enthusiast programmers to learn and share their programming knowledge. A large number of professional programmers answer questions on a voluntary basis, and usually a question receives several answers detailing different approaches, with the best solution at the top, marked with a green check. You can check other answers as well in case the one you tried does not work for you. I think the variety of answers from different people sometimes enhances the possibility that the problem can be solved; the proposed system, however, reduces that kind of diversity in the answers. Also, since a single informative article is the system’s final output for a given question, its quality is important, but it seems hard for the vote-then-edit pattern to ensure the quality of the final answer without any reviewers.
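To make the quality-control concern concrete, here is a toy sketch (my own illustration, not the paper’s actual pipeline) of plain majority voting over worker labels; a thin majority decides the outcome with no reviewer in the loop.

```python
# Toy sketch of majority-vote aggregation over worker labels.
from collections import Counter

def majority_vote(labels: list[str]) -> tuple[str, float]:
    """Return the winning label and the fraction of workers who chose it."""
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels)

votes = ["keep", "keep", "discard", "keep", "discard"]
winner, agreement = majority_vote(votes)
print(winner, f"{agreement:.0%}")   # "keep 60%" -- a thin margin, and no reviewer check
```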

In addition, we need to be aware that much work in the real world can hardly be conducted via crowdsourcing because of the difficulty of decomposing tasks into small, independent units, and, more importantly, because the objective goes beyond accelerating computation or collecting complete information. For creative work such as writing a song, editing a film, or designing a product, the goal is more to encourage creativity and diversity. In those scenarios, even with a clear big picture in mind, it is very difficult to assemble the small pieces of work produced by a group of recruited crowd workers into a good final product. As a result, I think the proposed approach is limited to comparatively less creative tasks where each piece can be decomposed and processed in an independent way.

Discussion

I think the following questions are worthy of further discussion.

  • Do you think the proposed system can completely replace the role of moderators/reviewers in that big picture? What are the advantages and disadvantages?
  • This paper discusses the proposed system for the task of question answering. What other possible applications could the system be helpful for?
  • Can you think of any ways to improve the system so that it scales up to other domains, or even non-AI domains?
  • Are you considering a breaking-down approach in your course project? If yes, how would you like to approach it?


04/22/2020 – Vikram Mohanty – Opportunities for Automating Email Processing: A Need-Finding Study

Authors: Soya Park, Amy X. Zhang, Luke S. Murray, David R. Karger

Summary

This paper addresses the problem of automating email processing. Through an elaborate need-finding exercise with different participants, the paper synthesizes the different aspects of email management that users would want to automate and the types of information and computation needed to achieve that. The paper also conducts a survey of existing email automation software to understand what has already been achieved. The findings show the need for a richer data model for rules, more ways to manage attention, leveraging internal and external email context, complex processing such as response aggregation, and affordances for senders.

Reflection

This paper demonstrates why need-finding exercises are useful, particularly when the scope of automation is endless and one needs to figure out what deserves attention. This approach also helps developers and companies avoid proposing one-size-fits-all solutions and, when it comes to automation, avoid end-to-end automated solutions that often fall short (in the case of email, it’s certainly debatable what qualifies as an end-to-end solution). Despite the limitations mentioned in the paper, I feel it took steps in the right direction by gathering multiple opinions to help scope the email-processing automation problem down into meaningful categories. Probing developers who have shared code on GitHub certainly added great value in understanding how experts think about the problem.

One of the findings was that users re-purposed existing affordances in their email clients to fit their personal needs. Does that mean the original design did not factor in user needs? Or does it mean that email clients need to evolve to meet these new user needs?

NLP can support building richer data models for emails by learning their latent structures over time. I am sure there’s enough data out there for training models. Of course, there will be biases and inaccuracies, but that’s where design can help mitigate the consequences.
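As a toy example of the kind of latent structure that could be learned (entirely my own sketch, with made-up training examples), a simple text classifier could flag emails that appear to need a reply:

```python
# Toy sketch: learning a "needs reply?" signal from email text.
# The training examples are invented; this is not from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Can you send me the report by Friday?",
    "Reminder: your package has shipped.",
    "Are you free to meet tomorrow at 3pm?",
    "Weekly newsletter: top stories in AI.",
]
needs_reply = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, needs_reply)
print(clf.predict(["Could you review my draft when you get a chance?"]))
```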

Most of the needs were filter/rule-based, and it therefore made sense to deploy YouPS and see how participants used it. Going forward, it will be really interesting to see how non-computer scientists use a GUI + natural-language version of YouPS to fit their needs. The findings there will make it clear which automation aspects should be prioritized for development first.

As an end-user of an email client, some, if not most, of my actions happen at a sub-conscious level. For example, there are certain types of emails that I mark as read without thinking for even one second. I wonder whether a need-finding exercise, as described in this paper, would be able to capture those behaviors. Or, in addition to all the categories proposed in this paper, perhaps there should also be one where an AI attempts to make sense of your actions and shows you a summary of what it thinks; the user can then reflect on whether the AI’s “sensemaking” holds up or needs tweaking, and eventually have it automated. This is a mixed-initiative solution that can, over a period of time, adapt to the user’s needs. It certainly depends on the AI being good enough to interpret the patterns in the user’s actions.

Questions

  1. Keeping the scope/deadlines of the semester class project aside, would you consider a need-finding exercise for your class project? How would you do it? Who would be the participants?
  2. Did you find the different categories for automated email processing exhaustive? Or would you have added something else?
  3. Do you employ any special rules/patterns in handling your email?


4/22/20 – Lee Lisle – SOLVENT: A Mixed Initiative System for Finding Analogies between Research Papers

Summary

Chan et al.’s paper discusses a way to find similarities between research papers through mixed-initiative analysis. They use a combination of humans, who identify sections of abstracts, and machine learning algorithms, which identify key words in those sections, in order to distill the research down into a base analogy. They then compare across abstracts to find papers with the same or similar characteristics. This enables researchers to find similar research as well as potentially apply new methods to different problems. They evaluated these techniques through three studies. The first study used grad students reading and annotating abstracts from their own domain as a “best-case” scenario; their tool worked very well with the annotated data compared to using all words. The second study looked at helping find analogies to fix similar problems, using out-of-domain experts to annotate abstracts; their tool found more possible new directions than the all-words baseline. Lastly, the third study sought to scale up using crowdsourcing. While the annotations were of lesser quality with MTurkers, they still outperformed the all-words baseline.
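A back-of-the-envelope version of that comparison (my own sketch, not SOLVENT’s implementation) restricts similarity to the annotated purpose text rather than the full abstract; the example purposes are made up.

```python
# Sketch: compare papers by their annotated purpose text only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

purposes = {
    "paper_a": "reduce reader fatigue when skimming long documents",
    "paper_b": "help readers skim long documents more quickly",
    "paper_c": "improve battery life of mobile sensors",
}
ids = list(purposes)
vectors = TfidfVectorizer().fit_transform(purposes.values())
sims = cosine_similarity(vectors[0], vectors[1:]).ravel()
for pid, score in zip(ids[1:], sims):
    print(f"{ids[0]} vs {pid}: {score:.2f}")   # paper_b should score higher
```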

Personal Reflection

I liked this tool quite a bit, as it seems a good way to get oneself “unstuck” in the research black hole and find new ways of solving problems. I also enjoyed that the annotations didn’t necessarily require domain-specific or even researcher-specific knowledge, even with the various jargon used. Furthermore, though it confused me initially, I liked how they used their own abstract as an extra figure of sorts – annotating their own abstract with their own approach was a clever way to show how the approach works without one having to read the entire paper.

I did find a few things confusing about their paper, however. They state that the GloVe model doesn’t work very well in one section, but then use it in another. Why go back to using it if it had already disappointed the researchers in one phase? Another complication I noticed was that they didn’t define the dataset in the third study. Where did the papers come from? I can glean from reading it that it was from one of the prior two studies, but I think it’s relevant to ask whether it was the domain-specific or the domain-agnostic dataset (or both).

I was curious about the total deployment time for this kind of thing. Did they get all of the papers analyzed by the crowd in 10 minutes? 60 minutes? A day? Given how parallelizable the task is, I can imagine the analysis being performed very quickly. While this task doesn’t need to be performed quickly, speed could be an excellent bonus of the approach.

Questions

  1. This tool seems extremely useful. When would you use it? What would you hope to find using this tool?
  2.  Is the annotation of 10,000 research papers worth $4000? Why or why not?
  3. Based on their future work, what do you think is the best direction to go with this approach? Considering the cost of the crowdworkers, would you pay for a tool like this, and how much would be reasonable?


04/22/2020 – Vikram Mohanty – SOLVENT: A Mixed Initiative System for Finding Analogies between Research Papers

Authors: Joel Chan, Joseph Chee Chang, Tom Hope, Dafna Shahaf, Aniket Kittur.

Summary

This paper addresses the problem of finding analogies between research problems across similar or different domains by providing computational support. The paper proposes SOLVENT, a mixed-initiative system in which humans annotate aspects of research papers that denote their background (the high-level problems being addressed), purpose (the specific problems being addressed), mechanism (how they achieved their purpose), and findings (what they learned or achieved), and a computational model constructs a semantic representation from these annotations that can be used to find analogies among the research papers. The authors evaluated this system against baseline information retrieval approaches and also with potential target users, i.e., researchers. The findings showed that SOLVENT performed significantly better than the baseline approaches and that the analogies were useful for the users. The paper also discusses implications for scaling up.

Reflection

This paper demonstrates how human-interpretable feature engineering can improve existing information retrieval approaches. SOLVENT addresses an important problem faced by researchers, i.e., drawing analogies to other research papers. Drawing from my own personal experience, this problem has presented itself at multiple stages, be it while conceptualizing a new problem, figuring out how to implement a solution, trying to validate a new idea, or, eventually, writing the Related Work section of a paper. It goes without saying that SOLVENT, if commercialized, would be a boon for the thousands of researchers out there. It was nice to see the evaluation include real graduate students, as their validation seemed the most applicable for such a system.

SOLVENT demonstrates the principles of mixed-initiative interfaces effectively by leveraging the complementary strengths of humans and AI. Humans are better at understanding context, in this case that of a research paper. AI can help in quickly scanning through a database to find other articles with similar “context”. I really like the simple idea behind SOLVENT, i.e., how would we, as humans, find analogical ideas? We would look for a similar purpose and/or similar or different mechanisms. So, how about we do just that? It’s a great case of how human-interpretable intuitions translate into intelligent system design, and of how they score over end-to-end automation. As I have reflected for previous papers, it always helps to look for answers by beginning from the problem and understanding it better, and that’s reflected in what SOLVENT ultimately achieves, i.e., scoring over an end-to-end automation approach.

The findings are definitely interesting, particularly the drive toward scaling up. Turkers certainly provided an improvement over the baseline, even though their annotations fared worse than those of the experts and the Upwork crowd. I am not sure what the longer-term implications are here, though. Should Turkers be used to annotate larger datasets? Should the researchers figure out a way to improve Turker annotations? Or should they train the annotators? These are all interesting questions. One long-term implication is to re-format the abstract into a background + purpose + mechanism + findings structure right at the initial stage; this still does not address the thousands of prior papers, though. Overall, this paper certainly opens doors for future analogy-mining approaches.

Questions

  1. Should conferences and journals re-format the abstract template into a background + purpose + mechanism + findings to support richer interaction between domains and eventually, accelerate scientific progress?
  2. How would you address annotating larger datasets?
  3. How did you find the feature engineering approach used in the paper? Was it intuitive? How would you have done it differently?


04/22/20 – Jooyoung Whang – SOLVENT: A Mixed Initiative System for Finding Analogies between Research Papers

This paper proposes a novel mixed-initiative method called SOLVENT that has the crowd annotate relevant parts of a document based on purpose and mechanism, and represents the documents in a vector space. The authors identify that representing technical documents using the purpose-mechanism concept with crowd workers runs into obstacles such as technical jargon, multiple sub-problems in one document, and the presence of understanding-oriented papers. Therefore, the authors modify the structure to hold background, purpose, mechanism, and findings instead. With each document represented by this structure, the authors were able to apply natural language processing techniques to perform analogical queries, and they found better query results than with baseline all-words representations. To scale the software, the authors had workers from Upwork and MTurk annotate technical documents. The authors found that the workers struggled with the concepts of purpose and mechanism, but their annotations still provided improvements for analogy mining.
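To illustrate the kind of combination query this structure enables (a toy sketch of my own, not the paper’s model), one can score “similar purpose, different mechanism” by combining per-section similarities; the vectors below are invented.

```python
# Toy sketch: "similar purpose, different mechanism" query over per-section
# embeddings. All vectors are invented for illustration.
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

query = {"purpose": np.array([0.9, 0.1]), "mechanism": np.array([0.2, 0.8])}
candidates = {
    "paper_x": {"purpose": np.array([0.85, 0.2]), "mechanism": np.array([0.1, 0.9])},
    "paper_y": {"purpose": np.array([0.8, 0.15]), "mechanism": np.array([0.9, 0.1])},
}

for pid, sections in candidates.items():
    score = cos(query["purpose"], sections["purpose"]) - cos(query["mechanism"], sections["mechanism"])
    print(pid, round(score, 2))   # paper_y wins: same purpose, different mechanism
```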

I think this study would go nicely together with document summarization studies. It would especially help since the annotations are done by specific categories. I remember that one of our class’s projects involved ETDs and required summaries; I think this study could have benefited that project, given enough time.

This study could also have benefited my own project. One of the sample use cases that the paper introduced was improving creative collaboration between users. This is similar to my project, which is about providing creative references for a creative writer. However, if I wanted to apply this study to my project, I would need to additionally have each of the references provided by the MTurk workers labeled by purpose and mechanism, which would cost additional funds per creative reference. This study would have been very useful if I had enough money and wanted higher-quality content rankings in terms of analogy.

It was interesting that the authors mentioned that papers from different domains could still have the same purpose-mechanism pairing. It made me wonder whether researchers would really want similar purpose-mechanism papers from a different domain. I understand that multi-disciplinary work is being highlighted these days, but would each of the disciplines involved in a study try to address the same purpose and mechanism? Wouldn’t they address different components of the project?

The following are the questions that I had while reading the paper.

1. The paper notes that many technical documents are understanding-oriented papers that have no purpose-mechanism mappings. The authors resolved this problem by defining a larger mapping that is able to include these documents. Do you think the query results would have been of higher quality if the mapping had been kept compact instead of being enlarged? For example, would it have helped if the system separated purpose-mechanism and purpose-findings?

2. As mentioned in my reflection, do you think the disciplines involved in a multi-disciplinary project all have the same purpose and mechanism? If not, why?

3. Would you use this paper for your project? To put it in other words, does your project require users or the system to locate analogies inside a text document? How would you use the system? What kinds of queries would you need out of the possible combinations (background, purpose, mechanism, findings)?


04/22/20 – Jooyoung Whang – Opportunities for Automating Email Processing: A Need-Finding Study

In this paper, the authors explore the kinds of automated functionality that users would want in their E-mail interfaces. The authors held workshops with technical and non-technical people to learn about these needs. They found needs such as additional or richer E-mail data models involving latent information, internal or external context, using mark-as-read to control notifications, self-destructing event E-mails, different representations of E-mail threads, and content processing. Afterward, the authors mined GitHub repositories that held actual implementations of E-mail automation and labeled them. They found that the prevalent implementations automated repetitive processing tasks. Beyond the needs identified in their first probe, the authors also found needs such as using the E-mail inbox as middleware and analyzing E-mail statistics. The authors conducted a final study by providing users with their own programmable E-mail inbox interface called YouPS.

I really enjoyed reading the section about probes 2 and 3, where actual implementations were done using IMAP libraries. I especially liked the one about notifying the respondent using flashing visuals on a Raspberry Pi; it sounds like a very creative and fun project. I also noticed that many of the automations were for processing repetitive tasks, which again confirms the machine affordance of handling many repetitive tasks well.
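For the curious, a polling loop like the following (my own sketch; the server details are placeholders and flash_light() is a hypothetical GPIO hook) is roughly what such a notifier could look like with the standard imaplib module.

```python
# Sketch: poll an inbox and trigger a physical notification, in the spirit
# of the Raspberry Pi example. Server details and flash_light() are assumptions.
import imaplib
import time

def unseen_count(host: str, user: str, password: str) -> int:
    conn = imaplib.IMAP4_SSL(host)
    conn.login(user, password)
    conn.select("INBOX", readonly=True)
    _, data = conn.search(None, "UNSEEN")
    conn.logout()
    return len(data[0].split())

def flash_light():
    print("** flash **")   # stand-in for toggling a GPIO pin on the Pi

while True:
    if unseen_count("imap.example.com", "user@example.com", "app-password") > 0:
        flash_light()
    time.sleep(60)   # poll once a minute
```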

I personally thought YouPS was a very useful tool. I also frequently have trouble organizing my tens of thousands of unread E-mails, consisting mainly of advertisements, and I think YouPS could serve me nicely in fixing this. I found that YouPS is public and accessible online (https://youps.csail.mit.edu/editor). I will definitely return to this interface once time permits and start dealing with my monstrosity of an inbox. YouPS nicely addresses the complexity of developing a custom inbox management system. I am not familiar with IMAP, which hinders me from implementing E-mail-related functionality in my personal projects, so a library like YouPS that simplifies the protocol would be very valuable to me.

The following are the questions that I had while reading this paper.

1. What kind of E-mail automation would you want to build, given the ability to implement any automation functionality?

2. The authors mentioned in their limitations that their study’s participants were mostly technical programmers. What differences would there be between programmers and non-programmers? If the study had been done with only non-programmers, do you think the authors would have seen different results? Is there something specifically relevant to programmers that resulted in the existing implementations of E-mail automation? For example, perhaps programmers usually deal with more technical E-mails?

3. What interface is desirable for non-programmers to meet their needs? The paper mentions that one participant did not like that current interfaces required many clicks and much typing to create an automation rule, and that the rules didn’t even work properly. What would be a good way for non-programmers to develop an automation rule? The creation of a rule requires a lot of logical thinking comprising many if-statements. What would be a minimum requirement or qualification for non-programmers to create an automation rule?
