Authors: Soya Park, Amy X. Zhang, Luke S. Murray, David R. Karger
Summary
This paper addresses the problem of automating email processing. Through an elaborate need-finding exercise with a diverse set of participants, the paper synthesizes the different aspects of email management that users would want to automate, along with the types of information and computation needed to achieve them. The paper also surveys existing email automation software to understand what has already been achieved. The findings show the need for a richer data model for rules, more ways to manage attention, leveraging internal and external email context, complex processing such as response aggregation, and affordances for senders.
Reflection
This paper demonstrates why need-finding exercises are useful, particularly when the scope of automation is endless and one needs to figure out what deserves attention. This approach also helps developers and companies avoid one-size-fits-all solutions and, when it comes to automation, avoid end-to-end automated solutions that often fall short (in the case of email, it is certainly debatable what qualifies as an end-to-end solution). Despite the limitations mentioned in the paper, I feel it took steps in the right direction by gathering multiple opinions to scope the email-automation problem down into meaningful categories. Probing developers who have shared code on GitHub certainly added value to the search, revealing how experts think about the problem.
One of the findings was that users re-purposed existing affordances in email clients to fit their personal needs. Does that mean the original design did not factor in user needs? Or does it mean that email clients need to evolve as these new user needs emerge?
NLP can support building richer data models for emails by learning their latent structures over time. I am sure there is enough data out there for training models. Of course, there will be biases and inaccuracies, but that is where design can help mitigate the consequences.
Most of the needs were filter/rule-based, and therefore it made sense to deploy YouPS and see how participants used it. Going forward, it will be really interesting to see how non-computer-scientists use a GUI plus natural-language version of YouPS to fit their needs. Those findings would make it clearer which automation aspects should be prioritized for development first.
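The filter/rule-style automations discussed here can be sketched as a small, self-contained message handler. Everything below is illustrative: the function name, the message shape, and the addresses are my assumptions, not the actual YouPS API.

```python
# Hypothetical sketch of a rule-based email handler in the spirit of YouPS:
# a user-defined function that runs on each incoming message and applies
# personal triage rules. The addresses and field names are made up.

NEWSLETTER_SENDERS = {"digest@news.example.com", "updates@shop.example.com"}

def on_message(msg: dict) -> dict:
    """Apply personal rules to an incoming message.

    `msg` is a plain dict with 'sender', 'subject', and 'flags' keys;
    the (possibly modified) message is returned.
    """
    sender = msg["sender"].lower()
    subject = msg["subject"].lower()

    # Rule 1: newsletters are marked read and filed immediately,
    # mirroring the "sub-conscious" triage described in the reflection.
    if sender in NEWSLETTER_SENDERS:
        msg["flags"].add("seen")
        msg["folder"] = "Newsletters"

    # Rule 2: flag messages from my advisor's domain that mention a deadline.
    elif sender.endswith("@university.example.edu") and "deadline" in subject:
        msg["flags"].add("flagged")

    return msg
```

A GUI or natural-language front end would only need to generate small rule bodies like these, which is what makes the filter/rule framing so tractable.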
As an end-user of an email client, some, if not most, of my actions happen at a sub-conscious level. For example, there are certain types of emails that I mark as read without thinking for even one second. I wonder whether a need-finding exercise, as described in this paper, would be able to capture those thoughts. Or, in addition to all the categories proposed in this paper, there could be one where an AI attempts to make sense of your actions and shows you a summary of what it thinks. The user can then reflect on whether the AI's "sensemaking" holds up or needs tweaking, and eventually let it be automated. This is a mixed-initiative solution that can, over time, effectively adapt to the user's needs. It certainly depends on the AI being good enough to interpret the patterns in the user's actions.
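A minimal sketch of the mixed-initiative idea above: the client logs each (sender, action) pair, and once messages from a sender have been marked read without being opened often enough, it proposes an automation for the user to accept or tweak. All names and the threshold here are my assumptions, not anything from the paper.

```python
# Hypothetical mixed-initiative rule proposer: observe user actions,
# detect a repeated pattern, and surface a suggested rule instead of
# automating silently. Threshold and action names are illustrative.

from collections import Counter
from typing import Optional

PROPOSAL_THRESHOLD = 5  # assumed cutoff; a real system would tune this

class RuleProposer:
    def __init__(self) -> None:
        # Count of times each sender's mail was marked read unopened.
        self.mark_read_counts: Counter = Counter()

    def observe(self, sender: str, action: str) -> Optional[str]:
        """Record one user action; return a proposal string the first
        time a pattern crosses the threshold, else None."""
        if action == "mark_read_unopened":
            self.mark_read_counts[sender] += 1
            if self.mark_read_counts[sender] == PROPOSAL_THRESHOLD:
                return f"Auto-mark messages from {sender} as read?"
        return None
```

Keeping the user in the loop at the proposal step is what makes this mixed-initiative rather than end-to-end automation: the system surfaces its "sensemaking" and the user confirms or corrects it.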
Questions
- Setting aside the scope and deadlines of the semester class project, would you consider a need-finding exercise for your project? How would you do it? Who would be the participants?
- Did you find the different categories for automated email processing exhaustive? Or would you have added something else?
- Do you employ any special rules/patterns in handling your email?