The authors conducted a series of studies on email automation. First, they held a workshop with 13 computer science students who could program, asking them to write email rules in natural language or pseudocode in order to identify categories of needed email automation. They then analyzed the source code of email-processing scripts on GitHub to see what programmers need and have already built. Finally, they deployed YouPS, a programmable system that lets users write custom email automation rules, and surveyed participants after they had used it for one week. They found that current email automation cannot meet users' requirements: about 40% of the desired rules cannot be implemented with existing systems. They also summarized these unmet user requirements to guide future development.
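To make concrete what "writing a custom email automation rule" means here, below is a minimal, hypothetical sketch of the kind of rule a user might express. The Message fields and the on_message handler are illustrative stand-ins I am assuming for this example, not the exact API that YouPS exposes in the paper.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for an incoming email; not the paper's actual data model.
@dataclass
class Message:
    sender: str
    subject: str
    content: str
    labels: list = field(default_factory=list)
    read: bool = False

def on_message(msg: Message) -> None:
    """Illustrative rule that runs whenever a new email arrives."""
    # Defer likely newsletters instead of letting them interrupt work.
    if "unsubscribe" in msg.content.lower():
        msg.labels.append("read-later")
        msg.read = True
    # Surface mail from my advisor's domain immediately.
    elif msg.sender.endswith("@university.edu"):
        msg.labels.append("important")

# Example: a newsletter gets deferred, a colleague's note gets flagged.
on_message(Message("news@shop.com", "Weekly deals", "Click here to unsubscribe"))
on_message(Message("advisor@university.edu", "Meeting", "Can we meet tomorrow?"))
```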
Reflections
The topic of this paper is really interesting. We use email every day and are sometimes annoyed by it. Although email platforms already provide some automation and let users write their own scripts, annoying messages still land in the inbox while important ones are classified as spam. Personally, I used to adapt myself to the automation: checking my spam folder and deleting advertisements from my inbox every day. It would be great if the automation were more user-friendly and offered more labels and rules for users to customize. This paper focuses on exactly this problem and conducts a thorough series of studies to understand users' requirements. All the example scripts shown in the results seem useful to me, and I really hope the system can be deployed in practice.
We can also learn from the authors' study methodology. First, they recruited computer science students to elicit general requirements; these students served as a pilot group that gave the researchers an overview of what users need. They then conducted background research, guided by the pilot findings. Finally, they combined the findings from the pilots and the background research to implement a system, and tested it with crowd workers, who stand in for the general public. This sequence of studies is a good example for our own projects, and we may follow a similar workflow in future work.
From my point of view, a significant limitation of the paper is that the system was only tested on a small group of people. Neither computer science students nor programmers who upload their code to GitHub can represent the general public, and even the crowd workers are not fully representative: most of the public knows little about programming and does not complete HITs on MTurk, so their requirements are not considered. If conditions allow, the studies should be repeated with a larger and more diverse population.
Questions:
What are your preferences for email automation? Do you have any preferences that current automation does not support?
Can crowd workers represent the general public?
What should we do if we want to test systems with the public?