Word count: 667
Summary of the Reading
This paper focuses on how humans interact with AI systems. It opens by arguing that fully automated systems are a long way off and are not currently the best way to do things: because we do not yet have the capability to build fully automated systems, we should not be trying to. Instead, we should make it easier for humans and machines to interact and work together.
The paper then describes three systems that apply these principles to have humans and machines working together. All of them keep the human in charge: the human always has the final say, while the machine offers suggestions that the human can accept or reject. The three systems are a tool for data analytics, a tool for data visualization, and a tool for natural language processing.
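To make the shared design concrete, here is a minimal sketch (my own illustration in Python, not code from the paper) of the suggest-then-confirm pattern the three tools have in common; every name in it is hypothetical.

    # Minimal sketch (not from the paper) of the shared interaction pattern:
    # the machine proposes actions, the human decides which ones actually run.
    # All names here (Action, review_loop, the demo suggestion) are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Action:
        description: str           # human-readable summary shown to the user
        apply: Callable[[], None]  # what actually runs if the user accepts

    def review_loop(suggestions: List[Action]) -> None:
        """Present each machine suggestion; only user-approved actions run."""
        for action in suggestions:
            answer = input(f"Apply suggestion: {action.description}? [y/N] ")
            if answer.strip().lower() == "y":
                action.apply()  # the human accepted, so carry it out
            # otherwise the suggestion is simply dismissed; the human stays in control

    if __name__ == "__main__":
        demo = [Action("fill missing values with the column mean",
                       lambda: print("filled missing values"))]
        review_loop(demo)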
Reflections and Connections
This paper starts with a heavy dose of skepticism about automation, and I think that skepticism is warranted; many people are too optimistic about automation. This class has shown me how much human effort is needed to make these “automated” systems work. Crowd workers often fill in the gaps for systems that present themselves as fully automated but are not. Rather than pretend that people aren’t needed, we should embrace their role and build tools to support the people who make these systems possible. We should be working to help people rather than replace them. It will take a long time to fully replace many human jobs, so we should build tools for the present, not the future.
The paper also argues that a good AI should be easy to access and easy to dismiss. I completely agree. If AI tools are going to be commonplace, we need easy ways to get information from them and to dismiss them when we don’t need them. In this way, they are much like any other tool. For example, suggestion software should offer suggestions but get out of the way when you don’t like any of them.
This paper also brings up the idea that users prefer to be in control of many systems, which I think many researchers miss. People like to have control. I often do a little extra work so that I don’t have to rely on suggested actions and I know exactly what is being done, or I go back through the automated work to check that it was done correctly. For example, I would much rather make my own graph in Excel than use the suggested ones.
Questions
- Is the public too optimistic about the state of automation? What about researchers in different fields? Should we focus less on fully automating systems and more on improving the systems we have with small doses of automation?
- Do companies like Tesla need to be held responsible for misleading consumers about the abilities of their AI technologies? How can we, as computer scientists, help people to better understand the limitations of AI technologies?
- When you use software that has options for automation, are you ever skeptical? Do you ever do things yourself because you think the system might not do them right? When we eventually try to transition to fully automated systems, how can we get people to trust them?
- The natural language translation experiment showed that the automated system led the translators to produce more homogeneous translations. Is this a good thing? When would having more similar results be good? When would it be bad?
- What are some other possible applications for this type of system, where an AI suggests actions and a user decides whether or not to accept them? What are some applications where this kind of system might not work, and what limitations prevent it from working there?
I agree with your reflection comment that people prefer to be in control. It is true that machines can outperform humans on certain tasks, but for them to be effective and efficient, humans should be in the loop at some point. I think humans can never be completely replaced by machines. From both the human and the AI perspective, we should focus on working with each other instead of constantly fighting the battle over which one is replaceable. AI could only become what it is today because it was coded and created by humans, and its data was labeled, its models trained, and its outputs used by humans. This paper is a good step in the direction where humans are in charge and machines are there to “help” when they are needed.