Summary
Diakopoulos’s paper argues that algorithms exert a power over users that is rarely made clear to them, even when those algorithms have enormous influence over users’ lives. The author identifies four ways algorithms exert this power: prioritization, classification, association, and filtering. After briefly describing each, the author argues that transparency is key to balancing these powers.
The author then discusses a series of algorithmic systems and shows how each exerts some amount of power without informing the user, using autocompletion on Google and Bing, autocorrection on the iPhone, targeted political emails, price discrimination, and stock trading as examples. The author then draws on interviews to gain insight into how journalists come to understand algorithms and write stories about them. This reporting is a form of accountability: journalists use this information to help users understand the technology around them.
Personal Reflection
I thought this paper brought up a good point that appeared in other readings this week: even if the user is given agency over the final decision, the AI biases them toward a particular set of actions. Even when the weaknesses of the AI are understood, as in the Bansal et al. paper on updates, the participant is still biased by the AI’s actions and recommendations. This power, combined with the stakes of the decisions involved, can greatly change the course of people’s lives.
The author also makes the point that interviewing designers is a form of reverse engineering. I had not thought of it that way before, so it was an interesting insight into journalism. Furthermore, the idea that even though AIs are black boxes, their inputs and outputs can be manipulated to better understand their interior workings was another thing I hadn’t considered.
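As an aside, here is a minimal sketch of what that kind of black-box probing could look like in practice. The `black_box_score` function is a hypothetical stand-in for a system whose code we cannot inspect (it is not from the paper); the point is only that systematically varying inputs and recording outputs lets an outsider infer hidden behavior.

```python
# Black-box probing sketch: vary one input at a time and observe how the
# output changes. `black_box_score` is a hypothetical opaque system
# (e.g., a pricing or ranking algorithm) defined here only for illustration.

def black_box_score(query: str, location: str) -> float:
    """Hypothetical opaque system whose internals we pretend not to see."""
    base = 100.0
    # Hidden behavior we are trying to uncover from the outside.
    if location == "US":
        base *= 1.2
    if "budget" in query:
        base *= 0.9
    return base


def probe(queries, locations):
    """Record the system's output across systematically varied inputs."""
    results = []
    for q in queries:
        for loc in locations:
            results.append((q, loc, black_box_score(q, loc)))
    return results


if __name__ == "__main__":
    observations = probe(["hotel", "budget hotel"], ["US", "DE"])
    for query, location, score in observations:
        print(f"query={query!r:16} location={location}  score={score:.2f}")
    # Comparing rows that differ in only one input suggests how the hidden
    # algorithm responds to that input, without ever seeing its code.
```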
I was actually aware of most of the cases the author presented as ways algorithms exert power. For example, I have used different computers and private browsing modes in the past to ensure I was getting the best deal on travel or hotels.
Lastly, I thought the fact that journalists must uncover these (potential) algorithmic malpractices poses an interesting quandary. Once they do, they publish a story, but most people will likely never hear about it. There is an interesting problem here of how to warn people about red flags in algorithms, which I felt the paper did not discuss thoroughly enough.
Questions
- Are there any specific algorithms that have biased you in the past? How did they? Was the result a net positive or a net negative? What type of algorithmic power did they exert?
- Which of the four types of algorithmic power is the most serious, in your opinion? Which is the least?
- Did any of the cases surprise you? Do they change how you may use technology in the future?
- In what ways can users abuse these AI systems?
I believe various algorithms have biased many of my search results in the past, most notably YouTube’s recommendation algorithm. This was usually a net negative, since it exerted a strong filtering power, whereas I would normally have gone directly to certain channels. On YouTube and similar platforms, this is also the power most readily available for malicious actors to abuse. YouTube is known to take down videos first and investigate the claims afterward, and this system has been heavily weaponized in the past.
To answer your first question, I think algorithms always bias us in some way or another. For example, with Uber’s algorithm, although the app advertises a “drop time,” the actual drop time may be much later given current traffic conditions. However, the advertised drop time is often less than Lyft’s (which may be more honest about real drop times), so I would be biased toward choosing Uber, which I might later regret. I think this points to the importance of being able to audit algorithms externally.