Algorithms have pervaded our everyday lives because computers have become essential to them. This pervasiveness means algorithms need close scrutiny to ensure they function as they should, without bias, and obey the guarantees their creators have promised. Algorithmic accountability is a category of journalism in which journalists investigate these algorithms to validate their claims and uncover any violations. The goal is to find mistakes, omissions, or bias creeping into the algorithms, because although computers do exactly what they are told, they are still created by humans with blind spots. The authors classify algorithmic decision-making into four kinds of decisions. They argue that transparency alone is not enough, because full transparency is often blocked by trade-secret claims. Since journalists are frequently dealing with black-box algorithms, the authors rely on reverse engineering: feeding in inputs and observing the outputs, without looking at the inner workings. They examine five case studies of journalists who have done such investigations through reverse engineering, and they offer a theory and a methodology for finding newsworthy stories in this space.
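To make the reverse-engineering idea concrete, here is a minimal sketch of how such an input-output audit might look in code. Everything here is hypothetical: `query_black_box` stands in for whatever opaque system a journalist is probing (a pricing service, a ranking feed), and the hidden ZIP-code surcharge is an invented behavior purely for illustration.

```python
def query_black_box(profile: dict) -> float:
    """Placeholder for the opaque system under audit; in a real
    investigation this would be an HTTP request or form submission
    to the service being studied."""
    base = 100.0
    # Hidden behavior the auditor cannot see: a surcharge by ZIP code.
    return base * (1.25 if profile["zip"].startswith("606") else 1.0)

def probe(attribute: str, values: list, baseline: dict) -> dict:
    """Sweep one input attribute across candidate values, holding the
    rest of the profile fixed, and record the black box's outputs."""
    results = {}
    for v in values:
        variant = dict(baseline, **{attribute: v})
        results[v] = query_black_box(variant)
    return results

if __name__ == "__main__":
    baseline = {"zip": "10001", "age": 35}
    for zip_code, price in probe("zip", ["10001", "60601", "94105"], baseline).items():
        print(f"zip={zip_code} -> price={price:.2f}")
    # A systematic output difference correlated with a single varied input
    # is the kind of evidence these journalistic audits are built on.
```

The design point is that the auditor never inspects the system's internals; they only control inputs and observe outputs, which is exactly why the choice of inputs (and access to representative ones) matters so much.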
This paper is a very interesting look at how algorithms function in our lives from a non-CS/HCI perspective: it comes from the vantage point of journalism and examines the painstaking way journalists investigate these algorithms. Though not its focus, the work also brings to light the incredible roadblocks that come with investigating proprietary software, especially software from large, secretive companies that would leverage laws and expensive lawyers to fight such investigations if the findings were not in their favor. In an ideal world, everyone would have integrity and would disclose all the flaws in their algorithms, but that is unfortunately not the case, which is why the work these journalists do is important, especially when they do not have easy access to the algorithms they are investigating and sometimes do not have access to the right inputs. There is a danger here that a journalist could end up being discredited: they did the best investigation they could with the limited resources they had, but the PR team of the company under investigation latches onto a poor assumption or two to dismiss the otherwise good work. The difficulty of performing these investigations, especially for journalists without prior training or experience with computers, exemplifies the need for at least some computer science education for everyone, so that people can better understand the systems they deal with and are better equipped to run investigations as algorithms pervade ever more of our lives.
- Do you think some of the laws in place that allow companies to obfuscate their algorithms should be relaxed to allow easier investigation?
- Do you think current journalistic protections are enough for journalists investigating these algorithms?
- What kinds of tools or training could be given to journalists to make it easier for them to navigate this world of investigating algorithms?