04/15/2020 – Yuhang Liu – Algorithmic Accountability: Journalistic Investigation of Computational Power Structures

Summary: In this paper, the author argues that automated algorithms have become increasingly important in modern society and now regulate many aspects of our lives, yet the contours of their power remain difficult to grasp, so it is necessary to elucidate and articulate that power. The author proposes the notion of "algorithmic accountability reporting," a practice that can reveal how algorithms exercise power and that is well worth pursuing by computational journalists. The author explores methods such as transparency and reverse engineering and how they can help characterize algorithmic power, then analyzes case studies of five journalists' investigations of algorithms, describing the challenges and opportunities they faced in doing algorithmic accountability work. The main contributions are: (1) a theoretical lens of atomic algorithmic decisions (prioritization, classification, association, and filtering), which raises key questions that can guide algorithm investigations and the development of algorithmic transparency policy; and (2) a preliminary evaluation and analysis of reverse engineering as a method for algorithmic accountability, including its limitations. The author also discusses the challenges of adopting this reporting method, including human resources, legality, and ethics, and looks ahead to how journalists might themselves practice transparency when they use algorithms.

Reflection: I think the author has put forward a very innovative idea. It is also the first question that comes to my mind when I encounter or use a new algorithm: what are the boundaries of this algorithm, and to what scope can it be applied? Take an insurance company's pricing algorithm as an example: we all know that the premium is generated from a series of attributes, but people are often uncertain about how much weight each attribute carries in the algorithm, so there will be doubts about the results, and some results may even be seen as immoral. Therefore, it is very important to study the capabilities and boundaries of an algorithm.

At the same time, the article introduces the idea of reverse engineering, that is, studying an algorithm by examining its inputs and outputs. Some websites, however, have mechanisms (such as frequent updates or personalization) that make the algorithm dynamic, so other methods are needed to handle this kind of problem. Once the input-output relationship of the black box is determined, the challenge becomes a data-driven search for news stories, as sketched below. Therefore, I think algorithmic accountability is really about understanding whether there is something unreasonable in an algorithm, and whether the root cause is deliberate design, negligence, or people's deep-rooted assumptions. In that sense, exploring the boundaries of algorithms is exploring the morality of algorithms. This article therefore provides a framework for reviewing the morality of an algorithm: the method can effectively expose where an algorithm is unreasonable, and journalists can use it to discover meaningful news.
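To make the input-output probing concrete, here is a minimal sketch of how one might study a black box such as the insurance pricing algorithm above by varying one attribute at a time and recording how the output moves. The quote_premium function and its attributes are hypothetical stand-ins for an external system a journalist could only query; the paper does not prescribe this exact procedure.

```python
def quote_premium(profile):
    """Hypothetical black box standing in for an insurer's pricing algorithm.

    In a real investigation this would be an external system we can only
    query, not inspect."""
    base = 500
    base += 15 * max(profile["age"] - 25, 0)          # made-up age loading
    base += 300 if profile["smoker"] else 0           # made-up smoker surcharge
    base += {"urban": 120, "rural": 40}[profile["region"]]
    return base

# A baseline applicant and the variations we probe, one attribute at a time.
baseline = {"age": 30, "smoker": False, "region": "urban"}
variations = {
    "age": [25, 30, 45, 60],
    "smoker": [False, True],
    "region": ["urban", "rural"],
}

for attribute, values in variations.items():
    print(f"--- sensitivity of premium to '{attribute}' ---")
    for value in values:
        probe = dict(baseline, **{attribute: value})   # change just one input
        premium = quote_premium(probe)
        delta = premium - quote_premium(baseline)
        print(f"  {attribute}={value!r}  premium={premium}  delta={delta:+d}")
```

Collecting these deltas over many probes is, in spirit, the input-output correlation the article describes; a reporter would then look for the attributes whose influence seems disproportionate or discriminatory, and those become candidate story leads.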

In addition, I think the framework described in this article is a special form of human-computer interaction: people study the machine itself and come to understand how the algorithm operates through the machine's feedback. This has also broadened my understanding of human-computer interaction.

Questions:

  1. Do you think the framework mentioned in the paper can be used to detect the ethical issues of an algorithm?
  2. Can this method be built into an automated system that elucidates and articulates the power of algorithms?
  3. Is there any value in probing an algorithm's power besides its news value?
