Summary:
In this paper, the author first presents the dimensions of algorithmic power: prioritization, classification, association, and filtering. Based on this description of algorithmic power, the author concludes that a significant number of people are influenced by algorithmic outcomes, and thus makes the point that it is important to interpret the output of algorithms in the course of making higher-level decisions. Next, the author examines the possibilities and weaknesses of requiring algorithmic transparency, and the paper introduces an alternative method called reverse engineering. In this work, journalists combined interviews, document reviews, and reverse-engineering analysis to shed light on how algorithms function. The author presents five case studies of journalistic investigations and also discusses the challenges and opportunities of doing algorithmic accountability work. The primary process of the inquiry includes identifying a newsworthy algorithm, sampling its input-output relationships to study the correlations, and finally seeking a story (a minimal sketch of the sampling step is below). Finally, the author provides a series of suggestions regarding transparency policy for algorithms.
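To illustrate that input-output sampling step, here is a minimal sketch, assuming a hypothetical black-box scoring function `score(profile)` that can be queried but not inspected. Everything in it (the attribute names, the `probe` helper, the scoring rule itself) is made up for illustration; the idea is simply to vary one input at a time while randomizing the others and watch how the output responds.

```python
import random

def score(profile):
    # Hypothetical stand-in for the opaque algorithm under investigation.
    # In a real audit this would be a website, an API, or a deployed system.
    return 0.3 * profile["age"] / 100 + (0.4 if profile["prior_arrests"] > 2 else 0.1)

def probe(attribute, values, trials=100):
    """Sample input-output pairs, holding one attribute fixed per batch."""
    results = {}
    for value in values:
        outputs = []
        for _ in range(trials):
            # Randomize everything except the attribute being probed.
            profile = {
                "age": random.randint(18, 70),
                "prior_arrests": random.randint(0, 10),
            }
            profile[attribute] = value
            outputs.append(score(profile))
        results[value] = sum(outputs) / len(outputs)  # average response
    return results

# Does the output correlate with prior arrests when everything else is random?
print(probe("prior_arrests", values=[0, 5]))
```

A clear gap between the averaged outputs for the probed values suggests a correlation worth investigating further, which is the point at which a journalist would start "seeking a story."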
Reflection:
First, algorithmic accountability is not a new topic. Algorithms have penetrated our lives and every walk of society: not only entertainment, learning, and daily tools, but also areas that significantly affect us, such as security, privacy, and even the distribution of social resources. People are starting to ask: can algorithms be trusted? To what extent are they trustworthy? I have seen many examples of people guessing at and analyzing the internal structure of such black boxes, and I want to share one of them.
The approach of reverse engineering, especially the process of sampling the input-output relationships of an algorithm to study the correlations, reminds me of a news report that identified bias in an algorithm. That algorithm was designed for individual risk assessment, predicting the likelihood that each defendant would commit a future crime. Such scores have become increasingly common in courtrooms across the nation, but in 2016 ProPublica reported that the risk scores might be injecting bias into the courts. The way they found the bias in the algorithm is the same as reverse engineering. Here are their findings in that report (I also put the link below, and a small sketch of the underlying computation follows the findings):
- The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
- White defendants were mislabeled as low risk more often than black defendants.
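To make the two findings above concrete, here is a minimal sketch of the kind of group-wise error analysis behind them. The records below are entirely hypothetical (ProPublica used thousands of real COMPAS cases); the sketch only shows how false positive and false negative rates can be compared across groups once the input-output pairs have been collected.

```python
import pandas as pd

# Hypothetical audit records: each row is one defendant with a group label,
# the algorithm's binary prediction ("high risk"), and the observed outcome
# (whether the person actually reoffended within the follow-up period).
records = pd.DataFrame({
    "group":      ["black", "black", "black", "white", "white", "white"],
    "high_risk":  [1, 1, 0, 0, 1, 0],
    "reoffended": [0, 1, 0, 0, 1, 1],
})

for group, rows in records.groupby("group"):
    # False positive rate: flagged high risk among those who did NOT reoffend.
    no_reoffense = rows[rows["reoffended"] == 0]
    fpr = no_reoffense["high_risk"].mean()
    # False negative rate: labeled low risk among those who DID reoffend.
    reoffense = rows[rows["reoffended"] == 1]
    fnr = (1 - reoffense["high_risk"]).mean()
    print(f"{group}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

If the two groups show very different error rates, as in the ProPublica findings, that is evidence the scores treat the groups differently, obtained without any access to the formula itself.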
Based on this outcome, reverse engineering seems both essential and effective. I think it is a better way to examine algorithmic accountability than transparency. As mentioned in this article, leaving aside the trade-secrets problem, disclosing the source code of algorithms might be helpful for specialists but would not necessarily improve the user experience, since ordinary users lack the expertise to make meaningful choices from it. Thus, identifying the issues with an algorithm, rather than focusing on its implementation, is more efficient at encouraging designers to improve the algorithm.
Questions:
- Do you think algorithms are trustworthy? How much confidence do you have in an algorithm?
- What do you think about transparency? How transparent do you think an algorithm should be?
- What do you think of reverse engineering? Does it work? Do you have any other examples of this approach?
Link: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Word Count: 544
I agree with you that reverse engineering is essential and efficient; it plays an important role in evaluating algorithms. But I wonder: if companies could disclose more about their algorithms, researchers could spend less effort evaluating them.
I think reverse engineering works well. The sampled input-output relationships can be analyzed to find the limitations of, or discrimination in, an algorithm. This would be valuable for developers working on the security or ethical issues of algorithms.
Hi, thank you for sharing; it is interesting. From my perspective, I think transparency helps increase confidence in an algorithm. However, given modern digital competition, it is difficult for companies or other organizations to make their algorithms fully transparent, since they would take on some risks. But I believe companies may have a middle ground: they can disclose some algorithmic decisions, such as the functions used, and let users know what data is being used, which would help us understand the algorithms better.