This paper explores unfairness in machine learning outcomes, which most often surfaces along the lines of gender and race. To understand how explanations shape people's fairness judgments of ML systems, and to help ML results better serve people, the authors conducted an empirical study with four types of programmatically generated explanations, each with different characteristics. From the experiment they draw the following findings:
- Some explanation styles are inherently perceived as less fair, while others can increase people's confidence in the fairness of the algorithm;
- Different explanation styles are more effective at exposing different fairness issues, such as model-wide fairness problems versus fairness discrepancies in specific cases;
- There are individual differences: people hold different positions, and the perspective from which they understand things affects how they respond to different explanation styles.
In the end, the authors conclude that for machine learning results to be broadly fair, different situations call for different corrective strategies, and differences between people must be taken into account.
Reflection:
In another class this semester, the instructor assigned three readings on machine learning results amplifying discrimination. In the discussion of those three articles, I remember that most students thought the discrimination should not be blamed on inaccuracy of the algorithm or model, and I even thought that machine learning simply analyzes things objectively and displays the results; the main reason people feel uncomfortable, or even find the results immoral, is that they are unwilling to face them. It is often difficult for people to see the whole picture of things, and when these overlooked facts are brought to the table, people are shocked and may even condemn others, but rarely think seriously about the underlying causes.

After reading this paper, however, I think my previous understanding was narrow. First, the results of an algorithm, and the explanations of those results, can indeed be wrong and discriminatory in some cases, so only by resolving this discrimination can machine learning results better serve people. I also agree with the ideas and conclusions in the article: different explanation methods and different emphases really do affect fairness judgments. The prerequisite for eliminating these injustices is to understand their causes.

At the same time, I think the main responsibility for eliminating injustice still lies with researchers. The reason I find computers fascinating is that they can approach problems rationally and objectively. Understanding how people respond to different results, and how different people react to different model predictions, is the key to eliminating this injustice. Of course, I think people will say that part of the cause of injustice is the injustice of our own society. When people find that machine learning results carry discrimination based on race, sex, religion, and so on, we should reflect on the discrimination itself and pay more attention to gender and ethnic equality, rather than only on how to make the results look better.
Questions:
- Do you think this unfairness arises more because machine learning results mislead people, or because it has existed in our society for a long time?
- The article proposes that, in order to get fairer results, more people need to be considered; what changes should users make?
- How can the strengths of different machine learning explanations be combined to create a fairer explanation?
I think it is not the algorithms' fault. The algorithms are trained by people using collected data, so if an algorithm ends up unfair, either the developers are at fault or the data is biased. I agree with the point that unfairness has existed in society for a long time, which may be the reason for the unfair judgments.
I think developers should remove the bias in the data before using it to train models. As for the users of the models, they should keep their own judgment when a model produces an unfair decision.
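As a minimal sketch of one way the "remove bias before training" idea could look in practice, the snippet below reweights training examples so that each combination of sensitive group and label contributes equally, then fits an ordinary classifier. The toy DataFrame, the column names `gender` and `label`, and the weighting scheme are hypothetical assumptions for illustration, not something taken from the paper.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def reweight(df, group_col, label_col):
    # Weight each (group, label) cell inversely to its size, so no
    # combination dominates training (similar in spirit to "balanced"
    # class weights, extended to the sensitive attribute).
    counts = df.groupby([group_col, label_col]).size()
    n_cells = len(counts)
    return df.apply(
        lambda row: len(df) / (n_cells * counts[(row[group_col], row[label_col])]),
        axis=1,
    )

# Hypothetical toy data with a sensitive attribute.
df = pd.DataFrame({
    "gender":  ["f", "f", "m", "m", "m", "m"],
    "feature": [0.2, 0.4, 0.5, 0.7, 0.9, 1.0],
    "label":   [0,   1,   1,   1,   0,   1],
})

weights = reweight(df, "gender", "label")
X = df[["feature"]]   # the sensitive attribute itself is left out of the features
y = df["label"]

model = LogisticRegression().fit(X, y, sample_weight=weights)
print(model.predict(X))
```

This only addresses imbalance in the training data; it does not by itself guarantee fair outcomes, which is why the paper's point about explanations and human judgment still matters.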