02/26/20 – Nan LI – Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment

Summary:

The main objective of this paper is to investigate how people make fairness judgments of ML systems and how explanations impact those judgments. In particular, the authors explored the difference between global explanations, which describe how the model works, and local explanations, which are sensitive and case-based. They also demonstrated how individual differences in cognitive style and prior position on algorithmic fairness affect fairness judgments under different explanation styles. To achieve this goal, the authors conducted an online survey-style study with Amazon Mechanical Turk workers meeting specific criteria. The results indicate that the most effective explanation varies with the kind of fairness issue and the user profile. However, a hybrid explanation, using global explanations to comprehend and evaluate the model and local explanations to examine individual cases, may be essential for accurate fairness judgment. Furthermore, they showed that individuals' prior positions on algorithmic fairness affect their responses to different types of explanations.

Reflection:

First, I think this paper addresses a critical and pressing topic. Since the rise of machine learning and AI systems, ML predictions have been widely deployed to make decisions in high-stakes fields such as healthcare and criminal justice. However, society has great doubts about how these systems make decisions; many people cannot accept, or even understand, why such important decisions should be left to an algorithm. As a result, the community's call for algorithmic transparency has grown louder and louder. At this point, an effective, unbiased, and user-friendly explanation of an ML system, one that enables the public to identify fairness problems, would not only help ensure the fairness of the system but also increase public trust in its output.

However, it is also tricky that there is no one-size-fits-all solution for an effective explanation. I understand that different people will react differently to explanations; nevertheless, I was somewhat surprised that people hold such different opinions when judging fairness. Even though this is understandable given their prior positions on algorithms, their cognitive styles, and their different backgrounds, it makes ensuring the fairness of a machine learning system more complex, since the system may need to account for individual differences in fairness positions, which may require different corrective or adaptive actions.

Finally, this paper reminds me of a related topic: when we explain how a model works, how much information should we provide? What information should we withhold so that it cannot be abused? In this paper, the authors only mention that they provide two types of explanations: global explanations that describe how the model works, and local explanations that attempt to justify the decision for a specific case. However, they did not examine the extent of model information disclosed in the explanations. I think this is an interesting question, since we are investigating the impact of explanations on fairness judgments.

Question:

  1. Which type of explanation mentioned in this article would you prefer to see when judging the fairness of an ML system?
  2. How does users' perception of machine learning system fairness influence the fairness-assurance process when designing the system?
  3. This experiment was conducted as an online survey with crowd workers instead of judges; do you think this influences the experimental results?

Word Count: 564

One thought on “02/26/20 – Nan LI – Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment”

  1. For question 2, I think when someone designs a system, they usually won’t consider fairness, because people usually decide whether a system is fair based on its results. So it looks like a black box: you never know the result until you have finished designing it.
