Summary
The authors argue that ensuring fairness in machine learning systems requires a human-in-the-loop process, and that relying on developers, users, and the general public to identify fairness problems and make improvements is an effective way to support that process. The paper conducts an empirical study with four types of programmatically generated explanations to understand how they impact people’s fairness judgments of ML systems. The authors try to answer three research questions:
- RQ1 How do different styles of explanation impact fairness judgment of an ML system?
- RQ2 How do individual factors in cognitive style and prior position on algorithmic fairness impact the fairness judgment with regard to different explanations?
- RQ3 What are the benefits and drawbacks of different explanations in supporting fairness judgment of ML systems?
The authors focus on a racial discrimination case study, examining both model-wide unfairness and case-specific disparate impact. They performed an experiment with 160 Mechanical Turk workers. Their hypothesis was that, because local explanations focus on justifying a particular case, they should more effectively surface fairness discrepancies between cases.
The authors show that:
- Certain explanations are considered inherently less fair, while others can enhance people’s confidence in the fairness of the algorithm.
- Different fairness problems, such as model-wide fairness issues versus case-specific fairness discrepancies, may be more effectively exposed through different styles of explanation.
- Individual differences, including prior positions and judgment criteria of algorithmic fairness, impact how people react to different styles of explanation.
Reflection
This is a really informative paper. I like that it had a straightforward hypothesis and evaluated it on a single existing case study. But I would have loved to see this study run with judges instead of crowdworkers. The authors mention this in their limitations, and I hope they find enough judges willing to participate in a follow-up paper. I believe judges would have valuable insight to contribute, especially since they make these decisions in practice, and their input would give a more meaningful analysis of the case study itself from professionals in the field.
I also wonder how this might scale to different machine learning systems that involve similar racial biases. Having a specific case study makes it harder to generalize, even for something in the same domain. But it is definitely worth investigating since there are so many existing case studies! I also wonder whether, if the case study were changed, we would notice a difference in the local vs. global explanation patterns in fairness judgment, and how a mix of both would affect the judgment, too.
Discussion
- What are other ways you would approach this case study?
- What are some explanations that weren’t covered in this study?
- How would you facilitate this study to be performed with judges?
- What are other case studies that you could generalize this to with small changes to the hypothesis?
I do strongly agree that the lack of judges as participants may weaken the paper’s impact. Crowdworkers are certainly more readily available than active judges. Recruiting participants who have recently taken the Bar exam would be a good avenue for the team to take: not only would they be well-versed in case history, but they would likely have more free time than older judges.