Word count: 573
Summary of the Reading
This paper investigates how to explain AI and ML systems. One straightforward approach is to have another computer program generate an explanation of how the AI or ML system works. This paper works toward that goal, comparing four different programmatically generated explanations of AI and ML systems and examining how each affects judgments of fairness. The explanations had a large impact on perceptions of fairness and bias in the systems, with a large degree of variation between the explanation styles.
Not only did the kind of explanation have a large impact on the perceived fairness of the algorithm, but participants' pre-existing feelings about AI, ML, and bias in these fields also had a profound impact on whether they saw the explained models as fair. People who did not already trust AI fairness distrusted all of the explanations equally.
Reflections and Connections
To start, I think this type of work is extremely useful to the future of the AI and ML fields. We need to be able to explain how these systems work, and there needs to be more research into how to do that. The issue of explainable AI becomes even more important in the context of making AI fair to the people who have to interact with it. We need to be able to tell whether an AI system that decides whether to release people from jail is fair, and the only way we can really know is to have some way to explain the decisions those systems make.
I think one of the most interesting parts of the paper is the variation in fairness judgments among people with different backgrounds. Pre-existing beliefs about whether AI systems are fair had a huge impact on whether people judged these models to be fair when given an explanation of how they work. This shows how human a problem this is, and how hard it can be to decide if a model is fair even with access to an explanation; views of the model will differ from person to person.
I also found it interesting how the type of explanation used had a big impact on the judgment of fairness. To me, this conjures up ideas of a future where the people who build algorithms can simply pick the right kind of explanation to make their algorithm look fair, much as companies today use carefully chosen language in questionable ways. I think this field still has a long way to go, and it will become increasingly important as AI penetrates more and more facets of our lives.
Questions
- When each explanation produces such different results, is it possible to make a concrete judgment on the fairness of an algorithm?
- Could we use computers, or maybe even machine learning, to decide if an algorithm is fair, or would that just produce more problems?
- With so many different opinions, even when the same explanation is used, who should be the judge of whether an algorithm is fair?