02/26/20 – Lulwah AlKulaib- Interpretability

Summary

Machine learning (ML) models are now integrated into many domains (for example: criminal justice, healthcare, marketing, etc.). ML has moved beyond academic research and grown into an engineering discipline. Because of that, it is important to interpret ML models and understand how they work, which has driven the development of interpretability tools. Machine learning engineers, practitioners, and data scientists use these tools, yet there has been little evaluation of the extent to which they actually achieve interpretability. The authors study the use of two such tools: the InterpretML implementation of GAMs and the SHAP Python package. They conduct a contextual inquiry and survey 197 data scientists to observe how the tools are used to uncover common issues that arise when building and evaluating ML models. Their results show that data scientists did utilize the visualizations produced by interpretability tools to uncover issues in datasets and models, yet the availability of these tools has also led to over-trust and misuse.
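
For readers who have not tried the two packages the paper studies, the minimal sketch below shows roughly how they are invoked: InterpretML's GAM implementation (the Explainable Boosting Machine) as a glassbox model, and SHAP as a post-hoc explainer for a black-box model. The dataset, model choices, and variable names are my own illustrative assumptions, not taken from the paper.

```python
# A minimal, illustrative sketch of the two tools discussed in the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# --- InterpretML: a glassbox GAM whose per-feature shape functions can be inspected directly
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)
show(ebm.explain_global())                        # global feature importances and shape functions
show(ebm.explain_local(X_test[:5], y_test[:5]))   # explanations for individual predictions

# --- SHAP: post-hoc feature attributions for an otherwise opaque model
import shap

gbm = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)            # global summary built from local attributions
```

These are exactly the kinds of visualizations (global importance plots, local explanations) that the study's participants were shown and, in some cases, over-trusted.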

Reflection

Machine learning is now being used to address important problems such as predicting crime rates in cities to help police allocate manpower, identifying cancerous cells, predicting recidivism in the judicial system, and locating buildings at risk of catching fire. Unfortunately, these models have been shown to learn biases, and detecting those biases is subtle, especially for beginners in the field. I agree with the authors that it is troubling when machine learning is misused, whether intentionally or out of ignorance, in situations where ethics and fairness are at stake. A lack of model explainability can lead to biased and ill-informed decisions. In our ethics class, we went over case studies where interpretability was lacking and contributed to racial bias in facial analysis systems [1], biased recidivism predictions [2], and gender biases learned from text [3]. Some of these systems were used in real life and have affected people’s lives. I think that an analysis similar to the one presented in this paper should be mandatory before deploying systems into practice. It would give developers a better understanding of their systems and help them catch biased decisions before the systems reach public use. It is also important to inform developers about how dependable interpretability tools are, and how to tell when they are over-trusting or misusing them. Interpretability is a relatively new field within machine learning, and I’ve been seeing conferences add sessions about it lately. I’m interested in learning more about interpretability and how we can apply it to different machine learning models.

Discussion

  • Have you used any of the mentioned interpretability packages in your research? How did it help in improving your model?
  • What case studies do you know of where machine learning bias is evident? Were these biases corrected? If so, how?
  • Do you have any interpretability related resources that you can share with the rest of the class?
  • Do you plan to use these packages in your project? 

References

  1. https://splinternews.com/predictive-policing-the-future-of-crime-fighting-or-t-1793855820
  2. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  3. Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems (pp. 4349-4357).
