Algorithms That Make You Think

Fourth Annual Virginia Tech Workshop on the Future of Human-Computer Interaction, April 11-12, 2019

Reading Group Summary: How the machine ‘thinks’: Understanding opacity in machine learning algorithms (Jenna Burrell, Big Data & Society, 2016)

How does the problem of opacity in machine learning algorithms affect mechanisms of ranking and classification, such as determining whether an email is spam, flagging possible credit card fraud, identifying news trends, or credit scoring? All of these classification and ranking mechanisms derive from machine learning algorithms.
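To make this kind of mechanism concrete, here is a minimal sketch of a machine-learned spam filter of the sort Burrell discusses. It is not from the paper; the toy emails and the use of scikit-learn are illustrative assumptions.

```python
# A minimal sketch (illustrative, not Burrell's setup) of a learned spam
# classifier. Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny toy training set; a real filter learns from millions of emails.
emails = [
    "win a free prize now", "cheap meds online",          # spam
    "meeting moved to 3pm", "draft attached for review",  # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)       # emails become word-count vectors
model = MultinomialNB().fit(X, labels)

# The recipient of the decision sees only the label, not the inputs or the
# calculation behind it.
test = vectorizer.transform(["free prize meeting"])
print(model.predict(test), model.predict_proba(test))
```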

Burrell examines three forms of opacity:

  1. Intentional opacity, for corporate or government secrecy or self-protection (Is the algorithm proprietary?)
  2. Opacity as technical illiteracy: writing and reading code is a specialized skill (Is the algorithm too complex or highly technical for non-specialists?)
  3. Opacity arising from the characteristics of machine learning algorithms themselves and the large amount of data needed to apply them meaningfully (see the sketch after this list).
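The third form is the hardest to remedy. A small sketch of why, again using illustrative toy data and scikit-learn: even for a simple linear model, the learned "logic" is a large array of numbers rather than a human-readable rule.

```python
# A sketch (illustrative assumptions: scikit-learn, toy data) of form-3
# opacity: the model's reasoning lives in tens of thousands of learned
# parameters, none of which reads as an explanation.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

emails = ["win a free prize now", "cheap meds online",
          "meeting moved to 3pm", "draft attached for review"]
labels = [1, 1, 0, 0]

# Hashing maps words into 65,536 anonymous dimensions.
hasher = HashingVectorizer(n_features=2**16)
X = hasher.transform(emails)
clf = LogisticRegression().fit(X, labels)

# 65,536 coefficients; after hashing, not even the developer can tie a
# coefficient back to a named word, let alone to "the reason" for a decision.
print(clf.coef_.shape)  # (1, 65536)
```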

In examining the algorithms, Burrell focuses especially on the second and third forms of opacity, drawing on the computer science literature and on industry practice, including a demonstration of what auditing classifier code involves. By recognizing the different types of opacity and the occasions on which each arises, we can develop technical and non-technical responses that mitigate the unintended harm these classifications may be generating. That potential harm now occurs in a widening array of social contexts, affecting access to resources, equality of opportunity, and social mobility.

Opacity characterizes the result of the data analysis, the classification decision itself: the recipient typically neither sees exactly what data were used as input nor can examine or understand the calculations that produced the result.

Some new influences:
– Pervasive and interconnected technologies: mobile devices, digital data and content, and the share-ability of that content
– New techniques of data collection
– Growing archives of personal data, such as purchasing history and location

Recommendations:
– Partnerships among computer scientists, social scientists, legal scholars, and domain experts.
– Broader education in coding and computational thinking
– Regulation and code auditors who can certify that code is non-discriminatory
– Simplify ML models through ‘feature extraction’ – that is, identify which features are key to the classification outcome and remove all other features from the model (a sketch follows this list)
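One way to read the feature-extraction recommendation, sketched with the same illustrative toy data and scikit-learn (the k=5 cutoff is an arbitrary assumption): keep only the features most associated with the outcome, trading some accuracy for a model small enough to inspect.

```python
# A sketch of model simplification via feature selection (illustrative
# assumptions: scikit-learn, toy data, k=5).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "cheap meds online",
          "meeting moved to 3pm", "draft attached for review"]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(emails)

# Keep only the 5 words most associated with the label; drop everything else.
selector = SelectKBest(chi2, k=5)
X_small = selector.fit_transform(X, labels)

# The surviving features are few enough for a human to review directly.
kept = vec.get_feature_names_out()[selector.get_support()]
print(kept)

model = MultinomialNB().fit(X_small, labels)
```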

A complementary approach: seek ways to detect discriminatory effects in classifications directly, without concern for how or why the model reached the decisions that make up the classification.
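Such an outcome-only audit can be done without opening the model at all. A sketch below compares positive-decision rates across groups; the group labels, toy decisions, and the 0.8 threshold (the common "four-fifths rule" from US employment law) are illustrative assumptions, not from the paper.

```python
# A sketch of auditing decisions for disparate impact without any access to
# the model's internals.
def disparate_impact(decisions, groups, protected, reference):
    """Ratio of positive-outcome rates: protected group vs. reference group."""
    def rate(g):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

# Example: 1 = favorable decision (e.g., loan approved), toy data.
decisions = [1, 0, 0, 0, 1, 1, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, protected="A", reference="B")
print(f"impact ratio = {ratio:.2f}")  # a ratio below 0.8 suggests disparate impact
```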
