Algorithms That Make You Think

Fourth Annual Virginia Tech Workshop on the Future of Human-Computer Interaction, April 11-12, 2019

Reading Group Summary: Fairness and Abstraction in Sociotechnical Systems

The paper “Fairness and Abstraction in Sociotechnical Systems” by Selbst et al. (2019) focuses on the idea of ‘fairness’ in Machine Learning (ML), where an algorithm is making decisions about a social situation (e.g., a judge evaluating a parolee’s flight risk). It describes different problems that users of ML systems confront that can affect ‘fair’ outcomes. Some of these problems, posed as ‘traps’, include: portability, formalism, ripple effect, and solutionism.

To raise awareness about these ‘traps’, the authors suggest an approach from Science and Technology Studies (STS), specifically the Social Construction of Technology (SCOT), and discuss how this approach could help deal with the fairness problems involved in designing and using algorithms.

An interesting book along these lines is Neil Postman’s ‘Technopoly’, which discusses how computers appear to the public to be neutral.

Industry is less likely than academia to consider how appropriate the data is for training on a larger data set and for follow-on analytics. Company designers may work independently at the outset from the programmers, who don’t work on the data until later, so weaknesses (traps) may already have been baked in.

The difference between an algorithm and a model is touched on in the ‘portability’ trap: the difficulty of applying an algorithm in a social context different from the one it was originally designed for. Yet the concept of ‘re-use’ is very important in CS education; the sketch below illustrates the distinction.
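
To make that distinction concrete, here is a minimal sketch using purely synthetic data; the two ‘contexts’, the features, and the choice of scikit-learn’s LogisticRegression are all illustrative assumptions, not anything taken from the paper. The algorithm is reusable; the fitted model is tied to the context whose data trained it.

```python
# A minimal, hypothetical sketch of the 'portability' trap: the *algorithm*
# (logistic regression) is reusable, but each *model* it produces is fit to
# the context whose data trained it. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_context(relevant_feature):
    """Toy data in which a different feature drives outcomes in each context."""
    X = rng.normal(size=(500, 2))
    y = (X[:, relevant_feature] > 0).astype(int)
    return X, y

X_a, y_a = make_context(0)  # context A: feature 0 drives the outcome
X_b, y_b = make_context(1)  # context B: feature 1 does instead

model_a = LogisticRegression().fit(X_a, y_a)  # a model learned in context A

print("accuracy in context A: ", model_a.score(X_a, y_a))  # ~1.0
print("same model ported to B:", model_a.score(X_b, y_b))  # ~0.5, chance level
# Re-running the *algorithm* on context B's own data would work fine there;
# it is the ported *model* that silently degrades.
```

This is why ‘re-use’ cuts both ways: the training code is portable, but the learned coefficients are not.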

We should raise awareness about how algorithms are created, much as we have tried to do for media bias (e.g., in TV) through media criticism and media literacy.
