04/15/2020 – Bipasha Banerjee – Algorithmic Accountability

Summary 

The paper provides a perspective on algorithmic accountability through a journalist’s eyes. Its motivation is to examine how algorithms influence decisions across a variety of domains. The author explicitly investigates the area of computational journalism and how journalists could use their power to “scrutinize” in order to uncover bias and other issues that current algorithms pose. He lists several kinds of decisions that algorithms make, each of which can compromise an algorithm’s ability to be unbiased: classification, prioritization, association, and filtering. He also notes that transparency is a key factor in building trust in an algorithm. The author then discusses reverse engineering, illustrated with a few case studies. Reverse engineering is described in the paper as probing a black-box algorithm’s input–output relationships to infer how it works. Finally, he points out the challenges this method faces in practice.
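To make the reverse-engineering idea concrete, here is a minimal sketch (my own, not from the paper) of input–output probing: hold every input constant except one, query the black box repeatedly, and look for systematic differences. The get_quoted_price function and the ZIP codes are hypothetical stand-ins for whatever opaque system a journalist might query.

```python
# Hypothetical input-output probing: vary one input to a black box and
# compare outputs to infer what the hidden algorithm conditions on.

def get_quoted_price(zip_code: str) -> float:
    # Stand-in black box; in a real investigation this would be a scraper
    # or API call whose internals are unknown to the investigator.
    return 19.99 * (0.9 if zip_code.startswith("1") else 1.0)

def probe(zip_codes: list[str]) -> dict[str, float]:
    """Query the black box with controlled input variations."""
    return {z: get_quoted_price(z) for z in zip_codes}

if __name__ == "__main__":
    for zip_code, price in sorted(probe(["10001", "24060", "94105", "12180"]).items()):
        print(f"{zip_code}: ${price:.2f}")
    # Systematic price differences across otherwise-identical queries
    # suggest the hidden algorithm conditions on location.
```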

Reflection

The paper gives a unique view of algorithmic bias from a computational journalist’s perspective. Most of the papers we read come either entirely from the computational domain or from the human-in-the-loop perspective. Having journalists, who are not directly involved in the matter, examine these systems is, in my opinion, brilliant, because journalists are trained to be unbiased. From the CS perspective, we tend to be “AI” lovers who want to defend the machine’s decisions and consider them true. The humans using the system either blindly trust it or completely doubt it. Journalists, on the other hand, are always motivated to seek the truth, however unpleasant it might be. Having said that, I am intrigued to know the level of computational expertise these journalists have. Then again, in-depth knowledge of AI systems might introduce a separate kind of bias of its own. Nonetheless, this would be a valid experiment to conduct.

The challenges that the author mentions include ethics and legality, among others. These challenges are not normally discussed, and we on the computational side need to be aware of them. The “legal ramifications” could be enormous if we train a model on data we are not authorized to use and then publish the results.

I agree with the author that transparency indeed helps bolster confidence in an algorithm. However, I also agree that it is difficult for companies to be transparent in today’s competitive digital era; it would be risky for them to make all of their algorithmic decisions public. I believe there might be a middle ground: companies could publish part of their algorithmic decision-making, such as the features they use, and let users know what data is being used. This might help improve trust. For example, Facebook could publish the reasons why it recommends a particular post, as sketched below.
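As a rough illustration of what such a partial disclosure might look like, here is a minimal sketch. The feature names, weights, and the explain_recommendation helper are all hypothetical; no real platform’s recommender is implied.

```python
# Hypothetical sketch of partial transparency: surface the top features
# behind a recommendation without exposing the full model or its data.

def explain_recommendation(feature_weights: dict[str, float], top_k: int = 3) -> list[str]:
    """Return human-readable reasons from the highest-weighted features."""
    top = sorted(feature_weights.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    return [f"{name} (weight {weight:+.2f})" for name, weight in top]

if __name__ == "__main__":
    # Invented per-post feature contributions for one user.
    weights = {
        "friend_interaction_history": 0.62,
        "topic_match": 0.35,
        "recency": 0.21,
        "advertiser_boost": 0.05,
    }
    print("You are seeing this post because of:")
    for reason in explain_recommendation(weights):
        print(" -", reason)
```

Even this level of disclosure tells users which signals drive a recommendation while keeping the underlying model proprietary.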

Questions

  1. Although the paper talks about computational journalism, how in-depth is the computational knowledge of the journalists doing this work?
  2. Is there a way for an algorithm to be transparent without the company losing its competitive edge?
  3. Have you considered the “legal and ethical” aspects of your course project? I am curious about the data and models being used.

One thought on “04/15/2020 – Bipasha Banerjee – Algorithmic Accountability”

  1. I agree with your point that journalists might not have the required computational background, but maybe they don’t need to. It might help if we could combine the complementary strengths of machine learning algorithms and journalists. That said, training journalists to help them understand the workings of these algorithms can definitely prove beneficial.
