Reflection #6 – [09/12] – [Vibhav Nanda]

Readings:

[1] Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms

Summary:

The central question this paper addressed was what mechanisms are available to scholars and researchers for determining how algorithms operate. The authors began by discussing the traditional reasons audits were carried out, explained how such audits were traditionally conducted, and argued why it was acceptable to cross certain ethical boundaries to find answers for the greater good of the public. They went on to detail how overly restrictive laws (such as the CFAA) and scholarly guidelines are a serious impediment today to the kind of study that would not have been bound by such laws and guidelines in the 1970s, consequently hindering social science researchers from finding answers to the problems they need to solve. Throughout the paper the authors profiled and detailed five algorithm audit designs: the code audit, the noninvasive user audit, the scraping audit, the sock puppet audit, and the collaborative (or crowdsourced) audit; a sketch of the sock puppet design appears below.
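To make the sock puppet design concrete, here is a minimal, hypothetical sketch of my own (not code from the paper): two synthetic profiles that differ in exactly one attribute issue the same query, and any systematic difference in the results is attributed to that attribute. The `query_platform` function, the profile fields, and the example attribute are all assumptions for illustration.

```python
# Hypothetical sock puppet audit sketch. query_platform stands in for
# whatever scraper or platform API an auditor would actually drive;
# it is not a real library function.

def query_platform(profile: dict, query: str) -> list[str]:
    """Placeholder: return the platform's ranked results as seen by
    this synthetic profile. A real audit would automate a browser or
    call the platform's API here."""
    raise NotImplementedError

def sock_puppet_audit(query: str, attribute: str, value_a, value_b,
                      base_profile: dict):
    """Run the same query as two puppets differing in one attribute only,
    so any systematic difference in results points to that attribute."""
    puppet_a = {**base_profile, attribute: value_a}
    puppet_b = {**base_profile, attribute: value_b}
    return query_platform(puppet_a, query), query_platform(puppet_b, query)

# e.g. sock_puppet_audit("apartment listings", "inferred_gender",
#                        "female", "male", base_profile={"location": "US"})
```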

Reflection/Questions:

Throughout the paper the authors addressed algorithms as if they had a consciousness, and this manner of addressing them bothered me; for instance, the last question the authors pose is "how do we as a society want these algorithms to behave?" The word "behave" did not seem apropos to me; a better-fitting word would have been "function," giving something along the lines of "how do we as a society want these algorithms to function?" The authors also addressed various issues regarding algorithmic transparency that I brought up in my previous blog and in class: "On many platforms the algorithm designers constantly operate a game of cat-and-mouse with those who would abuse or 'game' their algorithm. These adversaries may themselves be criminals (such as spammers or hackers) and aiding them could conceivably be a greater harm than detecting unfair discrimination in the platform itself." Within the text the authors contradicted themselves: first they say that audits are carried out to uncover trends and not to punish any one entity, but later they say that auditing a wide array of algorithms will not be possible, and therefore researchers would have to resort to targeting individual platforms. I disagree that algorithms can hold any sort of bias, since biases arise from emotions and preconceived notions that are part of human consciousness, and algorithms have neither. On that note, suppose research finds a specific algorithm on a platform to be biased: who is accountable? The company? The developers? The developers who created the underlying libraries? The manager of the team? Lastly, in my view Google's "screen science" was perfectly acceptable: one portion of the corporation supporting another portion, just like the concept of a donor baby.

[2] Measuring Personalization of Web Search

Summary:

In this paper the authors detail their methodology for measuring personalization in web search, apply it to numerous users, and finally dive into the causes of personalization on the web. The methodology revealed that 11.7% of searches were personalized, mainly due to the user's geographic location and to whether the user was logged into an account. The method also controlled for various sources of noise, hence delivering more accurate results. The authors acknowledged a drawback of their methodology: it will only identify positive instances of personalization and will not identify the absence of personalization. A sketch of this comparison idea appears below.
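To illustrate the flavor of such a measurement, here is a minimal sketch of my own (not the authors' actual code): the same query is issued by a test account and a control account, and a second pair of identical control accounts provides a noise floor, so only differences exceeding that floor count as personalization. The function names, the Jaccard-based similarity, and the result lists are all illustrative assumptions.

```python
# Hypothetical sketch: compare a "test" account's results against a
# control account for the same query, using a second pair of identical
# controls to estimate baseline noise. Names and data are made up.

def jaccard(a, b):
    """Jaccard similarity between two sets of result URLs."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def personalization_signal(test_results, control_results,
                           control_a, control_b):
    """Flag a query as personalized only if the test/control difference
    exceeds the noise floor measured between two identical controls."""
    noise = 1.0 - jaccard(control_a, control_b)   # baseline noise
    diff = 1.0 - jaccard(test_results, control_results)
    return diff > noise

# Example with made-up top-5 result lists per account:
test      = ["a.com", "b.com", "c.com", "d.com", "e.com"]
control   = ["a.com", "b.com", "x.com", "d.com", "e.com"]
control_a = ["a.com", "b.com", "c.com", "d.com", "e.com"]
control_b = ["a.com", "b.com", "c.com", "d.com", "f.com"]

print(personalization_signal(test, control, control_a, control_b))
```

The point of the identical control pair is exactly the noise control mentioned above: result lists can differ even between identical accounts (for example, from load balancing or index churn), so a raw test/control difference only means something once it exceeds that baseline.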

Reflection/Questions:

Filter bubbles and media go hand in hand. People consume what they want to consume. As I have said previously, personalizing search output is not the root of all societal evils. To me it almost seems as if personalization is being equated with manipulation, which is not the same thing. If search engines do not personalize, users get frustrated and find a place that will deliver the content they want. I would say there are two different types of searches: factual searches and personal searches. Factual searches have a factual answer that cannot be manipulated or personalized, whereas personal searches concern feelings, products, ideas, perceptions, and so on; those results are personalized, and I think rightly so. The authors also write that there is a "possibility that certain information may be unintentionally hidden from users," which is not a drawback of personalization but reflective and indicative of real life, where a person is never exposed to all the information on a topic. However, the big questions I have about personalization are these: what is the threshold of personalization? At what point is the search engine a reflection of our personality and not an algorithm anymore? At what point does the predictive analysis of searches become creepy?
