Reflection #6 – [09/13] – [Prerna Juneja]

Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms by Sandvig et al.

Measuring Personalization of Web Search by Hannak et al.

Summary:

In the first paper, the authors argue that every algorithm deserves scrutiny, since any algorithm can be manipulated or discriminatory. They introduce the social-scientific audit study, a field-experiment design long used to test for discrimination in housing and similar settings, and propose analogous "algorithm audits" to uncover algorithmic bias. They then outline five audit designs, namely the code audit, the noninvasive user audit, the scraping audit, the sock puppet audit, and the collaborative or crowdsourced audit, and discuss the advantages and disadvantages of each. They find the crowdsourced audit to be the most promising.

In the second paper, the authors study personalization in web search using the "crowdsourced audit" technique described in the first paper. They propose a methodology for measuring personalization of search results and apply it with 200 Mechanical Turk workers, observing that 11.7% of search results differ due to personalization. Only two factors, being logged in to a Google account and geographic location, lead to measurable personalization, and queries related to politics and companies showed the highest personalization.
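To make the comparison concrete, here is a minimal sketch (my own, not the authors' code) of the kind of list comparison the paper performs: given the results served to two accounts for the same query, it measures how much the lists differ, using the Jaccard index over URLs plus a simple average rank displacement as a stand-in for the Kendall's tau metric the paper actually uses. The URLs are made up for illustration.

```python
def jaccard(a, b):
    """Jaccard index of two result sets: 1.0 means identical sets of URLs."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def rank_shift(a, b):
    """Average absolute rank displacement of URLs present in both lists."""
    shared = set(a) & set(b)
    if not shared:
        return None
    return sum(abs(a.index(u) - b.index(u)) for u in shared) / len(shared)

# Hypothetical top results for one query, for a "treatment" and a "control" account.
treatment = ["nyt.com/a", "cnn.com/b", "local-news.com/c"]
control   = ["cnn.com/b", "nyt.com/a", "wiki.org/d"]

print(jaccard(treatment, control))     # 0.5 -> the result sets differ
print(rank_shift(treatment, control))  # 1.0 -> shared URLs moved one slot on average
```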

Reflection:

Companies have long altered their algorithms to their own benefit. Google leverages its market share to promote its own services such as Google Maps; one will almost never find search results linking to MapQuest, HERE WeGo, or Yahoo News. Market monopoly is a dangerous thing: it kills competition. Who knows whether Google will start charging for its free services in the future, once we are all used to its products.

A case of gender discrimination was found on LinkedIn, where a search for a female contact name would prompt the male version of that name: "A search for 'Stephanie Williams' brings up a prompt asking if the searcher meant to type 'Stephen Williams' instead." [1]. While the Google example shows an intentional bias that, although not directly harming users, kills market competition, the LinkedIn incident appears to be an unintentional bias that crept into the algorithm because it depends on the relative frequencies of words appearing in queries; presumably "Stephen" was searched more often than "Stephanie." Citing their spokesperson: "The search algorithm is guided by relative frequencies of words appearing in past queries and member profiles, it is not anything to do [with] gender." So the authors are right when they say that no algorithm should be presumed unbiased.
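To see how such a bias can emerge without anyone intending it, here is a toy sketch (not LinkedIn's actual system) of a purely frequency-driven "did you mean" suggester: the rarer name gets "corrected" toward the commoner one. The query-log counts are fabricated, and difflib's string similarity stands in for whatever matching LinkedIn really used.

```python
from difflib import get_close_matches

# Hypothetical query-log frequencies (fabricated numbers).
query_freq = {"stephen": 9000, "steven": 7000, "stephanie": 4000}

def suggest(name):
    """Return a similarly spelled but more frequent name, if one exists."""
    name = name.lower()
    candidates = get_close_matches(name, list(query_freq), n=3, cutoff=0.6)
    alternatives = [c for c in candidates if c != name]
    if not alternatives:
        return None
    best = max(alternatives, key=query_freq.get)
    # Only "correct" toward a strictly more frequent name.
    return best if query_freq[best] > query_freq.get(name, 0) else None

print(suggest("Stephanie"))  # -> 'stephen': gendered prompt, no gender in the code
```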

Some companies are building tools to detect bias in their AI algorithms, such as Facebook (Fairness Flow) [2], Microsoft [3], and Accenture [4]. But the problem is that, just like the algorithms they examine, these tools will be a black box to us. And we will never know whether these companies actually found bias in their algorithms.
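For a sense of what such tools compute, here is a sketch of one of the simplest fairness checks, the demographic parity gap: the difference in favorable-outcome rates between two groups. The data is fabricated for illustration, and real tools like Fairness Flow reportedly compute many more metrics (false positive rates, calibration, and so on).

```python
def positive_rate(outcomes):
    """Fraction of cases where the model granted the favorable outcome."""
    return sum(outcomes) / len(outcomes)

# 1 = favorable outcome (e.g., loan approved), split by a protected attribute.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75.0% positive
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% positive

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"demographic parity gap: {gap:.2f}")  # 0.38 -> large gap, flag for review
```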

Privacy vs. personalization/convenience: shouldn't users have control over their data, and over what they share with companies? Google read our mail for personalized advertisements for almost a decade before it stopped doing so in 2017 [5]. It still reads it, though: it knows about our flight schedules and restaurant reservations. My phone number gets distributed to so many retailers that I wonder who is selling them this data.

In the second paper the authors mention that once a user logs in to one Google service, they are automatically logged in to all of them. So does that mean my YouTube searches affect my Google search results?

According to one article [6], Google's autocomplete feature is contributing to the spread of misinformation: the first suggestion that comes up when you type "climate change is" turns out to be "climate change is a hoax." How do misinformation and conspiracy theories climb the rankings on these platforms?
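Autocomplete can at least be audited from the outside. Below is a sketch that queries Google's unofficial suggestion endpoint (suggestqueries.google.com); since it is undocumented, the URL, parameters, and response shape are assumptions that may change at any time, and heavy automated querying could violate the terms of service.

```python
import json
import urllib.parse
import urllib.request

def autocomplete(prefix):
    # Unofficial, undocumented endpoint -- an assumption, not a stable API.
    url = ("https://suggestqueries.google.com/complete/search?"
           + urllib.parse.urlencode({"client": "firefox", "q": prefix}))
    with urllib.request.urlopen(url) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        # Expected shape: [query, [suggestion, suggestion, ...], ...]
        return json.loads(resp.read().decode(charset, errors="replace"))[1]

for suggestion in autocomplete("climate change is"):
    print(suggestion)
```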

Determining bias seems like a very complex problem, with online algorithms changing every day, and bias can have multiple dimensions: gender, age, economic status, language, geographic location, and so on. Collaborative auditing seems like a good way of collecting data, provided it is done systematically and the testers are chosen properly. But then again, how many Turkers should one hire? Can a few hundred represent the billions of people using the internet?
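As a back-of-the-envelope answer: for estimating a simple proportion (say, the fraction of results that are personalized), the standard sample-size formula n = z²·p(1−p)/e² depends on the desired margin of error, not on the population size, so a few hundred workers can bound the sampling error. Representativeness is the real problem: Turkers are not a random sample of internet users.

```python
import math

def sample_size(margin, z=1.96, p=0.5):
    """Workers needed so a proportion estimate is within +/-margin (95% confidence)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size(0.10))  # 97  workers for a +/-10% margin of error
print(sample_size(0.05))  # 385 workers for a +/-5% margin of error
```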

[1] https://www.seattletimes.com/business/microsoft/how-linkedins-search-engine-may-reflect-a-bias/

[2] https://qz.com/1268520/facebook-says-it-has-a-tool-to-detect-bias-in-its-artificial-intelligence/

[3] https://www.technologyreview.com/s/611138/microsoft-is-creating-an-oracle-for-catching-biased-ai-algorithms/

[4] https://techcrunch.com/2018/06/09/accenture-wants-to-beat-unfair-ai-with-a-professional-toolkit/

[5] https://variety.com/2017/digital/news/google-gmail-ads-emails-1202477321/

[6] https://www.theguardian.com/technology/2016/dec/16/google-autocomplete-rightwing-bias-algorithm-political-propaganda
