04/15/2020 – Vikram Mohanty – Algorithmic Accountability: Journalistic investigation of computational power structures

Authors: Nicholas Diakopoulos

Summary

This paper discusses the challenges involved in algorithmic accountability reporting and the reverse engineering approaches used to frame a story. The author interviewed four journalists who have reported on algorithms, and discusses five different case studies to present the methods and challenges involved. Finally, the paper outlines the need for transparency and potential ethical issues.

Reflection

This paper offers great insights into the decision-making process behind the reporting of different algorithms and applications. It is particularly interesting to see the lengths journalists go to in order to figure out the story and the value in reporting it. The paper is a great read even for non-technical folks, as it introduces the concepts of association, filtering, classification, and prioritization with examples that can be understood universally. While discussing the different case studies, the paper manages to paint a picture of the challenges the journalists encountered in a very easy-to-understand manner (e.g., incorrectly determining that Obama’s campaign targeted voters by age) and therefore succeeds in showing why reporting on algorithmic accountability is hard!

In most cases, the space of potential inputs is too large to explore easily, which makes the field more challenging. This necessitates drawing on the skills of computational social scientists to conduct additional studies, collect additional data, and draw inferences. The paper makes a great point about reverse engineering offering more insight than directly asking the algorithm developers, as the unintended consequences would never surface without investigating the algorithms in operation. Another case of “we need more longitudinal studies with ecological validity”!

It was very interesting to see the discussion around last-mile interventions at the user interface stage (in the autocomplete case). It shows that (some of the) developers are self-aware and therefore try to ensure that the user experience is an ethical one. Even though they may fall short, it’s a good starting point. This also demonstrates why augmenting an existing pipeline (be it data/AI APIs or models) to make it work for the end-user is desirable (something that some of the papers discussed in class have shown).

The questions around ethics, as usual, do not have an easy answer: for instance, whether the reporting might prompt developers to make their algorithms more difficult to investigate in the future. However, regulations around transparency can go a long way toward holding algorithms accountable. The paper does a great job synthesizing the challenges across all the case studies and outlines four high-level points for how algorithms can become transparent.

Questions

  1. Would you add anything more to the reverse engineering approaches discussed for the different case studies in the paper? Would you have done anything differently?
  2. If you were to investigate the power structures of an algorithm, which application/algorithm would you choose? What methods would you follow?
  3. Are there any interesting case studies that this paper misses?


04/15/2020 – Yuhang Liu – Algorithmic Accountability: Journalistic investigation of computational power structure

Summary: In this paper, the author observes that in modern society automated algorithms have become more and more important and gradually regulate all aspects of our lives, while the outline of their functions may still be difficult to grasp. It is therefore necessary to elucidate and articulate the algorithms’ power. The author proposes a new notion, “algorithmic accountability reporting,” a concept that can reveal how algorithms work and is well worth pursuing by computational journalists. The author explores methods such as transparency and reverse engineering and how they can be useful in elucidating algorithmic capabilities, analyzes the case studies of five journalists’ investigations of algorithms, and describes the challenges and opportunities they faced when working on algorithmic accountability. The paper’s main contributions are: (1) it proposes a theoretical lens of the various atomic algorithmic decisions, which raises major issues that can guide algorithm research and the development of algorithmic transparency policy; (2) it enables a preliminary evaluation and analysis of an algorithm through algorithmic accountability, including its various limitations. The author also discusses the challenges faced in adopting this reporting method, including human resources, legitimacy, and ethics, and looks ahead to how journalists themselves might practice transparency when using algorithms.

Reflection: I think the author has put forward a very innovative idea. This is also the first question that comes to my mind when I meet or use a new algorithm: what are the boundaries of this algorithm, and to what scope can it be applied? Take an insurance company’s pricing algorithm, for example. We all know that the insurance cost is generated based on a series of attributes, but people are often uncertain about the weight of each attribute in the algorithm, so they may have some doubts about the results and even consider some results immoral. Therefore, it is very important to study the capabilities and boundaries of an algorithm.

At the same time, the article mentions the concept of reverse engineering, that is, studying an algorithm by studying its inputs and outputs. However, some websites have mechanisms that make the algorithm dynamic, so we need other methods to handle this kind of problem. Still, once the input-output relationship of the black box is determined, the challenge becomes a data-driven search for news stories. I therefore think this approach is more about understanding whether there is an unreasonable situation in an algorithm, and whether the root cause of that situation is human intent, negligence, or people’s deep-rooted ideas. So, in some respects, I think exploring the borders of algorithms is exploring the morality of algorithms, and this article provides a framework for reviewing the morality of an algorithm. This method can effectively locate where an algorithm is unreasonable, and news reporters can use it to discover meaningful news.
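The input/output probing behind this kind of reverse engineering can be sketched in a few lines. As a hedged illustration (the `black_box` pricing function, its attribute names, and its coefficients are entirely hypothetical stand-ins for the opaque algorithm; a real investigation would query the live system):

```python
def black_box(features):
    """Hypothetical stand-in for the opaque algorithm under study,
    e.g. an insurance pricing model. A real audit would call the
    deployed system here instead."""
    return 500 + 80 * features["age_bracket"] + 40 * features["smoker"]

def probe(feature, values, baseline):
    """Hold all inputs fixed except one and record how the output
    moves -- the core move of input/output reverse engineering."""
    results = {}
    for v in values:
        trial = dict(baseline, **{feature: v})
        results[v] = black_box(trial)
    return results

baseline = {"age_bracket": 2, "smoker": 0}
print(probe("smoker", [0, 1], baseline))  # {0: 660, 1: 700}
```

Varying one attribute at a time like this is how a journalist could estimate the weight each attribute carries, even when the algorithm itself is a black box.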

In addition, I think the framework described in this article is a special form of human-computer interaction: people study the machine itself and come to understand how the algorithm operates through the machine’s feedback. This has broadened my understanding of human-computer interaction.

Questions:

  1. Do you think the framework mentioned in the paper can be used to detect the ethical issues of an algorithm?
  2. Can this approach be used in an automatic system for elucidating and articulating algorithms’ power?
  3. Is there any value in detecting algorithms’ power other than news value?


04/15/2020 – Yuhang Liu – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Summary:

This article discusses a system for fact-checking. First of all, the article proposes that fact-checking is very important, challenging, and time-sensitive. Usually, in this type of system, the human influence on the system is ignored, even though it is very important. Therefore, this article builds a mixed-initiative system for fact-checking that enables users to interact with ML predictions to complete challenging fact-checks. The authors designed an interface through which the user can see the sources behind a prediction. When users see the prediction results but are not satisfied, the system also allows them to use their own beliefs or inferences to override these predictions. Through this system, the authors conclude that when the model’s results are correct, the predictions have a very positive impact on people. However, people should not trust the model’s predictions too much; when users think a prediction is wrong, the result can be improved through interaction. This also indirectly reflects the importance of a transparent, interactive system for fact-checking.

Reflection:

When I saw the title of this article, I thought it might share a topic with my project, which uses crowdsourced workers to distinguish fake news, but as I read further I found that this is not the case. Still, I think it affirmed my thinking in some respects. First, fact-checking is a very challenging task, especially when real-time results are needed, so it is very necessary to rely on human power. Moreover, due to the lack of labeled data, if you try to complete the task directly through machine learning, in some cases the prediction results will point in a completely opposite direction. For example, in my project, both rumors and posts refuting rumors may be classified as rumors, so we need crowd workers to distinguish them.

Secondly, as for the project described in the article, I think its method is a very good direction. Human judgment is particularly important in this kind of system; improving accuracy through humans is the main idea of many human-computer interaction systems. The method in this article is a good start: in a transparent system, it lets people decide whether to override the predicted results. Not only does it avoid forcing people to participate in the system, it also gives people’s judgments significant weight in the predictions.

But at the same time, I think the system also has some of the limitations described in the article. For example, crowdsourcing workers’ motives and their own concerns may affect the results of the final system. So I think the article proposes a good direction, but it needs more careful research.

Questions:

  1. Do you think users can usually detect that a prediction is incorrect and override it when the system is wrong?
  2. What role does the transparency of the system play in the interaction?
  3. How can we prevent users from trusting the predictions too much in other human-computer interaction systems?


04/15/2020 – Bipasha Banerjee – Algorithmic Accountability

Summary 

The paper provides a perspective on algorithmic accountability through journalists’ eyes. The motivation of the paper is to detect how algorithms influence decisions in different cases. The author specifically investigates the area of computational journalism and how such journalists could use their power to “scrutinize” and uncover bias and other issues that current algorithms pose. He lists a few of the atomic decisions that algorithms make, each of which has the potential to affect an algorithm’s capability to be unbiased: prioritization, classification, association, and filtering. It is also mentioned that transparency is a key factor in building trust in an algorithm. The author then proceeds to discuss reverse engineering, the process by which computational journalists infer how an algorithm works from its inputs and outputs, providing examples from a few case studies. Finally, he points out the challenges the method poses in the present scenario.

Reflection

The paper gives a unique perspective on algorithmic bias from a computational journalist’s perspective. Most of the papers we read come either completely from the computational domain or from the human-in-the-loop perspective. Having journalists who are not directly involved in the matter is, in my opinion, brilliant. This is because journalists are trained to be unbiased. From the CS perspective, we tend to be “AI” lovers and want to defend the machine’s decision and consider it true. The humans using the system either blindly trust it or completely doubt it. Journalists, on the other hand, are always motivated to seek the truth, however unpleasant it might be. Having said that, I am intrigued to know the computational expertise level of the journalists, although having in-depth knowledge about AI systems might introduce a separate kind of bias. Nonetheless, this would be a valid experiment to conduct.

The challenges that the author mentions include ethics and legality, among others. These are challenges that are not normally discussed, and we on the computational side need to be aware of them. The “legal ramifications” could be enormous if we end up using unauthorized data to train a model and then publish the results.

I agree with the author that transparency indeed helps bolster confidence in an algorithm. However, I also agree that it is difficult for companies to be transparent in the modern digital competitive era. It would be difficult for companies to take the risk and make all the decisions public. I believe there might be a middle ground for companies; they could publish part of the algorithmic decisions like the features they use and let the users know what data is being used. This might help improve trust. For example, Facebook could publish the reasons why they recommend a particular post, etc.

Questions

  1. Although the paper talks about using computational journalism, how in-depth is the computational knowledge of such people? 
  2. Is there a way for an algorithm to be transparent, yet the company not lose its competitive edge?
  3. Have you considered the “legal and ethical” aspects of your course project? I am curious about the data and models being used.


04/15/2020 – Bipasha Banerjee – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Summary

The paper emphasizes the importance of a mixed-initiative model for fact-checking. It points out the advantages of humans and machines working closely together to verify the veracity of facts. The paper’s main aim with the mixed-initiative approach was to make the system, especially the user interface, more transparent. The UI presents a claim to the user along with a list of articles related to the statement. The paper also describes the prediction models that were used to create the UI experience. Finally, the authors conducted three experiments using crowd workers who had to predict the correctness of claims presented to them. In the first experiment, the users were shown the results page without a prediction of the truthfulness of the claim. Users were subsequently divided into two subgroups, where one group was given slightly more information. In the second experiment, the crowdworkers were presented with an interactive UI. They, too, were further divided into two subgroups, with one group having the power to change the initial predictions. The third experiment was a gamified version of the previous experiment. The authors concluded that human-AI collaboration could be useful, although the experiments brought to light some contradictory findings.

Reflection

I agree with the author’s approach that the transparency of a system leads to the confidence of the user using a particular system. My favorite thing about the paper is that the authors describe the systems very well. They do a very good job of describing the AI models as well as the UI design and give a good explanation to their decisions. I also enjoyed reading about the experiments that they conducted with the crowdworkers. I had a slight doubt about how the project handled latency, especially when the related articles were presented to the workers in real-time.

I also liked how the experiments were conducted in subgroups, with one group having information not presented to the other. This shows that a lot of use cases were considered when the experiments took place. I agree with most of the limitations the authors wrote about. I particularly agree that if the veracity of predictions is shown to users, there is a high chance of it influencing people. We as humans have a tendency to believe machines and their predictions blindly.

I would also want to see the work performed on another dataset. Additionally, if the crowdworkers have knowledge of the domain under discussion, how does that affect performance? Having domain knowledge would surely improve assessment of a statement’s claim; nonetheless, an experiment might help determine to what extent. A potential use case could be researchers reading claims from research papers in their domain and assessing their correctness.

Questions

  1. How would you implement such systems in your course project?
  2. Can you think of other applications of such systems?
  3. Is there any latency when the user is presented with the associated articles?
  4. How would the veracity claim system extend to other domains (not news based)? How would it perform on other datasets? 
  5. Would crowdworkers experienced in a domain perform better? The answer is likely yes, but by how much? And how can this help improve targeted systems (research paper acceptance, etc.)?


04/15/20 – Jooyoung Whang – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

In this paper, the authors state that current fully automatic fact-checking systems fall short for three reasons: lack of model transparency, failure to take world facts into consideration, and poor communication of model uncertainty. So, the authors built a system that includes humans in the loop. Their proposed system uses two classifiers: one predicts the reliability of a document supporting a claim, and the other the veracity the document lends to the claim. Using these weighted classifications, the confidence of the system’s prediction about a claim is shown to the user. The users can further steer the system by modifying its weights. The authors conducted a user study of their system with Mturk workers. They found that their approach was effective, but also noted that too much information or misleading predictions can lead to big user errors.
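The weighted-classification idea can be made concrete with a small sketch. This is not the paper's actual model: the scoring scheme, field names, and default weights below are invented for illustration; it only shows how per-source classifier outputs might fold into one user-adjustable confidence score:

```python
def claim_confidence(sources, reliability_weight=0.5, support_weight=0.5):
    """Fold per-source classifier outputs into one claim-level confidence.

    Each source carries two scores in [0, 1]: how reliable the source
    looks, and how strongly it supports the claim. The two weights are
    the knobs a mixed-initiative interface could expose to the user."""
    if not sources:
        return 0.5  # no evidence: maximally uncertain
    scored = [
        reliability_weight * s["reliability"] + support_weight * s["support"]
        for s in sources
    ]
    return sum(scored) / len(scored)

sources = [
    {"reliability": 0.9, "support": 0.8},  # strong, agreeing source
    {"reliability": 0.4, "support": 0.2},  # weak, disagreeing source
]
print(claim_confidence(sources))                # 0.575
print(claim_confidence(sources, 0.2, 0.8))      # 0.53
```

Nudging the weights shifts the verdict, which mirrors how users in the study could steer the system toward or away from its default prediction.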

First off, it was hilarious that the authors cited Wikipedia to introduce Information Literacy in a paper about evaluating information. I personally took it as a subtle joke left by the authors. However, it also led me to a question about the system. If I did not miss it, the authors did not explain where the relevant sources or articles came from that supported a claim. I was a little concerned if some of the articles used in the study were not reliable sources.

Also, the authors conducted the user study with their own predefined set of claims. While I understand this was needed for an efficient study, I wanted to know how the system would work in the wild. If a user searched a claim that he or she knows is true, would the system agree with high confidence? If not, would the user have been able to correct the system using the interface? It seemed that some portion of the users were confused, especially with the error-correction part of the system. These questions would have been valuable to answer and would seriously need to be addressed if the system were to become a commercial product.

These are the questions that I had while reading the paper:

1. How much user intervention do you think is enough for these kinds of systems? I personally think if the users are given too much power over the system, users will apply their bias to the correction and get false positives.

2. What would be a good way for the system to only retrieve ‘reliable’ sources to reference? Stating that a claim is true based on a Wikipedia article would obviously not be so assuring. Also, academic papers cannot address all claims, especially if they are social claims. What would be a good threshold? How could this be detected?

3. Given the current system, would you believe the results that the system gives? Do you think the system addresses the three requirements that the authors introduced which all fact-checking systems should possess? I personally think that system transparency is still lacking. The system shows a lot about what kind of sources it used and how much weight it’s putting into them, but it does not really explain how it made the decision.


04/15/20 – Ziyao Wang – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

The authors focused on fact-checking, the task of assessing the veracity of claims, and proposed a mixed-initiative approach to it. Their system combines human knowledge and experience with AI’s efficiency and scalability in information retrieval. They argue that if we want to use fact-checking models practically, the models should be transparent, should support integrating user knowledge, and should quantify and communicate model uncertainty. Following these principles, they developed their mixed-initiative system and ran experiments with participants from MTurk. They found that the system can help humans when it gives correct predictions and can be harmful when it gives wrong ones, and that the interaction between participants and the system was not as effective as expected. They also found that turning the tasks into games did not improve users’ performance. In conclusion, users are inclined to trust models and may be led by them to make wrong choices; for this reason, transparent models are important in mixed-initiative systems.

Reflection:

I have tried the system mentioned in the paper. It is quite interesting. However, the first time I used it, I was confused about what I should do. Though the interface is similar to Google.com and I was quite sure I should type something into the text box, there are limited instructions about what to type, how the system works, and what to do after searching for a claim. Also, after I searched for a claim, the results page was still confusing. I understand the developers want to show me some findings about the claim and provide the system’s prediction, but I was still confused about what to do next, and some of the returned results were not related to the claim I typed.

After several uses, I became familiar with the system, and it does help me judge whether a claim is correct. I agree with the authors that some of the feedback about not being able to interact with the system properly comes from users’ unfamiliarity with it. But beyond this, the authors should provide more instructions so that users can become familiar with the system quickly. I think this is related to the transparency of the system and may raise users’ trust.

Another issue I found during use is that there is no wording like “these results should only be used as a reference; you should make the judgement with your own mind,” or similar explanations. I think this may be one reason the users’ error rate increased significantly when the system made wrong predictions. Participants may change their own minds when they see that the prediction differs from their own conclusion, because they know little about the system and may assume it is more likely to be correct. If the system were more transparent to users, they might provide more correct answers to the claims.

Questions:

How can we help participants make correct judgements when the system provides wrong predictions?

What kinds of instructions should be added so that participants can become familiar with the system more quickly?

Can this system be used in areas other than fact-checking?


04/15/2020 – Ziyao Wang – Algorithmic accountability

In this report, the author studied how algorithms exert power and why they are worthy of scrutiny by computational journalists. He used methods such as transparency and reverse engineering to analyze algorithms, and examined four kinds of atomic decisions (prioritization, classification, association, and filtering) to assess algorithmic power. For the reverse engineering part, he analyzed numerous real-world cases and presented a scenario of reverse engineering that considers both inputs and outputs, covering the variable observability of I/O relationships and the identification, sampling, and discovery of newsworthy stories about algorithms. Finally, the author discussed the challenges that algorithmic accountability reporting may face in the future, and proposed that transparency can effectively push newsroom algorithms to adopt journalistic norms.

Reflections:

I am really interested in the reverse engineering part of this report. The author collects different cases of researchers reverse engineering algorithms, and it is quite exciting to understand the opportunities and limitations of the reverse engineering approach to investigating algorithms. Reverse engineering is valuable for explaining how algorithms work and for finding their limitations. As many deployed algorithms and models are trained using unsupervised learning or deep learning, it is hard for us to understand and explain them; we can only evaluate them with metrics like recall or precision. But with reverse engineering, we can learn how the algorithms work and modify them to avoid limitations and potential discrimination. However, I think there may be some ethical issues in reverse engineering. Bad actors who reverse engineer an application can steal the ideas in it, or bypass its security system by exploiting the weaknesses they find.
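One concrete way reverse engineering can surface the potential discrimination mentioned above is to sample the input space and compare a black box's outputs across groups. A minimal sketch, where the scoring function and group labels are hypothetical examples, not any real system:

```python
def group_averages(model, inputs, group_key):
    """Average a black-box model's output per group. Large gaps between
    groups are leads worth investigating, not proof of bias by themselves."""
    totals, counts = {}, {}
    for x in inputs:
        g = x[group_key]
        totals[g] = totals.get(g, 0.0) + model(x)
        counts[g] = counts.get(g, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

# Hypothetical scorer that quietly penalizes group "B".
score = lambda x: 100 - 10 * (x["group"] == "B")
samples = [{"group": "A"}, {"group": "A"}, {"group": "B"}]
print(group_averages(score, samples, "group"))  # {'A': 100.0, 'B': 90.0}
```

A journalist running this kind of comparison would still need to rule out confounding factors before calling the gap a story, which is exactly the "data-driven search for news stories" the report describes.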

As for algorithmic transparency, I realize I paid little attention to this principle before; I used to consider only whether the algorithm works or not. After reading this report, I feel that algorithmic transparency is an important aspect of system building and maintenance. Instead of leaving researchers to find the limitations of systems through reverse engineering, it is better to make parts of the algorithms, their uses, and some other data public. On one hand, this raises public trust in the system thanks to its transparency. On the other hand, experts from outside the company or organization can contribute to improving and securing the system. However, transparency is currently far from a complete solution to balancing algorithmic power. Beyond the author’s idea that researchers can apply reverse engineering to analyze systems, I think both corporations and governments should pay more attention to the transparency of algorithms.

Questions:

After reading the report, I am still confused about how to find the story behind the input-output relationship. How can we figure out how an algorithm operates from an input-output map?

How can we prevent crackers from using reverse engineering to carry out attacks?

Apart from journalists, which groups of people should also employ reverse engineering to analyze systems?


04/15/20 – Jooyoung Whang – What’s at Stake: Characterizing Risk Perceptions of Emerging Technologies

In this paper, the authors conduct a survey with a listing of known technological risks, asking the participants to rate the severity of each risk. The authors state that their research is an extension of prior work done in the 1980s. The survey was administered to both experts and non-experts, with experts recruited from Twitter and non-experts from Mturk. From the old work and their own, the authors found that people tend to rate voluntary risks low even if in reality they are high. They also found that many emerging technological risks were regarded as involuntary, and that non-experts tended to underestimate the risks of new technologies. The authors also introduce a risk-sensitive design based on their findings: they show a risk-perception graph that can be used to decide whether a proposed technology is perceived by non-experts as being as risky as experts think or whether its risks are underestimated, and whether the design is acceptable.

This paper nicely captures the user characteristics of technical risk perception. I liked that the paper did not stop at explaining the results but went further to propose a tool for technical designers. However, it was a little unclear to me how to use the tool. The risk-perception graph that the authors show only has “low” and “high” as axis labels, which are very subjective terms. A way to quantify risk perception would have served nicely.

This paper also made me wonder what the point of providing terms of use for a product is if users still feel they have been involuntarily exposed to risk. I feel a better representation is needed. For example, a short summary outlining the most important risks in a sentence or two, with details in a separate link, would be more effective than throwing a wall of text at a (most likely) non-technical user.

I also think a way to address the gap of risk perception between designers and users is to involve users in the development process in the first place. I am unsure of the exact term, but I recall learning about the term users-in-the-loop development cycle from a UX class. This development method allows designers to fix user problems early in the process and end up with higher quality products. I feel it would also inform the designers more about potential risks.

These are the questions that I had while reading the paper:

1. What are some disasters that may happen due to the gap in risk perception between users and designers of a system? Would any additional risks occur due to this gap?

2. What would be a good way to reduce the gap in risk perception? Do you think using the risk-perception graph from the paper is useful for addressing this gap? How would you measure the risk?

3. Would you use the authors’ proposed risk-sensitive design approach in your project? What kind of risks do you expect from your project? Are they technical issues and do you think your users will underestimate the risk?


04/15/2020 – Myles Frantz – Algorithmic accountability

Summary

With the prevalence of technology, the mainstream programs that drive it dictate not only technology’s impact but also the direction of news media and people’s opinions. As journalists turn to various outlets and adapt to the efficiency created by technology, the technology they use may introduce bias based on its internal sources or optimizations, and therefore introduce bias into their stories. The paper measures algorithms against four categories of decisions: prioritization, classification, association, and filtering. Using a combination of these categories, it examines, among other cases, how autocomplete features bias users’ opinions. It also notes that popular search engines like Google specifically tailor results based on what the user has previously searched. For a normal user this makes sense; however, for an investigative journalist these results may not accurately represent a source of truth.

Reflection

As noted in the paper, there is a strong tension around the transparency of an algorithm. These transparency discrepancies may be due to government concerns over certain secrets. This creates a strong sense of distrust toward the use of certain algorithms. Though these secrets are claimed to protect national security, the term may be misused or overstretched for personal or political gain rather than correctly applied. Such acts may occur at any level of government, from the lowest of actors to the highest of ranks.

One of the key discussion points raised to fix this potential bias in independent research is to better teach journalists how to use computer systems. This may merely bridge journalists to a new medium they are not familiar with; it could also be seen as an attempt to give journalists an aid for better understanding a truly fragmented news system.

Questions

  • Do you think introducing journalists to a computer science program would extend their capabilities, or would it only further channel their ideas while potentially removing certain creativity?
  • Since there is a kind of monopolization throughout the software ecosystem, do you believe people are “forced” to use technologies that tailor their results?
  • Given how much technology uses personal information with potential for misuse, do you agree with this practice being introduced with only a small disclaimer acknowledging the potential bias?
  • There are many services that offer to clean your internet trail and clear the biases internet services cache, in order to ensure faster and more tailored search results. Have you personally used any of these programs or step-by-step guides to clean your internet footprint?
  • Many programs capture and record user activity, with a small disclaimer detailing how the data is used. Many users likely do not read these for various reasons. Do you think that if normal consumers of technology saw how corrective and auto-biasing the results could be, they would continue using the services?
