04/15/2020 – Palakh Mignonne Jude – What’s at Stake: Characterizing Risk Perceptions of Emerging Technologies

SUMMARY

The authors of this paper adapt a survey instrument from existing risk perception literature to analyze the perception of risk surrounding newer, emerging data-driven technologies. The authors surveyed 175 participants (26 experts and 149 non-experts), categorizing an ‘expert’ as anyone working in a technical role or earning a degree in a computing field. Inspired by the original 1980s paper ‘Facts and Fears: Understanding Perceived Risk’, the authors consider 18 risks (15 new risks and 3 from the original paper). The 15 new risks include ‘biased algorithms for filtering job candidates’, ‘filter bubbles’, and ‘job loss from automation’. The authors also consider 6 psychological factors in the study. The non-experts (as well as a few participants who were later classified as ‘experts’) were recruited using MTurk. The authors borrowed quantitative measures from the original paper and added two new open-response questions – one asking participants to describe the worst-case scenario for their top three risks, and one asking them to name any additional serious risks to society. Based on their survey results, the authors also propose a risk-sensitive design approach.

REFLECTION

I found this study to be very interesting and liked that the authors adapted the survey from existing risk perception literature. The motivation of the paper reminded me of the New York Times article ‘Twelve Million Phones, One Dataset, Zero Privacy’ and the long-term implications of such data collection for user privacy.

I found it interesting to learn that the survey results indicated that both experts and non-experts rated nearly all risks related to emerging technologies as characteristically involuntary. It was also interesting that, despite the consent processes built into software and web services, the corresponding risks were not perceived as voluntary. I thought it was good that the authors included the open-response question on what users perceived as the worst-case scenario for the top three riskiest technologies, and I liked that they provided some explanation for their survey results.

The authors mention that technologists should allow more discussion around data practices and be willing to hold off on rolling out new features that raise more concern than excitement. However, this made me wonder whether any technology companies would be willing to do so. It would probably add overhead, and the results may not be perceived by the company to be worth the time and effort that such evaluations would entail.

QUESTIONS

  1. In addition to the 15 new risks added by the authors for the survey, are there any more risks that should have been included? Are there any that needed to be removed or modified from the list? Are there any new psychological factors that should have been added?
  2. As indicated by the authors, there are gaps in the general public’s understanding. The authors suggest that educating the public would close this gap more easily than making the technology less risky. What is the best way to educate the public in such scenarios? What design principles should be kept in mind?
  3. Have any follow-up studies been conducted to identify ‘where’ the acceptable marginal perceived risk line should be drawn on the ‘Risk Perception Curve’ introduced in the paper?  


04/15/20 – Lee Lisle – Believe it or Not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Summary

Nguyen et al.’s paper discusses the rise of misinformation and the need to combat it via tools that can verify claims while also maintaining users’ trust in the tool. They designed an algorithm that finds sources relevant to a given claim to determine whether or not the claim is accurate, weighting the sources by reputation. They then ran 3 studies (with over 100 participants in each) in which users could interact with the tool and change settings (such as source weighting) in order to evaluate the design. The first study found that the participants trusted the system too much – when it was wrong, they tended to be inaccurate, and when it was right, they were typically more correct. The second study allowed participants to change the inputs and inject their own expertise into the scenario; it found that the sliders did not significantly impact performance. The third study focused on gamification of the interface and found no significant difference.

Personal Reflection

I enjoyed this paper from a 50,000-foot perspective, as the authors tested many different interaction types and found what could be considered negative results. I think papers showing that not all work is necessarily beneficial have a certain extra relevance – they certainly show that there’s more at work than just novelty.

I especially appreciated the study on the effectiveness of gamification. Often, the prevailing theory is that gamification increases user engagement and the tool’s effectiveness. While the paper is not conclusive that gamification cannot do this, it certainly lends credence to the thought that gamification is not a cure-all.

However, I took some slight issue with their AI design. In particular, the AI determined that the phrase “Tiger Woods” indicated a supportive position. While their stance was that AIs are flawed (true), I felt that this error was quite a bit worse than we can expect from typical AIs, especially ones being tweaked to avoid these scenarios. I would have liked to see experiments 2 and 3 improved with a better AI, as it does not seem like they cross-compared the studies anyway.

Questions

  1. Does the interface design, which includes sliders to adjust source reputation and user agreement on the fly, seem like a good idea? Why or why not?
  2. What do you think about the attention check and its apparent failure to screen participants accurately? Should they have removed the participants who answered this check incorrectly?
  3. Should the study have included a pre-test to determine how the participants’ world view may have affected the likelihood of them agreeing with certain claims? That is, should they have checked whether the participants were impartial or tended to agree with a certain world view? Why or why not?
  4. What benefit do you think the third study brought to the paper? Was gamification proven to be ineffectual, or is it a design tool that sometimes doesn’t work?


04/15/2020 – Vikram Mohanty – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Authors: An T. Nguyen, Aditya Kharosekar, Saumyaa Krishnan, Siddhesh Krishnan, Elizabeth Tate, Byron C. Wallace, and Matthew Lease

Summary

This paper proposes a mixed-initiative approach to fact-checking that combines human and machine intelligence. The system automatically finds and retrieves relevant articles from a variety of sources. It then infers the degree to which each article supports or refutes the claim, as well as the reputation of each source. Finally, the system aggregates this body of evidence to predict the veracity of the claim. Users can adjust the source reputation and stance of each retrieved article to reflect their own beliefs and/or correct any errors they perceive, which in turn updates the AI model. The paper evaluates this approach through a user study on Mechanical Turk.
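
For concreteness, here is a minimal sketch of how such a reputation-weighted aggregation could work. The stance scores, reputation weights, and function names below are illustrative assumptions, not the paper’s actual models:

```python
# Hypothetical sketch of reputation-weighted claim-veracity aggregation.
# The stance scores and source reputations are made-up values, not the
# paper's actual model outputs.

def predict_veracity(evidence):
    """evidence: list of (stance, reputation) pairs, where stance is in
    [-1, 1] (-1 = refutes the claim, +1 = supports it) and reputation is
    a non-negative source weight."""
    total_weight = sum(rep for _, rep in evidence)
    if total_weight == 0:
        return 0.0, 0.0  # no usable evidence
    score = sum(stance * rep for stance, rep in evidence) / total_weight
    confidence = abs(score)  # crude proxy: stronger agreement => higher confidence
    return score, confidence

# Example: two supporting articles from reputable sources and one refuting
# article from a low-reputation source.
evidence = [(+0.8, 0.9), (+0.6, 0.7), (-0.9, 0.2)]
score, confidence = predict_veracity(evidence)
print(f"veracity score = {score:+.2f}, confidence = {confidence:.2f}")
```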

Reflection

This paper, in my opinion, succeeds as a nice implementation of all the design ideas we have been discussing in class for mixed-initiative systems. It factors in user input, combines it with an AI model’s output, and gives users a layer of transparency into how the AI makes its decision. However, fact-checking, as a topic, is too complex to be solved by a simplistic single-user prototype. So, I view this paper as opening doors for future mixed-initiative systems that rely on similar design principles but also factor in the complexities of fact-checking (which may require multiple opinions, user-user collaboration, etc.).

Therefore, for me, this paper contributes an interesting concept in the form of a mixed-initiative prototype, but beyond that, I think it falls short of making clear who the intended users are (end-users or journalists) or what scenario it is designed for. The evaluation with Turkers seemed to indicate that anyone can use it, which opens up the possibility of easily creating individual echo chambers and, essentially, making the current news-consumption landscape worse.

The results also showed the possibility of the AI biasing users when it is wrong, which a future design would have to factor in. One of the users felt overwhelmed because there was a lot going on in the interface, so a future system also needs to address the issue of information overload.

The authors, however, did a great job discussing the potential misuse and some of the limitations in detail. Going forward, I would love to see this work form the basis for a more complex socio-technical system that allows for nuanced input from multiple users, interaction with a fact-checking AI model that can improve over time, and a longitudinal evaluation with journalists and end-users on actual dynamic data. The paper, despite the flaws arising from the topic’s complexity, succeeds in demonstrating human-AI interaction design principles.

Questions

  1. What are some of the positive takeaways from the paper?
  2. Did you feel that fact-checking, as a topic, was addressed in a very simple manner, and deserves more complex approaches?
  3. How would you build a future system on top of this approach?
  4. Can a similar idea be extended for social media posts (instead of news articles)? How would this work (or not work)?


04/15/2020 – Bipasha Banerjee – Algorithmic Accountability

Summary 

The paper provides a perspective on algorithmic accountability through journalists’ eyes. Its motivation is to detect how algorithms influence decisions in different settings. The author specifically investigates the area of computational journalism and how such journalists could use their power to “scrutinize” algorithms to uncover bias and other issues. He lists the kinds of decisions that algorithms make and that have the potential to affect an algorithm’s capability to be unbiased: prioritization, classification, association, and filtering. He also notes that transparency is a key factor in building trust in an algorithm. The author then discusses reverse engineering through a few case studies; reverse engineering is described in the paper as a way for computational journalists to probe an algorithm by studying its inputs and outputs. Finally, he points out the challenges the method faces in practice.

Reflection

The paper gives a unique perspective on algorithmic bias from a computational journalist’s point of view. Most of the papers we read come either entirely from the computational domain or from the human-in-the-loop perspective. Having journalists who are not directly involved in the matter is, in my opinion, brilliant, because journalists are trained to be unbiased. From the CS perspective, we tend to be “AI” lovers who want to defend the machine’s decision and consider it true. The humans using the system either blindly trust it or completely doubt it. Journalists, on the other hand, are always motivated to seek the truth, however unpleasant it might be. Having said that, I am intrigued to know the computational expertise level of these journalists, although having in-depth knowledge of AI systems might itself introduce a separate kind of bias. Nonetheless, this would be a valid experiment to conduct.

The challenges that the author mentions include ethics and legality, among others. These are challenges that are not normally discussed, and we on the computational side need to be aware of them. The “legal ramifications” could be enormous if we train a model on data we are not authorized to use and then publish the results.

I agree with the author that transparency indeed helps bolster confidence in an algorithm. However, I also agree that it is difficult for companies to be transparent in the modern, competitive digital era; it would be difficult for them to take the risk of making all their decisions public. I believe there might be a middle ground: companies could publish part of their algorithmic decisions, such as the features they use, and let users know what data is being used. This might help improve trust. For example, Facebook could publish the reasons why it recommends a particular post.

Questions

  1. Although the paper talks about using computational journalism, how in-depth is the computational knowledge of such people? 
  2. Is there a way for an algorithm to be transparent, yet the company not lose its competitive edge?
  3. Have you considered the “legal and ethical” aspects of your course project? I am curious about the data and models being used.


04/15/2020 – Bipasha Banerjee – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Summary

The paper emphasizes the importance of a mixed-initiative model for fact-checking and points out the advantages of humans and machines working closely together to verify the veracity of facts. The main aim of the mixed-initiative approach is to make the system, especially the user interface, more transparent. The UI presents a claim to the user along with a list of articles related to the statement, and the paper describes the prediction models used to create this UI experience. Finally, the authors conducted three experiments using crowd workers who had to predict the correctness of claims presented to them. In the first experiment, users were shown the results page without the prediction of the truthfulness of the claim; they were divided into two subgroups, where one group was given slightly more information. In the second experiment, the crowdworkers were presented with an interactive UI; they, too, were divided into two subgroups, with one group having the power to change the initial predictions. The third experiment was a gamified version of the previous one. The authors concluded that human-AI collaboration can be useful, although the experiments brought to light some contradictory findings.

Reflection

I agree with the authors’ point that the transparency of a system increases users’ confidence in it. My favorite thing about the paper is that the authors describe the system very well: they do a very good job of describing the AI models as well as the UI design and give good explanations for their decisions. I also enjoyed reading about the experiments they conducted with the crowdworkers. I had a slight doubt about how the project handled latency, especially when the related articles were presented to the workers in real time.

I also liked how the experiments were conducted in subgroups, with one group having information not presented to the other. This shows that a lot of use cases were considered when the experiments were designed. I agree with most of the limitations the authors describe. I particularly agree that if the system’s veracity predictions are shown to users, there is a high chance of influencing them; we as humans have a tendency to believe machines and their predictions blindly.

I would also want to see the work performed on another dataset. Additionally, if the crowdworkers have knowledge about the domain under discussion, how does that affect performance? Domain knowledge would almost certainly improve the assessment of a statement’s claim; nonetheless, this might help determine to what extent. A potential use case could be researchers reading claims from research papers in their domain and assessing their correctness.

Questions

  1. How would you implement such systems in your course project?
  2. Can you think of other applications of such systems?
  3. Is there any latency when the user is presented with the associated articles?
  4. How would the veracity claim system extend to other domains (not news based)? How would it perform on other datasets? 
  5. Would crowdworkers experienced in a given domain perform better? The answer is likely yes, but by how much? And how could this help improve targeted systems (research paper acceptance, etc.)?


04/15/2020 – Subil Abraham – Nguyen et al., “Believe it or not”

In today’s era of fake news, where new information is constantly spawning everywhere, the importance of fact checking cannot be overstated. The public has a right to remain informed and to be able to obtain true information from accurate, reputable sources. But all too often, people are inundated with information, and the cognitive load of fact checking everything would be overwhelming. Automated fact checking has made strides, but previous work has focused primarily on model accuracy and not on the people who need to use the models. This paper is the first to study an interface for humans to use a fact checking tool. The tool is pretrained on the Emergent dataset of annotated articles and sources and uses two models: one that predicts an article’s stance on a claim and another that calculates the accuracy of the claim based on the reputation of the sources. The application works by taking a claim and retrieving articles that talk about it. It uses the stance model to classify whether the articles are for or against the given claim, and then predicts the claim’s accuracy based on the collective reputation of its sources. It conveys that its models are not perfectly accurate and provides confidence levels for its predictions. It also provides sliders for the human verifiers to adjust the predicted stance of the articles and the source reputations according to their beliefs or new information. The authors run three experiments to test the efficacy of the tool for human fact checkers. They find that users tend to trust the system, which can be problematic when the system is inaccurate.
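
As a rough illustration of the slider interaction described above, the following sketch re-aggregates a claim prediction after a user overrides one article’s stance and one source’s reputation. The values, field names, and aggregation rule are hypothetical, not taken from the paper’s implementation:

```python
# Hypothetical sketch of the slider interaction: a user overrides the
# predicted stance of one article and the reputation of one source, and
# the claim prediction is re-aggregated. All values are illustrative.

def aggregate(articles):
    """articles: list of dicts with 'stance' in [-1, 1] and 'reputation' >= 0."""
    total = sum(a["reputation"] for a in articles)
    return sum(a["stance"] * a["reputation"] for a in articles) / total if total else 0.0

articles = [
    {"source": "outlet_a", "stance": +0.7, "reputation": 0.8},
    {"source": "outlet_b", "stance": +0.4, "reputation": 0.5},
    {"source": "outlet_c", "stance": -0.6, "reputation": 0.3},
]
print("model-only prediction:", round(aggregate(articles), 2))

# The user distrusts outlet_a and moves its reputation slider down,
# then corrects outlet_c's stance after reading the article.
articles[0]["reputation"] = 0.2
articles[2]["stance"] = -0.9
print("after user adjustments:", round(aggregate(articles), 2))
```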

I find it interesting that, in the first experiment, the System group’s error rate somewhat follows the stance classifier’s error rate. The crowd workers are probably not going through the process of independently verifying the stance of the articles and simply trust the predicted stance they are shown. Potentially this could be mitigated by adding incentives (like extra reward) to have them actually read the articles in full. But on the flip side, we can see that their accuracy (supposedly) becomes better when they are given the sliders to modify the stances and reputations. Maybe that interactivity was the clue they needed to understand that the predicted values aren’t set in stone and could potentially be inaccurate. Though I find it strange that the Slider group in the second experiment did not adjust the sliders if they were questioning the sources. What I find even stranger is that the authors decided to keep the claim that allowing users to use the sliders made them more accurate; this claim is what most readers would take away unless they were carefully reading the experiments and the caveats. And I don’t like that they kept the second experiment’s results despite them not showing any useful signal.

Ultimately, I don’t buy into their push that this tool is useful for the general user as it stands now. I also don’t really see how this tool could serve as a technological mediator for people with opposing views, at least not the way they described it. I find that it could serve as a useful automation tool for expert fact checkers as part of their work, but not for the ordinary user, which is what they model by using crowdworkers. I like the ideas the paper is going for – automated fact checking that helps the ordinary user – and I’m glad they acknowledge the drawbacks, but I think there are too many drawbacks that prevent me from fully buying into the claims of this paper. It’s poetic that I have my doubts about the claims of a paper describing a system that asks you to question claims.

  1. Do you think this tool would actually be useful in the hands of an ordinary user? Or would it serve better in the hands of an expert fact checker?
  2. What would you like to see added to the interface, in addition to what they already have?
  3. This is a larger question, but is there value in having the transparency of the machine learning models in the way they have done it (by having sliders you can manipulate to see the final value change)? How much detail is too much? And for more complex models where you can’t have that instantaneous feedback (like style transfer), how do you provide explainability?
  4. Do you find the experiments rigorous enough and conclusions significant enough to back up the claims they are making?


04/15/2020 – Subil Abraham – Diakopoulos, “Algorithmic accountability”

Algorithms have pervaded our everyday lives because computers have become essential to them. Their pervasiveness also means they need to be closely scrutinized to ensure they are functioning as they should, without bias, and obeying the guarantees their creators have promised. Algorithmic accountability is a category of journalism in which journalists investigate these algorithms to validate their claims and find any violations. The goal is to find mistakes, omissions, or bias creeping into the algorithms, because even though computers do exactly what they’re told, they are still created by humans with blind spots. The author classifies the four kinds of decisions that algorithmic decision-making falls under. He argues that transparency alone is not enough, because full transparency can often be blocked by trade-secret claims. He relies on the idea of reverse engineering, putting in inputs and observing the outputs without looking at the inner workings, because journalists are often dealing with black-box algorithms. He looks at five case studies of journalists who have done such investigations with reverse engineering, and puts forward a theory and a methodology for finding newsworthy stories in this space.
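
A minimal sketch of that input/output probing idea, using a made-up black-box decision function (not any real system from the case studies): vary one attribute at a time while holding the rest fixed and observe how the decision changes.

```python
# Hypothetical sketch of input/output auditing: probe a black-box decision
# function with inputs that differ in a single attribute and record how the
# output changes. The black box is a stand-in, not any real system.

def black_box(applicant):
    # Stand-in for an opaque scoring algorithm under investigation.
    score = 0.5 * applicant["experience"] + 0.3 * applicant["education"]
    if applicant["zip_code"] in {"22201", "24060"}:  # hidden, possibly unfair rule
        score -= 1.0
    return score >= 2.0

# Vary one attribute (zip_code) while holding the others fixed.
base = {"experience": 4, "education": 2}
for zip_code in ["10001", "22201", "24060", "94105"]:
    decision = black_box({**base, "zip_code": zip_code})
    print(zip_code, "->", "accepted" if decision else "rejected")
```

Differences in output that track only the varied attribute are what would make the story newsworthy; in practice the hard parts are choosing which inputs to sample and knowing whether the observed input-output mapping generalizes.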

This paper is a very interesting look at how algorithms function in our lives from a non-CS/HCI perspective: it comes from journalism and examines the painstaking way journalists investigate these algorithms. Though not the focus, this work also brings to light the incredible roadblocks that come with investigating proprietary software, especially software from large, secretive companies that would leverage laws and expensive lawyers to fight such investigations if they are not in their favor. In an ideal world, everyone would have integrity and would disclose all the flaws in their algorithms, but that’s unfortunately not the case, which is why the work these journalists are doing is important, especially when they don’t have easy access to the algorithms they’re investigating and sometimes don’t have access to the right inputs. There is a danger here that a journalist could end up being discredited because they did the best investigation they could with the limited resources they had, only for the PR team of the company being investigated to latch on to a poor assumption or two to discredit the otherwise good work. The difficulty of performing these investigations, especially for journalists who may not have prior training or experience with computers, exemplifies the need for at least some computer science education for everyone, so that people can better understand the systems they’re dealing with and have a better handle on running investigations as algorithms pervade ever more of our lives.

  1. Do you think some of the laws in place that allow companies to obfuscate their algorithms should be relaxed to allow easier investigation?
  2. Do you think current journalistic protections are enough for journalists investigating these algorithms?
  3. What kind of tools or training can be given to journalists to make it easier for them to navigate this world of investigating algorithms?


04/15/20 – Jooyoung Whang – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

In this paper, the authors state that current fully automatic fact-checking systems fall short in three respects: model transparency, taking world facts into consideration, and communication of model uncertainty. So the authors built a system that includes humans in the loop. Their proposed system uses two classifiers: one predicts the stance of a retrieved document toward a claim, and the other predicts the veracity of the claim based on the reputations of its sources. Using these weighted classifications, the confidence of the system’s prediction about a claim is shown to the user, and users can further manipulate the system by modifying its weights. The authors conducted a user study of their system with MTurk workers. They found that their approach was effective, but also noted that too much information or misleading predictions can lead to large user errors.

First off, it was hilarious that the authors cited Wikipedia to introduce information literacy in a paper about evaluating information. I personally took it as a subtle joke left by the authors. However, it also led me to a question about the system: unless I missed it, the authors did not explain where the relevant sources or articles supporting a claim came from. I was a little concerned that some of the articles used in the study might not have been reliable sources.

Also, the authors conducted the user study using their own predefined set of claims. While I understand this was needed for an efficient study, I wanted to know how the system would work in the wild. If a user searched a claim that he or she knows is true, would the system agree with high confidence? If not, would the user have been able to correct the system using the interface? It seemed that some portion of the users were confused, especially with the error-correction part of the system. I think these questions would have been valuable to answer and would need to be seriously addressed if the system were to become a commercial product.

These are the questions that I had while reading the paper:

1. How much user intervention do you think is enough for these kinds of systems? I personally think that if users are given too much power over the system, they will apply their biases to the corrections and get false positives.

2. What would be a good way for the system to retrieve only ‘reliable’ sources to reference? Stating that a claim is true based on a Wikipedia article would obviously not be very reassuring. Also, academic papers cannot address all claims, especially social claims. What would be a good threshold? How could this be detected?

3. Given the current system, would you believe the results that it gives? Do you think the system addresses the three requirements that the authors say all fact-checking systems should possess? I personally think that system transparency is still lacking: the system shows a lot about what kind of sources it used and how much weight it puts on them, but it does not really explain how it made the decision.


04/15/20 – Ziyao Wang – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

The authors focus on fact-checking, the task of assessing the veracity of claims, and propose a mixed-initiative approach to it. Their system combines human knowledge and experience with AI’s efficiency and scalability in information retrieval. They argue that if fact-checking models are to be used practically, the models should be transparent, support the integration of user knowledge, and quantify and communicate model uncertainty. Following these principles, they developed their mixed-initiative system and ran experiments with participants from MTurk. They found that the system can help humans when it gives correct predictions and can be harmful when it gives wrong ones, and that the interaction between participants and the system was not as effective as expected. Finally, they found that turning the tasks into games did not improve users’ performance. In conclusion, users tend to trust models and may be led by them to make the wrong choice; for this reason, transparent models are important in mixed-initiative systems.

Reflection:

I tried the system mentioned in the paper, and it is quite interesting. However, the first time I used it, I was confused about what I should do. Though the interface is similar to Google.com and I was quite sure I should type something into the text box, there were limited instructions about what I should type, how the system works, and what I should do after searching for my claim. Also, after I searched for a claim, the results page was still confusing. I know the developers want to show me some findings about the claim and provide the system’s prediction, but I was still confused about what to do, and some of the returned results were not related to my claim.

After several uses, I became familiar with the system, and it does help me judge whether a claim is correct. I agree with the authors that some of the feedback about not being able to interact with the system properly comes from users’ unfamiliarity with it. Still, the authors should provide more instructions so that users can get familiar with the system quickly. I think this is related to the transparency of the system and may raise users’ trust.

Another issue I found during use is that there is no wording such as ‘the results should only be used as a reference; you should make the judgement using your own mind,’ or similar explanations. I think this may be one reason the error rate of users’ answers increased significantly when the system made wrong predictions. Participants may have changed their minds when they saw that the system’s prediction differed from their own judgement, because they know little about the system and may assume it is more likely to be correct. If the system were more transparent to users, they might provide more correct answers to the claims.

Questions:

How can we help participants make correct judgements when the system provides wrong predictions?

What kinds of instructions should be added so that participants can get familiar with the system more quickly?

Can this system be used in areas other than fact-checking?


04/15/2020 – Ziyao Wang – Algorithmic accountability

In this report, the author studies how algorithms exercise power and why they are worthy of scrutiny by computational journalists. He uses methods such as transparency and reverse engineering to analyze algorithms. He also analyzes four kinds of atomic decisions – prioritization, classification, association, and filtering – to assess algorithmic power. For the reverse-engineering part, he analyzes numerous real-world cases and presents scenarios of reverse engineering that consider both inputs and outputs, including the variable observability of input-output relationships and the work of identifying, sampling, and finding newsworthy stories about algorithms. Finally, the author discusses challenges that the application of algorithmic accountability reporting may face in the future, and proposes that transparency can be used to hold newsroom algorithms to journalistic norms.

Reflections:

I am really interested in the reverse-engineering part of this report. The author collects different cases of journalists reverse engineering algorithms, and it is quite exciting to understand the opportunities and limitations of the reverse-engineering approach to investigating algorithms. Reverse engineering is valuable for explaining how algorithms work and for finding their limitations. As many currently deployed algorithms or models are trained using unsupervised learning or deep learning, it is hard for us to understand and explain them; we can only use metrics like recall or precision to evaluate them. But with reverse engineering, we can learn how the algorithms work and modify them to avoid limitations and potential discrimination. However, I think there may be some ethical issues with reverse engineering. If bad actors reverse engineer an application, they can steal the ideas behind the application or algorithm, or they may bypass its security system by exploiting the weaknesses they find.

Regarding algorithmic transparency, I realized I had paid little attention to this principle before; I used to consider only whether the algorithm works or not. However, after reading this report, I feel that algorithmic transparency is an important aspect of system building and maintenance. Instead of leaving it to researchers to reverse engineer systems to find their limitations, it is better to make parts of the algorithms, how they are used, and some of the data public. On one hand, this will raise public trust in the system due to its transparency. On the other hand, experts outside the company or organization can contribute to improving and securing the system. However, transparency is currently far from a complete solution to balancing algorithmic power. Apart from the author’s idea that researchers can apply reverse engineering to analyze systems, I think both corporations and governments should pay more attention to the transparency of algorithms.

Questions:

After reading the report, I am still confused about how to find the story behind an input-output relationship. How can we work out how an algorithm operates from an input-output map?

How can we prevent attackers from using reverse engineering to mount attacks?

Apart from journalists, which groups of people should also employ reverse engineering to analyze systems?
