04/15/2020 – Ziyao Wang – Algorithmic accountability

In this report, the author examines how algorithms exert power and why they are worthy of scrutiny by computational journalists. He discusses methods such as transparency and reverse engineering for analyzing algorithms, and he breaks algorithmic power down into four kinds of atomic decisions: prioritization, classification, association, and filtering. For the reverse engineering part, he analyzes numerous everyday cases and presents a scenario of reverse engineering that considers both inputs and outputs. He also considers the variable observability of input-output relationships, and how to identify, sample, and find newsworthy stories about algorithms. Finally, the author discusses challenges that algorithmic accountability reporting may face in the future, and proposes that transparency can be an effective way to hold newsroom algorithms to journalistic norms.

Reflections:

I am really interested in the reverse engineering part of this report. The author collected different cases of researchers reverse engineering algorithms, and it is quite exciting to understand the opportunities and limitations of the reverse engineering approach to investigating algorithms. Reverse engineering is valuable for explaining how algorithms work and for finding their limitations. Since many algorithms and models in use today are trained with unsupervised learning or deep learning, they are hard for us to understand and explain; we can only evaluate them with metrics like recall or precision. With reverse engineering, however, we can learn how the algorithms work and modify them to avoid limitations and potential discrimination. That said, I think reverse engineering raises some ethical issues. If bad actors reverse engineer an application, they can steal the ideas behind it, or they may bypass its security system by exploiting the weaknesses they found.
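
As a concrete illustration of the input-output idea, here is a minimal sketch in Python of how an auditor might probe a black box by varying one part of the input and watching the output. Everything here is hypothetical: `model_api` stands in for a real service whose internals we cannot see.

```python
def model_api(text):
    """Hypothetical black-box classifier we can only query, never inspect."""
    # Stand-in for a real service; here it happens to react to one trigger word.
    return 1 if "refinance" in text.lower() else 0

def probe(base_input, variants):
    """Substitute one piece of the input at a time and record output changes."""
    baseline = model_api(base_input)
    findings = []
    for v in variants:
        out = model_api(base_input.replace("loan", v))
        if out != baseline:
            findings.append((v, out))
    return baseline, findings

baseline, findings = probe(
    "Ad shown for a loan offer",
    ["refinance", "mortgage", "credit card", "grant"],
)
print(f"baseline={baseline}, output-changing substitutions={findings}")
```

Even this toy version shows the limitation the report discusses: we only learn about the input-output pairs we thought to sample, not the full logic inside the box.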

As for algorithmic transparency, I realize I paid little attention to this principle before; I used to consider only whether an algorithm works or not. After reading this report, though, I feel that algorithmic transparency is an important aspect of building and maintaining a system. Instead of leaving researchers to use reverse engineering to find a system's limitations, it is better to make parts of the algorithm, how it is used, and some related data available to the public. On one hand, this transparency will raise public trust in the system. On the other hand, experts outside the company or organization can contribute to improving and securing it. Still, transparency today is far from a complete solution for balancing algorithmic power. Beyond the author's idea that researchers can apply reverse engineering to analyze systems, I think both corporations and governments could pay more attention to the transparency of algorithms.

Questions:

After reading the report, I am still confused about how to find the story behind an input-output relationship. How can we work out how an algorithm operates from an input-output map alone?

How can we prevent attackers from using reverse engineering to mount attacks?

Apart from journalists, which groups of people should also employ reverse engineering to analyze systems?


04/15/20 – Jooyoung Whang – What’s at Stake: Characterizing Risk Perceptions of Emerging Technologies

In this paper, the authors conduct a survey that lists known technological risks and asks participants to rate the severity of each. The authors state that their research extends prior work done in the 1980s. The survey was administered to both experts and non-experts, with experts recruited from Twitter and non-experts from MTurk. From the older work and their own, the authors found that people tend to rate voluntary risks low even when they are actually high. They also found that many emerging technological risks were regarded as involuntary, and that non-experts tended to underestimate the risks of new technologies. Based on these findings, the authors introduce a risk-sensitive design approach, along with a risk-perception graph that can be used to judge whether non-experts perceive a proposed technology to be as risky as experts do, or underestimate it, and whether the design is acceptable.

This paper nicely captures how users perceive technical risk. I liked that the paper did not stop at explaining the results but went further and proposed a tool for technical designers. However, it was a little unclear to me how to use the tool: the risk-perception graph the authors show has only "low" and "high" as axis labels, which are very subjective terms. A way to quantify risk perception would have served nicely.
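
For example, a crude way to put numbers on those axes, assuming the survey's severity ratings were available, would be to compare mean ratings per group; the ratings below are made up purely for illustration.

```python
from statistics import mean

# Hypothetical 1-7 severity ratings for one risk, e.g. ad-network profiling.
expert_ratings = [6, 5, 7, 6, 5]
nonexpert_ratings = [3, 4, 2, 5, 3, 4]

expert_score = mean(expert_ratings)
public_score = mean(nonexpert_ratings)
gap = expert_score - public_score  # positive gap => the public underestimates

print(f"experts={expert_score:.1f}, public={public_score:.1f}, gap={gap:.1f}")
```

A per-risk gap like this would let designers plot each technology at an actual coordinate instead of eyeballing "low" versus "high".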

This paper also made me wonder what the point of providing terms of use for a product is if users still feel they have been involuntarily exposed to risk. I feel a better representation is needed. For example, a short summary outlining the most important risks in a sentence or two, with details in a separate link, would be more effective than throwing a wall of text at a (most likely) non-technical user.

I also think one way to address the gap in risk perception between designers and users is to involve users in the development process in the first place. I am unsure of the exact term, but I recall learning about a users-in-the-loop development cycle in a UX class. This development method lets designers fix user problems early in the process and end up with higher-quality products. I feel it would also better inform designers about potential risks.

These are the questions that I had while reading the paper:

1. What are some disasters that may happen due to the gap in risk perception between users and designers of a system? Would any additional risks occur due to this gap?

2. What would be a good way to reduce the gap in risk perception? Do you think using the risk-perception graph from the paper is useful for addressing this gap? How would you measure the risk?

3. Would you use the authors’ proposed risk-sensitive design approach in your project? What kind of risks do you expect from your project? Are they technical issues and do you think your users will underestimate the risk?


04/15/2020 – Myles Frantz – Algorithmic accountability

Summary

With the prevalence of technology, the mainstream programs behind its rise dictate not only its technological impact but also the direction of news media and public opinion. As journalists turn to various outlets and adapt to the efficiencies technology creates, the tools they use may carry bias from their internal sources or optimizations and therefore introduce bias into their stories. The author assesses algorithmic power against four categories of atomic decisions: prioritization, classification, association, and filtering. Combinations of these categories are then examined through a user survey measuring how autocomplete features bias people's opinions. Using these measurements, the author also shows that popular search engines like Google tailor results to information the user has previously searched for. For a normal user this makes sense; for an investigative journalist, however, these results may not accurately represent a source of truth.

Reflection

As the author notes, there is strong tension over the transparency of algorithms. Transparency may be withheld because of government concerns about protecting certain secrets, and this creates a strong sense of resistance and distrust toward the use of those algorithms. Though such secrets are claimed in the name of national security, the term can be misused or overstretched for personal or political gain rather than applied appropriately. These kinds of acts can occur at any level of government, from the lowest actors to the highest ranks.

One of the key discussion points the author raises for fixing this potential bias through independent research is teaching journalists to make better use of computer systems. This may only bridge journalists into a new medium they are not familiar with, and it could also be seen as an attempt to give journalists a handicap for better understanding a truly fragmented news system.

Questions

  • Do you think putting journalists through a computer science program would extend their capabilities, or would it only further channel their ideas while potentially removing some creativity? 
  • Since there is a kind of monopolization throughout the software ecosystem, do you believe people are "forced" to use technologies that tailor their results? 
  • Given how much technology uses personal information in ways open to misuse, do you agree with this practice being introduced via a small disclaimer acknowledging the potential preference? 
  • Many services offer to help clean your internet trail and clear the cached data that internet services use to deliver faster, more tailored search results. Have you personally used any of these programs or step-by-step guides to clean your internet footprint? 
  • Many programs capture and record usage, with a small disclaimer at the end detailing how the data is used. It is likely that many users never read these. Do you think that if ordinary consumers of technology could see how heavily corrected and automatically biased their results can be, they would continue using the services? 


04/15/20 – Myles Frantz – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Summary

In this politically charged time, it is hard for the average person to pick out accurate information from the news media, and the many media sources producing contradictory information make it even harder. While a variety of companies run fact-checking services, this team created a fact-checking system that mixes crowdsourcing with machine learning: a machine learning algorithm sits behind a user interface that lets Mechanical Turk workers tweak source reputations and whether citations support a claim. These tools let a user adjust the retrieved sources and read the raw information. The team also built a gamified interface to encourage better, more engaged use of the original system. Overall, participants appreciated being able to tweak the sources and to see from the raw sources whether they supported the claim.

Reflection

I think there is an inherent issue with the gamified experiment the researchers created, not with the environment itself but with human nature: given a gamified method, people will inherently try to game the system. At the small scale of this research experiment the effect was contained, but it could become a real problem in other use cases.

I also believe a crowd-worker fact-checking service will not work. A fact checker that is crowdsourced is an easy target for any group of malicious actors. Using a variety of common techniques, actors have mounted Distributed Denial of Service (DDoS) attacks to overwhelm systems and control the majority of responses; similar attacks have been used to control blockchain transactions and the flow of money. A fully fledged crowdsourced fact-checker could easily be overridden by such actors.

In general, I believe giving users more visibility into a system encourages more usage. When using a program or an Internet of Things (IoT) device, people likely feel they have little control over the flow of the internal programming. Creating this insight, along with slight control over the algorithm, may give consumers of these devices the impression of more control, and that may help encourage people to put their trust back into programs. This seems especially relevant given the iterative learning process of machine learning algorithms.

Questions

  • Mechanical Turk workers' attention is usually measured with a baseline question, which ensures that a worker who is not paying attention (i.e., clicking as fast as they can) will not answer it accurately. Given that the team did not discard such workers, do you think removing their answers would have supported the team's theory? 
  • Along the same lines: even though the team treated the users' other interactions as a measure of attentiveness, do you think it was wise to ignore the attention check? 
  • In your project, are you planning to implement a slider like this team did to help users interact with your machine learning algorithm? 


04/15/2020 – Mohannad Al Ameedi – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Summary

In this paper, the authors propose a mixed-initiative approach to fact checking that combines human knowledge and experience with automated information retrieval and machine learning. The paper discusses the challenge of the massive amount of information available on the internet today, some of which may be inaccurate and thus poses a risk to information consumers. The proposed system retrieves relevant information about a topic, uses machine learning and natural language processing to assess the factuality of that information, and presents a confidence level to users, letting each user decide whether to accept the information or do manual research to validate the claims. This partnership between an artificial intelligence system and human interaction can offer effective fact checking that supports human decisions in a scalable and effective way.

Reflection

I found the approach used by the authors very interesting. I recently had a discussion with a friend about a topic covered on Wikipedia; I thought the numbers and facts mentioned there were accurate, but it turned out the information was wrong, and he asked me to check an accredited source. If I had been able to use the system proposed in the paper, the accredited source might have ranked higher than Wikipedia.

The proposed system is very important in our digital age, where so much information is generated on a daily basis. We are not only searching for information; we also receive a stream of information through social media about current events, some of which have a high impact on our lives. We need to assess the factuality of this information, and the proposed system can help a lot with that.

The proposed system is like a search engine that ranks documents not only by relevance to the search query but also by a fact-checking assessment of the information. The human interaction is like relevance feedback in a search engine, which improves retrieval and leads to better ranking.
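
To make that analogy concrete, here is a small Python sketch of what blending the two signals might look like. The scoring function, the mixing weight, and the article values are all made up for illustration; the paper does not specify this formula.

```python
def combined_score(relevance, credibility, alpha=0.6):
    """Blend query relevance with a fact-checking credibility score.

    alpha is a hypothetical mixing weight; a real system would tune it,
    perhaps from relevance-feedback-style user interactions.
    """
    return alpha * relevance + (1 - alpha) * credibility

# Hypothetical retrieved articles: (name, relevance, credibility), all in [0, 1].
articles = [
    ("wikipedia-entry", 0.95, 0.55),
    ("accredited-source", 0.80, 0.95),
    ("forum-post", 0.90, 0.20),
]

ranked = sorted(articles, key=lambda a: combined_score(a[1], a[2]), reverse=True)
for name, rel, cred in ranked:
    print(f"{name}: {combined_score(rel, cred):.2f}")
```

Under these toy numbers the accredited source overtakes the more relevant but less credible pages, which is exactly the behavior I wished for in the Wikipedia anecdote above.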

Questions

  • AI systems can be biased because their training data can be biased. How can we make such a system unbiased?
  • The proposed system uses information retrieval to fetch relevant articles about a topic, then uses machine learning to validate the sources and present a confidence level for each article. Do you think the system should filter out articles with poor accuracy, since they might confuse the user? Or might those articles still be valuable?
  • With the increasing use of social networking, many individuals write or share fake news, intentionally or unintentionally, and millions of people post information every day. Can we use the proposed system to assess fake news? If yes, can we scale it to assess millions or billions of tweets and posts?


04/15/2020 – Subil Abraham – Nguyen et al., “Believe it or not”

In today's era of fake news, where new information is constantly spawning everywhere, the importance of fact checking cannot be overstated. The public has a right to remain informed and to obtain true information from accurate, reputable sources. But all too often people are inundated with too much information, and the cognitive load of fact checking everything would be overwhelming. Automated fact checking has made strides, but previous work focused primarily on model accuracy rather than on the people who need to use the models. This paper is the first to study an interface for humans to use a fact-checking tool. The tool is pretrained on the Emergent dataset of annotated articles and sources and uses two models: one that predicts an article's stance on a claim, and one that estimates the accuracy of the claim based on the reputation of its sources. The application takes a claim and retrieves articles that discuss it, uses the stance model to classify whether each article is for or against the claim, and then predicts the claim's accuracy from the collective reputation of the sources. It conveys that its models are imperfect by providing confidence levels for its accuracy predictions. It also provides sliders that let human verifiers adjust the predicted stance of the articles and the source reputations according to their beliefs or new information. The authors run three experiments to test the efficacy of the tool for human fact checkers, and find that users tend to trust the system, which can be problematic when the system is inaccurate.
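
The aggregation this implies can be sketched as a reputation-weighted vote over article stances, with the sliders editing the weights. Everything below, from the stance encoding to the outlet names and numbers, is my own illustrative assumption, not the authors' actual model.

```python
def claim_score(articles, reputations):
    """Reputation-weighted vote over article stances.

    articles: list of (source, stance), stance +1 (supports) or -1 (refutes).
    reputations: user-adjustable weights in [0, 1] -- the "sliders".
    Returns a value in [-1, 1]; the sign is the verdict, the magnitude
    a rough confidence.
    """
    total = sum(reputations[src] * stance for src, stance in articles)
    weight = sum(reputations[src] for src, _ in articles)
    return total / weight if weight else 0.0

articles = [("outlet-a", +1), ("outlet-b", +1), ("outlet-c", -1)]
reputations = {"outlet-a": 0.9, "outlet-b": 0.4, "outlet-c": 0.7}

print(f"before slider change: {claim_score(articles, reputations):+.2f}")
reputations["outlet-c"] = 0.1  # the user drags outlet-c's reputation down
print(f"after slider change:  {claim_score(articles, reputations):+.2f}")
```

The instantaneous feedback the paper highlights falls out naturally here: nudge one reputation weight and the claim's overall score moves immediately.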

I find it interesting that in the first experiment the System group's error rate roughly follows the stance classifier's error rate. The crowd workers were probably not independently verifying the stance of the articles and simply trusted the predicted stance they were shown. This could potentially be mitigated by adding incentives (like extra reward) for actually reading the articles in full. On the flip side, we can see that their accuracy (supposedly) improved when they were given the sliders to modify the stances and reputations; maybe that interactivity was the cue they needed to understand that the predicted values aren't set in stone and could be inaccurate. Still, I find it strange that the Slider group in the second experiment did not adjust the sliders if they were questioning the sources, and stranger that the authors kept the claim that allowing users to adjust the sliders made them more accurate. That claim is what most readers will take away unless they read the experiments and their caveats carefully, and I don't like that the second experiment's results were kept despite not showing any useful signal. Ultimately, I don't buy the push that this tool is useful for the general user as it stands now, and I don't really see how it could serve as a technological mediator for people with opposing views, at least not as the authors describe it. I find that it could serve as a useful automation tool for expert fact checkers as part of their work, but not for the ordinary user, which is who the authors model by using crowdworkers. I like the ideas the paper is going for, automated fact checking that helps the ordinary user, and I'm glad the authors acknowledge the drawbacks. But there are too many drawbacks for me to fully buy the claims of this paper. It's poetic that I have my doubts about the claims of a paper describing a system that asks you to question claims.

  1. Do you think this tool would actually be useful in the hands of an ordinary user? Or would it serve better in the hands of an expert fact checker?
  2. What would you like to see added to the interface, in addition to what they already have?
  3. This is a larger question, but is there value in having transparency of machine learning models the way they have done it (sliders you can manipulate to see the final value change)? How much detail is too much? And for more complex models where you can't have that instantaneous feedback (like style transfer), how do you provide explainability?
  4. Do you find the experiments rigorous enough and conclusions significant enough to back up the claims they are making?


04/15/2020 – Dylan Finch – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Word count: 567

Summary of the Reading

This paper presents the design and evaluation of a system meant to help people check the validity of claims. The user starts by entering a claim into the system, which then shows a list of articles related to the claim along with a prediction, based on those articles, of whether the claim is true, expressed as a percentage chance. Each article shown also carries a reputation score for its source and a support score for the article itself. The user can adjust these if they think the system's information is inaccurate. 

The system seemed to help users come to the right conclusions when it had the right data but also seemed to make human judgements worse when the system had inaccurate information. This shows the usefulness of such a system and also gives a reason to be careful about implementing it.

Reflections and Connections

I think that this article tackles a very real and very important problem. Misinformation is more prevalent now than ever, and it is getting harder and harder to find the truth. This can have real effects on people’s lives. If people get the wrong information about a medication or a harmful activity, they may experience a preventable injury or even death. Misinformation can also have a huge impact on politics and may get people to vote in a way they might not otherwise.

The paper brings up the fact that people may over-rely on a system like this, blindly believing its results without further thought, and I think that is the paradox of the system. People want correct information, and we want it to be easy to find out whether something is correct, but the fact of the matter is that it just isn't easy. A system like this is great when it works and tells people the truth, but it makes the problem worse when it comes to the wrong conclusion and makes more people more confident in a wrong answer. No matter how good a system is, it will still fail. Even the best journalists in the world, writing for the most prestigious newspapers in the world, get things wrong, and a system like this one will get things wrong even more often. People should always be skeptical and do their own research before believing something is true, because no easy answer like this can ever be 100% right, and if it can't be 100% right, we shouldn't trick ourselves into trusting it more than we ought to. This is a powerful tool, but we should not rely on it, or anything like it, alone.

Questions

  1. Should we even try to make systems like this if they will be wrong some of the time?
  2. How can we make sure that people don't over-rely on systems like this? Can we use them without depending on them exclusively?
  3. What’s the best way to check facts? How do you check your facts?


04/15/2020 – Dylan Finch – What’s at Stake: Characterizing Risk Perceptions of Emerging Technologies

Word count: 553

Summary of the Reading

This paper presents a review of expert and non-expert feelings toward risks with emerging technologies. The paper used a risk survey that was previously used to assess perceptions of risk. This survey was sent out to experts, in the form of people with careers related to technology, and non-experts, in the form of workers on MTurk. While MTurk workers might be slightly more tech-savvy than average, they also tend to be less educated. 

The results showed that experts tended to rate more things as riskier, while non-experts downplayed the risks of many activities much more than the experts did. The results also showed that voluntary risks were seen as less risky than other forms of risk; people seem to perceive more risk when they have less control. Finally, both experts and non-experts saw many emerging technologies as involuntary, even though these technologies usually obtain user consent for everything they do.

Reflections and Connections

I think that this paper is more important than ever, and it will only grow more important with time. In our modern world, more and more of the things we interact with every day are data-driven technologies that wield extreme power, both to help us do things better and, in the hands of bad actors, to hurt innocent people. 

I also think the paper's conclusions match what I expected. Many new technologies are abstract, their inner workings never seen, and they are much harder for laypersons to understand than the technology of decades past. In the past, you could see that your money was secure in a vault, and you could see the big lock on your bike and know it would be hard to steal, because you had a general idea of how hard it was to break your security measures and because you could see and feel the things protecting you. Now things are much different. You have no way of knowing what protects your money at the bank, much less of understanding the security algorithms companies use to keep your data safe. Maybe they're good, maybe they're not, but you probably won't know until someone hacks in. The digital world also disregards many of the limits we experience in real life: it is practically impossible for someone in India to rob me without going through a lot of hassle, but an online attacker can break into bank accounts across the world and be gone without a trace. This new world of risk is hard to understand because we aren't used to it and because it looks so different from the risks we experience in real life.

Questions

  1. How can we better educate people on the risks of the online world?
  2. How can we better connect abstract online security vulnerabilities to real world, easy to understand vulnerabilities?
  3. Should companies need to be more transparent about security risks to their customers?


04/15/2020 – Sushmethaa Muhundan – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

This work aims to design and evaluate a mixed-initiative approach to fact-checking that blends human knowledge and experience with the efficiency and scalability of automated information retrieval and machine learning. The paper positions automatic fact-checking systems as assistive technology that augments human decision making. The proposed system fulfills three key properties: model transparency, support for integrating user knowledge, and quantification and communication of model uncertainty. Three experiments were conducted with MTurk workers to measure participants' performance in predicting the veracity of given claims using the system. The first experiment compared users who performed the task with and without seeing ML predictions. The second compared a static interface with an interactive one in which users could mend or override the AI system's predictions; results showed that users were generally able to use the interface, though it was of little use when the predictions were already accurate. The last experiment compared a gamified task design with a non-gamified one and found no significant differences in performance. The paper also discusses the limitations of the proposed system and explores further research opportunities.

I liked that the focus of the paper was on designing automated systems that are user-friendly rather than on improving prediction accuracy. The paper takes the human element of human-AI interaction into consideration and focuses on making the system better and more meaningful. The proposed system aims to learn from the user and provide a personalized prediction based on the user's opinions and inputs.

I liked the focus on transparency and communication. Transparent models help users better understand the internal workings of a system and hence help build trust. As for communication, I feel that conveying the confidence of a prediction helps users make informed decisions. This is much better than a system that has high precision but does not communicate confidence scores: when such a system makes an error, the consequences are likely to be serious, since the user might blindly follow its prediction.

The side effect of making the system transparent was interesting: not only would transparency lead to higher trust, it would also help teach and structure the user's own information-literacy skills, that is, the logical process to follow when assessing a claim's validity. In this way, the proposed system truly leveraged the complementary strengths of the human and the AI.

  • Apart from the three properties incorporated in the study (transparency, support for integrating user knowledge, and communication of model uncertainty), what other properties could be incorporated to improve AI systems?
  • The study aims to leverage the complementary strengths of humans and AI but certain results were inconclusive as noted in the paper. Besides the limitations enumerated in the paper, what are other potential drawbacks of the proposed system?
  • Given that the study presented is in the context of automated fact-checking systems, what other AI systems can these principles be applied to?


04/15/2020 – Sushmethaa Muhundan – What’s at Stake: Characterizing Risk Perceptions of Emerging Technologies

This work aims to explore the impact of perceived risk on the choice to use technology. A survey was conducted to assess the mental models of users and technologists regarding the risks of using emerging, data-driven technologies, and guidelines for risk-sensitive design were then explored to address and mitigate perceived risk. The model aims to identify when misaligned risk perceptions warrant design reconsideration. Fifteen technology-related risks were devised, and 175 participants, 26 experts and 149 non-experts, were recruited to assess the perceived risk in each category. Results showed that technologists were more skeptical than non-experts about using data-driven technologies, so the authors urge designers to work harder to make end-users aware of the potential risks in their systems. The study recommends that design decisions about risk-mitigation features for a particular technology be sensitive to the difference between the public's perceived risk and the acceptable marginal perceived risk at that risk level.
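
That last recommendation can be read as a simple decision rule, sketched below in Python; the threshold, scale, and feature names are hypothetical illustrations, not values from the paper.

```python
def needs_intervention(expert_risk, public_risk, acceptable_margin=0.5):
    """Flag a feature when the public underestimates its risk by more than
    an acceptable margin (all values on the same rating scale).

    The margin is a made-up design parameter, not a number from the paper.
    """
    return (expert_risk - public_risk) > acceptable_margin

# Hypothetical expert vs. public ratings (1-7 scale) for two features.
features = {
    "location history": (6.2, 4.1),
    "spell checker": (2.0, 1.8),
}

for name, (expert, public) in features.items():
    verdict = "reconsider design / improve consent" if needs_intervention(expert, public) else "ok"
    print(f"{name}: {verdict}")
```

A real deployment would also have to decide who counts as the "expert" baseline and how the acceptable margin should shift with the severity of the harm.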

Throughout the paper, there is a focus on creating design guidelines that reduce risk exposure and increase public awareness relating to potential risks and I feel like this is the need of the hour. The paper focuses on identifying remedies to set appropriate expectations in order to help the public make informed decisions. This effort is good since it is striving to bridge the gap and keep the users informed about the reality of the situation.

It is concerning that the results found technologists to be more skeptical than non-experts about using data-driven technologies. This is perturbing because it shows that the risks of the latest technologies are perceived as greater by the group involved in creating them than by the people who use them.

Although the counts of expert and non-expert participants were skewed, it was interesting that, when the results were aggregated, the top three highest perceived risks were the same for both groups; only their order differed.

It was interesting to note that a majority of both groups rated nearly all risks related to emerging technologies as characteristically involuntary. This strongly suggests that the consent procedures in place are not effective: either the information is not conveyed to users transparently, or it is presented in such a complex manner that end-users do not understand it.

  • In the context of the current technologies we use on a daily basis, which factor is more important from your point of view: personal benefits (personalized content) or privacy?
  • The study involved a total of 175 participants, comprising 26 experts and 149 non-experts. Given the huge difference between these numbers, was it feasible to analyze the study and draw conclusions from it?
  • Apart from the suggestions in the study, what are some concrete measures that could be adopted to bridge the gap and keep the users informed about the potential risks associated with technology?
