04/15/20 – Lulwah AlKulaib – RiskPerceptions

Summary

People’s choice to use a technology is associated with many factors, one of which is the perception of the risk that comes with it. To study how perceived risk influences technology use, the authors adapted a survey instrument from the risk perception literature to assess the mental models of users and technologists around the risks of emerging, data-driven technologies, for example identity theft and personalized filter bubbles. The authors surveyed 175 individuals on MTurk for comparative and individual assessments of risk, including characterizations using psychological factors. They report their findings on group differences between experts (tech employees) and non-experts (MTurk workers) in how they assess risk and what factors may contribute to their conceptions of technological harm. They conclude that technologists see these risks as posing a bigger threat to society than non-experts do. Moreover, across groups, participants did not see technological risks as voluntarily assumed. The differences in how participants characterize risk have implications for the future of design, decision making, and public communication, which the authors discuss under the heading of risk-sensitive design.

Reflection

This was an interesting paper. Being a computer science student has always been one of the reasons I question technology: why is a service being offered for free, what’s in it for the company, and what does it gain from my use?

It is interesting to see that the authors’ findings are close to my real-life experience. Some of my friends do not think about risk and are more interested in whatever service makes something easier for them; when I mention the risks, it turns out they had never considered them when making those decisions. Some of those risks are important for them to understand, since a lot of the available technology (apps at least) could be used maliciously against its users.

I believe that risk is viewed differently by experts and non-experts, and that difference should be highlighted. It explains how problems like the filter bubble mentioned in the paper have become so concerning. It is very important to know how to respond when there is such a huge gap between how experts and the public think about risk. There should be a conversation to bridge the gap and educate the public in ways that are easy to perceive and accept.

I also think the new design elements, and the way designers are applying risk-sensitive design techniques to technologies, are important. They help introduce technology in a more comforting, socially responsible way. Adoption feels gradual rather than sudden, which makes users more receptive to it.

Discussion

  • What are your thoughts about the paper?
  • How do you define technology risk?
  • What are the top 5 risks that you can think of in technology from your point of view? How do you think that would differ when asking someone who does not have your background knowledge?
  • What are your recommendations for bridging the gap between experts and non-experts when it comes to risk?


04/15/2020 – Palakh Mignonne Jude – What’s at Stake: Characterizing Risk Perceptions of Emerging Technologies

SUMMARY

The authors of this paper adapt a survey instrument from existing risk perception literature to analyze the perception of risk surrounding newer, emerging data-driven technologies. The authors surveyed 175 participants (26 experts and 149 non-experts), categorizing as an ‘expert’ anyone working in a technical role or earning a degree in a computing field. Inspired by the original 1980s paper ‘Facts and Fears: Understanding Perceived Risk’, the authors consider 18 risks (15 new risks and 3 from the original paper). The 15 new risks include ‘biased algorithms for filtering job candidates’, ‘filter bubbles’, and ‘job loss from automation’. The authors also consider 6 psychological factors while conducting the study. The non-experts (as well as a few who were later reclassified as ‘experts’) were recruited using MTurk. The authors borrowed quantitative measures from the original paper and added two new open-response questions: describing the worst-case scenario for the top three risks (as indicated by the participant) and naming any new serious risks to society. The authors also propose a risk-sensitive design approach based on the results of their survey.

REFLECTION

I found this study to be very interesting and liked that the authors adapted the survey from existing risk perception literature. The motivation of the paper reminded me of a New York Times article titled ‘Twelve Million Phones, One Dataset, Zero Privacy’ and the long-term implications of such data collection and its impact on user privacy.

I found it interesting to learn that the survey results indicated that both experts and non-experts rated nearly all risks related to emerging technologies as characteristically involuntary. It was also interesting that, despite the consent processes built into software and web services, the corresponding risks were not perceived to be voluntary. I thought it was good that the authors included the open-response question on what users perceived as the worst-case scenario for the top three riskiest technologies, and I liked that they provided some explanation for their survey results.

The authors mention that technologists should attempt to allow more discussion around data practices and be willing to hold off on rolling out new features that raise more concerns than excitement. However, this made me wonder whether any of the technology companies would be willing to do so. It would probably introduce extra overhead, and the results may not be perceived by the company to be worth the time and effort that such evaluations entail.

QUESTIONS

  1. In addition to the 15 new risks added by the authors for the survey, are there any more risks that should have been included? Are there any that needed to be removed or modified from the list? Are there any new psychological factors that should have been added?
  2. As indicated by the authors, there are gaps in the understanding of the general public. The authors suggest that educating the public would enable this gap to be reduced more easily as compared to making the technology less risky. What is the best way to educate the public in such scenarios? What design principles should be kept in mind for the same?
  3. Have any follow-up studies been conducted to identify ‘where’ the acceptable marginal perceived risk line should be drawn on the ‘Risk Perception Curve’ introduced in the paper?  


04/15/20 – Jooyoung Whang – What’s at Stake: Characterizing Risk Perceptions of Emerging Technologies

In this paper, the authors conduct a survey with a list of known technological risks, asking the participants to rate the severity of each risk. The authors state that their research is an extension of prior work done in the 1980s. The survey was taken by experts and non-experts, where experts were recruited from Twitter and non-experts from MTurk. From the old work and their own, the authors found that people tend to rate voluntary risks low even if in reality they are high. They also found that many emerging technological risks were regarded as involuntary, and that non-experts tended to underestimate the risks of new technologies. The authors also introduce a risk-sensitive design approach based on their findings, showing a risk-perception graph that can be used to decide whether a proposed technology is perceived by non-experts to be as risky as experts think or is underestimated, and whether the design is acceptable.

This paper nicely captures the user characteristics of technical risk perception. I liked that the paper did not stop at explaining the results but went further to propose a tool for technical designers. However, it was a little unclear to me how to use the tool. The risk-perception graph that the authors show only has “low” and “high” as its axis labels, which are very subjective terms. A way to quantify risk perception would have served nicely.
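One way to make those axes concrete would be to put each group’s ratings on a numeric scale and compute the gap per risk. Below is a minimal sketch of what I have in mind, assuming Likert-style severity ratings per group; the risk names and scores here are invented for illustration, not taken from the paper.

    # Hypothetical sketch (invented data): quantify the "low"/"high" axes
    # with mean Likert severity ratings (1 = low risk, 7 = high risk).
    from statistics import mean

    ratings = {
        "experts": {
            "filter bubbles": [6, 5, 7, 6],
            "job loss from automation": [5, 6, 4, 5],
        },
        "non-experts": {
            "filter bubbles": [3, 4, 2, 4],
            "job loss from automation": [5, 4, 6, 5],
        },
    }

    for risk in ratings["experts"]:
        expert_score = mean(ratings["experts"][risk])
        public_score = mean(ratings["non-experts"][risk])
        # A positive gap means the public underestimates the risk
        # relative to the experts.
        gap = expert_score - public_score
        print(f"{risk}: experts={expert_score:.2f}, "
              f"public={public_score:.2f}, gap={gap:+.2f}")

With numbers like these, the “low” and “high” regions of the graph could be replaced with actual score ranges.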

This paper also made me think about what the point of providing terms of use for a product is if users feel that they have been involuntarily exposed to risk. I feel like a better representation is needed. For example, a short summary outlining the most important risks in a sentence or two, with details in a separate link, would be more effective than throwing a wall of text at a (most likely) non-technical user.

I also think one way to address the gap in risk perception between designers and users is to involve users in the development process in the first place. I am unsure of the exact term, but I recall learning about a users-in-the-loop development cycle in a UX class. This development method allows designers to fix user problems early in the process and end up with higher-quality products. I feel it would also inform the designers more about potential risks.

These are the questions that I had while reading the paper:

1. What are some disasters that may happen due to the gap in risk perception between users and designers of a system? Would any additional risks occur due to this gap?

2. What would be a good way to reduce the gap in risk perception? Do you think using the risk-perception graph from the paper is useful for addressing this gap? How would you measure the risk?

3. Would you use the authors’ proposed risk-sensitive design approach in your project? What kind of risks do you expect from your project? Are they technical issues and do you think your users will underestimate the risk?


04/15/2020 – Dylan Finch – What’s at Stake: Characterizing Risk Perceptions of Emerging Technologies

Word count: 553

Summary of the Reading

This paper presents a review of expert and non-expert feelings toward the risks of emerging technologies. The paper used a risk survey that had previously been used to assess perceptions of risk. This survey was sent out to experts, in the form of people with careers related to technology, and non-experts, in the form of workers on MTurk. While MTurk workers might be slightly more tech-savvy than average, they also tend to be less educated.

The results showed that experts tended to rate more things as more risky, while non-experts tended to downplay the risks of many activities. The results also showed that more voluntary risks were seen as less risky than other forms of risk; it seems that people perceive more risk when they have less control. The survey also showed that both experts and non-experts saw many emerging technologies as involuntary, even though these technologies usually obtain consent from users for everything.

Reflections and Connections

I think that this paper is more important than ever, and it will only continue to become more important as time goes on. In our modern world, more and more of the things we interact with every day are data-driven technologies that wield extreme power, both to help us do things better and to let bad actors hurt innocent people.

I also think that the paper’s conclusions match what I expected. Many new technologies are abstract, and their inner workings are never seen. They are also much harder for laypersons to understand than the technology of decades past. In the past, you could see that your money was secure in a vault, see the big lock on your bike that made it hard to steal, and know that the physical laws of nature made it hard for other people to take your stuff; you had a general idea of how hard it was to break your security measures because you could see and feel the things that protected you. Now, things are much different. You have no way of knowing what is protecting your money at the bank, much less understanding the security algorithms that companies use to keep your data safe. Maybe they’re good, maybe they’re not, but you probably won’t know until someone hacks in. The digital world also disregards many of the limits we experience in real life. In person, it is nearly impossible for someone in India to rob me without going through a lot of hassle, but an online attacker can break into bank accounts all across the world and be gone without a trace. This new world of risk is hard to understand because we aren’t used to it and because it looks so different from the risks we experience in real life.

Questions

  1. How can we better educate people on the risks of the online world?
  2. How can we better connect abstract online security vulnerabilities to real world, easy to understand vulnerabilities?
  3. Should companies need to be more transparent about security risks to their customers?


04/15/2020 – Sushmethaa Muhundan – What’s at Stake: Characterizing Risk Perceptions of Emerging Technologies

This work aims to explore the impact of perceived risk on choosing to use technology. A survey was conducted to assess the mental models of users and technologists regarding the risks of using emerging, data-driven technologies. Guidelines for developing a risk-sensitive design were then explored in order to address and mitigate the perceived risk. This model aims to identify when misaligned risk perceptions may warrant reconsideration of a design. Fifteen risks relating to technology were devised, and a total of 175 participants (26 experts and 149 non-experts) were recruited to assess the perceived risk of each. Results showed that the technologists were more skeptical about using data-driven technologies than the non-experts. The authors therefore urge designers to strive harder to make end-users aware of the potential risks involved in their systems. The study recommends that design decisions regarding risk-mitigation features for a particular technology be sensitive to the difference between the public’s perceived risk and the acceptable marginal perceived risk at that risk level.
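As a thought experiment, that recommendation can be read as a simple decision rule. The sketch below is only one possible interpretation, assuming perceived risk has already been quantified on a common 0–1 scale; the function name and the numbers are hypothetical, not taken from the paper.

    # Hypothetical sketch (invented names and numbers): flag a design when
    # the public's perceived risk falls below the acceptable marginal
    # perceived risk at that risk level, i.e. users underestimate the risk.
    def warrants_reconsideration(public_perceived_risk: float,
                                 acceptable_marginal_risk: float) -> bool:
        return public_perceived_risk < acceptable_marginal_risk

    # Example: the acceptable marginal perceived risk for a feature is 0.6,
    # but surveyed users rate the risk at only 0.35 on average.
    if warrants_reconsideration(public_perceived_risk=0.35,
                                acceptable_marginal_risk=0.6):
        print("Perception gap: add disclosures or reconsider the rollout.")

Under this reading, a design needs extra risk-mitigation work whenever users perceive less risk than the acceptable threshold for that technology.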

Throughout the paper, there is a focus on creating design guidelines that reduce risk exposure and increase public awareness of potential risks, and I feel like this is the need of the hour. The paper focuses on identifying remedies that set appropriate expectations in order to help the public make informed decisions. This effort is good since it strives to bridge the gap and keep users informed about the reality of the situation.

It is concerning to note that the results found technologists to be more skeptical about using data-driven technologies than non-experts. This is perturbing because it shows that the risks of the latest technologies are perceived as greater by the group involved in creating them than by the users of those technologies.

Although the counts of expert and non-expert participants were skewed, it was interesting that when the results were aggregated, the top three highest perceived risks were the same for both groups; the only difference was the order of ranking.

It was interesting to note that the majority of both groups rated nearly all risks related to emerging technologies as characteristically involuntary. This strongly suggests that the consent procedures in place are not effective: either the information is not being conveyed to users transparently, or it is represented in so complex a manner that the content is not understood by end-users.

  • In the context of the current technologies we use on a daily basis, which factor is more important from your point of view: personal benefits (personalized content) or privacy?
  • The study involved a total of 175 participants, comprising 26 experts and 149 non-experts. Given that there is a huge difference in these numbers and the divide is not even close to equal, was it feasible to analyze and draw conclusions from the study conducted?
  • Apart from the suggestions in the study, what are some concrete measures that could be adopted to bridge the gap and keep the users informed about the potential risks associated with technology?


4/15/2020 – Nurendra Choudhary – What’s at Stake: Characterizing Risk Perceptions of Emerging Technologies

Summary

In this paper, the authors study people’s mental models of the risks associated with AI systems. To analyze risk perception, they study 175 individuals, both individually and comparatively, while also factoring in psychological factors. Additionally, they analyze the factors that lead to people’s conceptions or misconceptions in risk assessment. Their analysis shows that technologists and AI experts consider the studied risks to pose more threat to society than non-experts do. Such differences, according to the authors, can be utilized to inform system design and decision-making.

However, most of the subjects agreed that such system risks (identity theft, personalized filter bubbles) were not voluntarily assumed but arose as a consequence or side-effect of adopting valuable tools or services. The paper also discusses risk-sensitive designs that need to be applied when the difference between public and expert opinion on risk is high. The authors emphasize integrating risk sensitivity earlier in the design process, rather than the current practice where it is an afterthought for an already deployed system.

Reflection

Given the recent spread of AI technologies into everyday life (Tesla cars, Google Search, Amazon Marketplace, etc.), this study is very necessary. The risks do not just involve test subjects but a much larger populace that is unable to comprehend the technologies intruding into their daily lives, which leaves them vulnerable to exploitation. Several cases of identity theft and scams have already claimed victims due to lack of awareness. Hence, it is crucial to analyze the amount of information that can reduce such cases. Additionally, a system should provide a comprehensive analysis of its limitations and possible misuse.

Google Assistant records conversations to detect its initiation phrase “OK Google”. It depends on the fact that the recording is a stream and no data is stored except a segment. However, a malicious listener could extract the streams and use another program to stitch them into comprehensible knowledge that can be exploited. Users are confident in the system because of the speech segmentation, but an expert can see through the ruse and imagine the listener scenario just from knowing that such systems exist. This knowledge is not entirely expert-oriented and can be transferred to users, thus preventing exploitation.

Questions

  1. Think about systems that do not rely on or have access to user information (e.g. Google Translate, DuckDuckGo). What information can they still get from users? Can this be utilized in an unfair manner? Would these be risk-sensitive features? If so, how should the system design change?
  2. Unethical hackers generally work in networks and are able to adapt to security reinforcements. Can security reinforcements utilize risk-sensitive designs to overcome hacker adaptability? What changes along these lines could be made to the current system?
  3. Experts tend to show more caution towards technologies. What amount of knowledge introduces such caution? Can this amount be conveyed to all the users of a particular product? Would this knowledge help risk-sensitivity?
  4. Do you think the individuals selected for the task are a representative set? They utilized MTurk for their study. Isn’t there an inherent presumption of being comfortable with computers? How could this bias the study? Is the bias significant?

Word Count: 542
