04/22/20 – Myles Frantz – The Knowledge Accelerator: Big Picture Thinking in Small Pieces

Summary

Maintaining a public, open source website can be difficult since the site is supported by individuals who are not paid. This team investigated using a crowdsourcing platform not only to support the website but also to create its articles. These articles (or tasks) were broken down into micro tasks that were manageable and scalable for the crowdsourced workers. These micro tasks were integrated throughout other HITs, and workers were given extra contributions in order to relieve any reluctance about editing other crowd workers' work.

Reflection

I appreciate the competitive nature of comparing both a supervised learning (SL) and a reinforcement learning (RL) agent in the same game scenario of helping the human succeed by aiding them as best as it can. However, I take issue with one of their contributions: the relative comparison between the SL and RL bots. Within their contributions, they explicitly say they find “no significant difference in performance” between the different models. While they continue to describe the two methods as performing approximately equally, their own reported data describes one model as better in most measurements. Within Table 1 (the comparison of humans working with each model), SL is reported as slightly better on both Mean Rank and Mean Reciprocal Rank (lower is better for the former, higher for the latter). Within Table 2 (the comparison of the multitude of teams), there was only one scenario where the RL model performed better than the SL model. Lastly, even in the participants' self-reported perceptions, the SL model only decreased performance in 1 of 6 different categories. Though it may be a small difference in performance, their diction downplays part of the argument they're making. Though I admit the SL model having a better Mean Rank by 0.3 (from the Table 1 MR difference or the Table 2 Human row) doesn't appear to be a big difference, I believe part of their contribution statement, “This suggests that while self-talk and RL are interesting directions to pursue for building better visual conversational agents…”, is not an accurate description, since by their own data it is empirically contradicted.
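As a side note on the metrics themselves, here is a minimal sketch of how Mean Rank and Mean Reciprocal Rank are typically computed; the rank values below are invented for illustration and are not the paper's data.

    # Minimal sketch of Mean Rank (MR) and Mean Reciprocal Rank (MRR).
    # The rank lists below are invented for illustration, not the paper's data.

    def mean_rank(ranks):
        """Average rank of the correct answer; lower is better."""
        return sum(ranks) / len(ranks)

    def mean_reciprocal_rank(ranks):
        """Average of 1/rank; higher is better."""
        return sum(1.0 / r for r in ranks) / len(ranks)

    # Hypothetical ranks assigned to the correct image by two dialog agents.
    sl_ranks = [1, 2, 4, 1, 3]  # supervised learning agent
    rl_ranks = [2, 2, 5, 1, 3]  # reinforcement learning agent

    print(mean_rank(sl_ranks), mean_reciprocal_rank(sl_ranks))
    print(mean_rank(rl_ranks), mean_reciprocal_rank(rl_ranks))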

Questions

  • Though I admit I focus on the representation of the data and the delivery of their contributions while they focus on the human-in-the-loop aspect, within a machine learning context I imagine the decrease in accuracy (by 0.3, or approximately 5%) would not be described as insignificant. Do you think their verbiage truly represents the machine learning relevance?
  • Do you think recruiting more Turk workers (they used data from at least 56 workers) or adding age requirements would change their results?
  • Though evaluating the quality of human-AI collaboration is imperative to ensure AIs are built adequately, there commonly seems to be a disparity between evaluating that collaboration and comparing AI with AI. Given this disconnect, their statement on progress between the two kinds of collaboration studies seems like a fundamental idea. Do you think this work is more idealistic or more fundamental in its contributions?


04/22/20 – Myles Frantz – Opportunities for Automating Email Processing: A Need-Finding Study

Summary

Email is a formalized standard used throughout companies, colleges, and schools. It is also steadily used as documentation within companies, keeping track of requirements. Since email is being used for increasingly many purposes, people have ever more uses for it. The team studied these various uses of email and a more integrated way to automate email rules. Based on a thorough survey, the team created a domain-specific language. By integrating it with the Internet Message Access Protocol (IMAP), users are also able to create more explicit and dynamic rules.

Reflection

Working within a company, I can greatly appreciate the granularity of the provided framework. Within companies, emails are used as a kind of “rolling documentation”. This rolling documentation is in line with Agile, as it captures new requirements added later in a story. Creating very specific rules pertaining to certain scrum masters may be necessary to provide reminders for the rest of the team. Extending the automation into other tools could also lead to a more streamlined deployment pipeline, enabling an email from the release manager to signal a release. Despite the wide acceptance of email, tools like Mattermost offer more readily available direct integration. This availability is solely due to the open application programming interface that Mattermost provides. Despite the tools Google and Microsoft provide around email, the open source community offers a faster platform for sharing this information.

In addition to the rules provided through the interfaces, I believe the Python email interface is an incredible extension for automating email. The labeling system provided within many email clients is limited to rudimentary rules. Integrating richer rules could potentially create better reminders in schools or within an advisor-advisee relationship. A reminder rule could, for example, help issue reminders about grants or ETD issues. Since these rules are written in Python, they can be shared among lab groups to ensure that required emails are automatically managed. Instead of being limited to a single markdown-based language, users can work in Python, the most popular language according to the IEEE top programming languages survey.
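To make this concrete, here is a minimal sketch of such a rule using Python's standard imaplib and email modules; the server name, credentials, and advisor address are hypothetical placeholders and not anything from the paper's system.

    # Minimal sketch of an email reminder rule: flag unread messages from an
    # advisor so the mail client surfaces them as reminders. The host, account,
    # and sender address below are hypothetical placeholders.
    import email
    import imaplib

    IMAP_HOST = "imap.example.edu"    # hypothetical university IMAP server
    USER, PASSWORD = "student", "app-password"
    ADVISOR = "advisor@example.edu"   # hypothetical advisor address

    with imaplib.IMAP4_SSL(IMAP_HOST) as conn:
        conn.login(USER, PASSWORD)
        conn.select("INBOX")
        # Standard IMAP search: unread messages from the advisor.
        _, data = conn.search(None, "UNSEEN", "FROM", ADVISOR)
        for num in data[0].split():
            _, msg_data = conn.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            print("Reminder:", msg["Subject"])
            # Flag the message so it stands out as a pending reminder.
            conn.store(num, "+FLAGS", "\\Flagged")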

Questions

  • Utilizing a common standard ensures there is a good interface for people to learn and get used to across different technologies and companies. Do you think Python scripting is a common interface, compared to other markdown-based languages, for users without a computer science background?
  • The Python language can be used on various platforms thanks to its libraries. In addition, many Python programs are extensible to various platforms through an application programming interface. Given this potential for integration with other systems, what other systems do you think this email system could be integrated with?
  • This system was created by adapting current technology: it builds on the common Internet Message Access Protocol, a fundamental mail protocol, which makes it adaptable to current usage on various servers. What kinds of rules would you integrate with your university email?


04/15/20 – Akshita Jha – Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations

Summary:
“Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations” by Starbird et al. talks about strategic information operations like disinformation, political propaganda, and conspiracy theories. They gather valuable insights about the functioning of these organizations by studying the online discourse in depth, both qualitatively and quantitatively. The authors present three case studies: (i) Trolling Operations by the Internet Research Agency Targeting U.S. Political Discourse (2015-2016), (ii) The Disinformation Campaign Targeting the White Helmets, and (iii) The Online Ecosystem Supporting Conspiracy Theorizing about Crisis Events. These three case studies highlight the coordinated effort of several organizations to spread misinformation and influence the political discourse of the nation. Through these case studies, the authors attempt to go beyond understanding online bots and trolls and move towards a more nuanced and descriptive perspective of these coordinated destructive online operations. This work also successfully highlights a challenging problem for “researchers, platform designers, and policy-makers — distinguishing between orchestrated, explicitly coordinated, information operations and the emergent, organic behaviors of an online crowd.”

Reflections:
This is an interesting work that talks about misinformation and the orchestrated effort that goes into spreading it. I found the overall methodology adopted by the researchers particularly interesting. The authors use qualitative, quantitative, and visual techniques to effectively demonstrate the spread of misinformation from the actors (Twitter accounts and websites that initiate the discussion) to the target audience (the accounts that retweet and are connected to these actors either directly or indirectly). For example, the case study on the Internet Research Agency targeting U.S. political discourse, which greatly influenced the 2016 elections, used network analysis and visual techniques to highlight the pervasiveness of the Russian IRA agents. The authors noted that the “fake” accounts influenced both sides: left-leaning accounts criticized and demotivated support for the U.S. presidential candidate, Hillary Clinton, while accounts on the right promoted the now-president, Donald Trump. Similarly, these fake Russian accounts were active on both sides of the discourse around the #BlackLivesMatter movement. It is commendable that the authors were able to successfully uncover the hidden objective of these misinformation campaigns and observe how these accounts presented themselves as both people and organizations in order to embed themselves in the narrative.

The authors also mention that they make use of trace ethnography to track down the activities of the fake accounts. I was reminded of another work, “The Work of Sustaining Order in Wikipedia: The Banning of a Vandal”, that also made use of trace ethnography to narrow down a rogue user. It would be interesting to read about a work where trace ethnography was used to track down a “good user”. I would have liked it if the paper had gone into the details of their quantitative analysis and the exact methodology they adopted for their network analysis. I am also curious whether the accounts shown were cherry-picked for their destructive influence or whether the resulting graph we see in the paper covers all relevant accounts. It would have helped if the authors had spoken about the limitations of their work and their own biases that might have had some influence on the results.

Questions:
1. What are your general thoughts on the paper?
2. Do you think machine learning algorithms can help in such a scenario? If yes, what role will they play?
3. Have you ever interacted with an online social media bot? What has that been like?


04/15/20 – Fanglan Chen – Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Summary

Nguyen et al.'s paper “Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking” explores the use of automatic fact-checking, the task of assessing the veracity of claims, as an assistive technology to augment human decision making. Many previous papers propose automated fact-checking systems, but few of them consider the possibility of having humans as part of a human-AI partnership for the same task. By involving humans in fact-checking, the authors study how people would understand, interact with, and establish trust with an AI fact-checking system. The authors introduce their design and evaluation of a mixed-initiative approach to fact-checking, combining people's judgment and experience with the efficiency and scalability of machine learning and automated information retrieval. Their user study shows that crowd workers involved in the task tend to trust the proposed system: participant accuracy in assessing claims improved when they were exposed to correct model predictions. But sometimes this trust is so strong that exposure to the model's incorrect predictions reduces their accuracy on the task.

Reflection

Overall, I think this paper conducted an interesting study of how the proposed system actually influences humans' assessment of the factuality of claims in the fact-checking task. However, the model transparency studied in this research is different from what I expected. When talking about model transparency, I expect an explanation of how the training data is collected, what variables are used to train the model, and how the model works in a stepwise process. In this paper, the approach to increasing the transparency of the proposed system is to show the source articles on which the model bases its true or false judgment of the given claim. The next step is letting the crowd workers in the system group go through each source article and see whether it makes sense and whether they agree or disagree with the system's judgment. For this task, I feel a more important transparency problem is how the model retrieves the articles and how it ranks them in the presented order. Noise in the training data may introduce bias into the model, but there is little we can tell merely by checking the retrieved results. That makes me think that there might be different levels of transparency: at one level, we can check the input and output at each step, and at another level, we may get exposure to which attributes the model actually uses to make the prediction.

The authors conducted three experiments with a participant survey on how users would understand, interact with, and establish trust with a fact-checking system, and on how the proposed system actually influences users' assessment of the factuality of claims. The experiments were conducted as a comparative study between a control group and a system group to show that the proposed system actually works. Firstly, I would like to know whether the randomly recruited workers in the two groups differ in demographics in ways that may potentially impact the final results. Is there a better way to conduct such experiments? Secondly, the performance difference between the two groups with respect to human error is quite small, and there is no additional evidence that the difference is statistically significant. Thirdly, the paper reports experimental results on only five claims, including one with incorrectly supportive articles (claim 3), which does not seem representative and makes the task somewhat misleading. Would it be better to apply quality control to the claims in the task design?
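On the significance point, here is a minimal sketch of the kind of check I would have liked to see, a two-proportion z-test on the two groups' accuracy; the counts below are invented for illustration and are not taken from the paper.

    # Minimal sketch of a two-proportion z-test comparing claim-assessment
    # accuracy between a control group and a system group. Counts are invented.
    from math import erf, sqrt

    def two_proportion_ztest(correct_a, n_a, correct_b, n_b):
        p_a, p_b = correct_a / n_a, correct_b / n_b
        p_pool = (correct_a + correct_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Two-sided p-value from the standard normal distribution.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Hypothetical: 113/150 correct judgments with the system vs. 105/150 without.
    z, p = two_proportion_ztest(113, 150, 105, 150)
    print(f"z = {z:.2f}, p = {p:.3f}")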

Discussion

I think the following questions are worthy of further discussion.

  • Do you think that, with the source articles presented by the system, users develop more trust in the system?
  • What are the reasons that, for some claims, the retrieval results of the proposed system degrade human performance in the fact-checking task?
  • Do you think there is any flaw in the experimentation design? Can you think of a way to improve it?
  • Do you think we need personalized results in this kind of task where the ground truth is provided? Why or why not?


04/15/20 – Fanglan Chen – Algorithmic Accountability

Summary

Diakopoulos's paper “Algorithmic Accountability” explores the broad question of how algorithms exert their power and why they are worthy of scrutiny by journalists, and studies the role of computational approaches like reverse engineering in articulating algorithmic transparency. Automated decision-making algorithms are now used throughout businesses and governments. Given that such algorithmically informed decisions have the potential for significant societal impact, the goal of this paper is to present algorithmic accountability reporting as a mechanism for articulating and elucidating the power structures, biases, and impacts that automated algorithms exercise in our society. Using reverse engineering methods, the researcher conducted five case studies of algorithmic accountability reporting, covering autocompletion, autocorrection, political email targeting, price discrimination, and executive stock trading plans. The applicability of transparency policies for algorithms is also discussed, along with the challenges of conducting algorithmic accountability reporting as a broadly viable investigative method.

Reflection

I think this paper touches upon an important research question about the accountability of computational artifacts. Our society currently relies on automated decision-making algorithms in many different areas, ranging from dynamic pricing to employment practices to criminal sentencing. It is important that developers, product managers, and company/government decision-makers are aware of the possible negative social impacts and the necessity of public accountability when they design or implement algorithmic systems.

This research also makes me think about whether we need to be that strict with every algorithmic system. To answer the question, I think we need to consider different application scenarios, which are not fully discussed in the paper. Take the object detection problem in computer vision as an example: one application scenario is detecting whether there is a car in an image for automatic labeling, and another is checking whether there is a tumor in a computed tomography scan for disease diagnosis. Clearly, a much higher level of algorithmic accountability is required in the second scenario. Hence, in my opinion, the accountability of algorithms needs to be discussed in the context of the application scenario, along with the user's expectations and the potential consequences when the algorithms go wrong.

The topic of this research is algorithmic accountability. As far as I am concerned, accountability is a broad concept, including but not limited to an obligation to report, explain, and justify algorithmic decision-making, as well as to mitigate any potential harms. However, I feel this paper mainly focuses on the transparency aspect of the problem, with little discussion of other aspects. There is no denying that transparency is one way algorithms can be made accountable, but just as the paper puts it, “[t]ransparency is far from a complete solution to balancing algorithmic power.” I think other aspects such as responsibility, fairness, and accuracy are worthy of further exploration as well. Considering these aspects throughout the design, implementation, and release cycles of algorithmic system development would lead to a more socially responsible deployment of algorithms.

Discussion

I think the following questions are worthy of further discussion.

  • What aspects other than transparency do you think would be important in the big picture of algorithmic accountability?
  • Can you think of some application domains in which we would hardly let automated algorithms make decisions for humans?
  • Do you think transparency potentially leaves the algorithm open to manipulation and vulnerable to adversarial attacks? Why or why not?
  • Who should be responsible if algorithmic systems make mistakes or have undesired consequences?


04/15/20 – Lulwah AlKulaib-BelieveItOrNot

Summary

Fact checking needs to be done in a timely manner, especially nowadays when it is used on live TV shows. While existing work presents many automated fact-checking systems, the human in the loop is neglected. This paper presents the design and evaluation of a mixed-initiative approach to fact-checking. The authors combine human knowledge and experience with the efficiency and scalability of automated information retrieval and machine learning. They present a user study in which participants used the proposed system to help with their own assessment of claims. The results suggest that individuals tend to trust the system: participant accuracy in assessing claims improved when they were exposed to correct model predictions. Yet participants over-trusted the system when the model was wrong, and exposure to the system's incorrect predictions often reduced human accuracy. Participants who were given the option to interact with these incorrect predictions were often able to improve their own performance. This suggests that models have to be transparent, especially when it comes to human-computer interaction, since AI models might fail and humans could be the key factor in correcting them.

Reflection

I enjoyed reading this paper. It was very informative about the importance of transparent models in AI and machine learning, and about how transparent models can improve performance when we include the human in the loop.

In their limitations, the authors discuss important points about relying on crowdworkers. They explain how MTurk participants should not all be given the same weight when analyzing their responses, since different participant demographics or incentives may influence findings. For example, non-US MTurk workers may not be representative of American news consumers or familiar with the latest news, and that could affect their responses. The authors also acknowledge that MTurk workers are paid by the task, which could cause some of them to simply agree with the model's response when in reality they do not, just so they can complete the HIT and get paid. They found a minority of such responses, and it made me think of ways to mitigate this. As in the papers from last week, studying the behavior of an MTurk worker while completing the task might indicate whether the worker actually agrees with the model or is just trying to get paid.

The authors mention the negative impact that could potentially stem from their work: as we saw in their experiment, the model made a mistake but the humans over-trusted it. Dependence on AI and technology makes users give these systems more credit than they should, and such errors could affect users' perception of the truth. Addressing these limitations should be an essential requirement for further work.

Discussion

  • Where would you use a system like this most?
  • How would you suggest mitigating errors produced by the system?
  • As humans, we trust AI and technology more than we should. How would you redesign the experiment to ensure that the crowdworkers actually check the presented claims?


04/15/20 – Lulwah AlKulaib-RiskPerceptions

Summary

People's choice to use a technology is associated with many factors; one of them is the perception of associated risk. The authors wanted to study the influence of perceived risk on technology use, so they adapted a survey instrument from the risk perception literature to assess the mental models of users and technologists around the risks of emerging, data-driven technologies, for example identity theft and personalized filter bubbles. The authors surveyed 175 individuals on MTurk for comparative and individual assessments of risk, including characterizations using psychological factors. They report their findings on group differences between experts (tech employees) and non-experts (MTurk workers) in how they assess risk and what factors may contribute to their conceptions of technological harm. They conclude that technologists see these risks as posing a bigger threat to society than do non-experts. Moreover, across groups, participants did not see technological risks as voluntarily assumed. The differences in how participants characterize risk have implications for the future of design, decision making, and public communications, which the authors discuss under what they call risk-sensitive design.

Reflection

This was an interesting paper. Being a computer science student has always been one of the reasons I question technology: why is a service being offered for free, what's in it for the company, and what do they gain from my use?

It is interesting to see that the authors' findings are close to my real-life experiences. When I talk to friends who do not care about risk and are more interested in a service that makes something easier for them, and I mention those risks, they usually have not thought about them and so do not consider them when making those decisions. Some of those risks are important for them to understand, since a lot of the available technology (apps at least) could be used maliciously against its users.

I believe that risk is viewed differently by experts and non-experts, and that should be highlighted. This explains how problems like the filter bubble mentioned in the paper have become so concerning. It is very important to know how to respond when there is such a huge gap in how experts and the public think about risk. There should be a conversation to bridge the gap and educate the public in ways that are easy to perceive and accept.

I also think the new design elements and the way designers are using risk-sensitive design techniques for technologies are important. They help introduce technology in a more comforting and socially responsible way. It feels more gradual than sudden, which makes users more receptive to using it.

Discussion

  • What are your thoughts about the paper?
  • How do you define technology risk?
  • What are the top 5 risks that you can think of in technology from your point of view? How do you think that would differ when asking someone who does not have your background knowledge?
  • What are your recommendations for bridging the gap between experts and non-experts when it comes to risk?


4/15/20 – Lee Lisle – Algorithmic Accountability: Journalistic Investigation of Computational Power Structures

Summary

Diakopoulos's paper makes the point that AI has a power over users that is not often clearly expressed, even when those algorithms have massive amounts of influence over users' lives. The author points out four different ways algorithms exert power over users: prioritization, classification, association, and filtering. After a brief description of each, the author speaks about how transparency is key to balancing these powers.

The author then discusses a series of AI implementations and shows how they exert some amount of power without informing the user, using autocompletion on Google and Bing, autocorrection on the iPhone, political emails, price discrimination, and stock trading as examples. The author then uses interviews to gain insight into how journalists come to better understand AIs and write stories about them. This is a form of accountability, and journalists use this information to allow users to understand the technology around them.

Personal Reflection

I thought this paper brought up a good point that was also seen in other readings this week: even if the user is given agency over the final decision, the AI biases them towards a particular set of actions. Even if the weaknesses of the AI are understood, as in the Bansal et al. paper on updates, the participant is still biased by the actions and recommendations of the AI. This power, combined with the effect it can have on people's lives, can greatly change the course of those lives.

The author also makes the point that interviewing designers is a form of reverse engineering. I had not thought of it in this way before, so it was an interesting insight into journalism. Furthermore, the idea that AIs are black boxes, but that their inputs and outputs can be manipulated so that the interior workings can be better understood, was another thing I hadn't thought of.
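To make that input/output idea concrete, here is a toy sketch of black-box probing: vary one input at a time and compare the outputs. The pricing function here is an invented stand-in for an opaque system, not anything from the paper.

    # Toy sketch of black-box input/output auditing: vary one input at a time
    # and record how the output changes. The pricing function below is an
    # invented stand-in for a real system whose internals we cannot see.
    import random

    def black_box_price(zip_code, device):
        # Hidden logic standing in for an opaque commercial algorithm.
        base = 100
        if device == "mac":
            base += 15
        if zip_code.startswith("9"):
            base -= 5
        return base + random.uniform(-2, 2)

    probes = [(z, d) for z in ("24060", "90210") for d in ("pc", "mac")]
    for zip_code, device in probes:
        quotes = [black_box_price(zip_code, device) for _ in range(100)]
        print(zip_code, device, round(sum(quotes) / len(quotes), 2))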

I was actually aware of most of the cases the author presented as ways algorithms exert power. For example, I have used different computers and private browsing modes in the past to ensure I was getting the best deal on travel or hotels.

Lastly, I thought the idea of journalists having to uncover these (potential) AI malpractices presented an interesting quandary. Once they do this, they must publish a story, but most people will likely not hear about it. There is an interesting problem here of how to warn people about potential red flags in algorithms, which I felt the paper didn't discuss well enough.

Questions

  1. Are there any specific algorithms that have biased you in the past? How did they? Was it a net positive, or net negative result? What type of algorithmic power did it exert?
  2. Which of the four types of algorithmic power is the most serious, in your opinion? Which is the least?
  3. Did any of the cases surprise you? Do they change how you may use technology in the future?
  4. In what ways can users abuse these AI systems?


04/15/20 – Lee Lisle – Believe it or Not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking

Summary

Nguyen et al.'s paper discusses the rise of misinformation and the need to combat it via tools that can verify claims while also maintaining users' trust in the tool. They designed an algorithm that finds sources relevant to a given claim to determine whether or not the claim is accurate, and they weight the sources based on their reputation. They then ran three studies (with over 100 participants in each) in which users could interact with the tool and change settings (such as source weighting) in order to evaluate the design. The first study found that participants trusted the system too much: when it was wrong, they tended to be inaccurate, and when it was right, they were more typically correct. The second study allowed participants to change the inputs and inject their own expertise into the scenario; it found that the sliders did not significantly impact performance. The third study focused on gamification of the interface and found no significant difference.
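As a rough illustration of what adjusting a source-reputation slider might do, here is a minimal sketch of reputation-weighted aggregation over retrieved sources; the stance scores and weights are invented and are not the paper's actual model.

    # Minimal sketch of reputation-weighted aggregation over retrieved sources.
    # Each source has a stance toward the claim in [-1, 1] (refutes..supports)
    # and a reputation weight a user could adjust with a slider. All values
    # below are invented for illustration.
    sources = [
        {"name": "outlet_a", "stance": 0.8, "reputation": 0.9},
        {"name": "outlet_b", "stance": -0.6, "reputation": 0.4},
        {"name": "outlet_c", "stance": 0.3, "reputation": 0.7},
    ]

    def claim_score(sources):
        """Weighted average stance; positive suggests the claim is true."""
        total_weight = sum(s["reputation"] for s in sources)
        return sum(s["stance"] * s["reputation"] for s in sources) / total_weight

    print(round(claim_score(sources), 3))

    # Moving a slider corresponds to changing a reputation weight and re-scoring.
    sources[1]["reputation"] = 0.9
    print(round(claim_score(sources), 3))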

Personal Reflection

I enjoyed this paper from a 50,000-foot perspective, as they tested many different interaction types and found what could be considered negative results. I think papers showing that not all work necessarily pans out have a certain amount of extra relevance; they certainly show that there's more at work than just novelty.

I especially appreciated the study on the effectiveness of gamification. Often, the prevailing theory is that gamification increases user engagement and increases the tools’ effectiveness. While the paper is not conclusive that gamification cannot do this, it certainly lends credence to the thought that gamification is not a cure-all.

However, I took some slight issue with their AI design. In particular, the AI determined that the phrase “Tiger Woods” indicated a supportive position. While their stance was that AIs are flawed (true), I felt that this error was quite a bit more than we can expect from normal AIs, especially ones that are being tweaked to avoid these scenarios. I would have liked to see experiments 2 and 3 improved with a better AI, as it does not seem like they cross-compared studies anyway.

Questions

  1. Does the interface design including a slider to adjust source reputations and user agreement on the fly seem like a good idea? Why or why not?
  2.  What do you think about the attention check and its apparent failure to accurately check? Should they have removed the participants with incorrect answers to this check?
  3. Should the study have included a pre-test to determine how the participants’ world view may have affected the likelihood of them agreeing with certain claims? I.E., should they have checked to see if the participants were impartial, or tended to agree with a certain world view? Why or why not?
  4. What benefit do you think the third study brought to the paper? Was gamification proved to be ineffectual, or is it a design tool that sometimes doesn’t work?


04/15/2020 – Palakh Mignonne Jude – What’s at Stake: Characterizing Risk Perceptions of Emerging Technologies

SUMMARY

The authors of this paper adapt a survey instrument from existing risk perception literature to analyze the perception of risk surrounding newer, emerging data-driven technologies. The authors surveyed 175 participants (26 experts and 149 non-experts), categorizing as an 'expert' anyone working in a technical role or earning a degree in a computing field. Inspired by the original 1980s paper 'Facts and Fears: Understanding Perceived Risk', the authors consider 18 risks (15 new risks and 3 from the original paper). These 15 new risks include 'biased algorithms for filtering job candidates', 'filter bubbles', and 'job loss from automation'. The authors also consider 6 psychological factors while conducting this study. The non-experts (as well as a few who were later considered to be 'experts') were recruited using MTurk. The authors borrowed quantitative measures from the original paper and added two new open-response questions: describing the worst-case scenario for the top three risks (as indicated by the participant) and adding new serious risks to society (if any). The authors also propose a risk-sensitive design approach based on the results of their survey.

REFLECTION

I found this study to be very interesting and liked that the authors adapted the survey from existing risk perception literature. The motivation of the paper reminded me of a New York Times article titled 'Twelve Million Phones, One Dataset, Zero Privacy' and the long-term implications of such data collection and its impact on user privacy.

I found it interesting to learn that the survey results indicated that both experts and non-experts rated nearly all risks related to emerging technologies as characteristically involuntary. It was also interesting to learn that, despite consent processes built into software and web services, the corresponding risks were not perceived to be voluntary. I thought it was good that the authors included the open-response question on what users perceived as the worst-case scenario for the top three riskiest technologies, and I liked that they provided some amount of explanation for their survey results.

The authors mention that technologists should attempt to allow more discussion around data practices and be willing to hold off on rolling out new features that raise more concern than excitement. However, this made me wonder whether any technology companies would be willing to do this. It would probably add overhead, and the results may not be perceived by the company to be worth the amount of time and effort that such evaluations entail.

QUESTIONS

  1. In addition to the 15 new risks added by the authors for the survey, are there any more risks that should have been included? Are there any that needed to be removed or modified from the list? Are there any new psychological factors that should have been added?
  2. As indicated by the authors, there are gaps in the understanding of the general public. The authors suggest that educating the public would enable this gap to be reduced more easily as compared to making the technology less risky. What is the best way to educate the public in such scenarios? What design principles should be kept in mind for the same?
  3. Have any follow-up studies been conducted to identify ‘where’ the acceptable marginal perceived risk line should be drawn on the ‘Risk Perception Curve’ introduced in the paper?  
