04/08/2020 – Dylan Finch – Agency plus automation: Designing artificial intelligence into interactive systems

Word count: 667

Summary of the Reading

This paper focuses on the problem of how humans interact with AI systems. It begins with a discussion of automation, arguing that fully automated systems are a long way off and are currently not the best approach. Since we do not yet have the capability to build fully automated systems, we should not be trying to; instead, we should make it easier for humans and machines to interact and work together.

The paper then describes three systems that apply these principles to have humans and machines work together. All of these systems give the human the most power: the human always has the final say on what to do. The machine gives suggestions, but the human decides whether or not to accept them. The systems include a tool for data analytics, a tool for data visualization, and a tool for natural language translation.

Reflections and Connections

This paper starts with a heavy dose of skepticism about automation. I think that many people are too optimistic about automation. This class has shown me how people are needed to make these “automated” systems work. Crowd workers often fill in the gaps for systems that pretend to be fully automated but are not. Rather than pretend that people aren’t needed, we should embrace the fact and build tools to help the people who make these systems possible. We should work to help people rather than replace them. It will take a long time to fully replace many human jobs; we should build tools for the present, not the future.

The paper also argues that a good AI should be easy to access and easy to dismiss. I completely agree. If AI tools are going to be commonplace, we need easy ways to get information from them and to dismiss them when we don’t need them. In this way, they are much like any other tool. For example, suggestion software should give suggestions but get out of the way when you don’t like any of them.

This paper brings up the idea that users prefer to be in control when using many systems, which I think many researchers miss. People like to have control. I often do a little more work so that I don’t have to use suggested actions, so that I know exactly what is being done. Or I will go back through the automated work to check that it was done correctly. For example, I would much rather make my own graph in Excel than use the suggested ones.

Questions

  1. Is the public too optimistic about the state of automation? What about different fields of research? Should we focus less on fully automating systems and instead on improving the systems we have with small doses of automation?
  2. Do companies like Tesla need to be held responsible for misleading consumers about the abilities of their AI technologies? How can we, as computer scientists, help people to better understand the limitations of AI technologies?
  3. When you use software that has options for automation, are you ever skeptical? Do you ever do things yourself because you think the system might not do it right? When we are eventually trying to transition to fully automated systems, how can we get people to trust the systems?
  4. The natural language translation experiment showed that the automated system made the translators produce more homogenous translations. Is this a good thing? When would having more similar results be good? When would it be bad?
  5. What are some other possible applications for this type of system, where an AI suggests actions and a user decides whether to accept those actions or not? What are some applications where this kind of system might not work? What limitations cause it not to work there?

04/08/2020 – Nan LI – CrowdScape: Interactively Visualizing User Behavior and Output

Summary:

This paper demonstrates a system called CrowdScape that supports humans in evaluating the quality of crowd work by presenting interactive visualizations of worker behavior and worker output through mixed-initiative machine learning (ML). The paper makes the point that quality control for complex and creative crowd work based on either the post-hoc output or behavioral traces alone is insufficient. Therefore, the authors propose that we can gain new insight into crowd-worker performance by combining behavioral observations with knowledge of worker output. CrowdScape presents a visualization of each individual trace, including mouse movement, keypresses, visual scrolling, focus shifts, and clicks, on an abstract visual timeline. The system also aggregates these features, using a combination of 1-D and 2-D scatter plots to show their distributions and enable dynamic exploration. It further explores worker output by recognizing patterns in worker submissions. Finally, CrowdScape enables users to form mental models of tasks and worker behaviors and use these models to verify worker output against majority or gold standards. The authors also present four experiments to illustrate the system’s practical operation and demonstrate its effectiveness.
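A minimal sketch of the kind of per-worker feature aggregation described above (the event names and trace format are assumptions for illustration, not CrowdScape’s actual code):

```python
from collections import Counter

def behavioral_features(trace):
    """Aggregate a worker's raw event trace into summary features.

    `trace` is a list of (timestamp_seconds, event_type) pairs, where
    event_type is e.g. "mousemove", "keypress", "scroll", "focus", "click".
    """
    counts = Counter(event for _, event in trace)
    times = [t for t, _ in trace]
    duration = max(times) - min(times) if trace else 0.0
    return {
        "mouse_moves": counts["mousemove"],
        "keypresses": counts["keypress"],
        "scrolls": counts["scroll"],
        "focus_shifts": counts["focus"],
        "clicks": counts["click"],
        "duration_s": duration,
    }

# A toy trace for one worker.
trace = [(0.0, "focus"), (1.2, "mousemove"), (2.5, "keypress"),
         (3.1, "keypress"), (4.0, "scroll"), (9.5, "click")]
print(behavioral_features(trace))
```

Feature vectors like these are what the 1-D and 2-D scatter plots would then lay out across many workers.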

Reflection:

I think the author makes a great point about addressing the quality-control issue in crowdsourcing. Quality-control approaches are limited, and often unreliable, in most systems that use crowdsourcing as a component. The most frequently used approach I have seen so far bases workers’ pay on the quality of their work, which is a reasonable way to encourage workers to provide high-quality results. Another straightforward approach is to accept the answer (such as a tag or a count) that most workers agree on.

Nevertheless, the author proposes that we should consider quality control for more complex and creative work, because these types of tasks appear more and more often, yet no appropriate quality-control mechanism exists for them. I think such a mechanism is essential in order to make better use of crowdsourcing.

I believe the most significant advantage of CrowdScape is that the system can be used very flexibly depending on the type of task. From the scenario and case studies presented in the paper, the user can evaluate workers’ output using different attributes, as well as interactive visualization methods suited to the task. Further, the types of visualization are varied, and each of them can reveal differences and patterns in workers’ behavior and their work. The system design is impressive; judging from the figures in the paper combined with the explanation, the interface is user-friendly.

My only concern is that as the number of workers increases, the points and lines on the visualization interface will become so dense that no pattern can be detected. Therefore, the system might need data-filtering tools or further interaction techniques to deal with this problem.

Questions:

  1. What are the most commonly used quality-control approaches? Which quality-control approach will you apply in your project?
  2. There are many kinds of HITs on MTurk. Which types of work do you think require quality control, and which do not?
  3. For information visualization, one of the challenges is dealing with a significant amount of data. How should we deal with this problem in the CrowdScape system?

Word Count: 588

04/08/2020 – Myles Frantz – Agency plus automation: Designing artificial intelligence into interactive systems

Summary

Throughout the field of artificial intelligence, many recent research efforts have aimed at full automation, ignoring the jobs that would be automated away. To keep progress moving on both fronts, this team built on their previous work to create three different technologies that both visualize and aid the collaboration between workers and machine learning. For data analysts, since there have been efforts to automate the cleaning of raw data, one of the team’s projects was adapted to visualize data in a loose, Excel-like table and suggest transformations across the various cells. Digging further into data-analysis opportunities, they adapted more of their tools to copy data and automatically suggest visualizations and tables that better graph the information. For predictive suggestion, the team produced multiple suggestions from which users can choose the one they believe is correct, which further improves the algorithm.

Reflection

Being a proponent of simplicity in design, I appreciate how simple and connected their applications are. Among typical modularized programs, unless they expose Application Programming Interfaces, connecting applications usually has to be done through standardized outputs that a person or another external application can edit or adapt. Being able to directly enable suggestions in data validation and connect them to an advanced graphing utility that itself suggests new graphing rules and tools is impressive.

I also appreciate how applicable their research is. Though not completely unique, creating usable applications greatly expands how far a project will stretch and be used. If a tool can be used directly and easily, it is far more likely to be extended and adopted in public projects.

Questions

  • Within the data analyst role, these tools may have helped but probably have not alleviated all of the tasks an analyst handles throughout an agile cycle, let alone a full feature. What other positions could be supported by these sorts of tools?
  • Of all the tool sets available, this may be one of many on GitHub. Having a published paper may improve the program’s odds of being used in the future, but it does not necessarily translate into a widely used public project. Ignoring the technical details (such as technical expertise, documentation, and programming styles), will this program become a popular project on GitHub?
  • Using standardized languages, the teams were able to build a higher abstraction that allows direct communication between the applications. Though this makes things easier for the development team, it may be more restrictive for any tools looking to extend or communicate with the team’s set of tools. Do you think their domain-specific languages were required for their tools, or were the languages created only to help the developers connect their applications?

04/08/2020 – Myles Frantz – CrowdScape: Interactively visualizing user behavior and output

Summary

Crowdsourcing provides a quick and easily scalable way to request help from people, but how do you ensure workers are actually paying attention instead of cheating in some way? Since tasks are handed off through a platform that abstracts away the assignment of work to workers, requesters cannot guarantee the participants’ full attention. This is why the team created CrowdScape: to better track the attention and focus of participants. Using various JavaScript libraries, CrowdScape keeps track of participants through their interactions, or lack thereof. The program can track participants’ mouse clicks, keystrokes, and browser focus changes; since Amazon Mechanical Turk is a web-based platform, JavaScript libraries are well suited to capture this information. Through the visualizations built from this data, the team demonstrates how requesters gain extra insight into worker behavior, for example whether a participant rapidly clicks and shifts between windows or stays focused on the same window.

Reflection

I appreciate the kind of insight this provides when delegating work. I mentored various workers in some of my past internships, and it caused considerable stress. The more professional workers were easier to manage, but with others it often took more time to manage and teach them than to do the work myself. Being able to do this automatically, and to discard the work of inattentive participants, provides a lot of freedom, since requesters cannot directly oversee participants as they work.

I do, however, strongly disagree with how much information is tracked and requested. As a strong proponent of privacy, I think the browser is not the best place to inject programs that watch a participant’s session and information. Though the tracking is limited to the browser, other session information, such as cookies, IDs, or UIDs, could potentially be accessed. Even if CrowdScape itself does not track it, other live JavaScript could capture that information alongside the CrowdScape program.

Questions

  • One of my initial concerns with this type of project is the amount of privacy invasion. Though it makes sense to ensure the worker is working, there is always the potential for leaks of confidential information. Even if key tracking were limited to the time when the participant is focused on the browser window, do you think this would be a major concern for participants?
  • Throughout the case studies in the team’s experiments, it seemed most of the discarded participants were using some other tool or external help. Do you think as many people would be discarded in real experiments for similar reasons?
  • Alongside the previous question, is it overreaching to potentially discredit workers simply because their working habits differ from what is expected?

04/08/2020 – Nurendra Choudhary – Agency plus automation: Designing artificial intelligence into interactive systems

Summary

In this paper, the authors study system designs that include different kinds of interaction between human agency and automation. They leverage the complementary strengths of human control and algorithms to build a more robust architecture that draws on both. They share case studies of interactive systems in three different problems: data wrangling, exploratory analysis, and natural language translation.

To achieve synchronization between automation and human agency, they propose designing shared representations of augmented tasks with predictive models of human capabilities and actions. The authors criticize the AI community’s push toward complete automation and argue that the focus should instead be on systems augmented with human intelligence. In their results, they show that such models are more usable in current situations. They show how interactive user interfaces integrate human feedback into AI, improving the systems while also providing correct results for the problem instance at hand. They use shared representations for AI that humans can edit to remove inconsistencies, thus integrating human capability into those tasks.

Reflection

This is a problem we have discussed in class several times. However, the outlook of this paper is really interesting. It shows shared representation as a method for integrating human agency. Several papers we have studied use human feedback to augment the learning process; this paper, however, discusses auditing the output of the AI system. Representation is a critical attribute of AI: its richness decides the efficiency of the system, and its lack of interpretability is generally why several AI applications are considered black-box models. I think shared representations, in a broader sense, also suggest a broader AI understanding, akin to unifying human and AI capabilities in the most optimal way.

However, such representations might limit the capability of the AI mechanisms behind them. Optimization in AI is with respect to a task, and that basic metric decides the representations in AI models. The models are effective because they can detect patterns in multi-dimensional spaces that humans cannot comprehend. The paper aims to make that space comprehensible, thus eliminating the very complication that makes an AI effective. Hence, I am not sure it is the best idea for long-term development. I believe we should stick to current feedback loops and only accept interpretable representations whose results differ insignificantly from the uninterpretable ones.

Questions

  1. How do we optimize for quality of shared representations versus quality of system’s results?
  2. The humans needed to optimize shared representations may be fewer than the number of people who can complete the task. What would be the cost-benefit ratio for shared representations? Do you think the approach will be worth it in the long term?
  3. Do we want our AI systems to be fully automatic at some point? If so, how does this approach benefit or limit the move towards the long-term goal?
  4. Should there be separate workflows or research communities that work on independent AI and AI systems with human agency? What can these communities learn from each other? How can they integrate and utilize each other’s capabilities? Will they remain independent and lead to other sub-areas of research?

Word Count: 545

04/08/2020 – Nurendra Choudhary – CrowdScape: Interactively Visualizing User Behavior and Output

Summary

In this paper, the authors address the problem of large-scale human evaluation through CrowdScape, a system based on interactive visualizations and mixed-initiative machine learning. They build on the two major previous approaches to quality control: worker behavior and worker output.

The contributions of the paper include an interactive interface for crowd-worker results, visualization of crowd behavior, techniques for exploring crowd workers’ products, and mixed-initiative machine learning for bootstrapping user intuitions. Previous work analyzed crowd-worker behavior and output independently, whereas CrowdScape provides an interface for analyzing them together. CrowdScape uses mouse movement, scrolling, keypresses, focus events, and clicks to build worker profiles. Additionally, the paper points out limitations, such as neglected user behaviors like the focus of the fovea. Furthermore, it notes that CrowdScape has less to offer in experimental setups that are primarily offline or cognitive and do not produce user movement on the system from which to analyze behavior.
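One way the “bootstrapping user intuitions” step could work in principle: the requester hand-labels a few workers, and the system ranks the rest by similarity. This is a toy nearest-centroid sketch with made-up feature vectors, not the paper’s actual ML:

```python
import math

def centroid(vectors):
    """Component-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def rank_like_good(labeled_good, labeled_bad, unlabeled):
    """Order unlabeled workers by how much closer they sit to the
    'good' centroid than to the 'bad' one (most good-like first)."""
    g, b = centroid(labeled_good), centroid(labeled_bad)
    return sorted(unlabeled,
                  key=lambda v: math.dist(v, g) - math.dist(v, b))

# Hypothetical feature vectors: (keypresses, focus_shifts).
good = [[40, 1], [35, 2]]          # requester marked these workers good
bad = [[2, 9], [1, 8]]             # and these workers bad
candidates = [[3, 10], [38, 1], [20, 4]]
print(rank_like_good(good, bad, candidates))
```

With only a handful of labels, a ranking like this lets the requester review the most suspicious submissions first rather than all of them.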

Reflection

CrowdScape is a very necessary initiative as the number of users to evaluate increases. Another interesting aspect is that it also increases developers’ creativity and possibilities, as they can now evaluate more complex, focus-based algorithms. However, I feel the need for additional compensation here. The crowd workers are being tracked, and this is an intrusion on their privacy. I understand it is necessary for the process to function, but given that it makes focus an essential aspect of worker compensation, workers should be rewarded fairly for it.

Also, the user behaviors tracked here cover most significant problems in the AI community fairly well, but more inputs would cover a better range of problems. Adding more features would not only increase problem coverage but also spur more development. There could be several instances where a developer does not build something due to a lack of evaluation techniques or popular measures; increasing the tracked features would help remove this concern. For example, if we were able to track users’ foveae, developers could study the effect of different advertising techniques or build algorithms to predict and track interest in different varieties of videos (the business of YouTube).

Also, I am not sure of the effectiveness of tracking the movements given in the paper. The paper considers effectiveness as a combination of the worker’s behavior and output, but several tasks rely on mental models that do not produce the movements tracked in the paper. In such cases, the output needs to carry more weight. I think the evaluator should be given the option to change the weights of different parameters, so that they can adapt the platform to different problems, making it more ubiquitous.
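The evaluator-adjustable weighting suggested above could be as simple as the following sketch (the scores and weight values are illustrative, not from the paper):

```python
def worker_score(behavior_score, output_score, w_behavior=0.5):
    """Combine behavior quality and output quality (each in [0, 1])
    using an evaluator-chosen weight for the behavior component."""
    assert 0.0 <= w_behavior <= 1.0
    return w_behavior * behavior_score + (1 - w_behavior) * output_score

# A writing task where behavioral traces mean little:
# weight the output heavily instead.
print(worker_score(behavior_score=0.2, output_score=0.9, w_behavior=0.1))
```

Setting `w_behavior` per task type is exactly the kind of knob that would let one platform serve both movement-heavy and purely cognitive tasks.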

Questions

  1. What kinds of privacy concerns could be a problem here? Should the analyzer have access to such behavior? Is it fair to ask users for this information? Should users be additionally compensated for such an intrusion on privacy?
  2. What other kinds of user behaviors are traceable? The paper mentions the fovea’s focus. Can we also track listening focus or mental focus in other ways? Where would this information be useful in our problems?
  3. CrowdScape uses the platform’s interactive nature and visualization to improve user experience. Should there be an overall focus on improving UX at the development level, or should we let them be separate processes?
  4. CrowdScape considers worker behavior and output to analyze human evaluation. What other aspects could be used to analyze the results?

Word Count: 582

04/08/2020 – Yuhang Liu – CrowdScape: Interactively Visualizing User Behavior and Output

Summary:

This article proposes that the crowdsourcing platform is a very good tool that can help people solve quite a few problems: it helps people quickly allocate work and complete tasks at a large scale. Therefore, the work quality of the workers on the crowdsourcing platform is very important. In previous research, other researchers developed algorithms to inspect workers’ work quality based on their output or their behavior, but these algorithms all have limitations, especially for complex tasks. Manual assessment of the quality of tasks completed by workers can solve this problem, but it does not scale well when many workers are needed. Against this background, the authors created CrowdScape, which supports inspecting the work quality of crowd workers through interactive visualization and mixed-initiative machine learning. The system’s approach is to combine information about worker behavior and worker output to better explain the workers’ work. The system explores workers’ work quality through the following functions:

  1. An interface for interactively browsing the results of crowd workers, explaining worker performance by combining information about workers’ behavior and output.
  2. Visualization of crowd workers’ behavior.
  3. Exploration of crowd workers’ products.
  4. Tools for grouping and classifying workers.
  5. Machine learning to guide users’ intuitions about the crowd.

Reflection:

The system introduced in this article has many innovations, the most important of which is the ability to combine workers’ behavior with workers’ output, which makes it possible to study the behavior behind different levels of performance. This is what we often say: the result is determined by the process, and the author incorporates this into the research. I think it is a very innovative point. Through this research, we can learn the working behavior of workers with different results, and in subsequent research we need not focus only on workers with good output. The more important implication is that behavioral guidance can be distilled from the behavior of workers with good output and then used to guide other workers to complete the task better. Another innovation, I think, is visualizing the interactive process. As we all know, people receive visual information more readily, and I think this also holds when evaluating the work of crowdsourced workers. Visualizing the interaction process of crowd workers helps people better study worker behavior, improves our understanding of how crowd workers perform at work, and helps us design crowdsourcing tasks. At the same time, the system’s dynamic query facility can quickly analyze large data sets by giving users immediate feedback. I think CrowdScape builds on these points to better discover people’s work patterns, understand the essence of crowdsourcing, and continually adapt to more complex and innovative crowdsourcing tasks.

Question:

  1. Must the quality of workers’ work be related to their behavior? How does the system prevent some workers from achieving good results with inappropriate behavior?
  2. When workers know their behavior will be recorded, will it affect their work?
  3. Is there any other method to lead workers to produce better output?

04/08/2020 – Yuhang Liu – Agency plus automation: Designing artificial intelligence into interactive systems

Summary:

This article discusses the relationship between artificial intelligence and workers. In modern society, with the rapid development of technology, the continuous advance of artificial intelligence makes people more and more inclined to use the latest AI to replace real people, but these ideas are usually based on very optimistic assumptions and underestimate the challenges of applying artificial intelligence. In general, artificial intelligence cannot complete these tasks without human help. Therefore, we need to change our thinking: if we cannot yet build fully automated technology that frees up human labor, we can instead build computational assistance that strengthens and enriches people’s intellectual work. The author of this article therefore proposes building systems that enable rich interaction between people and algorithms. These systems are not intended to remove the people from a workflow; on the contrary, the hope is to strengthen those people so that they play a better role in the system. The approach balances the advantages of both sides while promoting human control and action. The author applies this idea in three areas, integrating proactive computation into interactive systems to show that the approach is useful and can enhance human work. Finally, the author discusses possibilities for future applications.

Reflection:

I think the author’s idea is very innovative. First, I have to admit that it is very difficult to fully implement automation; in my recent project I felt this clearly. In a natural setting, tweets are not labeled as rumors or not, and to apply supervised learning to automatically discover rumors in social networks, labeled data is necessary. This also reflects the importance of humans, which is why I need to use crowdsourcing platforms in my project. The system introduced in the article helps people play a greater role in a system and helps them find better ways of working, which I think is very effective: enhancing the role of people can undoubtedly increase efficiency. At the same time, I think this system also contributes to subsequent research by providing new research ideas; when a field encounters major challenges or bottlenecks, we can consider changing our approach to the problem. Strengthening the role of people in a system can also help us better understand the role people play in an application, so that we can better study how to use artificial intelligence in human-computer interaction in the future. In the end, the article raises a deeper question: should we accept computers helping us think? Such systems are not necessarily intended to raise workers’ skill level, but they can largely help newcomers adapt to a system and start work more quickly. Such a system largely lets people control the system instead of passively accepting algorithms, so how to define the future direction of artificial intelligence applications is also a problem we need to think about.

Question:

  1. Do you think the system mentioned in the paper can help newcomers adapt to a new system?
  2. Do you think extending people’s abilities with computers can help us study the role of people in human-computer interaction?
  3. Should we accept the idea that “computers help people think”, or will it lead to other problems?

04/08/20 – Ziyao Wang – Agency plus automation: Designing artificial intelligence into interactive systems

Summary:

The author proposes that many developers and researchers are currently too optimistic, as they do not see the human labor behind automated services. This long-standing focus on only the AI side has resulted in a lack of research on the interaction between AI and humans. The author proposes that it is better to let AI enrich humans’ intellectual work rather than replace humans. The author introduces interactive systems in three areas: data wrangling, exploratory analysis, and natural language translation. The author integrates proactive computational support into these systems and describes the performance of these hybrid systems using predictive models. In conclusion, he proposes that only a fluent interleaving of automated suggestions and direct manipulation can enable more productive and flexible work.

Reflections:

As a student who had only one class on human-computer interaction before attending this course, I held the wrong view that we should focus only on the AI side or the data-analytics side. In fact, there is a huge improvement in user experience when a system has a good design for human-computer interaction. The author of this paper reminded me of the importance of the interface and of what AI can do for human work.

It is not enough to let humans do what they can do, let AI do what it can do, and simply combine the two. Instead, the interaction between humans and AI can be designed to leverage the strengths of both sides and give the whole system better performance. I like the idea of letting AI make suggestions while a human is working: results stay highly accurate while processing time decreases. Humans do not need to search for information online, and AI does not need to predict what humans are thinking; they simply cooperate, with humans ensuring accuracy while AI provides quick background searching and complex calculation. They reach harmony in the designed system.

Additionally, the author mentions that it is not always efficient to have AI merely support humans. For some simple jobs, AI can finish the task on its own; in these cases, it is more efficient to let AI do the task by itself without asking human workers for advice. For a simple job, for example basic labeling, AI can reach high accuracy even without human advice, and making it wait for a human response would waste time. As a result, a fluent interleaving of automated suggestions and direct manipulation achieves the highest performance, and we should consider both suggestions and automatic operations in our system designs.
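The interleaving described here could be sketched as a simple confidence-threshold dispatch (a hypothetical design; the threshold and confidence values are assumptions, not from the paper):

```python
def handle(item, model_confidence, threshold=0.95):
    """Route one task between direct automation and suggestion.

    High-confidence cases (e.g. simple labeling) are applied directly,
    with no human in the loop to wait for; everything else is surfaced
    as a suggestion the human can accept, modify, or dismiss.
    """
    if model_confidence >= threshold:
        return ("auto", item)      # AI acts on its own
    return ("suggest", item)       # human keeps the final say

print(handle("label: cat photo", 0.99))   # confident -> automated
print(handle("translate idiom", 0.60))    # uncertain -> suggestion
```

The threshold itself is a design decision: raising it trades speed for more human oversight.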

Questions:

The author proposes that systems should fluently interleave automated suggestions and direct manipulation. Is there any situation in which AI assistants would decrease human performance, so that humans should complete the tasks themselves?

What are the criteria for deciding whether the AI should make suggestions to the human or act directly on its own?

Compared with letting humans correct AI results, what are the advantages of letting the AI make suggestions to humans?


04/08/2020 – Nan LI – Agency plus automation: Designing artificial intelligence into interactive systems

Summary:

The main objective of this paper is to present the significance and benefits of integrating interactive systems with artificial intelligence. The author argues that the current focus of AI is limited to full automation, which can mislead users because of inappropriate assumptions or biased training data. As a result, users may rely excessively on computational advice, which can erode critical engagement and domain expertise. To address this issue, the author proposes integrating AI agency and automation to enhance human abilities instead of replacing human work. This approach aims to increase human productivity while preserving the human sense of control and responsibility. To investigate the most effective way of integrating automated methods into user-centric interactive systems, the paper examines a common strategy: designing shared representations of possible actions that let people reason computationally about their tasks, so that they can view, select, modify, or dismiss algorithmic suggestions. The author then reviews three practical implementations that apply these principles: Wrangler, a domain-specific language (DSL) for data transformation; an interactive system for data visualization and exploration; and the predictive translation memory (PTM) project. Finally, the author discusses the design properties and user studies of these three projects.
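The view/select/modify/dismiss loop described in the summary could be sketched as a tiny suggestion queue. This is a hypothetical illustration of the interaction pattern, not the paper's actual implementation; the names (`Suggestion`, `SuggestionPanel`) are my own.

```python
# Hypothetical sketch of a shared-representation suggestion panel: the
# machine proposes candidate actions, but nothing is applied until the
# human accepts (possibly after modifying) or dismisses each one.

class Suggestion:
    def __init__(self, description, action):
        self.description = description  # human-readable, so the user can review it
        self.action = action            # callable, run only on acceptance


class SuggestionPanel:
    def __init__(self):
        self.pending = []

    def propose(self, suggestion):
        self.pending.append(suggestion)       # the machine proposes...

    def accept(self, index, modified_action=None):
        s = self.pending.pop(index)           # ...the human disposes
        return (modified_action or s.action)()

    def dismiss(self, index):
        self.pending.pop(index)               # easy to ignore, keeping user control


panel = SuggestionPanel()
panel.propose(Suggestion("split column on ','", lambda: "split applied"))
result = panel.accept(0)  # the user keeps the final say
```

The key property is that the suggestion is inspectable before it takes effect, which is what preserves the user's sense of control the summary mentions.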

Reflection:

I really like the idea that the primary goal of AI should be to enhance humans, not replace them. The author makes a good point that people currently focus too much on fully automated implementations of AI, and the claim that AI can replace humans is even more exaggerated. Humans do make errors of judgment because of insufficient information or cognitive biases, but they have an irreplaceable creativity and in-depth understanding of professional domain knowledge that AI systems cannot match, even with dominant computing power and exhaustive data. Thus, I could not agree more that we "need well-thought-out interactions of humans and computers to solve our most pressing problems."

The reason I am interested in and passionate about HCI is its elegance: a subtle, simple design idea can have a huge impact on overall performance. We are not seeking to build a brand-new interactive system or a fancy interface, but to design from a humble direction, making subtle, rational adjustments that improve the user experience without causing any interruption, just like the spelling and grammar checking routines included in the word processor, an example demonstrated in the paper.

On the other hand, one piece of user feedback in the article caught my attention: "These related views are so good, but it's also spoiling that I start thinking less. I am not sure if that's really a good thing." This is really striking. For a long time, we have pursued ways to make human work easier and more efficient, using AI or interaction design to replace tedious steps, and later even letting machines learn adaptively in place of human thinking. However, this comment points to a question also raised in the paper: should we accept having the computer "think for us"? This is indeed a problem we need to consider when designing.

Questions:

  1. What do you think about the opinion that AI should be "enhancing us, not replacing us"? Do you agree that we should integrate AI with IA (intelligence augmentation) perspectives?
  2. Which do you support more: direct manipulation or interface agents? Why?
  3. Take the data visualization system as an example. Would you like the system to provide adaptive recommendations? Do you find them helpful or annoying? Do you think they spoil you and stop you from thinking for yourself?
