02/05/2020 – Sushmethaa Muhundan – Principles of Mixed-Initiative User Interfaces

There has been a long-standing debate in the field of user-interface research regarding which area to focus on: building entirely automated interface agents or building tools that enhance direct manipulation by users. This paper reviews key challenges and opportunities for building mixed-initiative user interfaces that enable users and intelligent agents to collaborate efficiently, combining the above-mentioned areas into one hybrid system that leverages the advantages of both. The expected costs and benefits of the agent’s action are studied and compared to the costs and benefits of the agent’s inaction. A large portion of the paper deals with managing uncertainties that agents may have regarding the needs of the user. Three categories of actions are adopted according to the probability the agent infers regarding the user’s intent: no action is taken if that probability is low, the agent engages the user in a dialog to understand the user’s intent if the intent is unclear, and the agent goes ahead and performs the action if the inferred probability of the user wanting it is high. The LookOut system for scheduling and meeting management, an automation service integrated with Microsoft Outlook, is used as a running example to explain this approach.
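The three-way policy described above can be sketched as a simple threshold rule. This is a minimal illustration in Python; the function name and threshold values are hypothetical, not taken from the paper:

```python
def choose_action(p_goal, dialog_threshold=0.3, action_threshold=0.7):
    """Map the inferred probability that the user wants a service
    to one of three responses (thresholds are illustrative only)."""
    if p_goal >= action_threshold:
        return "act"        # confident enough to provide the service
    elif p_goal >= dialog_threshold:
        return "dialog"     # uncertain: ask the user to clarify intent
    else:
        return "no_action"  # unlikely the user wants the service

print(choose_action(0.9))  # act
print(choose_action(0.5))  # dialog
print(choose_action(0.1))  # no_action
```

In the paper, the thresholds themselves fall out of an expected-utility analysis rather than being fixed constants, but the three-way branching is the same.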


The paper describes multiple interaction modalities and their attributes, which I found interesting. These range from a manual operation mode to a basic automated-assistance mode to a social-agent mode. An alerting feature in the manual operation mode makes the user aware that the agent would have taken some action at that point had it been in automatic mode. I found this particularly interesting since it gives users a sense of how the system reacts. If that happens to be helpful, their trust level would naturally increase, and as a result, the likelihood of them using the system the next time increases. I also liked engaging in a dialog with the user as an option for action. During times of uncertainty, rather than guessing the user’s intent and risking damage to the user’s trust in the system, I feel it is better to clarify the intent by asking the user. It was also good to know that the automated system analyzes the user’s behavior and learns from it. The system takes past interactions into account and modifies its parameters to make a more informed decision the next time around.

While the agent learns from the relationship between the length of an email and the time the user takes to respond to it, I feel that a key missing factor is the user’s current attention span. While this is hard to gauge and learn from, I feel it plays an equally important role in determining the time taken to respond to an email, thereby affecting the agent’s judgment.

  • Does the system account for scenarios involving the absence of the user immediately after the opening of an email? Would the lack of response have a negative impact on the feedback loop of the system?
  • Apart from the current program context, the user’s sequence of actions and choice of words used in queries, what are some additional attributes that can be considered by the agent while inferring the goal of the user?
  • I found the concept of context-dependent changes to utilities extremely interesting. Have there been studies conducted that involve a comprehensive set of scenarios that can affect the utility of a system based on context?


02-05-2020 – Yuhang Liu – Principles of Mixed-Initiative User Interfaces

This paper discusses the prospects of human-computer interaction. The paper notes that the current direction of human-computer interaction research is controversial. Some people believe the field should focus on developing new metaphors and tools that enhance users’ abilities to directly manipulate objects. Others believe that directing effort toward developing interface agents that provide automation is much more important. The paper then proposes a new system, LookOut. This system is an email-based scheduling assistant, which creates a new interaction mode by combining the two ideas. The paper specifically describes the factors behind the system and the ideas for realizing each factor; the realization principles and mathematical models for these factors are described in the second half of the paper. These factors include: developing value-added automation, considering the user’s goals, considering the user’s attention in the timing of services, inferring ideal actions, using dialog to resolve key uncertainties, allowing invocation and termination, minimizing the cost of poor guesses, scoping the precision of service to match uncertainty and variation in goals, providing mechanisms for efficient agent−user collaboration to refine results, employing socially appropriate behaviors for agent−user interaction, maintaining working memory, and continuing to learn by observing.

Regarding the factors and methods mentioned in this paper, I would like to offer my views on several of them.

  1. Continuing to learn by observing. I think this will be the inevitable direction of development for human-computer interaction, and even for other fields of computer science. Because people have different habits and educational backgrounds, a universal interaction system is not good enough for everyone. In an interaction, people are required to adapt to the system, but at the same time, the system should also learn human habits to serve people more conveniently.
  2. Maintaining working memory. I think this came from the idea of computer caching. Introducing the concept of caching into the system will undoubtedly serve people better. At the same time, I think there are two other benefits. First, recording previous operations can produce a training set for machine learning; since we want the system to become more intelligent and to adapt to different people, it is necessary to record each person’s operations. Second, this idea also inspires us to bring other successful, well-practiced functions into new fields, which might have a great impact.
  3. Most of the remaining factors are designed around human uncertainty. In my opinion, as human-computer interaction develops, these factors can be merged into one factor: human uncertainty. It can be seen from these factors that the difficulty of human-computer interaction is concentrated in human diversity and uncertainty, so overcoming this problem will directly affect the interaction experience.

Questions:

1. For the two directions mentioned in the paper, which one do you think is more important, or is a combination of the two best?

2. Do you think it is urgent to add machine learning to human-computer interaction to better serve human beings?

3. Besides the factors of the LookOut system mentioned in the paper, which factors do you think could be extended?


02/05/20 – Runge Yan – Principles of Mixed-Initiative User Interfaces

An appropriate collaboration of user and machine is promising in terms of efficiency, and the effort to develop “agents” is persuasive. With the real-world case of the LookOut system, several principles and methods are demonstrated.

To make sure an additional interface is worth using, the system should follow certain principles in design and implementation: significant value should be added by automation; the user’s attention directly influences the effect of the service; the negotiation between costs and benefits often determines the action; a variety of goals should be understood; a continuous learning process should be maintained; etc.

The LookOut system provides calendaring and scheduling services based on emails and user behaviors: in an interactive situation, the system goes through a two-phase analysis to decide whether assistance is needed and what level of service would suit best (manual operation, automated assistance, or social agent). The probability of the user having a goal and the likelihood of the machine providing a useful service (action/dialog) together determine the threshold of best practice.

With these principles and problems addressed, the combination of reasoning machinery and direct user operation is likely to be improved further.

Reflection

I’m quite surprised that this paper was published in 1999. By that time, the concepts and guidelines of HCI were already clearly addressed. Some of the points are exactly what we have today, while others have developed into today’s ideas. Although it’s a simple task compared to the interactions we encounter today, the details on “agents” and both sides of the interaction are quite comprehensive. The principles take several crucial elements into consideration: additional value, user attention, decision thresholds, the learning process, etc. These are basically what come to mind when I think about the complicated interaction process. A two-phase analysis is essential for an “agent” we’d like to count on; the several modalities fit well in real-time situations; and a failure-recovery mechanism and an evolving learning process round things out.

When I was using Windows 98 and XP on my father’s PC, I saw a cute lion icon on the desktop provided by a popular antivirus software, “Rising (Rui Xing)”. The lion was quite smart, as I look back at it: it won’t bother you when your mouse is navigating through another software window; if your mouse passes by and stays around it, it will gently ask whether you need any service or just want to play a bit; also, it is draggable and will stay in a spot where you often let it stay. The most amazing thing is that if I stopped what I was working on and stared a little at the lion, it would become all sleepy and begin snoring in a really cute way!

I’ve known several basic ideas about HCI, and now I have so many “that was amazing” moments looking back. I didn’t know my (indirect, meaningless) behavior largely determined the actions of the machine.

Question:

  1. If (as I see it) this paper addresses such important guidelines in HCI, what holds back the (fast) development of the entire system? What can we do better to accelerate this process?
  2. How important is it to make users feel natural as they interact with a machine? Should users be notified about what’s going on? (Like “If you play a lot with the lion, it will infer what you want at a certain time based on your behavior.”) Is that one of the reasons why companies collect our data and we are uncomfortable with it?


2/5/20 – Lee Lisle – Principles of Mixed-Initiative User Interfaces

Summary

The author, Horvitz, proposes a list of twelve principles for mixed-initiative (or AI-assisted) programs that should underlie all future AI-assisted programs. He also designs a program called LookOut, which focuses on email messaging and scheduling. It automatically parses emails (and, it seems, other messaging services) and extracts possible event data so the user can add the event to their calendar, inferring dates and locations when needed. It also has an intermediary step where the user can edit the suggested event fields (time/location/etc.). In describing LookOut’s benefits, the paper clearly lays out some of the probability theory behind how it guesses what the user wants. It also lays out why each behind-the-scenes AI function is performed the way it is in LookOut.
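The event-extraction step can be imagined with a toy sketch like the following. The regexes and field names here are purely illustrative assumptions; LookOut’s actual parser and probabilistic models are far richer:

```python
import re

def extract_event_fields(email_text):
    """Toy extraction of a day and a time from an email body.
    Real systems combine many such cues with a learned classifier."""
    day = re.search(r"\b(?:Mon|Tues|Wednes|Thurs|Fri|Satur|Sun)day\b",
                    email_text)
    time = re.search(r"\b\d{1,2}(?::\d{2})?\s?(?:am|pm)\b",
                     email_text, re.IGNORECASE)
    return {
        "day": day.group(0) if day else None,
        "time": time.group(0) if time else None,
    }

fields = extract_event_fields("Can we meet Tuesday at 3pm to review the draft?")
print(fields)  # {'day': 'Tuesday', 'time': '3pm'}
```

The intermediary editing step the summary mentions exists precisely because extraction like this is fallible: the user can correct the pre-populated fields before the event is created.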

Personal Reflection

I was initially surprised by this paper’s age; I had thought that this field was defined later than it apparently was. For example, Google was founded only a year before this paper was published. It was even more jarring to see Windows 95 (98?) in the figures. Furthermore, when the author starts describing LookOut, I realized that this is baked into a lot of email systems today, such as Gmail and the Apple Mail application, which can automatically create links that add events to your various calendars. The other papers we have read for this class tend to stay toward overviews or surveys of literature rather than a single example and a deep dive into explaining its features.

It is interesting that “poor guessing of users’ goals” has been an issue this long. This problem is extremely persistent and speaks to how hard it is to algorithmically decide or understand what a user wants or needs. For example, LookOut was trained on 1,000 messages, while today’s services are (likely) trained on millions, if not orders of magnitude more. While I imagine the performance is much better today, I’m curious what the comparative rates of false positives and negatives are.
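The comparative rates in question are easy to define from a confusion matrix. A small sketch, with made-up counts (not figures from the paper or any real service):

```python
def error_rates(tp, fp, fn, tn):
    """Compute false-positive and false-negative rates from
    confusion-matrix counts (counts here are illustrative only)."""
    fpr = fp / (fp + tn)  # how often non-scheduling mail triggers the agent
    fnr = fn / (fn + tp)  # how often real scheduling intent is missed
    return fpr, fnr

fpr, fnr = error_rates(tp=180, fp=40, fn=20, tn=760)
print(f"FPR={fpr:.2f}, FNR={fnr:.2f}")  # FPR=0.05, FNR=0.10
```

Note that the two error types carry different costs in a mixed-initiative setting: a false positive interrupts the user, while a false negative merely leaves them to schedule manually, which is part of why the paper weighs action and inaction asymmetrically.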

This paper was strong overall, with a deep dive into a single application rather than an overview of many. Furthermore, it made arguments that are, for the most part, still relevant in the design of today’s AI-assisted programs. However, I would have liked the author to specifically mention the principles as they came up in the design of his program. For example, he could have said that he was fulfilling his 5th principle in the “Dialog as an Option for Action” section. Still, this is a small quibble.

Lastly, while AI assistants should perhaps occasionally have an embodiment, the Genie metaphor (along with Clippy™-style graphics) is gladly retired now and should not be used again.

Questions

  1. Are all of the principles listed still important today? Is there anything they missed with this list, that may have arisen from faster and more capable hardware/software?
  2. Do you think it is better to correctly guess what a user wants or is it better to have an invocation (button, gesture, etc.) to get an AI to engage a dialog?
  3. Would using more than one example (LookOut, in this case) have strengthened the paper’s argument about which design principles were needed? Why or why not?
  4. Can an AI take action incorrectly and not bother a user? How, and in what instances, might this happen for LookOut?


02/05/2020 – Donghan Hu – Principles of Mixed-Initiative User Interfaces


Some researchers are aiming at the development and application of automated services, which can sense users’ activities and then take automated actions. Other researchers are focusing on metaphors and conventions that can enhance users’ ability to directly manipulate information to invoke specific services. This paper states principles that provide a method for integrating research on direct manipulation with work on interface agents.

The author listed 12 critical factors for combining automated services with direct manipulation interfaces: 1) developing significant value-added automation; 2) considering uncertainty about a user’s goals; 3) considering the status of a user’s attention in the timing of services; 4) inferring ideal action in light of costs, benefits, and uncertainties; 5) employing dialog to resolve key uncertainties; 6) allowing efficient direct invocation and termination; 7) minimizing the cost of poor guesses about action and timing; 8) scoping the precision of service to match uncertainty and variation in goals; 9) providing mechanisms for efficient agent-user collaboration to refine results; 10) employing socially appropriate behaviors for agent-user interaction; 11) maintaining a working memory of recent interactions; and 12) continuing to learn by observing. Building on these, the author designed mixed-initiative user interfaces that enable users and intelligent agents to collaborate efficiently in the LookOut system. This system elucidates difficult challenges and promising opportunities for improving HCI through the elegant combination of reasoning machinery and direct manipulation.

I have read several papers about ubiquitous computing recently. One of the core features of ubiquitous computing is that users gradually stop noticing the computing and technologies that surround them. Hence, I think that applications and systems which can sense users’ activities and then provide them with specific services will become prevalent in the future. Especially with the development of machine learning and artificial intelligence, we may not even need mixed interfaces or software anymore in the future. Accordingly, I consider “considering uncertainty about a user’s goals” to be the most important factor. Humans are complex; it is impossible to fulfill everyone’s motivations and goals with a few common services. Hence, customizing features by considering the uncertainty about a user’s goals is really significant. What’s more, I think that maintaining a working memory of recent interactions is a great design claim that can assist users with the process of self-reflection. Users should know and understand what they did in the past.

Among these 12 critical factors for leveraging automated services and direct manipulation interfaces, which do you consider the most important?

If you were an HCI researcher, would you prefer to focus on developing applications that can sense users’ activities and offer services, or on designing tools that allow users to manipulate interfaces directly to access information and then invoke services?

What do you think about the “dialog” interaction between the user and the system? Do you think it is useful or not?


02/04/20 – Mohannad Al Ameedi – Principles of Mixed-Initiative User Interfaces

Summary

In this paper, the author first presents two research efforts related to human-computer interaction. The first focuses on direct user manipulation and the second on automated services. The author suggests an approach that can integrate both by offering a way to allow the user to directly manipulate user-interface elements while also using automation to decrease the amount of interaction necessary to finish a task. The author presents factors that can make the integration between automation and direct manipulation more effective. These factors include developing value-added automation, considering uncertainty about user goals, considering the user’s attention in the timing of the automated service, considering the costs and benefits of acting under uncertainty, involving direct dialog with the user, and others.

The author proposes a system called LookOut, which can help users schedule meetings and appointments by reading the user’s emails, extracting useful information related to meetings, and auto-populating some fields, like the meeting time and recipients, which saves the user mouse and keyboard interaction. The system uses probabilistic classification to help reduce the amount of interaction necessary to accomplish scheduling tasks. It also uses a speech recognition feature developed by Microsoft Research to offer additional help to users during direct manipulation of the suggested information. The LookOut system combines automated services with the ability to directly manipulate the system. The system also assesses the user’s interaction to decide when the automated service would not be helpful, making sure that uncertainty is well considered. The LookOut system improves human-computer interaction through the combination of reasoning machinery and direct manipulation.

Reflection

I found the idea of maintaining a working memory of user interactions interesting. This approach learns from the individual user’s experience, as stated in the paper, but machine learning methods could also be used to predict the next required action for a new or existing user by learning from all other users.
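A minimal sketch of that idea: learn, across all users’ logged interactions, which action most often follows each action. The action names and the deliberately naive bigram model are hypothetical illustrations, not anything from the paper:

```python
from collections import Counter, defaultdict

def train_next_action(logs):
    """Learn which action most often follows each action across all
    users' interaction histories (a toy frequency/bigram model)."""
    follows = defaultdict(Counter)
    for history in logs:
        for prev, nxt in zip(history, history[1:]):
            follows[prev][nxt] += 1
    # Keep only the single most frequent successor per action.
    return {a: c.most_common(1)[0][0] for a, c in follows.items()}

logs = [
    ["open_email", "create_event", "send_reply"],
    ["open_email", "create_event", "close"],
    ["open_email", "archive"],
]
model = train_next_action(logs)
print(model["open_email"])  # create_event
```

Pooling logs this way is what makes predictions possible for a brand-new user, at the cost of ignoring individual habits; a real system would presumably blend the pooled model with the per-user working memory.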

I also found the LookOut system very interesting since it integrates direct user input with an automated service. The speech recognition feature developed by Microsoft Research not only lets the user interact beyond mouse and keyboard but also provides another interaction medium, which is extremely important for users with disabilities.

Cortana, which is Microsoft’s intelligent agent, also parses email and checks whether the user sent a message containing a phrase like “I will schedule a meeting” or “I will follow up on that later”; it then reminds the user to follow up and presents two buttons asking the user to interact directly with the alert, either dismissing it or asking the system to send a follow-up reminder the next day.

Questions

  • Can we use the human interaction data from LookOut as labeled data and develop a machine learning algorithm that can predict the next user interaction?
  • Can we use the LookOut idea in a different domain?
  • The author suggests a dozen factors for integrating automated services with direct manipulation; which factors do you think could be useful to crowdsourcing users?


02/05/2020 – Bipasha Banerjee – Guidelines for Human-AI Interaction

Summary

The paper was published at the ACM CHI Conference on Human Factors in Computing Systems in 2019. The main objective of the paper was to propose 18 general design guidelines for human-AI interaction. The authors consolidated more than 150 design recommendations from multiple sources into a set of 20 guidelines and then revised them down to 18. They also performed a user study with 49 participants to evaluate the clarity and relevance of the guidelines. This entire process was done in four phases, namely, consolidating the guidelines, a modified heuristic evaluation, a user study, and an expert evaluation of the revisions. For the user study portion, they recruited people from an HCI background with at least a year of experience in the HCI domain. Participants evaluated all the guidelines based on their relevance, clarity, and needed clarifications. The authors then had experts review the revisions, which helped detect problems related to wording and clarity. The experts were people with work experience in UX or HCI who were familiar with heuristic evaluations. Eleven experts were recruited, and they preferred the revised versions for most guidelines. The paper highlights that there is a tradeoff between specialization and generalization.

Reflection

The paper did an extensive survey of existing AI-related design guidance and proposed 18 applicable guidelines. It is an exciting way to reduce more than 150 existing recommendations to 18 general principles. I liked the way they evaluated the guidelines based on clarity and relevance. It was interesting to see this paper reference “Principles of Mixed-Initiative User Interfaces”, published in 1999 by Eric Horvitz. The only thing I was not too fond of is that the paper was a bit of a monotonous read through all the guidelines. Nonetheless, the guidelines are extremely useful for developing a system that aims to use human-AI interaction effectively. I liked how they used both users and experts to evaluate the guidelines, which suggests the evaluation process is dependable. I do agree with the tradeoff aspect: to make a guideline more usable, the specialization aspect is bound to suffer. It was interesting to learn that the latest AI research is more dominantly found in industry, as companies have up-to-date guidelines about AI design. However, no concrete evidence was produced in the paper to support this claim.

Questions

  1. They mentioned that “errors are common in AI systems.” What kinds of errors are they referring to? What percentage of errors do these systems encounter on average?
  2. Was there a way to incorporate a ranking of the guidelines (during both the user evaluation and the expert evaluation phases)?
  3. The paper indicates that “the most up-to-date guidance about AI design” was found in industry sources. Is this because the authors are biased in their opinion, or do they have concrete evidence for this statement?


02/05/2020 – Sukrit Venkatagiri – Power to the People: The Role of Humans in Interactive Machine Learning

Paper: Power to the People: The Role of Humans in Interactive Machine Learning

Authors: Saleema Amershi, Maya Cakmak, W. Bradley Knox, Todd Kulesza

Summary:
This paper talks about the rise of interactive machine learning systems and how to improve these systems, as well as users’ experiences, through a set of case studies. Typically, a machine learning workflow involves a complex, back-and-forth process of collecting data, identifying features, experimenting with different machine learning algorithms, tuning parameters, and then having the results examined by practitioners and domain experts. The model is then updated to take in their feedback, which can affect performance and start the cycle anew. In contrast, feedback in interactive machine learning systems is iterative and rapid, and such systems are explicitly focused on user interaction and the ways in which end users interact with them. The authors present case studies that explore multiple interactive and intelligent user interfaces, such as gesture-based music systems as well as visualization systems such as CueFlik and ManiMatrix. Finally, the paper concludes with a discussion of how to develop a common language across diverse fields, distilling principles and guidelines for human-machine interaction, and a call to action for increased collaboration between HCI and ML.

Reflection:
The several case studies are interesting in that they highlight the differences between typical machine learning workflows and novel IUIs, as well as the differences between humans and machines. I find it interesting that most workflows often leave the end-user out of the loop for “convenience” reasons, but it is often the end-user who is the most important stakeholder.

Similar to [1] and [2], I find it interesting that there is a call to action for developing techniques and standards for appropriately evaluating interactive machine learning systems. However, the paper does not go into much depth into this. I wonder if it is because of the highly contextual nature of IUIs that make it difficult to develop common techniques, standards, and languages. This in turn highlights some epistemological issues that need to be addressed within both the HCI and ML communities. 

Another fascinating finding is that people valued transparency in machine learning workflows, but that this transparency did not always equate to higher (human) performance. Indeed, it may just be a placebo effect where humans feel that “knowledge is power” even when it would not have made any difference. Transparency has benefits beyond accuracy, however. For example, transparency in how a self-driving car works can help determine whom exactly to blame in the case of a self-driving car accident. Perhaps the algorithm was at fault, or a pedestrian, the driver, or the developer, or it was due to unavoidable circumstances, i.e., a force of nature. With interactive systems, it is crucial to understand human needs and expectations.

Questions:

  1. This paper also talks about developing a common language across diverse fields. We notice the same idea in [1] and [2]. Why do you think this hasn’t happened yet?
  2. What types of ML systems might not work for IUIs? What types of systems would work well?
  3. How might these recommendations and findings change with systems where there is more than one end-user, for example, an IUI that helps an entire town decide zoning laws, or an IUI that enables a group of people to book a vacation together?
  4. What was the most surprising finding from these case studies?

References:
[1] R. Jordon Crouser and Remco Chang. 2012. An Affordance-Based Framework for Human Computation and Human-Computer Collaboration. IEEE Transactions on Visualization and Computer Graphics 18, 12: 2859–2868. https://doi.org/10.1109/TVCG.2012.195
[2] Jennifer Wortman Vaughan. 2018. Making Better Use of the Crowd: How Crowdsourcing Can Advance Machine Learning Research. Journal of Machine Learning Research 18, 193: 1–46. http://jmlr.org/papers/v18/17-234.html


02/05/20 – Runge Yan – Power to the People: The Role of Humans in Interactive Machine Learning

When given a pattern and clear instructions for classification, a machine can learn a certain task quickly. Cases are presented to provide a sense of users’ roles in interactive machine learning: how the machine influences the users and vice versa. Then, several characteristics of people involved in interactive machine learning are stated as guidelines for understanding the end-user’s effect on the learning process:

People are active, tend to give positive rewards, and want to be a model for the learner. Also, by the nature of human intelligence, they want to provide extra information even in a rather simple decision, which leads to another finding: proper transparency in the system is valued by people and therefore helps reduce the error rate in labeling.

Several guidelines are presented for interactivity. Instead of a small number of professionals designing the system, people can be more involved in the process and collect the data they want. A novel interactive machine learning system should be flexible on input and output: users could try inputs with reasonable variation, assess the quality of the model, and even query the model directly; the output can be evaluated by the users rather than “experts”; possible explanations of error cases can be provided by users; and modifying the model is no longer forbidden to users.

Details are discussed to further suggest which methods better fit a more interactive system: a common language, principles and guidelines, techniques and standards, handling input volume, algorithms, collaboration between HCI and ML, etc. This paper lays a comprehensive foundation for future research on this topic.

Reflection

I once contributed to a dataset on sense-making and explanation. My job was to write two similar sentences where only one word (or phrase) differed: one of them was common sense and the other was nonsense. For further information, I wrote three sentences trying to explain why the nonsense sentence did not make sense, with only one of them best describing the reason. The model should understand the five sentences, pick the nonsense, and find the best explanation. I was asked to be somewhat extreme, for example, to write down a pair like “I put an eggplant in the fridge” and “I put an elephant in the fridge.” A mild difference was not allowed, for example, “I put a TV in the fridge.” A model will learn quickly from extreme comparisons; however, I’d prefer an iterative learning process where the difference narrows down (still with one sentence being nonsense and the other common sense).

When I tried to be a contributor on Figure Eight (previously CrowdFlower), the tutorial and intro task were quite friendly. I was asked to identify a LinkedIn account: whether he/she is still working in the same position at the same company. The assistance with decisions made me feel comfortable: I knew what my job was and some possible obstacles along the way, and I could tell the difficulty increased in a reasonable way. When there was information that could not be captured by selecting options, I was able to provide additional notes to the system, which made me feel that my work was valuable.

More interactivity is needed to bring a model to the next level, but given a previously restricted rule set and its restricted output, how open the system should be is a crucial point to determine.

Question

  1. More flexibility means more load on the system and more requirements on users. How do we balance user contributions? For example, if one user wants to provide an experimental input and another user is unwilling to, will the system accept both inputs or only those from qualified users?
  2. How do we acknowledge the contributions of the users?


2/5/2020 – Jooyoung Whang – Principles of Mixed-Initiative User Interfaces

This paper seeks to determine when it is good to allow direct user manipulation versus automated services (agents) in a human-computer interaction system. The author arrives at the concept of mixed-initiative user interfaces: a system that seeks maximum efficiency by drawing on the strengths of both sides through collaboration. In the proposal, the author claims that the major factors to consider when providing automated services are addressing performance uncertainty and predicting the user’s goals. According to the paper, many poorly designed systems fail to gauge when to provide automated service and misinterpret user intention. To overcome these problems, the paper states that automated services should be provided when it is certain they give additional benefit over the user doing the task manually. The author also writes that effective and natural transfer of control to the user should be provided, so that users can efficiently recover from errors and step forward toward their goals. The paper also provides a use case of a system called “LookOut.”
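The expected-utility comparison between acting and doing nothing that drives this decision can be sketched as follows. The structure (probability-weighted utilities under goal/no-goal) follows the paper’s decision-theoretic framing, but the specific utility values below are illustrative assumptions:

```python
def expected_utility(p_goal, u_action_goal, u_action_nogoal,
                     u_noaction_goal, u_noaction_nogoal):
    """Compare the expected utility of acting vs. doing nothing,
    given the inferred probability that the user has the goal.
    (Utility values passed in are illustrative, not the paper's.)"""
    eu_action = p_goal * u_action_goal + (1 - p_goal) * u_action_nogoal
    eu_noaction = p_goal * u_noaction_goal + (1 - p_goal) * u_noaction_nogoal
    return "act" if eu_action > eu_noaction else "do_nothing"

# Acting helps a lot when the goal is real (+1.0) but annoys otherwise (-0.5);
# doing nothing mildly hurts a user who did have the goal (-0.2).
print(expected_utility(0.8, 1.0, -0.5, -0.2, 0.0))  # act
print(expected_utility(0.2, 1.0, -0.5, -0.2, 0.0))  # do_nothing
```

Solving for the probability at which the two expected utilities are equal yields the kind of action threshold the paper derives, which is why the same system can behave differently for users who assign different costs to being interrupted.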

I greatly enjoyed and appreciated the example that the author provided. I personally have never used LookOut, but it seemed like a good program from reading the paper. I liked that the program gracefully handled subtleties such as recognizing phrases like “Hmm…” to sense that a user is thinking. It was also interesting that the paper tries to infer a user’s intentions using a probabilistic model. I recognized keywords such as utility and agents that also frequently appear in the machine learning context. In my previous machine learning experience, an agent acted according to policies leading to maximum utility scores. The paper’s approach is similar, except that it involves user input and the utility is the user’s goal achievement or intention. The paper was a nice refresher on what I learned in AI courses, as well as on putting humans into the context.

The following are the questions that I came up with while reading the paper:

1. The paper puts a lot of effort in trying to accurately acquire user intention. What if the intention was provided in the first place? For example, the user could start using the system by selecting their goal from a concise list. Would this benefit the system and user satisfaction? Would there be a case where it won’t (such as even misinterpreting the provided user goal)?

2. One of the previous week’s readings provided the idea of affordances (what a computer or a human is each better at doing than the other). How does this align with automated service versus direct human manipulation? For example, since computers are better at processing big data, tasks related to this would preferably need to be automated.

3. The paper seems to assume that the user always has a goal in mind when using the system. How about purely exploratory systems? In scientific research settings, there are a lot of times when the investigators don’t know what they are looking for. They are simply trying to explore the data and see if there’s anything interesting. One could claim that this is still some kind of a goal, but it is a very ambiguous one as the researchers don’t know what would be considered interesting. How should the system handle these kinds of cases?
