Faculty Talks

Faculty talks will be presented on both workshop days in
Torgersen Hall room 1100. On Thursday, April 11, 2019, talks will be presented from 11:00 am–12:00 pm and 1:00–2:00 pm. On Friday, April 12, 2019, talks will be presented from 10:00 am–12:00 pm.


Long Faculty Talks

Semantic Interaction for the Two-Black-Box Problem (Thursday, April 11, 2019, 11:00–11:30 am)

Chris North, Dept. of Computer Science

In data analytics, the “black box” problem denotes the fact that artificial intelligence algorithms in general, and neural network models in particular, suffer from opaqueness. These algorithms can supply useful results, such as finding novel latent structure in otherwise difficult-to-comprehend data. However, they typically do not provide any justification or rationale for the results they produce. Users of these algorithms are therefore faced with the decision of whether to accept the results at face value, without the ability to question or understand the underlying process. This problem has given rise to the “Explainable AI” (XAI) research agenda, which seeks to open the black box of these algorithms, enabling them to explain their results to human analysts. Analysts could potentially peer inside the algorithms and gain some appreciation for how the analytical results were discovered: the process trail, analytical provenance, and data support.

However, this is only half the problem in human-AI interaction for data analytics. There is another black box in the equation: the human cognitive mind. Analysts conduct cognitive sensemaking, and as a result of this thought process they also want to influence the algorithms to produce alternative results of interest to their sensemaking. From the perspective of the algorithm, however, the human mind is a black box that is equally (or perhaps even more) difficult to interpret. How can the human “explain” herself to the algorithm, so that the algorithm can respond to her internal thought processes, goals, motivations, expert domain knowledge, and intents? How can the machine learn from user interaction (MLUI)? Taken together, we call this the “two-black-box” problem.

We propose Semantic Interaction (SI) as a design philosophy to address the two-black-box problem in human-AI interactive data analytics.
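
As a rough illustration of how an analyst’s interactions might feed back into the model, consider a projection whose distance metric is a weighted sum over data attributes: when the analyst drags two points together, the system can infer which attributes the analyst implicitly considers important. The Python sketch below uses invented data and a deliberately simplified update rule; it conveys the flavor of the idea, not the SI pipeline itself.

    import numpy as np

    def update_weights(X, w, i, j, lr=0.1):
        # When the user drags points i and j closer in the projection,
        # boost the weights of dimensions on which the two points agree,
        # nudging the distance metric toward the user's implicit notion
        # of similarity. (Illustrative rule, not the published method.)
        diff = np.abs(X[i] - X[j])                # per-dimension disagreement
        similarity = 1.0 - diff / (diff.max() + 1e-9)
        w = w + lr * similarity                   # reward agreeing dimensions
        return w / w.sum()                        # keep weights normalized

    # Toy data: four documents over three attribute dimensions.
    X = np.array([[0.9, 0.1, 0.2],
                  [0.8, 0.2, 0.9],
                  [0.1, 0.9, 0.3],
                  [0.2, 0.8, 0.8]])
    w = np.ones(3) / 3                            # start with uniform weights
    w = update_weights(X, w, 0, 1)                # user drags docs 0 and 1 together
    print(w)                                      # weight shifts toward dims 0 and 1

In a full system the updated weights would re-project all points, closing the loop between the analyst’s black box and the algorithm’s.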


“Electricity Will Do The Counting”: The 1890 US Census, Algorithms, and Data in Social Context (Thursday, April 11, 2019, 11:30 am–12:00 pm)

Tom Ewing, Dept. of History

This provocation examines two instances of algorithmic thinking in the 1890 census: first, the use of computing machines for tabulation, and second, the use of fractions to define racial classifications. In the 1890 US Census, algorithms were used for the first time to tell a machine how to count human beings based on information encoded on a punch card. Humans collected the data, used a machine to enter the data on cards, and the tabulating machine then calculated the distribution of people at a much higher rate than humans could achieve. Once the calculations were completed, however, the data was published in charts, tables, and graphs readable only by humans, so the automated portion of the census remained limited. Because the 1890 Census marked the first time calculating machines were used for tabulation, it generated considerable attention in popular newspapers. The title of this provocation, “electricity will do the counting,” appeared in a widely circulated newspaper article describing how the counting machines promised greater speed and accuracy than counting done by humans. The machines depicted on the cover of Scientific American on August 31, 1890 illustrated the new process for collecting, categorizing, and counting data. These images, long used to illustrate this important stage in the history of computing and data analytics, provide graphic evidence of how algorithms were celebrated at their introduction as an instrument for removing the human from the calculating process, in ways that promised only improvements in speed, accuracy, and efficiency.


Aspirational Cyber Human Systems (Friday, April 12, 2019, 10:00 am)

Aisling Kelliher, Dept. of Computer Science 

The promise and threat of artificial intelligence are of growing significance for creative industries. Technical advancements in machine learning aim to better serve, engage, and retain consumers through highly targeted content presentation and personalized recommendations. Other advancements in automated content creation and AI assistive tools are heralded for their ability to efficiently generate highly optimized content at scale. Depending on one’s perspective, these developments can be understood as beauty or menace, with a vast landscape of complex issues and implications in between. As designers, creators, and researchers, how can we best conceptualize and handle artificial intelligence as a fundamental material building block in our work? How can the role of the human be elevated through encounters with AI systems that treat the experience of the human agent as at least as important as the process of data collection or algorithmic improvement? Using examples from domains such as design, healthcare, and the arts, this talk will examine issues of power, failure, resistance, trust, and social control within the entangled realm of creative AI.


Lightning Faculty Talks

Workshop Day 1 (Thursday, April 11, 2019, 1:00–2:00 pm)

Governance in the Age of Artificial Intelligence

Andrea Kavanaugh, Dept. of Computer Science

I am interested in how artificial intelligence (AI), especially algorithms and machine learning, affects participation in governance by citizens and non-profit organizations in democratic societies. People will not participate in governance if they do not trust the system. Government, even “by the people” in democratic societies, is involved one way or another in almost all social and economic aspects of life: education, health, agriculture, land use, energy, security, public safety, communication, transportation, industry, defense, and strategic planning, as well as the legislation and regulation related to all these areas.


Bending Awareness and Motivation with FitAware: Algorithms for Fair Comparison of Activity Levels

D. Scott McCrickard and Andrey Esakia, Dept. of Computer Science

Motivating people to exercise is challenging, and the availability and sharing of personal fitness data from step counters, heart rate monitors, altimeters, and similar devices presents opportunities to craft communities for encouragement and inspiration. However, variations in personal motivations and goals, combined with a lack of transparency about many of the metrics, can lead to a frustrating and discouraging experience for many people, particularly those who might most benefit from an exercise community. Based on our experiences creating and deploying a community fitness tracking system, this position paper identifies challenges in fairly representing individual and team progress in meaningful and truthful ways, and suggests approaches to identify adaptive methods that motivate program participants.
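
One concrete way to make cross-member comparison fairer is to score progress against each member’s own goal rather than by raw step counts. The sketch below is illustrative only; the scoring rule and goal values are assumptions, not FitAware’s deployed algorithm.

    def fair_progress(steps, personal_goal):
        # Progress as a fraction of the member's own goal, so a beginner
        # with a 5,000-step goal and a marathoner with a 15,000-step goal
        # compete on effort rather than raw counts. Capped at 100% so a
        # single outlier day cannot dominate the team score.
        return min(steps / personal_goal, 1.0)

    team = {"ana": (6000, 5000), "ben": (9000, 15000)}
    scores = {name: fair_progress(s, g) for name, (s, g) in team.items()}
    team_score = sum(scores.values()) / len(scores)   # each member weighs equally
    print(scores, round(team_score, 2))               # {'ana': 1.0, 'ben': 0.6} 0.8

Whether to cap scores, how goals are set, and how to aggregate across a team are exactly the kinds of design choices the position paper identifies as fairness challenges.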


When Algorithms Are Smarter than Users: Protections May Be Needed

Reza Barkhi and Steven D. Sheetz, Dept. of Accounting and Information Systems

Algorithms of the future will read our emotions, evaluate our communications, measure our vital statistics, assess our intelligence related to a task, and respond to our interactions based on this knowledge. AI-enabled algorithms will also be knowledgeable of consumer and psychological research that enables the presentation of information in the most influential manner. For example, information framing effects are widely known to influence decisions (Morana et al., 2017; Nunes and Jannach, 2017): “80% lean” seems better than “10% fat,” yet many users who would actually prefer the latter (which is leaner) choose the former because of the way the choice is presented, an irrational decision. It seems that we should expect our algorithms to make rational decisions for the benefit of the user, i.e., to act like a fiduciary: to recognize and respond properly to the framing effects, human biases, and emotional decisions that lack a rational basis. However, human experience is replete with examples where parties engaged in transactions have incongruent incentives and may not view their interaction as a positive-sum game. Hence, algorithms play a critical role today, given that many interactions are online and built into smart contracts that run according to an underlying logic the user may not understand.
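
To make the framing example concrete, a fiduciary algorithm could canonicalize equivalent descriptions before comparing options, so that the frame cannot sway the recommendation. The following is a hypothetical sketch; the label format and helper function are invented for illustration.

    def fat_fraction(label):
        # Convert a framed label ("80% lean" or "10% fat") to one canonical
        # quantity, the fat fraction, so options are compared on substance
        # rather than on how they are worded.
        value, _, kind = label.partition("% ")
        pct = float(value) / 100.0
        return round(1.0 - pct, 9) if kind == "lean" else round(pct, 9)

    options = ["80% lean", "10% fat"]
    best = min(options, key=fat_fraction)   # a fiduciary ranks on canonical value
    print(best)                             # -> "10% fat" (0.10 fat beats 0.20 fat)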


The Challenges of Ethical and Safe Smart Built Environments

Denis Gračanin, Dept. of Computer Science, Virginia Tech, with Mohamed Eltoweissy, Virginia Military Institute; Liang Cheng, Lehigh University; and Krešimir Matković, VRVis Research Center, Austria

A smart built environment (SBE) can be viewed as a collection of connected, interactive smart objects imbued with sensing and actuating capabilities. The Internet of Things (IoT) infrastructure supporting interactions of smart objects can change how SBEs behave and how the inhabitants interact with them. If SBE components, such as walls and furniture pieces, are mobile and controllable, the geometry and interior design of SBE spaces can be changed in response to inhabitants’ actions and social activities. An added complexity is the mobility of smart objects and their ability to reconfigure. Reconfigurable IoT-based SBEs can provide significant benefits by enabling mobile, flexible, and collaborative spaces to improve the lives of individuals, groups, and the broader community.

Through information fusion from smart meters, smartphones, and specialized sensors (e.g., inexpensive occupancy sensors), the inhabitants’ Activities of Daily Living and their energy impacts (e.g., switching a particular appliance on or off) can be inferred. Because residential and social environments are highly heterogeneous, data collection must incorporate and adapt multiple collection solutions, introducing trade-offs among cost, ease of deployment, and back-end complexity.
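
As a toy illustration of such fusion, a rule-based classifier might label a short window of sensor events with an activity; the event names and rules below are assumptions for the sketch, not the system described above.

    def infer_activity(events):
        # events: the set of sensor observations in a short time window,
        # fused from smart meters, occupancy sensors, and smartphones.
        if {"kitchen_occupied", "stove_power_spike"} <= events:
            return "cooking"
        if {"bathroom_occupied", "water_heater_draw"} <= events:
            return "bathing"
        if "no_motion_30min" in events:
            return "away_or_asleep"
        return "unknown"

    window = {"kitchen_occupied", "stove_power_spike", "fridge_baseline"}
    print(infer_activity(window))   # -> "cooking"

A real deployment would replace such hand-written rules with learned models, but the trade-offs among cost, deployment effort, and back-end complexity remain.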


Synergetic Human-Machine Interaction in Mining and Explaining Complex Biological Data

Pavel Kraikivski, Academy of Integrated Science, Division of Systems Biology

Deep learning and other AI algorithms seem ideal for analyzing and understanding the big biological data accumulating at explosive speed in databases. However, it has also been reported that we cannot explain precisely how these algorithms succeed in classifying data, and that their results often lack an acceptable biological interpretation. Although machine-learning algorithms can be impressively accurate at making predictions, they do not offer an explanation that would help us understand the dynamic behavior of complex systems. Involving humans in the algorithmic decision-making pipeline can be a solution to this problem.


Object-Oriented Natural Language for Human-Algorithm Communication

Michael S. Hsiao, Dept. of Electrical and Computer Engineering

Technology has made tremendous strides over the years; today’s gadgets are easier to use and more user-friendly than those of just a decade ago. However, programming these devices remains an arduous task: a small bug can break a program or system. Since the dawn of computing, programmers have been required to think like the computer when designing software. With the advances in technology, particularly in AI and NLP, we believe that designers may not always need to think like a computer. Rather, the machine can be made to think more like a human and accept more human-centric algorithms. Describing an algorithm in a natural language allows the designer to think at a much higher level of abstraction. Further, natural-language program code is much more readable and easier to comprehend than Java or Python, which sound unnatural to an average human. Needless to say, debugging would also be much simpler. It has long been understood that “debugging is a skill which does not immediately follow from the ability to write code.” Since the number of sentences would be orders of magnitude smaller than the number of lines in comparable programming-language code, the task of debugging would likewise be vastly simpler.

The discussion above begs the question: is it even possible to design software directly in a natural language (NL), such as English? If so, the designer can write plainly and allow the compiler to convert the natural-language text into executable code.
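
As a toy of what such a compiler might do, the sketch below maps one English sentence form to an executable event handler. It is a deliberately tiny pattern matcher, not the actual system, and the sentence template is invented for illustration.

    import re

    # One "object-oriented natural language" sentence form:
    #   "When the <object> is <state>, <verb> the <target>"
    RULE = re.compile(r"when the (\w+) is (\w+), (\w+) the (\w+)")

    def compile_sentence(sentence):
        obj, state, verb, target = RULE.match(sentence.lower()).groups()
        def handler(event_obj, event_state):
            # Fire only when the described object reaches the described state.
            if (event_obj, event_state) == (obj, state):
                print(f"{verb} {target}")    # stand-in for real actuation
        return handler

    on_event = compile_sentence("When the button is pressed, toggle the light")
    on_event("button", "pressed")            # prints: toggle light

Scaling this up is, of course, the hard research problem: resolving ambiguity, pronouns, and implicit objects is what separates a template matcher from a true NL compiler.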


Workshop Day 2 (Friday, April 12, 2019, 10:00 am–12:00 pm)

Fairness, Accountability, Transparency and Ethics Network at VT

Ellington Graves, Africana Studies Program

This session will introduce an initiative for a Fairness, Accountability, Transparency and Ethics network at Virginia Tech, which will be linked to the Equity and Social Disparity Transdisciplinary Community.


A Human-in-the-loop Deep-learning based Document Tagging for Stance Detection

Srijith Rajamohan, Alana Romanella, and Amit Ramesh, Advanced Research Computing

In this work, we seek to fine-tune a weakly supervised, expert-guided deep neural network for stance detection in political science. In this context, stance detection is used to determine political affiliation, framed as relative proximities between entities in a low-dimensional space. We give an overview of the pipeline from data ingestion and processing to the generation of visualizations, and present a web-based framework created to facilitate this interaction and exploration. Preliminary results of this study are summarized and future work is outlined.
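
A simplified picture of stance-as-proximity, with invented embeddings and anchors rather than the authors’ trained model: affiliation is read off as relative closeness to entity anchors in the learned low-dimensional space.

    import numpy as np

    doc = np.array([0.6, -0.2, 0.1])               # embedding of one document
    anchors = {"party_A": np.array([0.7, -0.1, 0.0]),
               "party_B": np.array([-0.5, 0.4, 0.2])}

    def stance(doc, anchors):
        # Distance to each party anchor; the nearest anchor gives the label.
        d = {k: float(np.linalg.norm(doc - v)) for k, v in anchors.items()}
        return min(d, key=d.get), d

    label, dists = stance(doc, anchors)
    print(label, dists)                            # -> party_A, with both distances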


You Can’t Play 20 Questions With an Algorithm and Win: How To Break Deep Networks Productively

Anthony D. Cate, Dept. of Psychology

Allen Newell, an early proponent of artificial intelligence, famously proposed that “you can’t play 20 questions with nature and win.” He meant that while it is straightforward to investigate binary questions about cognitive phenomena, a series of hypothesis tests won’t often give true understanding. He proposed that it is more productive to formulate explicit models of cognitive processes, and he gave three courses of action for understanding human cognition via experiments, paraphrased here: (1) build complete processing models, (2) analyze a complex task, and (3) apply one program to many tasks. This framework is relevant to the question of how humans can understand algorithms. Humans reason well and understand complex processes easily when they can make models, both the formal kind suggested by Newell and the more general form of mental representation known as a mental model.


The Role of Agent-based Models in Understanding Human Agency and its Implications for Artificial Intelligence

Bianica Pires, Biocomplexity Institute

Agent-based modeling (ABM) is a computational method that allows for modeling the localized behavior of individual agents while observing the meso- and macro-level outcomes that emerge. Within an “artificial” society, dynamic interactions over physical and social spaces can be created with relative ease, allowing us to model agent-to-agent and agent-to-environment interactions spatiotemporally (Axtell, 2000). It is the ability to model these interactions at the level of the individual that makes ABM distinct from other computer simulation techniques. Given this, I see at least three opportunities for ABM to complement research in artificial intelligence, particularly concerning human agency and decision making.
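
A minimal sketch of the paradigm, using a generic majority-rule opinion model rather than any specific published ABM: each agent follows a purely local rule, and a macro-level outcome, here the spread of an opinion, emerges from repeated interactions.

    import random

    class Agent:
        def __init__(self, opinion):
            self.opinion = opinion                 # 0 or 1

        def step(self, neighbors):
            # Local rule: adopt the majority opinion of sampled neighbors.
            votes = sum(n.opinion for n in neighbors)
            if votes > len(neighbors) / 2:
                self.opinion = 1
            elif votes < len(neighbors) / 2:
                self.opinion = 0

    random.seed(1)
    agents = [Agent(random.randint(0, 1)) for _ in range(100)]
    for t in range(10):
        for a in agents:
            a.step(random.sample(agents, 5))       # agent-to-agent interaction
        print(t, sum(a.opinion for a in agents))   # macro-outcome from micro-rules

No agent is told the global opinion count, yet the population typically converges; that emergence of macro-patterns from micro-rules is what ABM contributes to questions of human agency.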


Computer-Mediated Empathy

Sang Won Lee, Dept. of Computer Science

Today, we live in an era in which we can communicate via computers more than ever before. While novel social networks and emerging technologies help us transcend the spatial and temporal constraints inherent to in-person communication, the trade-off is a loss of natural expressivity. Empathetic interaction is already challenging in person, and computer-mediated communication makes such empathetically rich communication even more difficult. Are technology and intelligent systems opportunities or threats to more empathic interpersonal communication? My future research vision is to build computational systems that facilitate understanding and empathy. Realizing empathy is suggested not only as a way to communicate with others, but also as a way to design products for users and to facilitate creativity.


Creating Transparent Search and Discovery Algorithms

Chreston Miller, University Libraries

There is growing interest in discovering how to incorporate human expertise in the analysis process. One specific question centers on how to support an expert analyzing multimodal data of human behavior over time in a way that facilitates locating subjectively relevant behavior patterns. The process of supporting such an expert begins with a seed idea that evolves over time as one investigates and searches the data. We call this kind of analysis Interactive Relevance Search and Modeling (IRSM). Many search and discovery algorithms apply statistical analysis and machine learning to identify trends in datasets. While the effectiveness of such approaches has been shown, they can alienate the user from how trends and pattern structure are discovered, and they may hide pattern details necessary to learn and identify additional trends. A transparent understanding of the underlying patterns may aid the user in identifying and discovering behavioral trends, and can also be used to verify the functionality and accountability of the algorithm(s). Depending on a researcher’s area of expertise, the functionality and limitations of the patterns used and the operations of the algorithm(s) may not be apparent. IRSM puts the expert in a position to steer the search and discovery process and open the black box of trend/pattern identification.
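
Schematically, the IRSM loop is: seed a pattern, rank matches, collect expert feedback, refine, repeat. The sketch below uses invented helper signatures to convey the loop’s transparency; it is not the authors’ implementation.

    def irsm(events, pattern, score, refine, get_labels, rounds=3):
        # Each round keeps the expert in the loop: she inspects the top
        # matches, marks them relevant or not, and the pattern update is
        # an explicit, visible step rather than a hidden model change.
        for _ in range(rounds):
            ranked = sorted(events, key=lambda e: score(pattern, e), reverse=True)
            labels = get_labels(ranked[:10])   # expert marks hits and misses
            pattern = refine(pattern, labels)  # transparent, steerable update
        return pattern, ranked

    # Toy usage with stand-in behaviors and a scripted "expert":
    events = ["nod", "smile", "nod smile", "frown"]
    score = lambda p, e: sum(w in e for w in p) / len(p)
    refine = lambda p, labels: p | {w for e, ok in labels for w in e.split() if ok}
    get_labels = lambda top: [(e, "nod" in e) for e in top]
    pattern, ranked = irsm(events, {"nod"}, score, refine, get_labels)
    print(pattern, ranked[:2])                 # the pattern grows to include "smile"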