Graduate Student Posters

Graduate student posters will be presented during the community reception and poster session, Thursday, April 11, 2019, from 5:30-7:00pm in the Moss Arts Center Atrium.


Auditing YouTube for Misinformation

Eslam Hussein, Dept. of Computer Science

Advised by Dr. Tanu Mitra, Dept. of Computer Science

In this work, we audit YouTube’s recommendation system for recommending and spreading misinformation such as conspiracy theories, fake news, rumors, and falsified information. We experiment with YouTube from different angles and study its recommendation of misinformation from multiple perspectives, both demographic and geographic.


FeedCred: Supporting News Credibility Assessment on Social Media Through Nudges

MD Momen Bhuiyan, Dept. of Computer Science

Advised by Dr. Tanu Mitra, Dept. of Computer Science

With an expanding number of online outlets disseminating news on social media, a user’s ability to evaluate news credibility is severely time-constrained. We employ nudges, a choice-preserving technique that steers people into making conscious credibility judgments of news tweets. Guided by cue-based approaches to credibility evaluation, we designed a browser extension for Twitter, called FeedCred. By emphasizing mainstream news content that had previously been identified as reliable and by de-emphasizing non-mainstream content, FeedCred directs users’ attention to two peripheral cues: the authority of the source and other users’ opinions of the news item. Through a two-week field deployment with 36 participants, we find that credibility perception of non-mainstream content decreased significantly for participants using the extension compared to the control group. However, post-hoc analyses show that political ideology inhibited this effect. Interviews complementing our quantitative findings revealed that FeedCred made users evaluate news more carefully. Our results inform site designers seeking to enhance news literacy via nudge-based interventions.


Framing Hate with Hate Frames: Designing the Codebook

Shruti Phadke, Dept. of Computer Science

Advised by Dr. Tanu Mitra, Dept. of Computer Science

Hate groups increasingly use social media to promote extremist ideologies. They frame their online communications to appeal to potential recruits. Informed by sociological theories of framing, we develop the “Hate Frames Codebook”, a hand-coding scheme for analyzing online hate. The “Hate Frames Codebook” offers a two-fold outlook on hateful communications. First, it adopts a Collective Action perspective to analyze how hate groups identify problems in the social groups they target, suggest solutions to the problems, and motivate their supporters. Then, the codebook highlights strategies of influence through the lens of Propaganda Devices. We validate our codebook by applying it to a sample of 250 publicly available tweets sent by 15 Southern Poverty Law Center-designated hate groups. The codebook fosters future research by outlining the dimensions of framing in hate group communications, thus laying theoretical grounds for curating datasets and building computational models of hateful language.


Characterizing the Social Media News Sphere through User Co-Sharing Practices

Vartan Kesiz Abnousi, Dept. of Computer Science

Advised by Dr. Tanu Mitra, Dept. of Computer Science

A fundamental step in curating news spreading through social media is to assess whether the source of that news complies with basic journalistic norms of authenticity and accountability. However, how do these assessments align with the types of sources that users engage with on social media? In this paper, we describe the landscape of news sources that share a social media audience. We focus on 639 news sources, both credible and questionable, and characterize them according to the audience that shares their articles on Twitter. First, we compare the sharing practices of news sources to two broad journalistic norms: authenticity and accountability. While authenticity is represented by expert assessments of factuality and bias, accountability is captured by measures of journalistic integrity. We find that the Twitter audience separates sources by factuality better than by partisan bias. For example, Breitbart, a far-right outlet, is closer (in cosine similarity) to the left-wing outlet Slate than either is to the Corbett Report, a conspiratorial right-wing source. Based on user co-sharing practices, what communities of news sources emerge? We find four groups: one is home to mainstream, high-circulation sources from all sides of the political spectrum; one to satirical, left-leaning sources; one to bipartisan conspiratorial, pseudo-scientific sources; and one to right-leaning, deliberate misinformation sources. Finally, we show how articles shared on Twitter differ across the four groups. We characterize the sentiment, psycholinguistics, style, and content of the articles. We find that the mainstream group follows journalistic best practices, such as the use of objective, formal, emotionally neutral language, whereas the conspiratorial group expresses anger and certainty when reporting. We leverage findings from our content analysis to classify the articles with high accuracy. Our data-driven categorization of news sources will help to navigate the complex landscape of online news and has implications for social media platforms as well as for journalism scholars. To enable new research questions on the social news sphere, we release our data, which connect over 30M tweets to over 1M articles from online sources’ websites, along with expert assessments of the sources.
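
To make the audience-based comparison above concrete, the following is a minimal Python sketch, assuming each source is represented by a vector counting how often each Twitter user shared its articles; the toy share counts and variable names are hypothetical, not values from the study.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two per-user share-count vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

# Hypothetical toy data: each entry counts, per Twitter user, how many
# articles from that source the user shared (five users shown).
share_counts = {
    "breitbart.com":     np.array([3.0, 0.0, 5.0, 1.0, 0.0]),
    "slate.com":         np.array([2.0, 1.0, 4.0, 0.0, 0.0]),
    "corbettreport.com": np.array([0.0, 6.0, 0.0, 0.0, 7.0]),
}

for a in share_counts:
    for b in share_counts:
        if a < b:
            sim = cosine_similarity(share_counts[a], share_counts[b])
            print(f"{a} vs {b}: {sim:.3f}")

Sources with similar per-user sharing profiles score close to 1, which is the sense in which Breitbart and Slate can sit nearer to each other than either does to the Corbett Report.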


Vauquois

Dillon Cutaiar, Dept. of Computer Science

Advised by Dr. Todd Ogle

Funded by a 2016-17 ICAT SEAD grant, a transdisciplinary team from Virginia Tech comprising Thomas Tucker (Creative Technology, School of Visual Arts), Todd Ogle (Technology-enhanced Learning and Online Strategies), David Hicks (History and Social Science Education, School of Education), DongSoo Choi (Creative Technology, School of Visual Arts), and David Cline (Public History, Department of History) created a learning environment that is a hands-on, mixed-reality exhibit of the human experience in the contested landscape of World War I. In collaboration with French partners Celine Beauchamp and Adrien Arles from the archaeology firm Arkimene, the team created an immersive experience that leveraged the ICAT Cube, a virtual reality walkthrough of a tunnel recreation, and a 360-degree video documentary to explore the experiences of French, German, and American troops and civilians at Vauquois Hill, overlooking the French city of Verdun. There, the shift from street fighting to trench warfare to tunnel and underground warfare on the Western Front, lasting four years, destroyed the village of Vauquois, scarring both the landscape and the historical consciousness of those who were there.


CrowdIA: Human-Algorithm Collaboration in Large-Scale Sensemaking

Tianyi Li, Dept. of Computer Science, affiliated with the InfoVis Lab and Crowd Intelligence Lab

Co-advised by Dr. Kurt Luther and Dr. Chris North, Dept. of Computer Science

While the increasing volume of text data is challenging the cognitive capabilities of human analysts, deciphering the rich information encoded in human language remains AI-hard. Can humans and AI collaborate to transform the deluge of information into opportunities to improve our wisdom? How can many distributed agents contribute asynchronously and meaningfully to suitable components of a holistic sensemaking process? We describe a pipeline of modularized steps connected by clearly defined inputs and outputs to address this challenge. We implemented CrowdIA, a software platform that enables distributed, transient, novice human analysts to think collectively via asynchronous collaboration. By applying CrowdIA to solve mysteries of varying difficulty, we found that the pipeline is effective in facilitating large-scale sensemaking and that the human-algorithm collaboration produced meaningful analyses. We also identified factors that bottleneck performance on more difficult mysteries. We conclude with lessons learned and design considerations for hybrid systems that augment human and machine intelligence.
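
As a rough illustration of the modular-pipeline idea (the step names below are placeholders, not CrowdIA’s actual modules), each step in this Python sketch consumes the previous step’s output through a clearly defined hand-off; in a deployed system, each step would be dispatched to transient, novice crowd workers rather than run locally.

from typing import Callable, List

Step = Callable[[list], list]

def run_pipeline(documents: list, steps: List[Step]) -> list:
    data = documents
    for step in steps:
        data = step(data)  # each step's output is the next step's input
    return data

# Placeholder steps for illustration only.
def filter_relevant(docs: list) -> list:
    return [d for d in docs if "suspect" in d.lower()]

def extract_evidence(docs: list) -> list:
    return [d.split(".")[0] for d in docs]

def summarize(evidence: list) -> list:
    return [" | ".join(evidence)]

print(run_pipeline(
    ["The suspect met a courier at noon. Unrelated detail.", "Weather report."],
    [filter_relevant, extract_evidence, summarize],
))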


Second Opinion

Vikram Mohanty, Dept. of Computer Science

Advised by Dr. Kurt Luther, Dept. of Computer Science

Software systems often incorporate intelligent elements that interact with users and assist them in the form of final results or intermediate recommendations. While this kind of intelligence augmentation is often constructive to the goal of the task, interaction with human users can introduce bias, misinformation, or confusion. Civil War Photo Sleuth (CWPS) is a web platform built on a novel person identification pipeline for identifying unknown soldier photos from the American Civil War era (1861-65). Even though CWPS has shown considerable success in identifying faces while preventing misinformation through design interventions, users have been found to share screenshots on social networks to seek second opinions about similar-looking identifications. We address this “last mile” problem of person identification, i.e., helping a user pick the correct match from a set of very similar-looking photos suggested by the face recognition algorithm, through a new interface design, “Second Opinion,” which combines the complementary strengths of online crowd workers and the face recognition algorithm. Through our evaluations, we explore the implications for crowd-AI interaction and how interface design can help a user interpret the complementary information provided by crowds and algorithms to correctly identify a person.


Augmented Reality Immersive Analytics with Semantic Interaction

Lee Lisle, Dept. of Computer Science

Advised by Dr. Doug Bowman, Dept. of Computer Science

Data analytics helps people understand, learn, and disseminate ideas from large datasets. Immersive analytics improves upon data analytics by using virtual and augmented reality to immerse the user in interactable data points that they can organize in 3D space. Both processes can use machine learning and suggestion algorithms to offer the user new data points to organize or discard. Semantic interaction is an algorithm that detects users’ inputs and data manipulations and suggests additional artifacts to incorporate, or ways to improve, the user’s analysis of the dataset. We propose a multi-user adaptation of this algorithm that supports two or more people working collaboratively in a shared space, analyzing a dataset in augmented reality. We expect these updates to increase collaboration and provide a deeper shared understanding of the dataset, as well as create better ways to synthesize and export the collected findings. We also want to explore the differences between single-user and multi-user variants and find potential improvements to the semantic interaction algorithm to better support both cases. Lastly, we want to better understand the impact of the real world on users’ workflow as compared to a completely virtual space.


Flud: A Hybrid Crowd-Algorithm Approach for Visualizing Biological Networks

Aditya Bharadwaj, Dept. of Computer Science

Advised by Dr. Kurt Luther and Dr. T. M. Murali, Dept. of Computer Science

Many fields of science require meaningful and visually appealing representations of their data. A prominent example is the discipline of network biology, where scientists use graphs to understand the chemical reactions and protein interactions that underlie processes in the cell. However, the problem remains challenging due to multiple conflicting aesthetic criteria and complex domain-specific constraints. In this research, we present a gamified graph layout task in which players aim to create a layout that optimizes a score based on user-defined priorities. We propose a novel hybrid approach wherein non-experts and a simulated annealing algorithm build on each other’s progress. To facilitate this collaborative process, we have developed Flud, an online game with a purpose (GWAP) that combines the cognitive ability of humans to observe patterns with the computational accuracy of simulated annealing to draw graph layouts that can help scientists visualize and understand complex networks.
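
For readers unfamiliar with the optimization component, here is a minimal simulated-annealing sketch in Python that nudges node positions toward a lower layout penalty; the score terms, weights, and cooling schedule are illustrative assumptions, not Flud’s actual scoring function or user-defined priorities.

import math
import random

def layout_score(pos, edges, weights):
    # Assumed penalty: weighted sum of edge-length variance and node overlap.
    lengths = [math.dist(pos[u], pos[v]) for u, v in edges]
    mean = sum(lengths) / len(lengths)
    length_var = sum((l - mean) ** 2 for l in lengths) / len(lengths)
    overlaps = sum(1 for a in pos for b in pos
                   if a < b and math.dist(pos[a], pos[b]) < 0.1)
    return weights["length"] * length_var + weights["overlap"] * overlaps

def anneal(pos, edges, weights, temp=1.0, cooling=0.995, steps=2000):
    best = dict(pos)
    for _ in range(steps):
        node = random.choice(list(pos))
        old_xy, old_score = pos[node], layout_score(pos, edges, weights)
        # Propose a small random move for one node.
        pos[node] = (old_xy[0] + random.uniform(-0.1, 0.1),
                     old_xy[1] + random.uniform(-0.1, 0.1))
        delta = layout_score(pos, edges, weights) - old_score
        # Keep improvements; keep worse layouts with probability exp(-delta/temp).
        if delta > 0 and random.random() >= math.exp(-delta / temp):
            pos[node] = old_xy  # reject the move
        if layout_score(pos, edges, weights) < layout_score(best, edges, weights):
            best = dict(pos)
        temp *= cooling
    return best

nodes = {"A": (0.0, 0.0), "B": (1.0, 0.0), "C": (0.5, 1.0), "D": (0.2, 0.8)}
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
print(anneal(nodes, edges, weights={"length": 1.0, "overlap": 10.0}))

In Flud, players supply the moves and priorities that such an algorithm builds on; this sketch shows only the algorithmic half of that collaboration.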


Political Economy of Human-Computer Interactions

Yulong Liu, Dept. of Communication

Advised by Dr. Michael Horning, Dept. of Communication

Derived from Marxist thought and democratic politics, this study asks questions about power in human-computer interactions and its effects on democratic civic engagement. The political economy of human-computer interaction is a critical realist approach that investigates problems connected with the political and economic organization of human communication resources through computing facilities. Different ways of organizing and financing computer interfaces, mobile applications, and algorithm development have implications for the range and nature of these communications, and for the ways in which they are consumed and used. Recognizing that the goods produced by technology giants are at once economic and cultural, this approach calls for attention to the interplay between the symbolic and economic dimensions of the production of meaning. This requires careful study of how giant tech corporations address content ownership, finance, and support mechanisms (such as advertising); labor and the social organization of cultural production (such as patent/copyright ownership and employee/sponsor satisfaction); and how governance regulations and sponsorship affect online product development, user behavior, and self-reported content. This opens onto the second main topic: the influence of different ways of organizing new technology interfaces and access, whether commercial, state, public, interest-group, or their complex combinations. In turn, this connects with the third main concern: the relationships between humans and technology and the ways socio-cultural systems are organized.


Geolocating Images with Expert-Led Crowdsourcing and Shared Representations

Sukrit Venkatagiri, Dept. of Computer Science, Crowd Intelligence Lab

Advised by Dr. Kurt Luther, Dept. of Computer Science

Expert investigators bring advanced skills and deep experience to analyzing visual evidence, but even experts face limits on their time and attention, and automated solutions are ineffective. In contrast, crowds of novices are highly scalable, parallelizable, and easy to mobilize, but lack expertise in leading investigations. Here, we introduce the novel concept of expert-led crowdsourcing, in which the complementary strengths of experts and novice crowds are combined to perform complex tasks with greater speed and quality than either alone. We focus on the complex task of image geolocation performed by professional investigators in domains like journalism and human rights. We built GroundTruth, an online system that uses shared representations—diagramming, grids, and feedback—to allow experts and crowds to collaborate in real time to geolocate images. Our mixed-methods evaluation with 11 experts and 567 crowd workers found that GroundTruth helped experts geolocate images and revealed challenges and success strategies for expert-crowd interaction. We also discuss broader implications for crowdsourcing other complex tasks.