ISMAR 2023 and SUI 2023 Accepted Papers

October 11, 2023

The 3DI group would like to congratulate several of its members for their recent paper acceptances at ISMAR 2023 and SUI 2023.

SUI 2023

“In-the-Wild Experiences with an Interactive Glanceable AR System for Everyday Use” – Feiyu Lu, Leonardo Pavanatto, and Doug Bowman


Augmented reality head-worn displays (AR HWDs) of the near future will be worn all day every day, delivering information to users anywhere and anytime. Recent research has explored how information can be presented on AR HWDs to facilitate easy acquisition without intruding on the user’s physical tasks. However, it remains unclear what users would like to do beyond passive viewing of information, and what the best ways are to interact with everyday content displayed in AR HWDs. To address this gap, our research focuses on the implementation of a functional prototype that leverages the concept of Glanceable AR while incorporating various interaction capabilities for users to take quick actions on their personal information. Rather than overwhelming users or demanding continuous attention, our system centers on the idea that virtual information should remain invisible and unobtrusive when not needed but be quickly accessible and interactable on demand. Through an in-the-wild study involving three AR experts, our findings shed light on how to design interactions in AR HWDs to support everyday tasks, as well as how people perceive using feature-rich Glanceable AR interfaces during social encounters.

ISMAR 2023

“Gestures vs. Emojis: Comparing Non-Verbal Reaction Visualizations for Immersive Collaboration” – Alexander Giovannelli, Jerald Thomas, Logan Lane, Francielly Rodrigues, and Doug Bowman


Collaborative virtual environments afford new capabilities in telepresence applications, allowing participants to co-inhabit an environment to interact while being embodied via avatars. However, shared content within these environments often takes away the attention of collaborators from observing the non-verbal cues conveyed by their peers, resulting in less effective communication. Exaggerated gestures, abstract visuals, as well as a combination of the two, have the potential to improve the effectiveness of communication within these environments in comparison to familiar, natural non-verbal visualizations. We designed and conducted a user study where we evaluated the impact of these different non-verbal visualizations on users’ identification time, understanding, and perception. We found that exaggerated gestures generally perform better than non-exaggerated gestures, abstract visuals are an effective means to convey intentional reactions, and the combination of gestures with abstract visuals provides some benefits compared to their standalone counterparts.

“AMP-IT and WISDOM: Improving 3D Manipulation for High-Precision Tasks in Virtual Reality” – Francielly Rodrigues, Alexander Giovannelli, Leonardo Pavanatto, and Doug Bowman


Precise 3D manipulation in virtual reality (VR) is essential for effectively aligning virtual objects. However, state-of-the-art VR manipulation techniques have limitations when high levels of precision are required, including the unnaturalness caused by scaled rotations and the increase in time due to degree-of-freedom (DoF) separation in complex tasks. We designed two novel techniques to address these issues: AMP-IT, which offers direct manipulation with an adaptive scaled mapping for implicit DoF separation, and WISDOM, which offers a combination of Simple Virtual Hand and scaled indirect manipulation with explicit DoF separation. We compared these two techniques against baseline and state-of-the-art manipulation techniques in a controlled experiment. Results indicate that WISDOM and AMP-IT have significant advantages over best-practice techniques regarding task performance, usability, and user preference.

“Evaluating the Feasibility of Predicting Information Relevance During Sensemaking with Eye Gaze Data” – Ibrahim Tahmid, Lee Lisle, Kylie Davidson, Kirsten Whitley, Chris North, and Doug Bowman


Eye gaze patterns vary based on reading purpose and complexity, and can provide insights into a reader’s perception of the content. We hypothesize that during a complex sensemaking task with many text-based documents, we will be able to use eye-tracking data to predict the importance of documents and words, which could be the basis for intelligent suggestions made by the system to an analyst. We introduce a novel eye-gaze metric called ‘GazeScore’ that predicts an analyst’s perception of the relevance of each document and word when they perform a sensemaking task. We conducted a user study to assess the effectiveness of this metric and found strong evidence that documents and words with high GazeScores are perceived as more relevant, while those with low GazeScores are perceived as less relevant. We explore potential real-time applications of this metric to facilitate immersive sensemaking tasks by offering relevant suggestions.

“Uncovering Best Practices in Immersive Space to Think” – Kylie Davidson, Lee Lisle, Ibrahim Tahmid, and Doug Bowman


As immersive analytics research becomes more popular, user studies have aimed at evaluating the strategies and layouts of users’ sensemaking during a single focused analysis task. However, approaches to sensemaking strategies and layouts are likely to change as users become more familiar and proficient with the immersive analytics tool. In our work, we build upon an existing immersive analytics approach, Immersive Space to Think, to understand how schemas and strategies for sensemaking change across multiple analysis tasks. We conducted a user study with 14 participants who completed three different sensemaking tasks during three separate sessions. We found significant differences in the use of space and strategies for sensemaking across these sessions and correlations between participants’ strategies and the quality of their sensemaking. Using these findings, we propose guidelines for effective analysis approaches within immersive analytics systems for document-based sensemaking.

“Spaces to Think: A Comparison of Small, Large, and Immersive Displays for the Sensemaking Process” – Lee Lisle, Kylie Davidson, Leonardo Pavanatto, Ibrahim Tahmid, and Doug Bowman


Analysts need to process large amounts of data in order to extract concepts, themes, and plans of action based upon their findings. Different display technologies offer varying levels of space and interaction methods that change the way users can process data using them. In a comparative study, we investigated how the use of a single traditional monitor, a large high-resolution two-dimensional monitor, and an immersive three-dimensional space using the Immersive Space to Think approach impacts the sensemaking process. We found that user satisfaction grows and frustration decreases as available space increases. We observed specific strategies users employ in the various conditions to assist with the processing of datasets. We also found an increased usage of spatial memory as space increased, which improves performance in artifact position recall tasks. In future systems supporting sensemaking, we recommend using display technologies that provide users with large amounts of space to organize information and analysis artifacts.

“CoLT: Enhancing Collaborative Literature Review Tasks with Synchronous and Asynchronous Awareness Across the Reality-Virtuality Continuum”, ISMAR Competition – Ibrahim Tahmid, Francielly Rodrigues, Alexander Giovannelli, Lee Lisle, Jerald Thomas, and Doug Bowman


Collaboration plays a vital role in both academia and industry whenever we need to browse through a large amount of data to extract meaningful insights. These collaborations often involve people living far from each other, with different levels of access to technology. Effective cross-border collaborations require reliable telepresence systems that provide support for communication, cooperation, and understanding of contextual cues. In the context of collaborative academic writing, while immersive technologies offer novel ways to enhance collaboration and enable efficient information exchange in a shared workspace, traditional devices such as laptops still offer better readability for longer articles. We propose the design of a hybrid cross-reality, cross-device networked system that allows users to harness the advantages of both worlds. Our system allows users to import documents from their personal computers (PCs) to an immersive headset, facilitating document sharing and simultaneous collaboration with both co-located and remote colleagues. Our system also enables a user to seamlessly transition between Virtual Reality, Augmented Reality, and the traditional PC environment, all within a shared workspace. We present the real-world scenario of a global academic team conducting a comprehensive literature review, demonstrating the system’s potential for enhancing cross-reality hybrid collaboration and productivity.