As the new semester begins, the 3DI group would like to acknowledge the recent accomplishments of some of its members from the previous semester. Lee Lisle defended his dissertation, titled “Immersive Space to Think: Immersive Analytics for Sensemaking with Non-Quantitative Datasets,” in December 2021. The abstract of his dissertation follows:
“Analysts often work with large, complex, non-quantitative datasets in order to better understand the concepts, themes, and other forms of insight contained within them. As defined by Pirolli and Card, this act of sensemaking is cognitively difficult, and is performed iteratively and repetitively through various stages of understanding. Immersive analytics aims to assist with this process by placing users in virtual environments that allow them to sift through and explore data in three-dimensional interactive settings. Most previous research, however, has focused on quantitative data, where users interact with mostly numerical representations of data. We designed Immersive Space to Think (IST), an immersive analytics approach that assists users in performing the act of sensemaking with non-quantitative datasets, affording analysts the ability to manipulate data artifacts, annotate them, search through them, and present their findings. We performed several studies to understand and refine our approach and how it affects users’ sensemaking strategies. An exploratory virtual reality study found that users place documents in 2.5-dimensional structures; we observed semicircular, environmental, and planar layouts. The environmental layout, in particular, used features of the environment as scaffolding for users’ sensemaking process. In a study spanning levels of mixed reality as defined by Milgram and Kishino’s Reality-Virtuality Continuum, we found that an augmented virtuality solution best fits users’ preferences while still supporting external tools. Lastly, we explored how users deal with varying amounts of space and three-dimensional user interaction techniques in a comparative study of small virtual monitors, large virtual monitors, and a seated implementation of Immersive Space to Think.
Our participants found that IST best supported the task of sensemaking, with evidence that users leveraged spatial memory and utilized depth to denote additional meaning in the immersive condition. Overall, Immersive Space to Think affords an effective three-dimensional sensemaking space, using 3D user interaction techniques that leverage embodied cognition and spatial memory to aid users’ understanding.”
Next, we would like to congratulate Shakiba Davari for successfully completing her preliminary exam in February.
Finally, we would like to congratulate Feiyu Lu for successfully defending his dissertation, titled “Glanceable AR: Towards a Pervasive and Always-On Augmented Reality Future,” in May. The abstract of his dissertation follows:
Augmented reality head-worn displays (AR HWDs) have the potential to assist personal computing and the acquisition of everyday information. With advancements in hardware and tracking, these devices are becoming increasingly lightweight and powerful. They could eventually have the same form factor as normal eyeglasses, be worn all day, and overlay information pervasively on top of the real world anywhere and anytime to continuously assist people’s tasks. However, unlike traditional mobile devices, AR HWDs are worn on the head and always visible. If designed without care, the displayed virtual information could be distracting or overwhelming, and could take away the user’s attention from important real-world tasks. In this dissertation, we research methods for appropriate information displays and interactions with future all-day AR HWDs by seeking answers to four questions: (1) how to mitigate the distractions of AR content to users; (2) how to prevent AR content from occluding the real-world environment; (3) how to support scalable, on-the-go access to AR content; and (4) how everyday users perceive using AR systems for daily information acquisition tasks. Our work builds upon a theory we developed called Glanceable AR, in which digital information is displayed outside the central field of view of the AR display to minimize distractions but can be accessed through a quick glance. Through five projects covering seven studies, this work provides theoretical and empirical knowledge to prepare us for a pervasive yet unobtrusive everyday AR future, in which the overlaid AR information is easily accessible, non-invasive, responsive, and supportive.
Congratulations to Lee, Shakiba, and Feiyu!