The 3DI group would like to take a moment to acknowledge the contributions of its members and its alumni at ISMAR 2024.
Dr. Doug Bowman served on the ISMAR Career Impact Award committee, as a mentor in the doctoral consortium, as a panelist in the Future Faculty forum, and as an organizer of the Workshop on Intelligent XR: Harnessing AI for Next-Generation XR User Experiences (iXR). Dr. Bowman also co-authored the paper "Investigating Object Translation in Room-scale, Handheld Virtual Reality." The abstract of the paper follows:
Handheld devices have become an inclusive alternative to head-mounted displays in virtual reality (VR) environments, enhancing accessibility and allowing cross-device collaboration. Object manipulation techniques in 3D space with handheld devices, such as those in handheld augmented reality (AR), have typically been evaluated at tabletop scale, and it is not yet well understood how these techniques perform in larger-scale environments. We conducted two studies, each with 30 participants, to investigate how different techniques impact usability and performance for room-scale handheld VR object translations. We compared three translation techniques that are similar to commonly studied techniques in handheld AR: 3DSlide, VirtualGrasp, and Joystick. We also examined the effects of target size, target distance, and user mobility conditions (stationary vs. moving). Results indicated that the Joystick technique, which allowed translation relative to the user’s perspective, was the fastest and most preferred, with no significant difference in precision. Our findings provide insights for designing room-scale handheld VR systems, with potential implications for mixed reality systems involving handheld devices.
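As a rough illustration of the distinction the abstract draws, the Python sketch below shows one way a Joystick-style technique might translate an object relative to the user's current facing direction rather than along fixed world axes. This is a minimal sketch under simplifying assumptions (a yaw-only camera, ground-plane motion); the function name and parameters are hypothetical and not taken from the paper.

    # Illustrative sketch (not the paper's implementation): Joystick-style
    # translation expressed in the user's view frame, assuming a yaw-only camera.
    import numpy as np

    def joystick_translate(object_pos, stick_xy, camera_yaw_rad, speed, dt):
        """Move object_pos by joystick input relative to the user's perspective.

        stick_xy: (x, y) deflection in [-1, 1]; x strafes left/right, y moves
        forward/back along the user's facing direction (ground plane only).
        """
        forward = np.array([np.sin(camera_yaw_rad), 0.0, np.cos(camera_yaw_rad)])
        right = np.array([np.cos(camera_yaw_rad), 0.0, -np.sin(camera_yaw_rad)])
        delta = (right * stick_xy[0] + forward * stick_xy[1]) * speed * dt
        return object_pos + delta

    # Example: half deflection forward while the user faces 90 degrees of yaw,
    # so the object moves along world +x rather than world +z.
    pos = np.array([0.0, 1.0, 2.0])
    print(joystick_translate(pos, (0.0, 0.5), np.pi / 2, speed=1.5, dt=0.016))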
Dr. Shakiba Davari served as an organizer of the iXR workshop.
Dr. Lee Lisle acted as co-chair of the doctoral consortium as well as an organizer of the Inclusion, Diversity, Equity, Accessibility, Transparency, and Ethics in XR (IDEATExR) workshop. Dr. Lisle also served on the Science and Technology Committee.
Dr. Ryan McMahan served on the Science and Technology Committee and the Future Faculty forum, where he also hosted a tutorial. Dr. McMahan also co-authored several papers at the conference. The titles and abstracts of the papers follow.
Cultural Reflections in Virtual Reality: The Effects of User Ethnicity in Avatar Matching Experiences on Sense of Embodiment
Matching avatar characteristics to a user can impact the sense of embodiment (SoE) in VR. However, few studies have examined how participant demographics may interact with these matching effects. We recruited a diverse and racially balanced sample of 78 participants to investigate the differences among participant groups when embodying both demographically matched and unmatched avatars. We found that participant ethnicity emerged as a significant factor, with Asian and Black participants reporting lower total SoE compared to Hispanic participants. Furthermore, we found that user ethnicity significantly influences ownership (a subscale of SoE), with Asian and Black participants exhibiting stronger effects of matched avatar ethnicity compared to White participants. Additionally, Hispanic participants showed no significant differences, suggesting complex dynamics in ethnic-racial identity. Our results also reveal significant main effects of matched avatar ethnicity and gender on SoE, indicating the importance of considering these factors in VR experiences. These findings contribute valuable insights into understanding the complex dynamics shaping VR experiences across different demographic groups.
Cross-Domain Gender Identification Using VR Tracking Data
Much recent work has investigated the personal identifiability of extended reality (XR) users. Many of these prior studies are task-specific and involve identifying users completing a specific XR task. On the other hand, some studies have been domain-specific and focus on identifying users completing different XR tasks from the same domain, such as watching 360° videos or assembling structures. In this paper, we present one of the few studies investigating cross-domain identification (i.e., identifying users completing XR tasks from different domains). To facilitate our investigation, we used open-source datasets from two different virtual reality (VR) studies—one from an assembly domain and one from a gaming domain—to investigate the feasibility of cross-domain gender identification, as personal identification is not possible between these datasets. The results of our machine learning experiments clearly demonstrate that cross-domain gender identification is more difficult than domain-specific gender identification. Furthermore, our results indicate that head position is important for gender identification and demonstrate that the k-nearest neighbors (kNN) algorithm is not suitable for cross-domain gender identification, which future researchers should be aware of.
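For readers unfamiliar with the protocol, the following Python sketch illustrates the general shape of a cross-domain evaluation: a classifier is fit on per-user features from one domain and tested on the other. The synthetic data, feature dimensionality, and choice of classifier here are hypothetical stand-ins, not the paper's actual pipeline, so the printed number is meaningless; only the protocol shape matters.

    # Minimal sketch of a cross-domain evaluation protocol, assuming each VR
    # dataset has been reduced to per-user feature vectors (e.g., head-position
    # statistics) with binary gender labels.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)  # random placeholder data, not real tracking
    X_assembly, y_assembly = rng.normal(size=(200, 12)), rng.integers(0, 2, 200)
    X_gaming, y_gaming = rng.normal(size=(200, 12)), rng.integers(0, 2, 200)

    # Cross-domain: fit on the assembly domain, evaluate on the gaming domain.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_assembly, y_assembly)
    print("cross-domain accuracy:",
          accuracy_score(y_gaming, clf.predict(X_gaming)))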
Addressing Human Factors Related to Artificial Intelligence Integrated Visual Cueing (iXR Workshop Paper)
A variety of assistive extended reality (XR) visual cueing techniques have been explored over the years. Many of these tools provide significant benefits to tasks such as visual search. However, when the cueing system is erroneous, performance may instead suffer. Factors such as automation bias, where an individual trusts the cueing system despite errors in the cueing, and cognitive overload, where individuals are presented with too much information by the system, may affect task efficacy (e.g., completion time, accuracy). In some cases, such as with automation bias, these hindrances may be the product of artificial intelligence (AI) integration. Despite this, there may be benefits to using adaptive AI-based cueing systems for XR tasks. However, aspects such as the flow of information, automation accuracy, communication of confidence, and the refusal of output must be considered to build effective adaptive AI-based cueing systems.
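To make that last point concrete, here is a minimal Python sketch of a confidence-gated cueing policy that communicates confidence and can refuse to render a cue at all, one possible mitigation for automation bias. The thresholds, function name, and rendering decision are illustrative assumptions, not a system described in the paper.

    # Hedged sketch: withhold (refuse) a cue when the AI's confidence is low,
    # and visually communicate confidence otherwise. Thresholds are illustrative.
    def decide_cue(detection_confidence, refusal_threshold=0.4, strong_threshold=0.8):
        """Return a cue rendering decision for one AI detection, or None."""
        if detection_confidence < refusal_threshold:
            return None  # refuse output: avoid anchoring users on weak guesses
        style = "solid" if detection_confidence >= strong_threshold else "dashed"
        return {"style": style, "label": f"{detection_confidence:.0%} confident"}

    for c in (0.25, 0.6, 0.92):
        print(c, decide_cue(c))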