{"id":2608,"date":"2021-09-02T09:54:42","date_gmt":"2021-09-02T14:54:42","guid":{"rendered":"https:\/\/wordpress.cs.vt.edu\/3digroup\/?p=2608"},"modified":"2024-04-01T10:12:32","modified_gmt":"2024-04-01T15:12:32","slug":"context-aware-ar","status":"publish","type":"post","link":"https:\/\/wordpress.cs.vt.edu\/3digroup\/2021\/09\/02\/context-aware-ar\/","title":{"rendered":"Socially Adaptive Context-Aware AR"},"content":{"rendered":"\n<p>It is widely believed that AR glasses will be the next generation of personal information access devices. Lightweight AR glasses have the potential to give users hands-free access to any information, anytime, anywhere, without the need for physical displays. An intelligent AR interface will need to handle challenges such as avoiding the <a rel=\"noreferrer noopener\" href=\"https:\/\/wordpress.cs.vt.edu\/3digroup\/2020\/02\/12\/glanceable-ar-occlusion-management\/\" data-type=\"URL\" data-id=\"https:\/\/wordpress.cs.vt.edu\/3digroup\/2020\/02\/12\/glanceable-ar-occlusion-management\/\" target=\"_blank\">occlusion of important real-world objects<\/a>, using real-world surfaces when appropriate, and determining how and when content should move along with the user. These challenges highlight the importance of an <a rel=\"noreferrer noopener\" href=\"https:\/\/wordpress.cs.vt.edu\/3digroup\/2020\/02\/12\/context-aware-adaptive-ar-interfaces\/\" data-type=\"URL\" data-id=\"https:\/\/wordpress.cs.vt.edu\/3digroup\/2020\/02\/12\/context-aware-adaptive-ar-interfaces\/\" target=\"_blank\">Adaptive AR Interface<\/a>. However, such adaptive AR interfaces require accurate knowledge of the user&#8217;s context. A real-world object can be important to the user in one context and completely unimportant in another. For example, if you are in a conversation, the other person&#8217;s face and the content of their speech are important. 
However, if you are reading a book in a caf\u00e9, the face and speech of a person sitting at another table are usually of little importance to you.<br>In this project, we designed an interface that detects when the user is in a conversation with someone else and, based on the content of that conversation, makes relevant virtual content available to support the conversation. The interface also avoids occluding the other person&#8217;s face to preserve the user&#8217;s awareness of the surrounding social cues.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"299\" src=\"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/11\/teaser3-1024x299.png\" alt=\"\" class=\"wp-image-2693\" style=\"width:768px;height:224px\" srcset=\"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/11\/teaser3-1024x299.png 1024w, https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/11\/teaser3-300x88.png 300w, https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/11\/teaser3-768x224.png 768w, https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/11\/teaser3-1536x448.png 1536w, https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/11\/teaser3.png 1632w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large is-resized is-style-default\"><img loading=\"lazy\" decoding=\"async\" width=\"939\" height=\"350\" src=\"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/11\/app1.png\" alt=\"\" class=\"wp-image-2696\" style=\"width:704px;height:263px\" srcset=\"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/11\/app1.png 939w, 
https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/11\/app1-300x112.png 300w, https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/11\/app1-768x286.png 768w\" sizes=\"auto, (max-width: 939px) 100vw, 939px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Demo: Socially Adaptive Context-Aware Glanceable AR\" width=\"1778\" height=\"1000\" src=\"https:\/\/www.youtube.com\/embed\/eY74XfPjVXQ?modestbranding=1\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Validating the benefits of Glanceable and Context-Aware AR for Everyday Information Access\" width=\"1333\" height=\"1000\" src=\"https:\/\/www.youtube.com\/embed\/s0tcx_aY5yg?modestbranding=1\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n<div class=\"teachpress_pub_list\"><form name=\"tppublistform\" method=\"get\"><a name=\"tppubs\" id=\"tppubs\"><\/a><\/form><div class=\"teachpress_publication_list\"><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_info\"><p class=\"tp_pub_author\">Shakiba Davari; Feiyu Lu; Doug A Bowman<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('194','tp_links')\" style=\"cursor:pointer;\">Validating the Benefits of Glanceable 
and Context-Aware Augmented Reality for Everyday Information Access Tasks<\/a> <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_publisher\">2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_194\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('194','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_194\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('194','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_194\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('194','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_194\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Davari2022validate,<br \/>\r\ntitle = {Validating the Benefits of Glanceable and Context-Aware Augmented Reality for Everyday Information Access Tasks},<br \/>\r\nauthor = {Shakiba Davari and Feiyu Lu and Doug A Bowman},<br \/>\r\nurl = {https:\/\/wordpress.cs.vt.edu\/3digroup\/validatepaper\/},<br \/>\r\ndoi = {10.1109\/VR51125.2022.00063},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-03-16},<br \/>\r\npublisher = {2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},<br \/>\r\nabstract = {Glanceable Augmented Reality interfaces have the potential to provide fast and efficient information access for the user. However, where to place the virtual content and how to access them depend on the user context. We designed a Context-Aware AR interface that can intelligently adapt for two different contexts: solo and social. 
We evaluated information access using Context-Aware AR compared to current mobile phones and non-adaptive Glanceable AR interfaces. We found that in a solo scenario, compared to a mobile phone, the Context-Aware AR interface was preferred, easier, and significantly faster; it improved the user experience; and it allowed the user to better focus on their primary task. In the social scenario, we discovered that the mobile phone was slower, more intrusive, and perceived as the most difficult. Meanwhile, Context-Aware AR was faster for responding to information needs triggered by the conversation; it was preferred and perceived as the easiest for resuming conversation after information access; and it improved the user\u2019s awareness of the other person's facial expressions. },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('194','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_194\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Glanceable Augmented Reality interfaces have the potential to provide fast and efficient information access for the user. However, where to place the virtual content and how to access them depend on the user context. We designed a Context-Aware AR interface that can intelligently adapt for two different contexts: solo and social. We evaluated information access using Context-Aware AR compared to current mobile phones and non-adaptive Glanceable AR interfaces. We found that in a solo scenario, compared to a mobile phone, the Context-Aware AR interface was preferred, easier, and significantly faster; it improved the user experience; and it allowed the user to better focus on their primary task. In the social scenario, we discovered that the mobile phone was slower, more intrusive, and perceived as the most difficult. 
Meanwhile, Context-Aware AR was faster for responding to information needs triggered by the conversation; it was preferred and perceived as the easiest for resuming conversation after information access; and it improved the user\u2019s awareness of the other person's facial expressions. <\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('194','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_194\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/wordpress.cs.vt.edu\/3digroup\/validatepaper\/\" title=\"https:\/\/wordpress.cs.vt.edu\/3digroup\/validatepaper\/\" target=\"_blank\">https:\/\/wordpress.cs.vt.edu\/3digroup\/validatepaper\/<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/VR51125.2022.00063\" title=\"Follow DOI:10.1109\/VR51125.2022.00063\" target=\"_blank\">doi:10.1109\/VR51125.2022.00063<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('194','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><\/div><\/div>\n","protected":false},"excerpt":{"rendered":"<p>It is widely believed that AR glasses will be the next generation of personal information access devices. Lightweight AR glasses have the potential to give users hands-free access to any information, anytime, anywhere without the need for any physical displays. 
An intelligent AR interface will need to handle challenges such as avoiding the occlusion of <a href=\"https:\/\/wordpress.cs.vt.edu\/3digroup\/2021\/09\/02\/context-aware-ar\/\" class=\"more-link\">&#8230;<\/a><\/p>\n","protected":false},"author":254,"featured_media":2691,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11],"tags":[20,24,27,469,467,472,474,473,44,471,51,475,470],"ppma_author":[400,394,391],"class_list":["post-2608","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-projects","tag-adaptive-ar","tag-ar","tag-augmented-reality","tag-context-detection","tag-context-intelligent","tag-context-intelligent-ar","tag-conversation-detection","tag-glanceable-ar","tag-glanceable-content","tag-intelligent-interface","tag-interface-design","tag-social-ar","tag-user-context"],"jetpack_featured_media_url":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/09\/teaser2.png","authors":[{"term_id":400,"user_id":254,"is_guest":0,"slug":"sdavari","display_name":"Shakiba Davari","avatar_url":{"url":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2020\/11\/sdavari.jpg","url2x":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2020\/11\/sdavari.jpg"},"0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""},{"term_id":394,"user_id":269,"is_guest":0,"slug":"feiyulu","display_name":"Feiyu Lu","avatar_url":{"url":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2020\/10\/Feiyu.jpg","url2x":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2020\/10\/Feiyu.jpg"},"0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""},{"term_id":391,"user_id":331,"is_guest":0,"slug":"dbowman","display_name":"Doug 
Bowman","avatar_url":{"url":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/01\/professional_photo2_2019-cropped-square-smaller-scaled.jpg","url2x":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-content\/uploads\/sites\/141\/2021\/01\/professional_photo2_2019-cropped-square-smaller-scaled.jpg"},"0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-json\/wp\/v2\/posts\/2608","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-json\/wp\/v2\/users\/254"}],"replies":[{"embeddable":true,"href":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-json\/wp\/v2\/comments?post=2608"}],"version-history":[{"count":14,"href":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-json\/wp\/v2\/posts\/2608\/revisions"}],"predecessor-version":[{"id":3282,"href":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-json\/wp\/v2\/posts\/2608\/revisions\/3282"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-json\/wp\/v2\/media\/2691"}],"wp:attachment":[{"href":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-json\/wp\/v2\/media?parent=2608"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-json\/wp\/v2\/categories?post=2608"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-json\/wp\/v2\/tags?post=2608"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/wordpress.cs.vt.edu\/3digroup\/wp-json\/wp\/v2\/ppma_author?post=2608"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}