While geometric models are important in every Augmented Reality (AR) application, acquiring such model information is not easy, especially at long distances. We propose different 3D user interfaces to help users mark a point’s position in a wide area both precisely and accurately.
Building on our work on Mixed Reality simulation and AR simulation to enable informed design decisions, we present a novel AR project for a 3D marking task: the 3D position of a real-world object is indicated by placing a virtual marker in the AR scene. To improve marking precision and accuracy at large distances, we explore several techniques, namely Geometric, Perceptual, VectorCloud, and Image Refinement, that support the marking task when the geometry of the real-world environment is unknown.
Geometric
The user is asked to look directly at the marking target and confirm the current head orientation. Then, after moving a sufficient distance, the user gazes at the same target and reports the new gaze direction. The system then obtains the target’s 3D position by triangulating the two rays. The accuracy of the estimate depends heavily on the baseline, i.e., the distance between the two observation locations.
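As an illustration, the sketch below computes the triangulated point as the midpoint of the shortest segment between the two gaze rays. This is a minimal Python/NumPy formulation of standard two-ray triangulation; the variable names and the midpoint choice are ours, not necessarily the system’s exact implementation.

```python
import numpy as np

def triangulate_two_rays(o1, d1, o2, d2, eps=1e-9):
    """Midpoint of the shortest segment between two gaze rays.

    o1, o2: observation positions (3,)
    d1, d2: gaze directions (3,), need not be unit length
    Returns None when the rays are near-parallel (baseline too small).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    b = d1 @ d2
    d = d1 @ w
    e = d2 @ w
    denom = 1.0 - b * b          # rays are unit length, so a = c = 1
    if abs(denom) < eps:         # near-parallel rays: unreliable estimate
        return None
    t1 = (b * e - d) / denom
    t2 = (e - b * d) / denom
    p1 = o1 + t1 * d1            # closest point on ray 1
    p2 = o2 + t2 * d2            # closest point on ray 2
    return 0.5 * (p1 + p2)
```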
Perceptual
The user is asked to look directly at the marking target and confirm the current head orientation. A virtual marker then appears immediately in the AR scene, and the user moves it along that direction to the perceived distance of the target. The accuracy depends on the user’s distance perception and can be improved by providing depth cues such as relative size, linear perspective, and atmospheric attenuation.
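As a rough sketch, if the marker keeps a fixed physical size, its angular size shrinks with distance and acts as a relative-size depth cue while the user pushes it along the gaze ray. The function below is our own minimal illustration, not the project’s rendering code.

```python
import numpy as np

def perceptual_marker(head_pos, gaze_dir, user_distance, physical_size=0.25):
    """Place a marker `user_distance` metres along the gaze ray.

    The marker's physical size is held constant, so its angular size
    (returned in radians) decreases with distance, giving the user a
    relative-size cue for judging how far the marker has been moved.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    position = head_pos + user_distance * gaze_dir
    angular_size = 2.0 * np.arctan2(physical_size / 2.0, user_distance)
    return position, angular_size
```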
VectorCloud
This technique is a variation of the Geometric technique that uses progressive refinement. As its name indicates, instead of gathering only two samples, it lets the user submit many direction samples. Since any single sample may be inaccurate due to factors such as head tremor, the estimate is refined by combining many samples.
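One way to combine the collected samples is a least-squares intersection of all the rays, which damps the per-sample error from head tremor. The sketch below is our own Python/NumPy formulation; the project’s exact aggregation may differ.

```python
import numpy as np

def least_squares_ray_intersection(origins, directions):
    """Point minimising the summed squared distance to a set of rays.

    origins, directions: sequences of 3D vectors (one pair per sample).
    Near-parallel samples make the system ill-conditioned, so samples
    should be taken from sufficiently different positions.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the ray's normal plane
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)
```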
Image Refinement
This technique aims to achieve high marking accuracy by decoupling direction sampling from head tremor. When the user starts marking, the system records the user’s current field of view along with the camera pose and presents it as a static image. The user then picks the exact point on the image that defines the desired direction. Given the user-selected image point and the recorded camera pose, the system computes a precise ray in 3D space. Because the frame is frozen at capture time, head tremor is avoided entirely.
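The pixel-to-ray step can be sketched with a standard pinhole back-projection. The snippet below assumes a 3x3 intrinsics matrix and a camera-to-world pose for the recorded frame; the names and conventions are ours, used only for illustration.

```python
import numpy as np

def pixel_to_world_ray(u, v, K, R_wc, t_wc):
    """Back-project an image point (u, v) into a world-space ray.

    K    : 3x3 pinhole intrinsics of the recorded frame
    R_wc : 3x3 camera-to-world rotation at capture time
    t_wc : camera position in world coordinates at capture time
    Since the frame and its pose are frozen, later head motion cannot
    perturb the resulting ray.
    """
    pixel_h = np.array([u, v, 1.0])                # homogeneous pixel coordinates
    dir_cam = np.linalg.inv(K) @ pixel_h           # direction in camera coordinates
    dir_world = R_wc @ dir_cam                     # rotate into world coordinates
    dir_world /= np.linalg.norm(dir_world)
    return np.asarray(t_wc, dtype=float), dir_world  # ray origin and unit direction
```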
Collaborative Communication – Augmented Reality Rays
This technique supports model-free marking with multi-user collaboration. When two distant users try to mark a common point simultaneously, communication is required to reach a consensus on the target. A bare virtual ray is less effective for indicating a direction or selecting an object when geometry information about the environment is unavailable. We therefore explore different visual enhancements that help separated users communicate and exchange spatial information.
Drone Assisted Marking
This technique deploys a drone with an onboard camera to take direction samples from a much larger space. Combined with the Image Refinement marking technique, the drone’s mobility helps tackle situations where the target location is either physically unreachable or not visible from the ground.
Conferences
Enhanced Geometric Techniques for Point Marking in Model-Free Augmented Reality. In: 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), IEEE, 2019.