Idea 1:
Nowadays, many HCI applications rely on ubiquitous computing devices bound to individual users. However, this approach can be limiting and often requires resetting the device whenever the user changes. With modern computer vision algorithms, we can instead use simple cameras to detect and track people in an area, and the resulting identities can be used to retrieve stored personal preferences and parameters. This approach is non-intrusive, relies on passive sensing, and enables more possibilities with fewer constraints.
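As a rough illustration of this idea, the sketch below runs OpenCV's built-in HOG pedestrian detector on a live camera feed and looks up a preference profile for each detected person. The `preferences` dictionary and the detection-index-as-identity shortcut are hypothetical placeholders; a real system would use a proper re-identification or tracking step to bind detections to stored profiles.

```python
import cv2

# Hypothetical store mapping a person identity to saved preferences.
preferences = {"person_0": {"volume": 0.4, "language": "en"}}

# OpenCV's built-in HOG + linear SVM pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # any fixed room camera works
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people in the frame; each box is (x, y, w, h).
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for i, (x, y, w, h) in enumerate(boxes):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # Placeholder: a real system would re-identify the person here
        # instead of using the detection index as a stand-in identity.
        prefs = preferences.get(f"person_{i}", {})
        cv2.putText(frame, str(prefs), (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("room", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```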
Idea 2:
Computer vision algorithms can also extract human pose from a simple camera setup, whereas traditional approaches to pose estimation often require external sensors such as Kinect or motion capture systems, which limit the capture area and are sensitive to occlusion even at low person density. Combined with modern AR/VR devices, or deployed in a smart environment, this lets us estimate people's poses or recognize their actions. Compared with hand-held external sensors, this provides a more natural input channel for HCI-related computation.
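As one example of how far a single RGB camera can go, the sketch below uses the MediaPipe Pose solution to recover body landmarks from a webcam and derives a trivial "hand raised" interaction cue. The downstream action-recognition logic is only a stand-in comment, since that part depends on the target application.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)  # a single plain RGB camera, no Kinect or mocap rig
with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 normalized body landmarks; here we only compare the
            # left wrist and nose as a toy interaction cue.
            lm = results.pose_landmarks.landmark
            left_wrist = lm[mp_pose.PoseLandmark.LEFT_WRIST]
            nose = lm[mp_pose.PoseLandmark.NOSE]
            if left_wrist.y < nose.y:  # image y grows downward
                print("left hand raised")
cap.release()
```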
Intro: I am a second-year PhD student. My research focuses on computer vision for indoor human identification and its applications.