The use of hand gestures offers an alternative to cumbersome interface devices for human-computer interaction (HCI), and has become more attractive with the emergence of virtual reality. In particular, vision-based approaches to interpreting hand gestures provide greater ease and naturalness than glove-based approaches.
In this thesis, we propose a vision-based 3D mouse system that interprets hand gestures from sequential video input and produces incremental changes in translation and orientation in 3D space.
Since the user poses hand gestures over the keyboard to specify 3D positions and returns to typing with little hand movement, switching between typing and mouse pointing is fast and easy.
The system runs in real time on a PC without any special hardware and, for practical reasons, uses monocular images from a single video camera. To demonstrate the system's feasibility, we develop a 3D virtual-world navigation system controlled by hand gestures. The system can also be applied to the manipulation of computer-controlled 3D objects such as robot arms.