In this thesis, we implement a new human-robot interface system for teleoperation. Robots are often required to function in environments that would be extremely dangerous or expensive for direct human labour; however, computer control and intelligence are not yet sufficiently developed for robots to perform these advanced technical tasks on their own initiative, so a human operator always remains in the loop. Ideally, the operator would input body motions (from the legs, hands, arms, and head) that the robot would duplicate, and would receive from the remote sensors a 3-D visual feed of a quality and form comparable with that normally produced by the eyes.
We implement an interface system between human and robot controlled by natural human motion: the operator's head movements change the viewpoint, and walking pace sets the robot's speed. This allows the operator to work more naturally than with a joystick or keyboard, and by using a head-mounted display the operator experiences a stronger sense of reality from the 3-D visuals than a conventional monitor can provide.
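The mapping from natural motion to robot commands can be sketched as two simple functions: head orientation drives the camera pan/tilt, and step cadence drives forward speed. The following is a minimal illustration only; the function names, limits, and the linear cadence-to-speed mapping are assumptions, not the actual implementation described in this thesis.

```python
# Hypothetical limits; real values depend on the camera mount and robot base.
PAN_LIMIT_DEG = 90.0    # camera pan range, +/- degrees
TILT_LIMIT_DEG = 45.0   # camera tilt range, +/- degrees
MAX_SPEED_MPS = 1.0     # robot's maximum forward speed, m/s
MAX_CADENCE_HZ = 2.0    # step frequency treated as "full speed" walking

def clamp(value, low, high):
    """Keep a command within the actuator's physical range."""
    return max(low, min(high, value))

def head_to_camera(yaw_deg, pitch_deg):
    """Map the operator's head orientation to camera pan/tilt commands."""
    pan = clamp(yaw_deg, -PAN_LIMIT_DEG, PAN_LIMIT_DEG)
    tilt = clamp(pitch_deg, -TILT_LIMIT_DEG, TILT_LIMIT_DEG)
    return pan, tilt

def pace_to_speed(cadence_hz):
    """Map the operator's step frequency to a forward velocity command."""
    ratio = clamp(cadence_hz / MAX_CADENCE_HZ, 0.0, 1.0)
    return ratio * MAX_SPEED_MPS

pan, tilt = head_to_camera(30.0, -60.0)   # looking right and steeply down
speed = pace_to_speed(1.0)                # half the reference walking cadence
```

Clamping matters in such an interface: the head and legs can move beyond what the camera gimbal and drive motors can follow, so every command is saturated to the actuator's range before being sent.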
This work considers the development of the input, control, and visual feedback systems for an intelligent mobile robot equipped with twin cameras, to be used in teleoperation applications. This multi-purpose human-robot interface provides the operator with an enhanced degree of true control of, and 'feel' for, the task.