As robots become more prevalent in human daily life, situations requiring interaction between humans and robots will occur more frequently. Therefore, human-robot interaction (HRI) is becoming increasingly important. Although robotics researchers have made many technical advances in their field, intuitive and easy ways for ordinary users to interact with robots are still lacking.
In the proposed approach, each semantic symbol represents knowledge about either the environment or an action that a robot can perform, and users' intentions are expressed by symbolized multimodal information. An ideal robot would have the ability to understand a great variety of user expressions. Although it is difficult to achieve perfect understanding of every kind of user input across all work domains, a human user expects a robot to understand his or her exact intention. A robot should also understand a user's command whether or not it is grammatically correct.
To enable a robot to have such understanding, a semantic symbol-based human-robot interaction method is proposed. By representing multimodal information as semantic symbols, the user's intentions can be converted into a form the robot can understand; these semantic symbols carry word-level meaning. To interpret users' commands, a probabilistic approach is used, which is well suited to free-style user expressions and insufficient input information. A first-order Markov model is constructed as the probabilistic model, and a questionnaire survey is conducted to obtain the state transition probabilities for these Markov models. Evaluation of these models yielded acceptable results for various types of user expressions.
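The interpretation step above can be sketched as follows. This is a minimal illustration of a first-order Markov model over semantic symbols; all symbol names and transition probabilities below are hypothetical placeholders, not the values obtained from the thesis questionnaire survey.

```python
# State transition probabilities P(next symbol | current symbol).
# In the proposed method these would be estimated from a questionnaire
# survey; the numbers here are illustrative only.
TRANSITIONS = {
    "START": {"bring": 0.6, "go": 0.4},
    "bring": {"cup": 0.7, "book": 0.3},
    "go":    {"kitchen": 0.8, "door": 0.2},
}

def sequence_probability(symbols):
    """Probability of a symbol sequence under the first-order Markov model."""
    prob, prev = 1.0, "START"
    for sym in symbols:
        prob *= TRANSITIONS.get(prev, {}).get(sym, 0.0)
        prev = sym
    return prob

def most_likely_next(prefix):
    """Given possibly insufficient input, pick the most probable next symbol."""
    prev = prefix[-1] if prefix else "START"
    candidates = TRANSITIONS.get(prev, {})
    return max(candidates, key=candidates.get) if candidates else None
```

Because each transition depends only on the previous symbol, incomplete or out-of-order input can still be scored, which is what makes the model suitable for free-style expressions.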
In this thesis, the robot can identify which task is specified by the user, and can also estimate how many tasks are commanded by applying the probabilistic Markov model recursively.
Furthermore, to execute the specified task, the robot initiates an interactive dialogue to grasp the user's command more precisely; this interaction is needed to clarify the task and to supplement the user's command. Finally, a simulation is conducted to verify the proposed interpretation method. After interpreting the user's command, the input command is converted into a single-function (verb) and single-argument (object) form, which is well suited to a script-based simulation environment. A virtual 3D environment modeled on a real environment is constructed as a test environment for the virtual robot, so that the robot's execution of one or more tasks can be simulated in this environment.
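The verb-object conversion described above can be sketched as follows. The data layout and function names here are hypothetical, intended only to show how an interpreted multi-task command might be flattened into the single-function, single-argument form consumed by a script-based simulator.

```python
def to_verb_object_pairs(interpreted_tasks):
    """Split an interpreted command into one (verb, object) pair per task."""
    return [(task["verb"], task["object"]) for task in interpreted_tasks]

def to_script(pairs):
    """Render the pairs as simple simulator script calls, e.g. bring(cup)."""
    return "\n".join(f"{verb}({obj})" for verb, obj in pairs)

# Example: a command interpreted as two tasks becomes two script lines.
command = [
    {"verb": "bring", "object": "cup"},
    {"verb": "go", "object": "kitchen"},
]
script = to_script(to_verb_object_pairs(command))
```

Keeping exactly one verb and one object per line is what makes each script line directly executable as a single robot action in the simulated environment.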
As robots that were once used only for industrial purposes come to be widely used in everyday life in the form of service robots, human-robot interaction technologies that are easier to use are increasingly required. To operate a robot, a user enters commands in various forms; as a method for consistently processing speech and visual information and understanding it as robot tasks, an icon language is proposed. A module was developed that takes icon-language input and interprets it as robot tasks. In addition, a module that can understand commands for multiple tasks was built, and to verify these modules, the robot's task execution was validated using a 3D simulator.