The main tasks of indoor mobile robot navigation are to determine the robot's position in a global map, to identify the geometric relations between the current location and the target location, and to navigate accordingly. Appropriate modeling of the robot's environment, the acquisition of data through a variety of sensors, and the inference of geometric relations are essential to accomplishing these tasks successfully. There have been a number of studies on robot navigation over the past decade. One typical scheme among them is model-based navigation, which mainly employs a stereo vision system or a range finder. However, this scheme requires a great deal of memory as well as processing time, and it has proved to be complex and time-consuming even in a well-known environment. Since it is generally difficult to extract sufficient three-dimensional information from sensors, the model-based method is not appropriate for indoor mobile robot navigation. On the other hand, motion analysis, associative homing, and landmark-based navigation are more practical and efficient, since they need not build a three-dimensional model of the environment. However, these schemes are applicable only to spatially limited environments. To overcome the above limitations, Hong et al. proposed an image-based local homing algorithm that navigates between neighboring target locations. Although this approach requires less computation time and memory than the others, it has the following disadvantages: first, a unique solution for the spatial relation cannot be obtained because the algorithm uses only one-dimensional intensity data; second, the robot may have to move many times to reach the target location because the obtained relation is imprecise.
Recently, various types of sensors have been used to obtain more accurate spatial relations and to improve the intelligence of robots. In particular, an omnidirectional sensor is well suited to navigation because a mobile robot can move in any direction. Ishiguro et al. and Bang et al. obtained sensor data using a scanner, but the data could not be acquired rapidly because of the scanner's rotation time. Hong et al. and Yagi et al. developed rapid omnidirectional-image sensing systems using a spherical mirror and a conical mirror, respectively. However, these systems could not take advantage of different types of data, since they dealt only with a vision sensor. Since every sensor has its own merits, one sensor can compensate for the deficiencies of another. A CCD sensor can easily acquire high-resolution image data, but it requires considerable computation time to extract range information; an ultrasonic sensor, on the other hand, obtains range information directly but with very low resolution. We therefore need an effective sensor fusion method that exploits the advantageous characteristics of each sensor.
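The complementary character of the two sensors can be illustrated with a minimal sketch of inverse-variance weighted fusion, a standard way to combine two independent noisy estimates of the same quantity. This is only an illustration of the general principle, not the fusion method developed in this paper; the function name, the Gaussian noise assumption, and the numeric variances are all hypothetical.

```python
def fuse_range(vision_range, vision_var, sonar_range, sonar_var):
    """Fuse two independent range estimates by inverse-variance weighting.

    The estimate with the lower variance (higher confidence) dominates the
    fused result, and the fused variance is never larger than either input:
    an imprecise vision-derived range is pulled toward a precise sonar one.
    """
    w_vision = 1.0 / vision_var   # weight of the vision estimate
    w_sonar = 1.0 / sonar_var     # weight of the ultrasonic estimate
    fused = (w_vision * vision_range + w_sonar * sonar_range) / (w_vision + w_sonar)
    fused_var = 1.0 / (w_vision + w_sonar)
    return fused, fused_var

# Example: a coarse vision-derived range and a precise ultrasonic reading.
r, v = fuse_range(vision_range=2.4, vision_var=0.25,
                  sonar_range=2.0, sonar_var=0.01)
```

In this sketch the fused range lands close to the low-variance ultrasonic reading while still being nudged by the vision estimate, which is the essential behavior any fusion scheme for such complementary sensors should exhibit.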
In this paper, we propose a novel local homing algorithm for indoor mobile robot navigation. The algorithm divides the whole navigation task into simple local tasks in order to reduce the computational burden and the required memory. The main task of local homing is to find the geometric relations between the target location and the robot's current location. We develop a new environment model based on omnidirectional sensor data obtained from the Omnidirectional Range and Intensity Sensing System (ORISS), which consists of a set of ultrasonic sensors and a vision sensor. To enhance the reliability of the sensor information, we fuse the sensor data using the structural characteristics of the indoor environment and the sensor model. To verify the proposed algorithm, experiments with a mobile robot are carried out in a corridor.