Although the use of a single sensor is an inexpensive way for a mobile robot to perceive its environment, multi-sensor systems have been studied in recent years to obtain richer information about the environment. A combination of different types of sensors has the advantage of complementarity. Cooperation among multiple sensors is also more precise and more robust, so a mobile robot can achieve improved performance with higher-level information about the environment. Owing to these advantages, various combinations of sensors have been studied: an ultrasonic sensor array and a CCD camera; a laser range finder and a stereo camera system; a trinocular vision system; a laser structured light system and an ultrasonic sensor array; and so on.
In this thesis, a map-based localization procedure for a mobile robot is investigated, in which a camera and a laser structured light system are used to acquire environmental data that are fused for localization. From images acquired by the camera, vertical lines, which are meaningful features in indoor environments, are extracted, and geometric landmarks composed of line segments are obtained from the laser structured light sensor. Although each sensor alone can be used to localize the mobile robot, the robot occasionally gets lost. For successful and reliable localization in general cases, the data obtained by the two sensors are fused. The proposed sensor fusion algorithms are based on the weighted average method. In particular, this work concerns robustness to illumination. Illumination conditions affect environment recognition with a camera: under poor illumination, it is difficult to extract appropriate landmarks for localization from a camera image, so the extracted landmarks have poor reliability and, in the worst case, can cause the mobile robot to get lost. To overcome such situations, an illumination-robust sensor system and operation algorithms are proposed in this research. First, the error characteristics of each sensor are analyzed, and the sensor reliability is modeled. Second, the localization method using each sensor is described, and fundamental experiments are performed. Third, the fusion algorithms, based on the weighted average method and the sensor reliability model, are proposed. Finally, to evaluate the proposed algorithms, experiments are performed in indoor navigation environments under various illumination conditions. The experimental results are presented and discussed to show the advantage of multi-sensor fusion.
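The reliability-weighted averaging described above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: it assumes each sensor reports a planar pose estimate (x, y, theta) together with a scalar reliability weight (e.g. the camera's weight decreasing under poor illumination); the function name and interface are hypothetical.

```python
import math

def fuse_estimates(pose_cam, w_cam, pose_laser, w_laser):
    """Fuse two pose estimates (x, y, theta) by a reliability-weighted average.

    The weights are normalized to sum to one. The heading angle is fused
    through its sine and cosine components so that wrap-around near pi
    is handled correctly (a circular weighted mean).
    """
    total = w_cam + w_laser
    a, b = w_cam / total, w_laser / total
    x = a * pose_cam[0] + b * pose_laser[0]
    y = a * pose_cam[1] + b * pose_laser[1]
    # Circular weighted mean for the heading angle.
    s = a * math.sin(pose_cam[2]) + b * math.sin(pose_laser[2])
    c = a * math.cos(pose_cam[2]) + b * math.cos(pose_laser[2])
    theta = math.atan2(s, c)
    return (x, y, theta)

# Example: under poor illumination the camera estimate is down-weighted,
# so the fused pose stays close to the laser estimate.
cam = (1.00, 2.00, 0.10)    # (x [m], y [m], theta [rad]) from the camera
laser = (1.10, 1.90, 0.14)  # from the laser structured light sensor
fused = fuse_estimates(cam, 0.2, laser, 0.8)
```

How the weights themselves are derived from the sensor reliability model (the error analysis mentioned above) is the substance of the later chapters; here they are simply given constants.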