In recent years, intelligent autonomous mobile robots have drawn tremendous interest, both as service robots and as industrial robots that replace human workers. To carry out given tasks successfully, robots must first be able to sense the 3D indoor spaces in which they live or work and to build a three-dimensional map of their navigation environments autonomously using their own sensor systems. The ultimate goal of this thesis is to endow mobile robots with the ability to sense 3D navigation spaces robustly and to build a 3D map autonomously without human intervention.
For this purpose, we first propose a novel 3D sensing system that allows intelligent mobile robots to sense their environments during autonomous navigation. The proposed sensor system, classified as an active trinocular vision system, is composed of a flexible multi-stripe laser projector and two cameras arranged in a triangular configuration. By modeling the laser projector as a virtual camera and using the trinocular epipolar constraints formed by the three cameras, correspondences between pairs of line features observed in the two real camera images are established. In particular, we propose a robust correspondence matching technique based on line grouping and probabilistic voting, where the probabilistic voting method consists of a 'voting phase' and a 'ballot counting phase'. Along with a detailed description of the sensor principle, a series of experimental tests is performed to show the simplicity, efficiency, and accuracy of the proposed sensor system for 3D environment sensing and recognition, and the sensor system is implemented on a test-bed mobile robot, LCAR III.
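To illustrate the flavor of the voting-based correspondence matching, the following minimal Python sketch casts ballots for line-segment pairs whose sampled points are consistent with the epipolar geometry between the two cameras, then counts the ballots to pick the winning match. All names, the fundamental matrix F12, the sampling density, and the pixel tolerance are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line in image 2 (a*x + b*y + c = 0) for an image-1 point x."""
    l = F @ np.array([x[0], x[1], 1.0])
    return l / np.linalg.norm(l[:2])          # normalize so |(a, b)| = 1

def point_line_distance(l, x):
    """Perpendicular distance of an image point x from a normalized line l."""
    return abs(l[0] * x[0] + l[1] * x[1] + l[2])

def sample_segment(seg, n=10):
    """Sample n points uniformly along a 2D segment ((x1, y1), (x2, y2))."""
    p0, p1 = np.asarray(seg[0], float), np.asarray(seg[1], float)
    return [p0 + t * (p1 - p0) for t in np.linspace(0.0, 1.0, n)]

def match_lines_by_voting(segments1, segments2, F12, tol=2.0):
    """Voting phase: each sampled point of an image-1 segment casts a ballot for
    every image-2 segment lying close to its epipolar line.
    Ballot-counting phase: the image-2 segment with the most ballots wins."""
    matches = {}
    for i, s1 in enumerate(segments1):
        ballots = np.zeros(len(segments2))
        for p in sample_segment(s1):
            l2 = epipolar_line(F12, p)
            for j, s2 in enumerate(segments2):
                # the segment supports the ballot if any of its sample points
                # falls within `tol` pixels of the epipolar line
                if any(point_line_distance(l2, q) < tol for q in sample_segment(s2)):
                    ballots[j] += 1
        if ballots.max() > 0:
            matches[i] = int(np.argmax(ballots))
    return matches
```

In the thesis, the projector acts as a third (virtual) camera, so the same consistency test can be applied to both real images against the projected stripe, which further disambiguates segments that happen to lie on the same epipolar line.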
Secondly, we deal with the problem of building a local map from three-dimensional sensing data for mobile robot navigation. In particular, the problem addressed is how to extract and model obstacles that exist in the real environment but are not represented on the map, so that the map can be updated with the modeled obstacle information. To achieve this, we propose a three-dimensional map-building method based on a self-organizing neural network technique called the 'growing neural gas network'. Using the obstacle data acquired through the 3D data acquisition process of the active trinocular vision system, the neural network is trained to generate a graphical structure that reflects the topology of the input space. For evaluation of the proposed method, a series of simulations and experiments is performed to build 3D maps of environments surrounding the robot, and the usefulness and robustness of the proposed method are investigated and discussed in detail.
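The following compact Python sketch shows the standard growing-neural-gas update loop (after Fritzke) applied to a point set such as the 3D obstacle data; all parameter values and the simplified edge handling are assumptions for illustration, not the thesis's tuned implementation.

```python
import numpy as np

def growing_neural_gas(data, max_nodes=100, n_iter=10000, eps_b=0.05, eps_n=0.006,
                       age_max=50, lam=100, alpha=0.5, beta=0.0005, seed=0):
    """Minimal growing-neural-gas learner: node positions and graph edges adapt
    so that the resulting graph reflects the topology of the input point set."""
    data = np.asarray(data, float)
    rng = np.random.default_rng(seed)
    nodes = [data[rng.integers(len(data))].copy() for _ in range(2)]   # node positions
    errors = [0.0, 0.0]
    edges = {}                                     # (i, j) with i < j -> age

    def key(i, j):
        return (min(i, j), max(i, j))

    for step in range(1, n_iter + 1):
        x = data[rng.integers(len(data))]
        d = [np.sum((x - w) ** 2) for w in nodes]
        s1, s2 = np.argsort(d)[:2]                 # winner and runner-up

        errors[s1] += d[s1]
        nodes[s1] += eps_b * (x - nodes[s1])       # move winner toward the input
        for (i, j) in list(edges):
            if s1 in (i, j):
                edges[(i, j)] += 1                 # age edges incident to the winner
                other = j if i == s1 else i
                nodes[other] += eps_n * (x - nodes[other])
        edges[key(s1, s2)] = 0                     # refresh (or create) winner edge

        # prune old edges; isolated nodes are simply kept in this sketch
        edges = {e: a for e, a in edges.items() if a <= age_max}

        if step % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(errors))             # node with the largest error
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda n: errors[n])
                nodes.append(0.5 * (nodes[q] + nodes[f]))   # insert between q and f
                errors[q] *= alpha
                errors[f] *= alpha
                errors.append(errors[q])
                r = len(nodes) - 1
                edges.pop(key(q, f), None)
                edges[key(q, r)] = 0
                edges[key(f, r)] = 0

        errors = [e * (1.0 - beta) for e in errors]
    return np.array(nodes), list(edges)
```

The returned node positions and edge list form the graphical structure referred to above: nodes summarize the sensed obstacle surface, and edges encode its topology for use in the map.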
Thirdly, to solve the autonomous map-building problem, we propose a next-view planning algorithm that efficiently guides the visual perception of the developed sensor, which has a limited sensing range and viewing angle. Unlike conventional view-planning methods for mobile robots, we integrate view planning for map exploration with view planning for self-localization, and solve the combined problem based on the concept of 'visual tendency'. Using various visual tendencies for autonomous map building, we generate a set of candidate next-view positions and orientations for the robot sensors. The generated candidates are evaluated against navigation goals defined according to the exploration purpose, and the best one is selected as the next view position using a fuzzy decision-making technique. Through a series of simulations and experiments, it is shown that a 2D navigation map of the environment is built autonomously and successfully with this algorithm, without any human intervention.
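As a rough illustration of the fuzzy decision-making step, the sketch below scores each candidate view by the fuzzy AND (minimum) of its membership grades in several navigation goals and picks the maximizer. The goal names, membership values, and the min aggregation are illustrative assumptions; the thesis's actual criteria and aggregation may differ.

```python
import numpy as np

def select_next_view(candidates):
    """Each candidate carries membership grades in [0, 1] for the navigation goals
    (hypothetically: expected exploration gain, expected localization quality,
    and proximity/travel cost).  Aggregate with the fuzzy AND (minimum) and
    return the index of the best candidate together with all aggregated scores."""
    scores = [min(c["exploration"], c["localization"], c["proximity"])
              for c in candidates]
    return int(np.argmax(scores)), scores

# Hypothetical candidate view poses (x, y, heading) scored against the three goals.
candidates = [
    {"pose": (1.0, 2.0, 0.00), "exploration": 0.9, "localization": 0.4, "proximity": 0.8},
    {"pose": (0.5, 1.5, 1.57), "exploration": 0.7, "localization": 0.7, "proximity": 0.9},
    {"pose": (2.0, 0.5, 3.14), "exploration": 0.6, "localization": 0.9, "proximity": 0.5},
]
best, scores = select_next_view(candidates)
print("selected next view pose:", candidates[best]["pose"])   # -> (0.5, 1.5, 1.57)
```

The min aggregation favors candidates that are acceptable for every goal at once, which matches the intent of balancing map exploration against self-localization rather than optimizing either in isolation.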
Lastly, we integrate the developed algorithms to acquire a 3D map for mobile robot navigation and environment recognition. Based on the 2D navigation map built during navigation, a next-view position that simultaneously satisfies the map-exploration and self-localization purposes is determined, and the mobile robot senses the local 3D environment at the planned position. The local 3D map built at each sensing step is iteratively combined with the current global map based on the localization information of the mobile robot. Experiments performed in indoor environments with cluttered objects show the usefulness and effectiveness of the 3D map constructed in real navigation situations.
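A minimal sketch of the map-combination step, assuming the local map is a point cloud in the robot frame and the localization estimate is a planar pose (x, y, heading): each local map is transformed into the global frame with the pose's homogeneous transform and appended to the global map. Function names and the simple append-style merging are assumptions for illustration; the thesis may fuse overlapping regions rather than concatenating points.

```python
import numpy as np

def pose_to_transform(x, y, theta):
    """Homogeneous transform of a planar robot pose (x, y, heading theta),
    applied to 3D points; height (z) is unaffected by a 2D pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, x],
                     [s,  c, 0.0, y],
                     [0., 0., 1.0, 0.0],
                     [0., 0., 0.0, 1.0]])

def merge_local_map(global_points, local_points, robot_pose):
    """Transform a local 3D map (N x 3 points in the robot frame) into the global
    frame using the localization estimate, then append it to the global map."""
    T = pose_to_transform(*robot_pose)
    homo = np.hstack([local_points, np.ones((len(local_points), 1))])
    world = (T @ homo.T).T[:, :3]
    return np.vstack([global_points, world]) if len(global_points) else world

# Hypothetical usage at one sensing step.
global_map = np.empty((0, 3))
local_map = np.array([[1.0, 0.0, 0.5], [1.2, 0.3, 0.5]])   # points in the robot frame
global_map = merge_local_map(global_map, local_map, robot_pose=(2.0, 1.0, np.pi / 2))
```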