Bibliographic Information
Environment modeling and autonomous map building for mobile robots using a 3D active vision sensor
Title / Author: Environment modeling and autonomous map building for mobile robots using a 3D active vision sensor / Min-Young Kim.
Publication: [Daejeon: KAIST, 2004].

Holdings Information

Registration Number: 8015769
Location / Call Number: Academic Cultural Complex, Preservation Stacks, DME 04038
Status: Available (loanable)
Due Date:

Review Information

Abstract

In recent years, intelligent autonomous mobile robots have drawn tremendous interest, both as service robots and as industrial robots replacing human workers. To carry out given tasks successfully, robots must first be able to sense the 3D indoor space in which they live or work and to build a three-dimensional map of the navigation environment autonomously using their own sensor systems. The ultimate goal of this thesis is to endow mobile robots with the ability to sense 3D navigation spaces robustly and to build a 3D map autonomously, without human intervention. For this purpose, we first propose a novel 3D sensing system that allows intelligent mobile robots to sense their environment during autonomous navigation. The proposed sensor system, classified as an active trinocular vision system, is composed of a flexible multi-stripe laser projector and two cameras arranged in a triangular configuration. By modeling the laser projector as a virtual camera and using the trinocular epipolar constraints that the three cameras constitute, correspondences between pairs of line features observed in the two real camera images are established. In particular, we propose a robust correspondence matching technique based on line grouping and probabilistic voting. The probabilistic voting method consists of a 'voting phase' and a 'ballot counting phase'. Along with a detailed description of the sensor principle, a series of experimental tests demonstrates the simplicity, efficiency, and accuracy of the proposed sensor system for 3D environment sensing and recognition, and the sensor system is implemented on a test-bed mobile robot, LCAR III. Secondly, we address the problem of building a local map from three-dimensional sensing data for mobile robot navigation. In particular, the problem is how to extract and model obstacles that are not represented on the map but exist in the real environment, so that the map can be updated with the modeled obstacle information.
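The two-phase voting scheme named above can be sketched as follows. This is a minimal, hypothetical illustration: the real method scores candidate line pairs using the trinocular epipolar geometry of the virtual projector camera and line grouping, whereas here a placeholder consistency metric and all function names are the author of this note's own assumptions, not the thesis's API.

```python
# Hypothetical sketch of probabilistic voting for line-correspondence
# matching between the two real camera images. The consistency score
# below is a stand-in for the trinocular epipolar constraint.

def epipolar_consistency(left_line, right_line):
    """Score in [0, 1]: how well a candidate pair satisfies the
    (placeholder) epipolar constraint; lines are 1-D positions here."""
    return max(0.0, 1.0 - abs(left_line - right_line))

def match_lines(left_lines, right_lines, threshold=0.5):
    # --- voting phase: every candidate pair casts a weighted vote ---
    ballots = {}
    for i, l in enumerate(left_lines):
        for j, r in enumerate(right_lines):
            ballots[(i, j)] = epipolar_consistency(l, r)

    # --- ballot counting phase: accept the strongest mutually
    #     exclusive pairs whose vote exceeds the threshold ---
    matches, used_left, used_right = [], set(), set()
    for (i, j), score in sorted(ballots.items(), key=lambda kv: -kv[1]):
        if score < threshold:
            break
        if i not in used_left and j not in used_right:
            matches.append((i, j, score))
            used_left.add(i)
            used_right.add(j)
    return matches
```

The greedy counting phase enforces the usual uniqueness constraint: each line feature participates in at most one accepted correspondence.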
To achieve this, we propose a three-dimensional map building method based on a self-organizing neural network technique called the 'growing neural gas network'. Using the obstacle data acquired through the 3D data acquisition process of the active trinocular vision system, the neural network is trained to generate a graph structure that reflects the topology of the input space. To evaluate the proposed method, a series of simulations and experiments is performed to build 3D maps of given environments surrounding the robot, and the usefulness and robustness of the method are investigated and discussed in detail. Thirdly, to solve the autonomous map-building problem, we propose a next-view planning algorithm for efficient visual perception guidance of the developed visual sensor, which has a limited sensing range and viewing angle. Unlike conventional view-planning methods for mobile robots, we integrate view planning for map exploration with view planning for self-localization, and solve the combined problem based on a concept of 'visual tendency'. Using various visual tendencies for autonomous map building, we generate a set of candidate next-view positions and orientations for the robot's sensors. The candidates are evaluated against navigation goals defined according to the exploration purpose, and the best one is selected as the next view position using a fuzzy decision-making technique. A series of simulations and experiments shows that a 2D navigation map of the environment is built autonomously and successfully with this algorithm, without any human intervention. Lastly, we integrate the developed algorithms to acquire a 3D map for mobile robot navigation and environment recognition.
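A single adaptation step of a growing neural gas network, of the kind used above to learn the topology of 3D obstacle points, might be sketched as below. This follows the standard Fritzke-style update (winner/second-winner search, neighbour adaptation, edge aging); the parameter values and class layout are illustrative assumptions, not the thesis's tuned settings, and periodic node insertion is omitted for brevity.

```python
import math
import random

# Minimal sketch of one Growing Neural Gas adaptation step on 3D input
# points; parameters eps_b/eps_n/max_age are illustrative defaults.

class GNG:
    def __init__(self, eps_b=0.2, eps_n=0.006, max_age=50):
        # start with two random 3D units connected by one edge
        self.units = [[random.random() for _ in range(3)] for _ in range(2)]
        self.edges = {(0, 1): 0}   # (i, j) -> age
        self.eps_b, self.eps_n, self.max_age = eps_b, eps_n, max_age

    def adapt(self, x):
        # 1. find the nearest (s1) and second-nearest (s2) units
        order = sorted(range(len(self.units)),
                       key=lambda i: math.dist(self.units[i], x))
        s1, s2 = order[0], order[1]
        # 2. move the winner toward x, and its topological neighbours
        #    by a smaller step; 3. age all edges incident to the winner
        for k in range(3):
            self.units[s1][k] += self.eps_b * (x[k] - self.units[s1][k])
        for (i, j) in list(self.edges):
            if s1 in (i, j):
                n = j if i == s1 else i
                for k in range(3):
                    self.units[n][k] += self.eps_n * (x[k] - self.units[n][k])
                self.edges[(i, j)] += 1
        # 4. refresh (or create) the s1-s2 edge, drop over-aged edges
        self.edges[tuple(sorted((s1, s2)))] = 0
        self.edges = {e: a for e, a in self.edges.items() if a <= self.max_age}
```

Repeating `adapt` over the stream of sensed obstacle points, with periodic insertion of new units at high-error regions, yields the graph whose vertices and edges approximate the obstacle surfaces in the map.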
Based on the 2D navigation map built during navigation, a next-view position satisfying the map-exploration and self-localization purposes simultaneously is determined, and the mobile robot senses the local 3D environment at the planned position. The local 3D map built at each sensing step is iteratively merged into the current global map using the robot's localization information. Experiments performed in indoor environments with cluttered objects show the usefulness and effectiveness of the 3D map constructed in real navigation situations.
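Selecting one next-view pose that serves both navigation goals can be illustrated with a tiny fuzzy decision-making sketch. This is an assumption-laden simplification: the exploration and localization scores stand in for the thesis's 'visual tendency' evaluations, and the fuzzy AND (minimum) is one common aggregation choice, not necessarily the one used in the thesis.

```python
# Illustrative fuzzy selection of the next view: each candidate pose
# carries membership degrees in [0, 1] for the two goals (map
# exploration and self-localization); the degrees are combined with
# the fuzzy intersection (min) and the best candidate wins.

def fuzzy_next_view(candidates):
    """candidates: list of (pose, exploration_gain, localization_quality).
    Returns the (pose, score) maximizing min(exploration, localization)."""
    best_pose, best_score = None, -1.0
    for pose, explore, localize in candidates:
        score = min(explore, localize)   # fuzzy AND of the two goals
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose, best_score
```

Note the effect of the min aggregation: a candidate that is excellent for exploration but poor for localization (or vice versa) loses to a balanced candidate, which matches the stated aim of satisfying both purposes simultaneously.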

Other Bibliographic Information

Call Number: DME 04038
Physical Description: xii, 294 p. : illustrations ; 26 cm
Language: Korean
General Note: Author's name in English: Min-Young Kim
Advisor's name in Korean: 조형석
Advisor's name in English: Hyung-Suck Cho
Published in: "Three-dimensional map building for mobile robot navigation environments using a self-organizing neural network", Journal of Robotic Systems, v.21 no.6, pp. 323-343 (2004)