Since neural network models require large amounts of computational resources, there has been much research on efficient implementation methods to reduce their long computation times. Among these approaches, virtual implementation, in which a processing element simulates several neurons in a time-sharing fashion, has become the primary method for implementing neural network models, because it provides the flexibility to simulate various neural models through programming.
Since the introduction of Self-Organizing Feature Maps (SOFMs), they have been used in a wide variety of applications. Virtual implementations of SOFMs on conventional computers are rather slow for large problems, while direct VLSI or special-purpose hardware implementations are rather expensive.
In this thesis, we propose an acceleration method for the virtual implementation of the SOFM algorithm. During the learning of the map, the search phase, in which the neuron closest to the input pattern is located, is a very time-consuming process. By exploiting the topological ordering property of the neurons' weights, we can reduce the search time. The method can be implemented not only on single-processor systems but also on multi-processor systems to gain further speedup. In the parallel version of this method, communication overhead is reduced in comparison with existing methods.
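To illustrate the idea, the exhaustive search phase and a topology-based restricted search can be sketched as follows. This is a minimal Python sketch, not the thesis's exact method: the function names, the row-major grid layout, and the fixed search radius are illustrative assumptions.

```python
def bmu_search(weights, x):
    """Exhaustive search phase: scan every neuron and return the index
    of the winner, the neuron whose weight vector is closest to input x
    (squared Euclidean distance)."""
    best, best_d = 0, float("inf")
    for i, w in enumerate(weights):
        d = sum((wi - xi) ** 2 for wi, xi in zip(w, x))
        if d < best_d:
            best, best_d = i, d
    return best


def restricted_search(weights, x, prev_winner, radius, grid_w):
    """Restricted search exploiting topological ordering: once the map
    is ordered, similar inputs win at nearby grid positions, so only
    neurons within `radius` grid steps of a previous winner are scanned.
    (The choice of `prev_winner` and `radius` is a simplifying
    assumption for illustration.)"""
    r0, c0 = divmod(prev_winner, grid_w)  # grid coordinates of old winner
    best, best_d = prev_winner, float("inf")
    for i, w in enumerate(weights):
        r, c = divmod(i, grid_w)
        if abs(r - r0) <= radius and abs(c - c0) <= radius:
            d = sum((wi - xi) ** 2 for wi, xi in zip(w, x))
            if d < best_d:
                best, best_d = i, d
    return best


# Usage: a 2x2 map with weight vectors at the unit-square corners.
weights = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
winner = bmu_search(weights, [0.9, 0.1])            # neuron 1 is closest
fast = restricted_search(weights, [0.9, 0.1], winner, 1, 2)
```

On an ordered map, the restricted search visits only O(radius²) neurons instead of the whole map, which is where the speedup comes from; the parallel version would partition the map across processors in the same grid terms.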