In the training of neural networks, a trade-off between network size and final training error must be managed. In general, small networks, although they generalize well, tend to fail to learn the training data within a given error bound, whereas large networks learn the training data easily but generalize poorly. The way to achieve good generalization is therefore to find the smallest network that can learn the data, called the optimal-sized network. It is, however, difficult to determine whether a network is of the smallest size, because the smallest feasible networks are very sensitive to initial conditions and usually fail to learn the data, becoming trapped in local minima. One method of obtaining an optimal network is a pruning algorithm, which starts with a trainable large network, reduces its size by removing unnecessary neurons and interconnection weights one by one with retraining, and finally arrives at the optimal-sized network. In this thesis, a pruning algorithm for neural networks using impact factor regularization is described, which trains the network without overfitting and yields a small-sized network. To this end, an automatic method for determining the regularization parameter and an extended Levenberg-Marquardt algorithm are developed as the learning algorithm of the neural networks. The effectiveness of the proposed method is tested on four regression problems, and simulation results show that it is very effective in regression.
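The generic prune-and-retrain loop described above can be sketched as follows. This is a minimal illustration only: it assumes a one-hidden-layer network on a toy sine-regression task, plain gradient descent, an arbitrary acceptance bound, and a simple magnitude-based removal criterion, none of which are taken from the thesis, whose actual method uses impact factor regularization and an extended Levenberg-Marquardt algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)

def init_net(hidden=12):
    # A deliberately oversized ("trainable large") network.
    return {"W1": rng.normal(0, 0.5, (1, hidden)), "b1": np.zeros(hidden),
            "W2": rng.normal(0, 0.5, (hidden, 1)), "b2": np.zeros(1)}

def forward(net, X, mask):
    # Pruned weights are zeroed by the mask.
    H = np.tanh(X @ (net["W1"] * mask["W1"]) + net["b1"])
    return H, H @ (net["W2"] * mask["W2"]) + net["b2"]

def train(net, mask, epochs=2000, lr=0.05):
    # Full-batch gradient descent on mean squared error.
    for _ in range(epochs):
        H, out = forward(net, X, mask)
        err = out - Y
        dH = (err @ (net["W2"] * mask["W2"]).T) * (1 - H**2)
        net["W2"] -= lr * (H.T @ err / len(X)) * mask["W2"]
        net["b2"] -= lr * err.mean(0)
        net["W1"] -= lr * (X.T @ dH / len(X)) * mask["W1"]
        net["b1"] -= lr * dH.mean(0)
    return float(np.mean((forward(net, X, mask)[1] - Y) ** 2))

def prune_smallest(net, mask):
    # Illustrative criterion: drop the active weight of smallest magnitude
    # (a stand-in for the thesis's impact-factor criterion).
    best = None
    for k in ("W1", "W2"):
        w = np.abs(net[k])
        w[mask[k] == 0] = np.inf
        idx = np.unravel_index(np.argmin(w), w.shape)
        if best is None or w[idx] < best[2]:
            best = (k, idx, w[idx])
    mask[best[0]][best[1]] = 0.0

net = init_net()
mask = {"W1": np.ones_like(net["W1"]), "W2": np.ones_like(net["W2"])}

mse = train(net, mask)
bound = 1.2 * mse  # assumed acceptable error bound for this sketch

while mse <= bound:
    # Snapshot the last acceptable network before pruning further.
    snapshot = ({k: v.copy() for k, v in net.items()},
                {k: v.copy() for k, v in mask.items()})
    prune_smallest(net, mask)
    mse = train(net, mask)  # retrain after each removal
net, mask = snapshot  # keep the smallest network that met the bound

active = int(mask["W1"].sum() + mask["W2"].sum())
final_mse = train(net, mask, epochs=0)
print(active, final_mse <= bound)
```

The loop stops at the first pruned network whose retrained error exceeds the bound and backs up one step, so the result is the smallest network found that still learns the data within the bound.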