For good generalization in neural networks trained from examples, one should use the smallest system that will fit the data. Unfortunately, it is usually not obvious what size is best: a system that is too small will not be able to learn the data, while one that is just big enough may learn very slowly and be very sensitive to initial conditions and learning parameters.
This paper proposes a new scheme for network pruning based on an 'impact factor', defined as the product of the squared weight and the variance of the corresponding neuron's output. This scheme can be used either as a simple pruning algorithm after learning, or as a new learning algorithm with a modified penalty term during learning. We show that this approach improves generalization ability.
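As a minimal sketch of the post-training pruning variant, the impact factor can be computed for each weight as the squared weight times the variance of its source neuron's output over the training data, and the lowest-impact weights removed. The network, data, and pruning fraction below are hypothetical placeholders, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained network: inputs, input->hidden and hidden->output weights
X = rng.normal(size=(200, 5))
W1 = rng.normal(size=(5, 8))
W2 = rng.normal(size=(8, 1))

# Hidden-neuron outputs over the data set
hidden = np.tanh(X @ W1)

# Impact factor of each hidden->output weight:
# (weight)^2 * variance of the source neuron's output
impact = (W2[:, 0] ** 2) * hidden.var(axis=0)

# Prune the weights with the smallest impact factors (e.g. bottom 25%)
k = int(0.25 * impact.size)
prune_idx = np.argsort(impact)[:k]
W2_pruned = W2.copy()
W2_pruned[prune_idx, 0] = 0.0
```

A weight with a small magnitude, or one fed by a neuron whose output barely varies, contributes little to the network's output, so such weights are the natural candidates for removal.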