In this thesis, the learning progress of two conventional algorithms for multilayer feedforward neural networks, the momentum algorithm and the Delta-bar-Delta algorithm, is studied by analyzing their learning trajectories on the mean squared error surface. The study explains the stagnation of convergence empirically observed in the learning progress of these conventional algorithms. A new learning algorithm for multilayer feedforward neural networks is also proposed. The proposed algorithm adaptively updates the learning rates and momentum coefficients of the momentum algorithm according to the temporal change of the cost function, and it is motivated by the observed shape of the mean squared error surface. Simulation results are given to validate its superiority over the conventional learning algorithms.
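The general idea of adapting a momentum method's learning rate and momentum coefficient from the change of the cost function can be sketched as follows. This is a minimal illustration with a hypothetical adaptation rule (the increase/decrease factors and the velocity reset are placeholders, not the update rule proposed in the thesis), demonstrated on a simple quadratic cost.

```python
import numpy as np

def adaptive_momentum_descent(grad, cost, x0, lr=0.1, mu=0.9,
                              steps=200, up=1.05, down=0.7, lr_max=1.0):
    """Momentum gradient descent whose lr and mu are adapted from the
    sign of the cost change. The adaptation rule here is hypothetical,
    for illustration only."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)           # momentum (velocity) term
    prev_c = cost(x)
    for _ in range(steps):
        v = mu * v - lr * grad(x)  # momentum update
        x = x + v
        c = cost(x)
        if c < prev_c:
            # cost decreased: cautiously increase the learning rate
            lr = min(lr * up, lr_max)
        else:
            # cost increased: shrink rate and momentum, reset velocity
            lr *= down
            mu *= down
            v[:] = 0.0
        prev_c = c
    return x, prev_c

# Sanity check on a quadratic bowl, whose minimum is at the origin.
cost = lambda x: 0.5 * np.sum(x ** 2)
grad = lambda x: x
x_star, c_star = adaptive_momentum_descent(grad, cost, [3.0, -4.0])
```

The sign-of-change heuristic (grow the rate while the cost falls, cut it when the cost rises) is the same spirit as Delta-bar-Delta's per-weight rate adaptation; here a single global rate and momentum coefficient are adapted instead.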