In this thesis, a feature-extraction algorithm for invariant pattern recognition and lip reading is presented, with neural networks used as the classifier. To recognize patterns of differing shapes, we first removed noise and converted the images to binary form; from the edges of each binary image we then extracted feature points that characterize the object. For pattern recognition invariant to translation, scaling, and rotation of objects, third-order neural networks extract invariant features and classify the objects. Computer simulations show that third-order neural networks combined with corner-point extraction reduce processing time, are robust to noise, and achieve a higher recognition rate. For lip reading from still images, we detected the boundaries of the lips and extracted feature points that characterize the lip shape as a speaker speaks; from the distances between these points we derived a feature vector that yields a high recognition rate.
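The abstract does not spell out the invariant feature used by the third-order network, so the following is only a minimal sketch of the standard idea behind third-order (triple-product) networks: every triple of feature points forms a triangle whose interior angles are unchanged by translation, rotation, and uniform scaling, so a histogram over those angles is a similarity-invariant feature vector. The function names, bin count, and histogram choice here are illustrative assumptions, not the thesis's actual implementation.

```python
import itertools
import math

def triangle_angles(p, q, r):
    """Interior angles of triangle pqr via the law of cosines, ascending.
    Assumes the three points are distinct (non-zero side lengths)."""
    a = math.dist(q, r)   # side opposite p
    b = math.dist(p, r)   # side opposite q
    c = math.dist(p, q)   # side opposite r
    A = math.acos(max(-1.0, min(1.0, (b * b + c * c - a * a) / (2 * b * c))))
    B = math.acos(max(-1.0, min(1.0, (a * a + c * c - b * b) / (2 * a * c))))
    return sorted((A, B, math.pi - A - B))

def invariant_feature(points, bins=8):
    """Histogram of the smallest triangle angle over all point triples.

    Interior angles survive translation, rotation, and uniform scaling,
    so the histogram is invariant under those transforms. The smallest
    angle of any triangle lies in (0, pi/3], which fixes the bin range.
    (Illustrative sketch only; bin count is an arbitrary assumption.)
    """
    hist = [0] * bins
    for p, q, r in itertools.combinations(points, 3):
        smallest = triangle_angles(p, q, r)[0]
        idx = min(int(smallest / (math.pi / 3) * bins), bins - 1)
        hist[idx] += 1
    return hist
```

Because the feature depends only on angles, translating, rotating, or uniformly scaling the point set leaves the histogram unchanged, which matches the invariance the abstract claims for the third-order network.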
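The lip-reading feature is described only as "distances between feature points," so the sketch below is one plausible reading under stated assumptions: the first two points are taken to be the mouth corners (an assumption, not from the thesis), and all pairwise distances are normalized by mouth width so the vector is insensitive to image scale.

```python
import itertools
import math

def lip_feature_vector(points):
    """Scale-normalized pairwise distances between lip feature points.

    Assumption (for illustration only): points[0] and points[1] are the
    left and right mouth corners, so their distance is the mouth width.
    Dividing every pairwise distance by that width makes the vector
    invariant to uniform scaling of the image.
    """
    width = math.dist(points[0], points[1])
    return [math.dist(p, q) / width
            for p, q in itertools.combinations(points, 2)]
```

For n feature points this yields an n(n-1)/2-dimensional vector; the first component is 1.0 by construction, and the remaining components encode the lip shape (e.g. mouth opening relative to mouth width).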