The demand for realistic facial animation has increased with the need for friendly user interfaces. In this thesis, we propose a methodology that deforms a generic three-dimensional face model using real facial images, maps those images onto the deformed model, and animates the result.
First, we project the 3D model onto the image plane of each facial image and match the projected model with that image; the matching results are combined to generate a deformed 3D model. The matching between the projected model and each image is performed manually using feature-based image metamorphosis. We then create a 3D synthetic image from 2D photographs of a specific person's face taken from several viewing angles, using view morphing and panoramic stitching techniques. This synthetic image is texture-mapped onto the cylindrical projection of the 3D model in three-dimensional space.
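The cylindrical projection used for texture mapping can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes the head model is roughly centered on the vertical (y) axis, so the texture coordinate u comes from the angle around that axis and v from the normalized height. The function name and parameters are hypothetical.

```python
import math

def cylindrical_uv(x, y, z, y_min, y_max):
    """Map a 3D vertex to cylindrical texture coordinates in [0, 1]^2.

    A simplified sketch: the model is assumed centered on the y-axis,
    with y_min/y_max bounding the vertical extent of the head.
    """
    theta = math.atan2(z, x)                # angle around the vertical axis
    u = (theta + math.pi) / (2 * math.pi)   # normalize angle to [0, 1]
    v = (y - y_min) / (y_max - y_min)       # normalize height to [0, 1]
    return u, v
```

Each vertex of the deformed model is assigned such (u, v) coordinates, and the panoramic synthetic image is sampled at those coordinates during rendering.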
To animate facial expressions with the deformed 3D model, we use a modified version of Waters' anatomy-based muscle model. In our system, the 3D model is deformed with four images taken from the front, back, left, and right, and a panoramic image generated from eight images is used as the texture. Finally, we synthesize the six representative facial expressions proposed by Ekman.
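The idea behind Waters' linear muscle can be sketched as below. This follows the commonly cited cosine-falloff formulation, not necessarily the modified version used in the thesis: a vertex inside a muscle's cone of influence is pulled toward the muscle's attachment point (head), attenuated by an angular and a radial falloff. All names and parameters here are illustrative assumptions.

```python
import math

def muscle_displace(p, head, tail, zone_angle, contraction):
    """Pull vertex p toward the muscle head (simplified Waters linear muscle).

    head/tail: attachment and insertion points of the muscle vector.
    zone_angle: half-angle (radians) of the cone of influence.
    contraction: muscle activation in [0, 1].
    """
    ph = [p[i] - head[i] for i in range(3)]     # head -> vertex
    th = [tail[i] - head[i] for i in range(3)]  # head -> tail (muscle axis)
    d = math.sqrt(sum(c * c for c in ph))
    fall = math.sqrt(sum(c * c for c in th))    # radial extent of influence
    if d == 0 or d > fall:
        return list(p)  # outside the radial zone of influence
    dot = sum(ph[i] * th[i] for i in range(3))
    ang = math.acos(max(-1.0, min(1.0, dot / (d * fall))))
    if ang > zone_angle:
        return list(p)  # outside the angular zone of influence
    a = math.cos(ang / zone_angle * math.pi / 2)  # angular falloff
    r = math.cos(d / fall * math.pi / 2)          # radial falloff
    k = contraction * a * r                       # combined activation
    return [p[i] + k * (head[i] - p[i]) for i in range(3)]
```

An expression is then formed by applying a set of such muscles with appropriate contraction values; Ekman's six expressions correspond to six such muscle-activation configurations.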