In this thesis, we present a novel example-based approach, which we call 'motion translation', for applying captured motion data to a new model. While previous research focused on editing the source motion to satisfy the kinematic constraints of the target model, our emphasis is on effectively reflecting the animator's intention expressed through a set of target postures.
Our method comprises two major parts: preprocessing and motion synthesis. In the preprocessing part, we extract the source key-postures from a given source example animation, and an animator creates their corresponding target key-postures. We parameterize the target key-postures using the source key-postures and predefine weight functions based on radial basis functions. In the synthesis part, at each frame of the input animation, we first compute the parameter vector of the target motion and then evaluate the weight values for blending the target key-postures. We finally obtain the target motion at that frame by blending the target key-postures with those weight values. The resulting animation preserves the motion of the source model as well as the characteristic features of the target model specified by the animator.
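The synthesis step described above can be sketched as scattered-data interpolation with radial basis functions: the weight function of each target key-posture is fitted so that it evaluates to one at its own source key-posture and to zero at the others, and each frame's target posture is the weighted blend of the target key-postures. The following is a minimal illustrative sketch, not the thesis implementation; all names are hypothetical, a Gaussian kernel is assumed, and postures are treated as flat vectors (a real system would blend joint rotations, e.g. with quaternions).

```python
import numpy as np

def gaussian_rbf(r, sigma=1.0):
    # Radial basis kernel; the kernel choice and sigma are assumptions.
    return np.exp(-(r / sigma) ** 2)

def fit_rbf_weights(source_keys):
    # source_keys: (K, D) array of source key-posture parameter vectors.
    # Solve Phi @ A = I so weight function i is 1 at key i and 0 at the others.
    dists = np.linalg.norm(
        source_keys[:, None, :] - source_keys[None, :, :], axis=-1)
    phi = gaussian_rbf(dists)
    return np.linalg.solve(phi, np.eye(len(source_keys)))

def blend_weights(param, source_keys, coeffs):
    # Evaluate the predefined weight functions at one frame's parameter vector.
    r = np.linalg.norm(source_keys - param, axis=-1)
    w = gaussian_rbf(r) @ coeffs
    return w / w.sum()  # normalize so the weights form an affine combination

def translate_frame(param, source_keys, target_keys, coeffs):
    # Blend the target key-postures with the evaluated weight values.
    w = blend_weights(param, source_keys, coeffs)
    return w @ target_keys
```

By construction, evaluating at a source key-posture reproduces the animator's corresponding target key-posture exactly, while in-between parameters produce smooth blends.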