Since the early days of computer graphics, facial animation has been applied to various fields, and nowadays it has found several novel applications such as virtual reality (for representing virtual agents), teleconferencing, and human-machine interfaces.
When facial animation is applied to a system with multiple participants connected over a network, the amount of information required to maintain efficient communication makes it difficult to animate facial expressions as desired in real time. The major contribution of this paper is to adapt the concept of Level-of-Detail to facial animation in order to solve this problem.
Level-of-Detail has been studied in the field of computer graphics as a way to represent the appearance of complex objects efficiently and adaptively, but until now no such attempt has been made in the field of facial animation. In this paper, we present a systematic scheme that enables this kind of adaptive control using Level-of-Detail. By taking into account the state of the network, the rendering speed, and the size of the facial model on screen, the use of multiple facial-animation control levels makes real-time animation possible and reduces the network load. If the network load is heavy or the rendering speed is low, the control level is lowered so that motion can be controlled with the least amount of information, and vice versa.
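The level-selection policy described above can be sketched in a few lines. This is an illustrative sketch only, not the paper's method: the function name, the thresholds, and the inputs (network load, frame time, on-screen size) are all hypothetical, chosen to show how such an adaptive controller might map runtime conditions to a control level.

```python
# Illustrative sketch (not from the paper): choosing a facial-animation
# control level from network load, rendering speed, and on-screen size.
# All names and thresholds below are hypothetical.

def select_control_level(network_load, frame_time_ms, screen_fraction):
    """Return a control level from 0 (coarsest) to 3 (finest).

    network_load    -- fraction of available bandwidth in use, 0.0-1.0
    frame_time_ms   -- time taken to render the last frame, in milliseconds
    screen_fraction -- fraction of the screen the face model occupies
    """
    level = 3  # start at the finest control level
    if network_load > 0.8 or frame_time_ms > 66:    # heavy load or under ~15 fps
        level = 0
    elif network_load > 0.5 or frame_time_ms > 33:  # moderate load or under ~30 fps
        level = 1
    elif screen_fraction < 0.05:                    # face is tiny on screen
        level = 2
    return level
```

A controller like this would be re-evaluated each frame (or at a lower polling rate), so the animation system degrades gracefully as conditions worsen and recovers detail as they improve.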