To immerse a participant in a virtual environment (VE), it is crucial to properly define the avatar that represents the participant in the VE and to devise an intuitive metaphor, that is, a mapping between the participant and the avatar.
Because input devices offer only a limited number of degrees of freedom (DoF), it is impossible, or at least confusing, to control every DoF of the avatar independently with such devices. In this paper, we propose an avatar control method based on predefined actions such as body gestures. The method interprets the user's input according to the internal status of the avatar, such as its current posture, and the status of the VE. The avatar manager module consists of four main components: the avatar controller, the action executer, the session manager, and the communicator. With this design, participants can control the avatar easily. To verify the ease of avatar control, we have developed a virtual conferencing system called VConf. VConf is integrated with an audio conferencing tool and supports a talking action and speaker following.
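The core idea of interpreting a low-DoF input according to the avatar's internal status can be sketched as follows. This is a minimal illustration, not the paper's implementation; all class, action, and posture names are hypothetical.

```python
# Sketch: one generic input ("action" or "gesture") is mapped to different
# predefined actions depending on the avatar's current posture.
# All names here are illustrative assumptions, not from the paper.

class Avatar:
    def __init__(self):
        self.posture = "standing"  # part of the avatar's internal status


class AvatarController:
    """Interprets a user input in the context of the avatar's status."""

    # (current posture, user input) -> (predefined action, resulting posture)
    TRANSITIONS = {
        ("standing", "action"):  ("sit_down",   "sitting"),
        ("sitting",  "action"):  ("stand_up",   "standing"),
        ("standing", "gesture"): ("wave_hand",  "standing"),
        ("sitting",  "gesture"): ("raise_hand", "sitting"),
    }

    def interpret(self, avatar, user_input):
        key = (avatar.posture, user_input)
        if key not in self.TRANSITIONS:
            return None  # input has no meaning in this context; ignore it
        action, new_posture = self.TRANSITIONS[key]
        avatar.posture = new_posture  # update the avatar's internal status
        return action  # in the paper's architecture, handed to the action executer


avatar = Avatar()
controller = AvatarController()
print(controller.interpret(avatar, "action"))  # sit_down (avatar was standing)
print(controller.interpret(avatar, "action"))  # stand_up (avatar is now sitting)
```

The same button press yields different actions depending on posture, which is how a device with few degrees of freedom can still trigger a rich set of avatar behaviors.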