3D avatars are widely used in various fields of emerging technology, from augmented reality to social robots. To interact with users in a natural way, they must be able to display at least some basic emotions. However, generating animation for these virtual avatars is a time-consuming and creative process. The main goal of this work is to facilitate the generation of facial animations of basic emotions on a 3D avatar. To this end, we developed and compared two approaches. The first method generates animation by tuning the blendshape features of the 3D model, whereas the second captures facial expressions from a real face and maps them onto the model. Additionally, the text from which the emotion was estimated was passed to lip-synchronization software to generate realistic lip movements for the avatar. Animations of the six basic emotions were then shown in different variations in a survey, and respondents were asked to identify the emotion displayed in each video. In addition, anthropomorphic features of the avatar such as human-likeness, life-likeness, and pleasantness were examined. The analysis of the survey yielded the following findings: a) participants' emotion recognition did not differ significantly between the two animation generation methods; b) the inclusion of voice significantly enhanced emotion recognition. Regarding participants' recognition accuracy, Excitement and Happiness were confused with each other more often than any other pair of emotions, while Anger was the easiest emotion to recognize.