TY - GEN
T1 - Facial expression generation of 3D avatar based on semantic analysis
AU - Mukashev, Dinmukhamed
AU - Kairgaliyev, Merey
AU - Alibekov, Ulugbek
AU - Oralbayeva, Nurziya
AU - Sandygulova, Anara
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/8/8
Y1 - 2021/8/8
N2 - 3D avatars are widely used in various fields of emerging technology, from augmented reality to social robots. To interact with the user in a natural way, they must be able to show at least some basic emotions. However, generating animation for these virtual avatars is a time-consuming and creative task. The main goal of this work is to facilitate the generation of facial animations of basic emotions on a 3D avatar. To this end, we developed and compared two approaches. The first method generates animation by tuning the Blendshape features of the 3D model, whereas the second captures facial expressions from a real face and maps them onto the model. Additionally, the text from which the emotion was estimated was passed to lip synchronization software to generate realistic lip movements for the avatar. Animations of the six basic emotions were then shown in different variations in a survey, and respondents were asked to guess the emotion shown in each video. In addition, anthropomorphic features of the avatar such as human-likeness, life-likeness, and pleasantness were examined. Overall, the analysis of the survey yielded the following findings: a) participants showed no significant differences in recognizing emotions across the two animation generation methods; b) the inclusion of voice significantly improved emotion recognition. With respect to participants' accuracy of emotion recognition, Excitement and Happiness were confused with each other more than any other pair of emotions, while Anger was the easiest emotion to recognize.
AB - 3D avatars are widely used in various fields of emerging technology, from augmented reality to social robots. To interact with the user in a natural way, they must be able to show at least some basic emotions. However, generating animation for these virtual avatars is a time-consuming and creative task. The main goal of this work is to facilitate the generation of facial animations of basic emotions on a 3D avatar. To this end, we developed and compared two approaches. The first method generates animation by tuning the Blendshape features of the 3D model, whereas the second captures facial expressions from a real face and maps them onto the model. Additionally, the text from which the emotion was estimated was passed to lip synchronization software to generate realistic lip movements for the avatar. Animations of the six basic emotions were then shown in different variations in a survey, and respondents were asked to guess the emotion shown in each video. In addition, anthropomorphic features of the avatar such as human-likeness, life-likeness, and pleasantness were examined. Overall, the analysis of the survey yielded the following findings: a) participants showed no significant differences in recognizing emotions across the two animation generation methods; b) the inclusion of voice significantly improved emotion recognition. With respect to participants' accuracy of emotion recognition, Excitement and Happiness were confused with each other more than any other pair of emotions, while Anger was the easiest emotion to recognize.
KW - 3D avatar
KW - Animations
KW - Blendshapes
KW - Emotions
KW - Face
KW - Unity
UR - http://www.scopus.com/inward/record.url?scp=85115089367&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85115089367&partnerID=8YFLogxK
U2 - 10.1109/RO-MAN50785.2021.9515463
DO - 10.1109/RO-MAN50785.2021.9515463
M3 - Conference contribution
AN - SCOPUS:85115089367
T3 - 2021 30th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2021
SP - 89
EP - 94
BT - 2021 30th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 30th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2021
Y2 - 8 August 2021 through 12 August 2021
ER -