TY - GEN
T1 - Towards Interpreting Robotic System for Fingerspelling Recognition in Real Time
AU - Kalidolda, Nazerke
AU - Sandygulova, Anara
N1 - Publisher Copyright:
© 2018 Authors.
Copyright:
Copyright 2018 Elsevier B.V., All rights reserved.
PY - 2018/3/1
Y1 - 2018/3/1
N2 - Hearing-impaired communities around the world communicate via sign language. The focus of this work is to develop an interpreting human-robot interaction system that could act as a sign language interpreter in public places. This paper presents ongoing work that aims to recognize fingerspelling gestures in real time. To this end, we utilize a deep learning method to classify the 33 gestures used for fingerspelling by the local deaf-mute community. To train and test the recognition system, we utilize a previously collected dataset of RGB-D motion data for the 33 manual gestures. Applying the deep learning method to this dataset, we achieved an average offline accuracy of 75% for full-alphabet recognition; in real time, accuracy dropped to 24.72%. In addition, we integrated a form of auto-correction to perform spell-checking on the recognized letters: of 35 tested words, four were recognized correctly (11.4%). Finally, we conducted an exploratory study in which ten deaf individuals interacted with our sign language interpreting robotic system.
AB - Hearing-impaired communities around the world communicate via sign language. The focus of this work is to develop an interpreting human-robot interaction system that could act as a sign language interpreter in public places. This paper presents ongoing work that aims to recognize fingerspelling gestures in real time. To this end, we utilize a deep learning method to classify the 33 gestures used for fingerspelling by the local deaf-mute community. To train and test the recognition system, we utilize a previously collected dataset of RGB-D motion data for the 33 manual gestures. Applying the deep learning method to this dataset, we achieved an average offline accuracy of 75% for full-alphabet recognition; in real time, accuracy dropped to 24.72%. In addition, we integrated a form of auto-correction to perform spell-checking on the recognized letters: of 35 tested words, four were recognized correctly (11.4%). Finally, we conducted an exploratory study in which ten deaf individuals interacted with our sign language interpreting robotic system.
KW - human-robot interaction
KW - sign language recognition
KW - social robotics
UR - http://www.scopus.com/inward/record.url?scp=85045263087&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85045263087&partnerID=8YFLogxK
U2 - 10.1145/3173386.3177085
DO - 10.1145/3173386.3177085
M3 - Conference contribution
AN - SCOPUS:85045263087
T3 - ACM/IEEE International Conference on Human-Robot Interaction
SP - 141
EP - 142
BT - HRI 2018 - Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction
PB - IEEE Computer Society
T2 - 13th Annual ACM/IEEE International Conference on Human Robot Interaction, HRI 2018
Y2 - 5 March 2018 through 8 March 2018
ER -
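
The abstract above mentions an auto-correction step that spell-checks the recognized fingerspelled letters against known words. The paper does not specify how this was implemented; the snippet below is only an illustrative sketch of one common approach, matching a recognized letter string against a small hypothetical vocabulary using similarity matching from Python's standard library. The function name autocorrect, the VOCAB word list, and the cutoff value are assumptions for illustration, not taken from the paper.

import difflib

# Hypothetical vocabulary; the actual word list used in the study is not given.
VOCAB = ["salem", "rakhmet", "kitap", "mektep"]

def autocorrect(recognized: str, cutoff: float = 0.6) -> str:
    """Map a fingerspelled letter sequence to the closest vocabulary word,
    or return it unchanged if nothing is similar enough."""
    match = difflib.get_close_matches(recognized.lower(), VOCAB, n=1, cutoff=cutoff)
    return match[0] if match else recognized

# Example: one misread letter still resolves to the intended word.
print(autocorrect("rahhmet"))  # -> "rakhmet"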