TY - GEN
T1 - Cyrillic manual alphabet recognition in RGB and RGB-D data for sign language interpreting robotic system (SLIRS)
AU - Tazhigaliyeva, Nazgul
AU - Kalidolda, Nazerke
AU - Imashev, Alfarabi
AU - Islam, Shynggys
AU - Aitpayev, Kairat
AU - Parisi, German I.
AU - Sandygulova, Anara
N1 - Funding Information:
The authors would like to thank the staff of the community fund “Yunsz Alem” (Silent World) for their help with the project. This work was funded by the School of Science and Technology, Nazarbayev University, Kazakhstan.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/7/21
Y1 - 2017/7/21
N2 - Deaf-mute communities around the world experience a need for an effective human-robot interaction system that would act as an interpreter in public places such as banks, hospitals, or police stations. The focus of this work is to address the challenges faced by hearing-impaired people by developing an interpreting robotic system required for effective communication in public places. To this end, we utilize a previously developed neural network-based learning architecture to recognize the Cyrillic manual alphabet, which is used for fingerspelling in Kazakhstan. In order to train and test the performance of the recognition system, we collected four datasets comprising static and motion RGB and RGB-D data of 33 manual gestures. After applying them to standard machine learning algorithms as well as to our previously developed learning-based method, we achieved an average accuracy of 93% for complete alphabet recognition by modeling motion depth data.
AB - Deaf-mute communities around the world experience a need for an effective human-robot interaction system that would act as an interpreter in public places such as banks, hospitals, or police stations. The focus of this work is to address the challenges faced by hearing-impaired people by developing an interpreting robotic system required for effective communication in public places. To this end, we utilize a previously developed neural network-based learning architecture to recognize the Cyrillic manual alphabet, which is used for fingerspelling in Kazakhstan. In order to train and test the performance of the recognition system, we collected four datasets comprising static and motion RGB and RGB-D data of 33 manual gestures. After applying them to standard machine learning algorithms as well as to our previously developed learning-based method, we achieved an average accuracy of 93% for complete alphabet recognition by modeling motion depth data.
UR - http://www.scopus.com/inward/record.url?scp=85027966201&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85027966201&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2017.7989526
DO - 10.1109/ICRA.2017.7989526
M3 - Conference contribution
AN - SCOPUS:85027966201
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 4531
EP - 4536
BT - ICRA 2017 - IEEE International Conference on Robotics and Automation
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 IEEE International Conference on Robotics and Automation, ICRA 2017
Y2 - 29 May 2017 through 3 June 2017
ER -