Cyrillic manual alphabet recognition in RGB and RGB-D data for sign language interpreting robotic system (SLIRS)

Nazgul Tazhigaliyeva, Nazerke Kalidolda, Alfarabi Imashev, Shynggys Islam, Kairat Aitpayev, German I. Parisi, Anara Sandygulova

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Deaf-mute communities around the world need an effective human-robot interaction system that can act as an interpreter in public places such as banks, hospitals, or police stations. This work addresses the challenges faced by hearing-impaired people by developing an interpreting robotic system for effective communication in public places. To this end, we utilize a previously developed neural-network-based learning architecture to recognize the Cyrillic manual alphabet, which is used for fingerspelling in Kazakhstan. To train and test the recognition system, we collected four datasets comprising static and motion RGB and RGB-D data of 33 manual gestures. Applying standard machine learning algorithms as well as our previously developed learning-based method, we achieved an average accuracy of 93% for complete alphabet recognition by modeling motion depth data.
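
The paper itself provides no code here, but the abstract's comparison against "standard machine learning algorithms" on depth data for 33 gesture classes can be sketched. Below is a minimal, hypothetical Python baseline; the dataset loader, 64x64 frame size, and the choice of an RBF-kernel SVM are illustrative assumptions, not the authors' neural-network architecture.

```python
# A minimal, hypothetical baseline in the spirit of the abstract's
# "standard machine learning algorithms" comparison. The loader,
# frame resolution, and RBF-kernel SVM are assumptions for
# illustration, not the authors' method.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

N_CLASSES = 33          # letters of the Cyrillic manual alphabet (per the abstract)
FRAME_SHAPE = (64, 64)  # assumed resolution of cropped depth frames

def load_depth_frames():
    """Stand-in loader: returns (features, labels).

    Each sample is one cropped depth frame of a hand pose, flattened
    to a feature vector. Replace with a real loader for the recorded
    RGB-D data.
    """
    rng = np.random.default_rng(seed=0)
    X = rng.random((990, FRAME_SHAPE[0] * FRAME_SHAPE[1]))  # synthetic stand-in
    y = rng.integers(0, N_CLASSES, size=990)                # one label per frame
    return X, y

X, y = load_depth_frames()
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Standardize per-pixel features, then fit an RBF-kernel SVM baseline.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2%}")
```

On synthetic data this prints near-chance accuracy; the point of the sketch is the evaluation scaffold (stratified split, standardization, a standard classifier), which is the kind of baseline the paper's learning-based method is reported to outperform.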

Original language: English
Title of host publication: ICRA 2017 - IEEE International Conference on Robotics and Automation
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4531-4536
Number of pages: 6
ISBN (Electronic): 9781509046331
DOI: 10.1109/ICRA.2017.7989526
Publication status: Published - Jul 21 2017
Event: 2017 IEEE International Conference on Robotics and Automation, ICRA 2017 - Singapore, Singapore
Duration: May 29 2017 - Jun 3 2017

Conference

Conference: 2017 IEEE International Conference on Robotics and Automation, ICRA 2017
Country: Singapore
City: Singapore
Period: 5/29/17 - 6/3/17

Fingerprint

  • Human-robot interaction
  • Audition
  • Law enforcement
  • Learning algorithms
  • Learning systems
  • Robotics
  • Neural networks
  • Communication

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering

Cite this

Tazhigaliyeva, N., Kalidolda, N., Imashev, A., Islam, S., Aitpayev, K., Parisi, G. I., & Sandygulova, A. (2017). Cyrillic manual alphabet recognition in RGB and RGB-D data for sign language interpreting robotic system (SLIRS). In ICRA 2017 - IEEE International Conference on Robotics and Automation (pp. 4531-4536). [7989526] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICRA.2017.7989526
