Semi-automatic annotation tool for sign languages

Kairat Aitpayev, Shynggys Islam, Alfarabi Imashev

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

The goal of this work is to automatically annotate manual and some non-manual features of sign language in video. To achieve this, we examine two techniques: one using the Microsoft Kinect 2.0 depth camera and the other using a simple RGB mono camera. We describe the strengths and weaknesses of both approaches. Finally, we propose a semi-automatic web-based annotation tool based on the second technique, which uses hand and face movement detection algorithms. Furthermore, the proposed algorithm can be used not only for annotating clean training data but also for automatic sign language recognition, as it works in real time and is quite robust to variability in intensity and background. Results are presented in our freely accessible corpus.
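The abstract describes a tool that detects hand and face movement to propose annotation boundaries for a human to confirm. As a minimal sketch of that general idea (motion-energy thresholding over video frames, not the authors' actual algorithm), one might segment candidate sign intervals like this; `motion_segments`, its parameters, and the energy values are illustrative assumptions:

```python
def motion_segments(energies, threshold, min_len=2):
    """Given per-frame motion energy values (e.g. summed absolute
    pixel differences between consecutive frames), return (start, end)
    frame-index pairs where energy stays at or above threshold --
    candidate annotation intervals for a human annotator to confirm."""
    segments = []
    start = None  # start frame of the segment currently being tracked
    for i, e in enumerate(energies):
        if e >= threshold:
            if start is None:
                start = i  # motion begins
        else:
            # motion ended; keep the segment only if it is long enough
            if start is not None and i - start >= min_len:
                segments.append((start, i))
            start = None
    # close a segment that runs to the end of the video
    if start is not None and len(energies) - start >= min_len:
        segments.append((start, len(energies)))
    return segments

# Example: two bursts of movement in a ten-frame clip
print(motion_segments([0, 0, 5, 7, 6, 0, 0, 8, 9, 0], threshold=4))
# → [(2, 5), (7, 9)]
```

A semi-automatic workflow would surface these intervals in the annotation UI as suggestions, letting the annotator adjust or reject boundaries rather than marking every sign from scratch.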

Original language: English
Title of host publication: Application of Information and Communication Technologies, AICT 2016 - Conference Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781509018406
DOI: 10.1109/ICAICT.2016.7991803
Publication status: Published - Jul 25 2017
Event: 10th IEEE International Conference on Application of Information and Communication Technologies, AICT 2016 - Baku, Azerbaijan
Duration: Oct 12 2016 - Oct 14 2016

Conference

Conference: 10th IEEE International Conference on Application of Information and Communication Technologies, AICT 2016
Country: Azerbaijan
City: Baku
Period: 10/12/16 - 10/14/16

Keywords

  • annotation tool
  • hand detection
  • hand tracking
  • sign language recognition

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Computer Networks and Communications
  • Information Systems
  • Modelling and Simulation

Cite this

Aitpayev, K., Islam, S., & Imashev, A. (2017). Semi-automatic annotation tool for sign languages. In Application of Information and Communication Technologies, AICT 2016 - Conference Proceedings [7991803]. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICAICT.2016.7991803
