Faculty Development Competitive Research Grant Program 2019-2021
Hearing-impaired communities around the world communicate via sign languages, which use gestures to express meaning and intent, including hand shapes, arm and body movements, facial expressions, and lip patterns. As with spoken languages, each country or region has its own sign language with its own grammar and rules, so a few hundred sign languages exist today. While automatic speech recognition has progressed to being commercially available, automatic Sign Language Recognition (SLR) is still in its infancy.

This proposal describes the Kazakh Sign Language Automatic Recognition System (K-SLARS) project, which aims to develop a sign language interpreting system tailored to Kazakhstan by leveraging the latest advances in Computer Vision and Machine Learning (ML). To this end, we aim to develop the first corpus of the Kazakh Sign Language (KSL), which is necessary for training ML approaches. The KSL corpus is envisioned to be similar to corpora created elsewhere in the world. We have already collected the first dataset, available at http://kslc.kz; however, we are working to improve the search interface and expand the dataset.

As with any video dataset, manual annotation of sign language (both manual and non-manual features) is extremely time- and resource-consuming. We therefore also aim to create a semi-automatic tool that will automatically annotate manual and non-manual features of sign language, thus contributing to the challenging task of vision-based automatic SLR. Finally, we aim to develop a robust algorithm that will first be used during the annotation process and then applied to automatic SLR.
Effective start/end date: 1/31/19 → 12/31/21