Kazakh-Russian Sign Language Processing: Data, Tools and Interaction

  • Sandygulova, Anara (PI)
  • Cerone, Antonio (Co-PI)
  • Kimmelman, Vadim (Other Faculty/Researcher)
  • Mukushev, Medet (Other Faculty/Researcher)
  • Imashev, Alfarabi (Other Faculty/Researcher)

Project: Monitored by Research Administration

Project Details

Grant Program

Faculty Development Competitive Research Grants Program 2022-2024

Project Description

Building on a previous grant-funded international interdisciplinary collaboration, this project will continue applying an interdisciplinary approach to advance Sign Language Processing (SLP). It brings together expertise in computer vision, sign language linguistics, and human-computer interaction to design, develop, and evaluate innovative solutions for all demographic groups that use sign language on a daily basis. Central to this project is the long-standing need to involve the Deaf community in advancing research on sign languages. This interdisciplinary, collaborative project therefore aims to continue our current work on SLP and to address the following challenges facing sign language research: a) the lack of public datasets appropriate for real-world use cases involving natural signing (continuous and spontaneous) performed by native signers (Bragg et al., 2019), and b) the lack of human-robot interaction solutions explicitly designed for and evaluated by deaf people. With this research grant, we propose to address the following objectives:
Objective 1: To continue annotating the KRSL:OnlineSchool dataset. So far, we have annotated 138 hours of the OnlineSchool video material; annotation will continue under this objective to enable supervised, semi-supervised, and unsupervised learning approaches.
Objective 2: To extend the functionality of our semi-automatic annotation tool for sign languages (SurdoBot). SurdoBot is a web-based semi-automatic annotation tool that currently provides two automatic functions: handshape labeling and labeling of signing regions in continuous video. With this proposal, we will add further functionality for labeling other manual and non-manual components, such as location, orientation, movement, and facial expressions.
Objective 3: To design, implement and evaluate human-computer interaction solutions for the Deaf.
Short title: Data, Tools and Interaction for Sign Languages
Acronym: K-RSL4P
Status: Active
Effective start/end date: 1/1/22 – 12/31/24

Keywords

  • sign language processing
  • human-computer interaction
