Using Computer Vision to Analyze Non-manual Marking of Questions in KRSL

Anna Kuznetsova, Alfarabi Imashev, Medet Mukushev, Anara Sandygulova, Vadim Kimmelman

Research output: Contribution to conference (paper, peer-reviewed)

8 Citations (Scopus)

Abstract

This paper presents a study that compares non-manual markers of polar and wh-questions to statements in Kazakh-Russian Sign Language (KRSL) in a dataset collected for NLP tasks. The primary focus of the study is to demonstrate the utility of computer vision solutions for the linguistic analysis of non-manuals in sign languages, although additional corrections are required to account for biases in the output. To this end, we analyzed recordings of 10 triplets of sentences produced by 9 native signers using both manual annotation and computer vision solutions (such as OpenFace). We utilize and improve the computer vision solution, and briefly describe the results of the linguistic analysis.
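As a rough illustration of the kind of pipeline the abstract describes, the sketch below loads OpenFace 2.x per-frame output and compares eyebrow-related Action Unit intensities between a statement and the two question types. The file names, the confidence threshold, and the choice of AU01/AU02 as eyebrow markers are illustrative assumptions, not the paper's exact procedure or its bias-correction step.

```python
import pandas as pd

def mean_brow_raise(csv_path: str) -> float:
    """Average intensity of inner + outer brow raiser (AU01_r, AU02_r)
    over frames where OpenFace reports a successful, confident detection."""
    df = pd.read_csv(csv_path)
    # OpenFace pads its CSV headers with spaces, so strip them first.
    df.columns = [c.strip() for c in df.columns]
    ok = df[(df["success"] == 1) & (df["confidence"] > 0.8)]
    return (ok["AU01_r"] + ok["AU02_r"]).mean()

# Hypothetical OpenFace outputs for one sentence triplet by one signer.
for label, path in [("statement", "s01_statement.csv"),
                    ("polar question", "s01_polar.csv"),
                    ("wh-question", "s01_wh.csv")]:
    print(label, round(mean_brow_raise(path), 2))
```

Comparing such per-sentence summaries across the 10 triplets and 9 signers would then feed the statistical comparison of questions versus statements.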

Original language: English
Pages: 49-59
Number of pages: 11
Publication status: Published - 2021
Event: 1st International Workshop on Automatic Translation for Signed and Spoken Languages, AT4SSL 2021 - Virtual, Online, United States
Duration: Aug 16, 2021 - Aug 20, 2021

Conference

Conference: 1st International Workshop on Automatic Translation for Signed and Spoken Languages, AT4SSL 2021
Country/Territory: United States
City: Virtual, Online
Period: 8/16/21 - 8/20/21

ASJC Scopus subject areas

  • Language and Linguistics
  • Artificial Intelligence
  • Software
  • Linguistics and Language
