Sign language recognition
From Wikipedia, the free encyclopedia
Sign language recognition (commonly abbreviated SLR) is a computational task that involves recognizing actions from sign languages.[1] It is an important problem, particularly for digital accessibility, as it helps bridge the communication gap faced by individuals with hearing impairments.
Solving the problem typically requires annotated color (RGB) video data; additional modalities such as depth maps and sensor data are also useful.
Isolated sign language recognition
Isolated sign language recognition (ISLR), also known as word-level SLR, is the task of recognizing individual signs or tokens, known as glosses, from a given segment of a signing video clip. It is commonly treated as a classification problem for isolated videos, but real-time applications also require handling tasks such as video segmentation.
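As a rough illustration of ISLR as a classification problem, the sketch below classifies a clip by averaging its per-frame feature vectors and picking the nearest class centroid. All names, the synthetic features, and the nearest-centroid approach are assumptions for illustration; real systems typically use deep networks over video frames or pose keypoints.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_embedding(frames):
    """Collapse a (num_frames, feature_dim) clip into one clip-level vector."""
    return frames.mean(axis=0)

# Hypothetical training centroids for two glosses, built from synthetic
# per-frame features (stand-ins for CNN or pose-estimator outputs).
train = {
    "HELLO": clip_embedding(rng.normal(0.0, 0.1, size=(20, 8))),
    "THANKS": clip_embedding(rng.normal(1.0, 0.1, size=(20, 8))),
}

def classify(frames):
    """Return the gloss whose centroid is closest to the clip embedding."""
    emb = clip_embedding(frames)
    return min(train, key=lambda g: np.linalg.norm(emb - train[g]))

# A query clip whose features resemble the "THANKS" training data.
query = rng.normal(1.0, 0.1, size=(15, 8))
print(classify(query))  # -> THANKS
```

The clip-level averaging step is one simple way to handle the variable number of frames per isolated video; segmentation for real-time use is a separate problem.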
Continuous sign language recognition
Continuous sign language recognition (CSLR), also referred to as sign language transcription, involves predicting all signs (or glosses) from a given sequence of sign language in a video. This task is more suitable for real-world transcription and is sometimes considered an extension of ISLR, depending on the approach used.
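Many CSLR systems produce a frame-level label sequence that is then collapsed into a gloss sequence, for example with CTC-style decoding. The sketch below shows the greedy-decoding step only (collapse repeated labels, drop blanks); the frame labels here are hand-written stand-ins for a model's per-frame predictions.

```python
def ctc_greedy_decode(frame_labels, blank="-"):
    """Collapse consecutive repeats and remove blank symbols,
    turning per-frame predictions into a gloss sequence."""
    glosses = []
    prev = None
    for label in frame_labels:
        if label != prev and label != blank:
            glosses.append(label)
        prev = label
    return glosses

# Illustrative per-frame predictions for a short signing video.
frames = ["-", "HELLO", "HELLO", "-", "YOU", "YOU", "NAME", "-", "WHAT"]
print(ctc_greedy_decode(frames))  # -> ['HELLO', 'YOU', 'NAME', 'WHAT']
```

The blank symbol lets the decoder separate genuinely repeated glosses from one gloss held across many frames.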
Sign language translation
Sign language translation (SLT) refers to the task of translating a sequence of signs (or glosses) into a corresponding spoken language. It is generally modeled as an extension of the CSLR problem.
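To make the gloss-to-spoken-language gap concrete: gloss sequences generally do not follow the word order or morphology of the target spoken language, so SLT is a translation problem rather than a transcription one. The toy lookup below is purely illustrative (the phrase table and function names are invented); real SLT systems use neural sequence-to-sequence models.

```python
# Hypothetical gloss-sequence-to-English phrase table, for illustration only.
# Note the reordering and added function words in the spoken-language output.
PHRASES = {
    ("YESTERDAY", "STORE", "I", "GO"): "I went to the store yesterday.",
    ("YOU", "NAME", "WHAT"): "What is your name?",
}

def translate(glosses):
    """Map a recognized gloss sequence to a spoken-language sentence."""
    return PHRASES.get(tuple(glosses), "<unknown>")

print(translate(["YOU", "NAME", "WHAT"]))  # -> What is your name?
```

In a pipeline formulation, the gloss sequence fed to this step would come from a CSLR front end, which is why SLT is often modeled as an extension of CSLR.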
References