New Language Technologies for ASL | USC Viterbi
USC researchers achieved 91 percent accuracy in computer recognition of signs
While “talk to text” tools exist, there is no equivalent that automatically recognizes American Sign Language (ASL) and translates it into text. New research and language technologies developed by scholars affiliated with the USC School of Advanced Computing’s Thomas Lord Department of Computer Science could help future researchers build such translation tools.
The team’s innovations, outlined in a paper presented at the 2025 Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL) conference, center on a machine learning model that treats sign language data as a complex linguistic system rather than a mere translation of English. The team, led by Lee Kezar, then a doctoral candidate in computer science in Professor Jesse Thomason’s GLAMOR (Grounding Language in Actions, Multimodal Observations, and Robotics) Lab, introduces a new natural language processing model that incorporates the spatial and semantic richness of ASL, treating it as a primary language with its own syntax.
The first step in developing a means of ASL recognition demands an understanding of the language’s specific nuances, and of how natural signing can be broken down into phonological features, such as the ‘C handshape’ or ‘produced on the forearm,’ in a form a computer can process.
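To make that idea concrete, the sketch below shows one plausible way a recognition model could jointly predict a sign’s identity alongside phonological features like handshape and location. It is a minimal, illustrative assumption only: the class names, feature inventories, and pose encoder here are hypothetical and are not taken from the team’s paper or code.

```python
import torch
import torch.nn as nn

# Illustrative inventory sizes; the actual feature sets used in the research may differ.
NUM_HANDSHAPES = 50   # e.g., the 'C handshape' would be one class
NUM_LOCATIONS = 30    # e.g., 'produced on the forearm' would be one class
NUM_GLOSSES = 2000    # vocabulary of distinct signs

class MultiTaskSignRecognizer(nn.Module):
    """Hypothetical model that predicts a sign's gloss together with its
    phonological features, so the shared representation reflects ASL's
    internal structure rather than a direct sign-to-English mapping."""

    def __init__(self, pose_dim: int = 150, hidden_dim: int = 256):
        super().__init__()
        # Encode a sequence of pose keyframes (e.g., hand/body keypoints).
        self.encoder = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        # Separate heads: one for the sign itself, one per phonological feature.
        self.gloss_head = nn.Linear(hidden_dim, NUM_GLOSSES)
        self.handshape_head = nn.Linear(hidden_dim, NUM_HANDSHAPES)
        self.location_head = nn.Linear(hidden_dim, NUM_LOCATIONS)

    def forward(self, poses: torch.Tensor) -> dict[str, torch.Tensor]:
        # poses: (batch, time, pose_dim)
        _, hidden = self.encoder(poses)
        hidden = hidden.squeeze(0)  # (batch, hidden_dim)
        return {
            "gloss": self.gloss_head(hidden),
            "handshape": self.handshape_head(hidden),
            "location": self.location_head(hidden),
        }

# Training would typically sum a cross-entropy loss over all heads, so
# supervision on phonological features shapes the shared sign representation.
```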
