Sign-Language Synthesis for Mobile Environments

Seiji Igi
Communications Research Laboratory
Keihanna Human Info-Communication Research Center
619-0289 Kyoto


Keywords: Sign Language, Computer Graphics, Animation, Mobile Environment, PDA


This paper describes the synthesis of sign-language animation for mobile environments. Japanese Sign Language is synthesized by either a motion-capture method or a motion-primitive method. We use the motion-capture method to synthesize realistic sign-language animation, and the motion-primitive method to synthesize newly added signs. An editing system can add facial expressions, mouth shapes, and gestures to the sign-language CG animation; it provides 19 facial-expression textures that appear frequently in sign language. The sign-language animation is displayed on a PDA screen to deliver information to users in mobile environments. As an application, we selected a museum scenario in which exhibit information is transmitted to a moving hearing-impaired visitor: the system accompanies the visitor and presents sign-language animation describing the exhibit the visitor is attending to. We evaluated the sign-language animation on PDAs with seven hearing-impaired subjects who use sign language in their daily lives. The average word-recognition rates for the motion-capture and motion-primitive methods were 91.6% and 63.8%, respectively. In the sentence evaluation, 44.2% of the sentences were answered correctly and 71.4% almost correctly. Most subjects said that the facial expressions and mouth shapes were difficult to recognize because they were too small.