Virtual Conversational Agent (avatar) for Sign Language

Accessibility & Assistive Technology, ASL-MT | Jul 16, 2017

Sign language (SL) was first acknowledged as a separate language only in the 1960s. Like spoken languages, sign languages evolved within different cultural backgrounds. Every country has its own sign language with various dialects, whose grammar follows different rules than the corresponding spoken language.

Although several websites provide video clips in which sign language interpreters translate the text, much Internet content remains inaccessible to the deaf community. To inform deaf people quickly in cases where no interpreter is on hand, researchers are working on a novel approach to providing content. Their idea: avatars. These animated characters could be used in the context of announcements at train stations, or on websites.

Virtual conversational agents (avatars) are a technology for displaying signed conversation without the need to display a video of a human signer. Instead, these systems use 3D animated models, which can be stored far more efficiently than video. The avatar can produce finger and hand movements, facial gestures for facial expressions (happiness, surprise, etc.), body movements, and co-signs, in which two different words or ideas are signed at the same time. The avatar can be programmed to communicate in a given sign language, for example American Sign Language (ASL) or French Sign Language (LSF). Advances in computer graphics mean that personal computers and smartphones can now produce this animation with much greater clarity than in the past, when transitions between signs were rough and the hands had to return to a central position between each sign.
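To illustrate why animation data is so much more compact than video, here is a minimal sketch of a sign stored as sparse keyframes of joint rotations that an animation engine would interpolate at playback time. All names and values are illustrative assumptions, not taken from any of the projects described here.

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    time_ms: int            # time offset within the sign
    joint_rotations: dict   # joint name -> (x, y, z) Euler angles in degrees

@dataclass
class Sign:
    gloss: str              # e.g. "HELLO" in gloss notation
    keyframes: list = field(default_factory=list)

    def duration_ms(self):
        return self.keyframes[-1].time_ms if self.keyframes else 0

# A one-second sign described by a handful of keyframes takes only a few
# hundred bytes, whereas one second of video is typically megabytes.
hello = Sign("HELLO", [
    Keyframe(0,    {"right_wrist": (0.0, 0.0, 0.0)}),
    Keyframe(500,  {"right_wrist": (0.0, 45.0, 10.0)}),
    Keyframe(1000, {"right_wrist": (0.0, 0.0, 0.0)}),
])
print(hello.duration_ms())  # 1000
```

The smooth transitions mentioned above come from interpolating between such keyframes, including between the last keyframe of one sign and the first of the next, so the hands no longer need to return to a central position.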

To capture the motions of deaf signers, researchers make use of affordable cameras and sensors of the kind typically used for computer games. A computational method transfers the movements of the entire body onto the avatar. In the long term, the researchers want to build a collection of short sign language sequences that deaf users can employ to interact on the web. An alternative method is to create the animations manually, using 3D modeling software or a dedicated editor.

#1 – The American Sign Language Avatar Project at DePaul University

The main goal of this project is to enable automatic translation of English into American Sign Language, the language of the Deaf in North America.

The project’s avatar, named “Paula” after DePaul University, can portray all linguistic parameters of ASL. Paula has earned high marks for clarity and naturalness from users fluent in ASL.

In addition to producing signed language, Paula has been used in several related projects, from Deaf education and interpreter training to tutoring caregivers of people who are both deaf and mentally challenged.

#2 – WebSign Project

WebSign is a project that aims to improve deaf people’s access to information and communication technology (ICT) and to provide tools that increase their autonomy, without requiring hearing people to acquire specific skills in order to communicate with them. The primary objective is to develop a Web-based interpreter of sign language (SL). This tool would enable people who do not know SL to communicate with deaf individuals, thereby helping to reduce the language barrier between deaf and hearing people. The secondary objective is to distribute the tool on a non-profit basis to educators, students, users, and researchers, and to issue a call for contributions to support the project, mainly in its exploitation phase, and to encourage its wide use by different communities.

WebSign is based on avatar technology (animation in a virtual world). The input to the system is text in natural language; the output is a real-time, online interpretation in sign language. This interpretation is constructed using a dictionary of words and signs.
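The dictionary-based approach described above can be sketched as follows: each known word maps to a sign identifier, and unknown words fall back to fingerspelling, letter by letter. The dictionary contents, function names, and the "FS:" fingerspelling convention are illustrative assumptions, not WebSign's actual implementation.

```python
# Hypothetical word-to-sign dictionary; a real system would hold
# thousands of entries linked to animation data.
SIGN_DICTIONARY = {
    "hello": "SIGN_HELLO",
    "thank": "SIGN_THANK",
    "you": "SIGN_YOU",
}

def text_to_sign_sequence(text):
    """Turn a natural-language sentence into a sequence of sign
    identifiers that an avatar engine would then animate."""
    signs = []
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in SIGN_DICTIONARY:
            signs.append(SIGN_DICTIONARY[word])
        else:
            # Fall back to fingerspelling the word letter by letter.
            signs.extend(f"FS:{letter.upper()}" for letter in word)
    return signs

print(text_to_sign_sequence("Hello, thank you!"))
# ['SIGN_HELLO', 'SIGN_THANK', 'SIGN_YOU']
```

A production interpreter would also reorder the output to follow the target sign language's grammar rather than the word order of the source text; this sketch shows only the dictionary lookup step.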


If your project is not featured in this article, feel free to contact me.

Links

http://www.hearingreview.com/2014/09/researchers-create-avatars-use-sign-language/

http://www.snow.idrc.ocad.ca/content/animated-signing-characters-signing-avatars

http://asl.cs.depaul.edu/project_info.html

http://www.latice.rnu.tn/websign

Bibliography

Wolfe, R., Efthimiou, E., Glauert, J., Hanke, T., McDonald, J., & Schnepp, J. (Eds.). Special issue: Recent advances in sign language translation and avatar technology. Springer International Publishing, 2016.


Achraf Othman

Dr. Achraf Othman is a senior research specialist in accessibility and assistive technology for people with disabilities, machine translation, and machine learning.