Sign languages are highly structured languages with linguistic rules distinct from those of their spoken counterparts, and they lack a standard written form. However, the vast majority of information available online is provided in spoken or written language, which excludes sign languages. Many communication barriers exist for sign language users, and signing avatars (computer animations of humans) have the potential to break down these barriers for Deaf people who prefer sign language or have lower literacy in written language.
AVATAR proposes creating a signing 3D avatar able to synthesize Portuguese Sign Language from European Portuguese text. We will leverage an annotated (phonetic-phonological, morphological, and syntactic) corpus consisting of video-recorded interactions between deaf signers. This project is supported by Fundação para a Ciência e a Tecnologia (FCT), project CORPUS LINGUÍSTICO E AVATAR (PTDC/LLT-LIN/29887/2017).
Date: Apr 9, 2020
Authors: Hugo Nicolau, Luisa Coheur, Carolina Neves, Matilde Gonçalves, Pedro Cabral
Keywords: Portuguese Sign Language, animation, accessibility
Current signing avatars are often described as unnatural because they cannot accurately reproduce all the subtleties of the synchronized body behaviors of a human signer. In this paper, we investigate a new dynamic approach for transitions between signs and the effect of mouthing behaviors in Portuguese Sign Language. Although native signers preferred animations with dynamic transitions, we did not find significant differences in comprehension or perceived naturalness scores. On the other hand, we show that including mouthing behaviors improved both comprehension and perceived naturalness for novice Portuguese Sign Language learners. These results have implications for computational linguistics, human-computer interaction, and the synthetic animation of signing avatars.
Portuguese Sign Language, like the Portuguese language, evolved naturally, acquiring grammatical characteristics distinct from those of Portuguese. Developing a translator between the two therefore does not consist merely of mapping each word to a sign (signed Portuguese), but of guaranteeing that the resulting signs satisfy the grammar of Portuguese Sign Language and that the translations are semantically correct. Previous work relies exclusively on manual translation rules and is very limited in the range of grammatical phenomena covered, producing little more than signed Portuguese. In this article, we present the first Portuguese-to-Portuguese Sign Language translation system, PE2LGP, which, in addition to manual rules, draws on translation rules built automatically from a reference corpus. Given a Portuguese sentence, the system returns a sequence of glosses with markers that identify facial expressions, fingerspelled words, and other phenomena. An automatic evaluation and a manual evaluation are presented, with results indicating improvements in translation quality for short, simple sentences compared with the baseline system (signed Portuguese). This is also the first work to handle the grammatical facial expressions that mark interrogative and negative sentences.
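To make the gloss-with-markers output concrete, here is a minimal toy sketch of what such a rule-based mapping could look like. It is not PE2LGP's actual implementation: the lexicon, the `FS:` fingerspelling prefix, and the `[neg … neg]` facial-expression marker notation are all illustrative assumptions.

```python
# Toy sketch of a gloss-sequence output with markers (illustrative only;
# the lexicon, FS: prefix, and [neg ... neg] notation are assumptions,
# not PE2LGP's real rules or formats).
RULES = {  # hypothetical lemma-to-gloss lexicon
    "eu": "EU", "não": "NÃO", "gostar": "GOSTAR", "café": "CAFÉ",
}

def translate(tokens):
    """Map Portuguese tokens to an LGP-style gloss sequence with markers."""
    glosses = []
    negated = "não" in tokens
    for tok in tokens:
        if tok == "não":
            continue  # negation surfaces as a marker span plus a final NÃO gloss
        # unknown words fall back to fingerspelling (FS: marker)
        glosses.append(RULES.get(tok, f"FS:{tok.upper()}"))
    if negated:
        # the [neg ... neg] span marks the grammatical facial expression
        glosses = ["[neg"] + glosses + ["NÃO", "neg]"]
    return " ".join(glosses)

print(translate(["eu", "não", "gostar", "café"]))
# → [neg EU GOSTAR CAFÉ NÃO neg]
```

The key point the sketch illustrates is that the output is not a flat word-for-sign string: markers delimit spans where non-manual behaviors (here, the negative facial expression) must co-occur with the manual signs.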
Sign languages are visual languages and the main means of communication used by Deaf people. However, most of the information available online is presented in written form and is therefore not easily accessible to the Deaf community. Avatars that can animate sign languages have attracted increasing interest in this area due to their flexibility in the generation and editing process. Synthetic animation of conversational agents can be achieved through notation systems. HamNoSys is one such system, describing movements of the body through symbols. Its XML-compliant counterpart, SiGML, is a machine-readable encoding of HamNoSys able to animate avatars. Nevertheless, no freely available open-source library currently performs the conversion from HamNoSys to SiGML. Our goal is to develop an open-access tool that performs this conversion independently of other platforms. This system represents a crucial intermediate step in the larger pipeline of animating signing avatars. Two case studies are described to illustrate different applications of our tool.
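The core of such a conversion is a symbol-to-element mapping: each HamNoSys symbol becomes an empty XML element inside a SiGML sign description. The sketch below shows this idea under stated assumptions: the `SYMBOL_TO_TAG` table is a tiny hypothetical subset (a real converter must cover the full HamNoSys symbol inventory), and only the manual channel is handled.

```python
# Minimal sketch of a HamNoSys-to-SiGML conversion step (illustrative only;
# SYMBOL_TO_TAG is a hypothetical subset of the real symbol inventory).
import xml.etree.ElementTree as ET

SYMBOL_TO_TAG = {            # assumed mapping for three example symbols
    "\ue001": "hamfist",
    "\ue002": "hamflathand",
    "\ue00c": "hamfinger2",
}

def hamnosys_to_sigml(gloss: str, symbols: str) -> str:
    """Wrap a HamNoSys symbol string as a SiGML <hns_sign> element."""
    sigml = ET.Element("sigml")
    sign = ET.SubElement(sigml, "hns_sign", gloss=gloss)
    manual = ET.SubElement(sign, "hamnosys_manual")
    for ch in symbols:
        tag = SYMBOL_TO_TAG.get(ch)
        if tag is None:
            raise ValueError(f"unmapped HamNoSys symbol: {ch!r}")
        ET.SubElement(manual, tag)  # one empty element per symbol
    return ET.tostring(sigml, encoding="unicode")

print(hamnosys_to_sigml("HOUSE", "\ue001\ue002"))
```

Because SiGML is XML, building the output with an XML library (rather than string concatenation) keeps the result well-formed and machine-readable for the downstream avatar player.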
Software for producing sign languages is much less common than for spoken languages. Such software usually relies on 3D humanoid avatars to produce signs, which inevitably necessitates animation. One barrier to the use of popular animation tools is their complexity and steep learning curve, which can be hard to master for inexperienced users. Here, we present PE2LGP, an authoring system that features a 3D avatar that signs Portuguese Sign Language. Our Animator is designed specifically to craft sign language animations using a keyframe method, and is meant to be easy to use and learn for users without animation skills. We conducted a preliminary evaluation of the Animator in which we animated seven Portuguese Sign Language sentences and asked four sign language users to evaluate their quality. This evaluation revealed that the system, in spite of its simplicity, is indeed capable of producing comprehensible messages.
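The keyframe method mentioned above can be reduced to a simple principle: the animator specifies a pose parameter (e.g., a joint angle) only at a few key instants, and the system fills in the frames between them. A minimal sketch of that interpolation step, assuming linear blending between keys (not the project's actual animation code):

```python
# Minimal sketch of the keyframe idea (assumed linear interpolation;
# not the Animator's actual implementation).
from bisect import bisect_right

def interpolate(keyframes, t):
    """keyframes: sorted list of (time, value) pairs; returns value at time t."""
    times = [k[0] for k in keyframes]
    if t <= times[0]:            # clamp before the first key
        return keyframes[0][1]
    if t >= times[-1]:           # clamp after the last key
        return keyframes[-1][1]
    i = bisect_right(times, t)   # first key strictly after t
    (t0, v0), (t1, v1) = keyframes[i - 1], keyframes[i]
    alpha = (t - t0) / (t1 - t0)
    return v0 + alpha * (v1 - v0)

# Hypothetical elbow angle (degrees) keyed at three instants of a sign:
elbow = [(0.0, 10.0), (0.5, 90.0), (1.0, 30.0)]
print(interpolate(elbow, 0.25))  # → 50.0, halfway between the first two keys
```

This is why keyframing suits users without animation skills: they pose the avatar at a handful of moments, and every in-between frame is generated automatically.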