
Movie magic could be used to translate for the deaf

Last Updated on 12 July 2023


Matt Huenerfauth (right), director of the Linguistic and Assistive Technologies Laboratory at the Rochester Institute of Technology, records video and motion-capture data from someone performing American Sign Language (ASL). His laboratory is developing software to animate ASL avatars.

The technology behind the cooking rats in “Ratatouille” and the dancing penguins in “Happy Feet” could help bridge stubborn academic gaps between deaf and hearing students. Researchers are using computer-animation techniques, such as motion capture, to build lifelike computer avatars that can reliably and naturally translate written and spoken words into sign language, whether American Sign Language (ASL) or another country’s sign language.

English and ASL are fundamentally different languages, said computer scientist Matthew Huenerfauth, director of the Linguistic and Assistive Technologies Laboratory at the Rochester Institute of Technology, and translation between them “is just as hard as translating English to Chinese.” Programming avatars to perform that translation is harder still. Not only is ASL grammar different from English grammar, but sign language also depends heavily on facial expressions, gaze changes, body positions and interactions with the physical space around the signer to make and modify meaning. It’s translation in three dimensions.

About three-quarters of deaf and hard-of-hearing students in America are mainstreamed, learning alongside hearing students in schools and classes where sign-language interpreters are often in short supply. On average, deaf students graduate high school reading English, a second language to them, at a fourth-grade level, according to a report out of Gallaudet University, the premier university for deaf students. That reading deficit slows their learning in every other subject. It also limits the usefulness of closed captioning for multimedia course material.


“For kids, captioning is almost a waste of time,” said Harley Hamilton, a computer scientist at Georgia Tech affiliated with the Center for Accessible Technology in Sign (CATS), a joint project of the university and the Atlanta Area School for the Deaf. At the same time, he said, existing sign-language avatars aren’t ready for prime time, citing studies that show deaf students understand between 25 and 60 percent of what these avatars sign.

Among the best-performing sign-language avatars is Paula, named for DePaul University, where she’s being developed for myriad potential uses, from doctors’ offices to airport security checkpoints to schools. A team of animators, computer scientists and sign-language experts at DePaul builds Paula’s skills one linguistic challenge at a time. Take “role shifting,” for instance: in a story with multiple characters, human signers indicate who is speaking by turning their bodies to the side in a fluid, subtle sequence that starts with the eyes, followed by the head, neck and torso. The researchers develop mathematical models of how bodies naturally make these moves, and they use those models to automate critical parts of Paula’s signing, a process called keyframe animation.
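To make that idea concrete, here is a minimal sketch of staggered keyframe timing: each body part eases toward the same turn, but the eyes start first and the torso trails behind. The joint names, delays, durations and easing curve are illustrative assumptions, not anything from the Paula codebase.

```python
# A minimal sketch of keyframe interpolation for a "role shift".
# All joint names, delays, durations, and angles below are
# illustrative assumptions, not values from the Paula project.

ONSET_DELAY = {"eyes": 0.00, "head": 0.08, "neck": 0.14, "torso": 0.22}  # seconds
DURATION = {"eyes": 0.15, "head": 0.25, "neck": 0.30, "torso": 0.45}     # seconds

def ease_in_out(u):
    """Smoothstep easing: zero velocity at both keyframes, like a natural turn."""
    u = min(max(u, 0.0), 1.0)
    return u * u * (3.0 - 2.0 * u)

def joint_angle(part, t, start_deg, target_deg):
    """Interpolated rotation of one joint at time t, between two keyframes."""
    u = (t - ONSET_DELAY[part]) / DURATION[part]
    return start_deg + (target_deg - start_deg) * ease_in_out(u)

# Sample the pose at 60 frames per second over 0.7 seconds: the eyes
# finish their turn while the torso is still getting started.
for frame in range(42):
    t = frame / 60.0
    pose = {part: joint_angle(part, t, 0.0, 30.0) for part in ONSET_DELAY}
```

A production system blends many more channels at once, but the staggered onsets are what give the turn its natural, eyes-first character.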

“You would think it would be easier, with all the amazing animation in movies,” said Rosalee Wolfe, a lead researcher on the Paula project and professor of computer graphics and human-computer interaction. “But once a movie is made, it’s frozen in time. The avatar must respond to the immediate situation. You can’t just make an animated phrase book. You need a much deeper understanding of the language, grammar, and human kinesiology.”

Paula’s on-screen body is built from thousands of tiny surfaces known as “polygons,” each of which can be fine-tuned with mathematical modeling: there are more than 17,000 polygons in her eyes alone, more than 8,000 controlling her mouth, and a mere 4,000 for each hand. The human body is also never completely still, so the researchers need to mix in enough subtle random movement to keep Paula “alive,” without making her seem jittery or shaky.
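For a rough sense of how such “keep-alive” motion might be generated, the sketch below layers a few slow, low-amplitude sine waves onto a joint angle so the avatar drifts gently instead of freezing or twitching. The frequencies and amplitudes are illustrative guesses, not values from the Paula project.

```python
import math
import random

random.seed(7)  # reproducible example

def make_idle_offset(amplitude_deg=0.4, n_waves=3):
    """Return f(t): a small, smooth angular offset (degrees) at time t (seconds)."""
    # Slow oscillations with mismatched frequencies never visibly repeat.
    freqs = [random.uniform(0.1, 0.5) for _ in range(n_waves)]        # Hz
    phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n_waves)]
    weights = [random.uniform(0.5, 1.0) for _ in range(n_waves)]
    total = sum(weights)
    weights = [w / total for w in weights]

    def offset(t):
        return amplitude_deg * sum(
            w * math.sin(2.0 * math.pi * f * t + p)
            for w, f, p in zip(weights, freqs, phases)
        )

    return offset

# One independent generator per joint, added on top of the signed pose.
head_idle = make_idle_offset()
for frame in range(120):              # two seconds at 60 frames per second
    t = frame / 60.0
    head_angle = 0.0 + head_idle(t)   # base pose plus breathing-scale drift
```

Because the waves are smooth and well under a degree in amplitude, the result reads as breathing rather than jitter.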
