Reimagining Sign Language Technologies: Analyzing Translation Work of Chinese Deaf Online Content Creators


While sign language translation systems promise to enhance deaf people’s access to information and communication, they have been met with strong skepticism from deaf communities due to risks of misrepresenting and oversimplifying the richness of signed communication in technologies. This article provides empirical evidence of the complexity of translation work involved in deaf communication through interviews with 13 deaf Chinese content creators who actively produce and share sign language content on video sharing platforms with both deaf and hearing audiences. By studying this unique group of content creators, our findings highlight the nuances of sign language translation, showing how deaf creators create content with multilingualism and multiculturalism in mind, support meaning making across languages and cultures, and navigate politics involved in their translation work. Grounded in these deaf-led translation practices, we draw on the sociolinguistic concept of (trans)languaging to re-conceptualize and reimagine the design of sign language translation systems.


💡 Research Summary

The paper investigates the intricate translation work performed by Chinese deaf online content creators, offering empirical insight into why sign‑language technologies have been met with skepticism by deaf communities. By conducting semi‑structured interviews with 13 active deaf creators who publish videos for both deaf and hearing audiences, the authors reveal that translation in this context is far more than a simple conversion between sign language and spoken/written language. Instead, creators engage in a multilayered practice that simultaneously navigates multilingualism, multiculturalism, and political considerations.

In the introduction, the authors contextualize recent AI developments such as Google’s SignGemma, noting that despite technical progress, deaf users remain wary because many systems oversimplify sign language, ignore non‑manual markers, and risk cultural appropriation. A brief historical overview highlights the long‑standing tension between deaf communities and technology developers, from early gestural interfaces in the 1980s to modern multimodal LLM‑driven approaches.

The background section explains that deaf communication is inherently multimodal: it blends visual‑spatial grammar, facial expressions, body movements, and a variety of auxiliary tools (captions, lip‑reading, etc.). In China, the situation is especially complex because there is no single standardized national sign language; regional variants coexist, making any “one‑size‑fits‑all” translation model untenable.

Related work surveys sign‑language recognition, generation, and translation research, emphasizing persistent challenges such as limited, low‑quality datasets, lack of standardized annotation schemes, and the dominance of gloss‑based representations that strip away meaning. The authors also summarize critiques from deaf scholars who argue that many projects treat deaf people as passive beneficiaries rather than co‑designers, often using human interpreters as the benchmark for AI performance.

Methodologically, the study selects 13 creators based on activity level, audience diversity, and willingness to discuss their workflow. Interviews probe (1) motivations and target audiences, (2) the mix of sign, spoken language, subtitles, images, and other visual cues used in videos, (3) concrete translation steps and challenges, and (4) political or cultural sensitivities. Thematic coding yields three overarching dimensions of their practice:

  1. Multilingualism – Creators blend Chinese (Mandarin), English, regional dialects, and occasionally other languages within a single video. They synchronize sign, voice‑over, and on‑screen text so that both deaf and hearing viewers can follow.

  2. Multiculturalism – Content spans news, scientific explanations, cultural commentary, and entertainment. Creators deliberately select culturally resonant metaphors, icons, and visual examples to bridge gaps between deaf cultural norms and mainstream (often hearing‑centric) knowledge.

  3. Political navigation – Participants are acutely aware that their videos can be interpreted as “translated for the hearing,” a framing that may reinforce linguistic hierarchies. They therefore implement self‑review processes, annotate potential ambiguities, and negotiate the terms under which their data might be used for AI training, insisting on community consent and control.

These findings lead to three key contributions. First, the work expands the HCI literature on sign‑language translation by foregrounding the lived translation labor of Chinese deaf creators—a group previously under‑studied. Second, it reconceptualizes translation through the lens of (trans)languaging, showing that sign, speech, text, and visual symbols co‑act as a single communicative ecosystem rather than discrete source‑target pairs. Third, the authors propose concrete design recommendations for future sign‑language technologies:

  • Multimodal, multitask models that ingest video, audio, and textual cues simultaneously, preserving non‑manual markers and spatial grammar (a minimal sketch of this idea follows the list).
  • Rich output representations that go beyond plain glosses or subtitles, delivering combined visual‑linguistic packets (e.g., animated avatars with facial expressions, synchronized captions, and contextual icons).
  • Participatory data pipelines where deaf creators co‑label, validate, and curate datasets, ensuring cultural and political nuances are respected and reducing bias (a consent‑aware record sketch also follows the list).
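
To make the first recommendation more concrete, the following is a minimal, hypothetical PyTorch sketch (our illustration, not the authors' system or any released model): video, audio, and caption streams are projected into a shared space, fused by self‑attention, and decoded by two jointly trained heads, so that subtitles and non‑manual markers are predicted together rather than collapsed into glosses alone. All module names, feature dimensions, and label sets are assumptions.

```python
# A minimal, hypothetical sketch of a multimodal, multitask model
# (illustrative only; module names, feature dimensions, and label sets
# are assumptions, not the authors' system or any released model).
import torch
import torch.nn as nn


class MultimodalSignModel(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=2,
                 vocab_size=8000, n_nonmanual_labels=12):
        super().__init__()
        # One lightweight projection per modality: video features
        # (e.g. pose/appearance), audio features (e.g. spectrogram frames),
        # and caption token ids.
        self.video_proj = nn.Linear(1024, d_model)   # assumed video feature size
        self.audio_proj = nn.Linear(128, d_model)    # assumed audio feature size
        self.text_embed = nn.Embedding(vocab_size, d_model)

        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)

        # Two task heads trained jointly, so non-manual markers are kept
        # as an explicit output instead of being lost to a gloss-only target.
        self.subtitle_head = nn.Linear(d_model, vocab_size)
        self.nonmanual_head = nn.Linear(d_model, n_nonmanual_labels)

    def forward(self, video_feats, audio_feats, text_ids):
        # Concatenate all modality sequences along the time axis and let
        # self-attention fuse them into one shared representation.
        tokens = torch.cat([
            self.video_proj(video_feats),
            self.audio_proj(audio_feats),
            self.text_embed(text_ids),
        ], dim=1)
        fused = self.fusion(tokens)
        return {
            "subtitle_logits": self.subtitle_head(fused),
            "nonmanual_logits": self.nonmanual_head(fused),
        }


# Toy forward pass: a batch of 2 clips with short sequences.
model = MultimodalSignModel()
out = model(
    video_feats=torch.randn(2, 30, 1024),      # 30 video frames
    audio_feats=torch.randn(2, 50, 128),       # 50 audio frames
    text_ids=torch.randint(0, 8000, (2, 20)),  # 20 caption tokens
)
print(out["subtitle_logits"].shape, out["nonmanual_logits"].shape)
```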
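
The third recommendation can similarly be grounded in how individual clips are represented before they ever reach a model. Below is a hypothetical, simplified record schema (again our illustration, not an artifact from the paper) in which every clip carries the creator's consent scope, its regional sign variant, and a deaf reviewer's sign‑off, so that a clip is only usable for a purpose the creator has explicitly agreed to.

```python
# A hypothetical record schema for a participatory data pipeline
# (our illustration, not an artifact described in the paper): each clip
# carries the creator's consent scope, its regional sign variant, and a
# deaf reviewer's sign-off before it may be used for a given purpose.
from dataclasses import dataclass, field


@dataclass
class SignClipRecord:
    clip_id: str
    creator_id: str
    region_variant: str                      # e.g. a regional Chinese Sign Language variety
    consent_scope: set = field(default_factory=set)       # e.g. {"research", "model_training"}
    nonmanual_annotations: dict = field(default_factory=dict)
    reviewed_by_deaf_annotator: bool = False

    def usable_for(self, purpose: str) -> bool:
        # A clip is usable only if the creator consented to this purpose
        # and a deaf annotator has validated its annotations.
        return purpose in self.consent_scope and self.reviewed_by_deaf_annotator


record = SignClipRecord(
    clip_id="clip_001",
    creator_id="creator_07",
    region_variant="Shanghai variant",       # placeholder label
    consent_scope={"research"},
    reviewed_by_deaf_annotator=True,
)
print(record.usable_for("model_training"))   # False: no consent given for training
```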

In conclusion, the paper argues that sign‑language translation systems must evolve from “sign ↔ text” converters into platforms that support the full spectrum of multilingual, multimodal, and sociopolitical meaning‑making practiced by deaf communities. Achieving this requires sustained collaboration with deaf stakeholders throughout data collection, model development, and evaluation, thereby safeguarding linguistic rights while harnessing the potential of AI to amplify, rather than diminish, the richness of sign language.

