LLMs are Good Sign Language Translators
April 2, 2024, 7:48 p.m. | Jia Gong, Lin Geng Foo, Yixuan He, Hossein Rahmani, Jun Liu
cs.CV updates on arXiv.org arxiv.org
Abstract: Sign Language Translation (SLT) is a challenging task that aims to translate sign videos into spoken language. Inspired by the strong translation capabilities of large language models (LLMs) that are trained on extensive multilingual text corpora, we aim to harness off-the-shelf LLMs to handle SLT. In this paper, we regularize the sign videos to embody linguistic characteristics of spoken language, and propose a novel SignLLM framework to transform sign videos into a language-like representation for …
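The abstract is truncated, so SignLLM's actual mechanism is not shown here. As a purely hypothetical sketch of the general idea it describes (turning continuous sign-video features into a discrete, language-like token stream an off-the-shelf LLM could consume), one could vector-quantize per-frame embeddings against a learned codebook and render the indices as pseudo-words. All names, shapes, and the codebook below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def quantize_frames(frame_feats: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Toy vector quantization: map each frame feature to its
    nearest codebook entry, yielding one discrete id per frame."""
    # dists has shape (num_frames, num_codes)
    dists = np.linalg.norm(frame_feats[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def tokens_to_prompt(token_ids: np.ndarray) -> str:
    """Render discrete ids as pseudo-word tokens resembling text,
    so a text-only LLM could (in principle) be prompted with them."""
    return " ".join(f"<sign_{int(t)}>" for t in token_ids)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 hypothetical "sign word" prototypes
frames = rng.normal(size=(5, 4))     # 5 video-frame embeddings (stand-ins)
ids = quantize_frames(frames, codebook)
prompt = tokens_to_prompt(ids)
```

The resulting `prompt` is a short string of discrete symbols, which is the kind of "language-like representation" the abstract says SignLLM produces before handing translation off to the LLM.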