Oct. 26, 2022, 1:16 a.m. | Shining Liang, Linjun Shou, Jian Pei, Ming Gong, Wanli Zuo, Xianglin Zuo, Daxin Jiang

cs.CL updates on arXiv.org

Despite the great success of spoken language understanding (SLU) in
high-resource languages, it remains challenging in low-resource languages,
mainly due to the lack of labeled training data. The recent multilingual
code-switching approach achieves better alignment of model representations
across languages by constructing mixed-language contexts for zero-shot
cross-lingual SLU. However, current code-switching methods are limited to
implicit alignment and disregard the inherent semantic structure of SLU, i.e.,
the hierarchical inclusion of utterances, slots, and words. In this paper, we
propose …
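
The feed truncates the proposal itself, but the mixed-language context the abstract refers to is, in published code-switching approaches for cross-lingual SLU, typically built by substituting individual words of a source-language utterance with translations drawn from a bilingual lexicon. The sketch below is a minimal illustration of that idea, not the authors' method: the `code_switch` function, the toy dictionary entries, and the slot-tag format are hypothetical, and real pipelines would use large bilingual lexicons such as MUSE.

```python
import random

# Toy bilingual dictionaries; entries are purely illustrative.
BILINGUAL_DICT = {
    "de": {"play": "spielen", "some": "etwas", "music": "Musik"},
    "es": {"play": "reproducir", "some": "algo", "music": "música"},
}

def code_switch(tokens, slot_tags, ratio=0.5, langs=("de", "es"), seed=None):
    """Replace each token with a translation into a randomly chosen target
    language with probability `ratio`. Because substitution is strictly
    one word at a time, the BIO slot tags stay aligned with the tokens."""
    rng = random.Random(seed)
    switched = []
    for tok in tokens:
        if rng.random() < ratio:
            lang = rng.choice(langs)
            # Fall back to the original token when the lexicon has no entry.
            switched.append(BILINGUAL_DICT[lang].get(tok.lower(), tok))
        else:
            switched.append(tok)
    return switched, list(slot_tags)

# A code-switched view of one utterance; intent and slot labels are reused.
tokens = ["play", "some", "music"]
tags = ["O", "O", "B-music_item"]
print(code_switch(tokens, tags, ratio=1.0, seed=0))
```

Note how this construction ties languages together only implicitly, word by word: nothing in it represents the utterance-slot-word hierarchy, which is exactly the semantic structure the abstract says current code-switching methods disregard.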
