July 27, 2022, 1:11 a.m. | Anirudh Raju, Milind Rao, Gautam Tiwari, Pranav Dheram, Bryan Anderson, Zhe Zhang, Chul Lee, Bach Bui, Ariya Rastrow

cs.CL updates on arXiv.org

Spoken language understanding (SLU) systems extract both text transcripts and
semantics associated with intents and slots from input speech utterances. SLU
systems usually consist of (1) an automatic speech recognition (ASR) module,
(2) an interface module that exposes relevant outputs from ASR, and (3) a
natural language understanding (NLU) module. Interfaces in SLU systems carry
either text transcriptions or richer information, such as neural embeddings,
from ASR to NLU. In this paper, we study how interfaces affect joint training
for spoken …
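The three-module pipeline the abstract describes (ASR, interface, NLU) can be illustrated with a minimal sketch. The module names, dimensions, and the choice of an embedding-based interface below are assumptions for illustration, not the paper's actual code; the point is that a differentiable interface lets the NLU loss backpropagate into the ASR encoder during joint training.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of an SLU pipeline: ASR encoder -> interface -> NLU head.
# All names and shapes are illustrative assumptions.

class ASRModule(nn.Module):
    """Toy acoustic encoder producing frame-level embeddings and a stub transcript."""
    def __init__(self, feat_dim=80, hidden_dim=256):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)

    def forward(self, audio_feats):
        embeddings, _ = self.encoder(audio_feats)   # (batch, frames, hidden)
        transcript_ids = embeddings.argmax(-1)      # stand-in for an ASR decoder
        return transcript_ids, embeddings


class EmbeddingInterface(nn.Module):
    """Interface module: exposes ASR outputs to NLU.
    Here it passes pooled neural embeddings; a text interface would pass
    transcript tokens instead."""
    def forward(self, transcript_ids, embeddings):
        return embeddings.mean(dim=1)               # (batch, hidden) utterance vector


class NLUModule(nn.Module):
    """Toy NLU head predicting an intent from the interface representation."""
    def __init__(self, hidden_dim=256, num_intents=10):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, num_intents)

    def forward(self, interface_repr):
        return self.classifier(interface_repr)      # intent logits


# Joint training: the NLU loss flows back through the differentiable
# (embedding) interface into the ASR encoder.
asr, interface, nlu = ASRModule(), EmbeddingInterface(), NLUModule()
audio = torch.randn(4, 100, 80)                     # (batch, frames, features)
intent_labels = torch.randint(0, 10, (4,))

transcript_ids, embeddings = asr(audio)
logits = nlu(interface(transcript_ids, embeddings))
loss = nn.functional.cross_entropy(logits, intent_labels)
loss.backward()                                     # end-to-end gradients
```

Swapping `EmbeddingInterface` for a text-only interface would break this gradient path, which is the kind of trade-off the paper's study of interfaces addresses.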

arxiv interfaces language spoken language understanding training understanding
