Feb. 15, 2024, 5:46 a.m. | Ruchao Fan, Natarajan Balaji Shanka, Abeer Alwan

cs.CL updates on arXiv.org

arXiv:2402.08898v1 Announce Type: cross
Abstract: Non-autoregressive automatic speech recognition (NASR) models have gained attention due to their parallelism and fast inference. Encoder-based NASR, e.g., connectionist temporal classification (CTC), can be initialized from a speech foundation model (SFM) but does not account for any dependencies among intermediate tokens. Encoder-decoder-based NASR, like the CTC alignment-based single-step non-autoregressive transformer (CASS-NAT), can mitigate the dependency problem but cannot efficiently integrate an SFM. Inspired by the success of recent work on speech-text …
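
The encoder-only branch the abstract contrasts, CTC, scores every frame in parallel and then collapses the frame-level path into a token sequence, which is why it is fast but models no dependencies among output tokens. Below is a minimal greedy CTC decoding sketch, assuming PyTorch-style (batch, time, vocab) log-probabilities and a hypothetical blank index of 0; it is an illustration of standard CTC decoding, not the paper's method.

import torch

def ctc_greedy_decode(log_probs: torch.Tensor, blank_id: int = 0) -> list[list[int]]:
    """Greedy CTC decoding: per-frame argmax, collapse repeats, drop blanks.

    log_probs: (batch, time, vocab) frame-level log-probabilities from the encoder.
    Returns one token-id sequence per utterance.
    """
    # The frame-wise argmax is computed independently for every frame,
    # so decoding is fully parallel across time -- but no token depends on another.
    best_paths = log_probs.argmax(dim=-1)  # (batch, time)
    decoded = []
    for path in best_paths:
        tokens, prev = [], None
        for idx in path.tolist():
            # Collapse consecutive repeats, then remove blank symbols.
            if idx != prev and idx != blank_id:
                tokens.append(idx)
            prev = idx
        decoded.append(tokens)
    return decoded

# Usage with hypothetical encoder output: 2 utterances, 50 frames, 32-symbol vocabulary.
if __name__ == "__main__":
    dummy_log_probs = torch.randn(2, 50, 32).log_softmax(dim=-1)
    print(ctc_greedy_decode(dummy_log_probs))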

