Feb. 15, 2024, 5:46 a.m. | Ruchao Fan, Natarajan Balaji Shankar, Abeer Alwan

cs.CL updates on arXiv.org

arXiv:2402.08898v1 Announce Type: cross
Abstract: Non-autoregressive automatic speech recognition (NASR) models have gained attention due to their parallelism and fast inference. The encoder-based NASR, e.g. connectionist temporal classification (CTC), can be initialized from the speech foundation models (SFM) but does not account for any dependencies among intermediate tokens. The encoder-decoder-based NASR, like CTC alignment-based single-step non-autoregressive transformer (CASS-NAT), can mitigate the dependency problem but is not able to efficiently integrate SFM. Inspired by the success of recent work of speech-text …
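To make the abstract's contrast concrete, here is a minimal sketch of greedy (best-path) decoding for an encoder-based CTC model, assuming hypothetical per-frame logits from a CTC-trained encoder (the `greedy_ctc_decode` helper and the blank index are illustrative, not from the paper). Each frame is labeled by an independent argmax, then repeats are collapsed and blanks removed, which is what makes this approach fully parallel but blind to dependencies among intermediate tokens; CASS-NAT-style decoders add a decoding pass over token-level acoustic embeddings to recover some of that dependency.

```python
# Minimal sketch of greedy (best-path) CTC decoding over hypothetical
# per-frame logits. Every frame is decoded independently (parallel argmax),
# so no dependency among intermediate tokens is modeled.
import numpy as np

BLANK = 0  # conventional CTC blank index (an assumption for this sketch)

def greedy_ctc_decode(logits: np.ndarray) -> list[int]:
    """logits: (T, V) array of per-frame scores over V labels (blank = index 0)."""
    frame_ids = logits.argmax(axis=-1)  # independent decision per frame
    collapsed = [frame_ids[0]] + [
        cur for prev, cur in zip(frame_ids[:-1], frame_ids[1:]) if cur != prev
    ]  # collapse consecutive repeats
    return [int(i) for i in collapsed if i != BLANK]  # drop blanks

# Toy example: 6 frames, vocabulary {blank, label 1, label 2}
rng = np.random.default_rng(0)
print(greedy_ctc_decode(rng.normal(size=(6, 3))))
```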
