On the Relation between Internal Language Model and Sequence Discriminative Training for Neural Transducers
April 16, 2024, 4:45 a.m. | Zijian Yang, Wei Zhou, Ralf Schlüter, Hermann Ney
cs.LG updates on arXiv.org
Abstract: Internal language model (ILM) subtraction has been widely applied to improve the performance of the RNN-Transducer with external language model (LM) fusion for speech recognition. In this work, we show that sequence discriminative training has a strong correlation with ILM subtraction, from both theoretical and empirical points of view. Theoretically, we derive that the global optimum of maximum mutual information (MMI) training shares a similar formula with ILM subtraction. Empirically, we show that ILM subtraction …
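For context, ILM subtraction as described in the abstract is commonly written as a modified decoding score: the external LM log-probability is added and the transducer's estimated internal LM log-probability is subtracted. A sketch in standard shallow-fusion notation (the scales λ₁, λ₂ are assumed hyperparameters, not specified in the abstract):

```latex
\hat{y} \;=\; \operatorname*{arg\,max}_{y}\;
  \log P_{\text{RNNT}}(y \mid x)
  \;+\; \lambda_{1} \log P_{\text{extLM}}(y)
  \;-\; \lambda_{2} \log P_{\text{ILM}}(y)
```

The paper's claim is that the global optimum of MMI-style sequence discriminative training yields a score of a similar form, which is why the two techniques behave alike empirically.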