Feb. 9, 2024, 5:47 a.m. | Sungho Jeon, Ching-Feng Yeh, Hakan Inan, Wei-Ning Hsu, Rashi Rungta, Yashar Mehdad, Daniel Bikel

cs.CL updates on arXiv.org

In this paper, we show that a simple self-supervised pre-trained audio model can achieve inference efficiency comparable to more complex pre-trained models built on speech transformer encoders. These speech transformers mix convolutional modules with self-attention modules and achieve state-of-the-art ASR performance with high efficiency. We first show that employing these speech transformers as the encoder significantly improves the efficiency of pre-trained audio models as well. However, our study shows that we can achieve comparable efficiency with advanced self-attention …
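
To make the architectural contrast concrete, here is a minimal PyTorch sketch of the two encoder styles the abstract compares: a block built on self-attention alone, and a Conformer-style block that mixes a depthwise convolution module with self-attention. This is not code from the paper; the class names, layer sizes, and kernel width are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SelfAttentionBlock(nn.Module):
    """Encoder block using self-attention only (pre-norm Transformer style)."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):  # x: (batch, time, dim)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # attention residual
        return x + self.ffn(self.norm2(x))                 # feed-forward residual


class ConvAttentionBlock(SelfAttentionBlock):
    """Conformer-style block: a depthwise convolution module is mixed in
    alongside self-attention, as in the speech transformers described above.
    Hyperparameters are illustrative, not taken from the paper."""

    def __init__(self, dim=256, heads=4, kernel_size=15):
        super().__init__(dim, heads)
        self.norm_conv = nn.LayerNorm(dim)
        # Depthwise conv over time; padding keeps the sequence length unchanged.
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)

    def forward(self, x):
        x = super().forward(x)                    # attention + FFN residuals
        h = self.norm_conv(x).transpose(1, 2)     # (batch, dim, time) for Conv1d
        return x + self.conv(h).transpose(1, 2)   # convolution residual


if __name__ == "__main__":
    feats = torch.randn(2, 100, 256)  # dummy batch of 100-frame utterances
    print(SelfAttentionBlock()(feats).shape)  # torch.Size([2, 100, 256])
    print(ConvAttentionBlock()(feats).shape)  # torch.Size([2, 100, 256])
```

Both blocks map a (batch, time, dim) feature sequence to the same shape, so they can be swapped as encoder layers; the only structural difference is the extra convolution branch in the Conformer-style variant.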
