Attention or Convolution: Transformer Encoders in Audio Language Models for Inference Efficiency
Feb. 9, 2024, 5:47 a.m. | Sungho Jeon, Ching-Feng Yeh, Hakan Inan, Wei-Ning Hsu, Rashi Rungta, Yashar Mehdad, Daniel Bikel
cs.CL updates on arXiv.org
Tags: ASR, attention, audio, convolution, cs.CL, cs.SD, eess.AS, efficiency, inference, language models, modules, performance, pre-trained models, self-attention, speech, state-of-the-art, transformers