Web: http://arxiv.org/abs/2209.08326

Sept. 20, 2022, 1:14 a.m. | Ye Bai, Jie Li, Wenjing Han, Hao Ni, Kaituo Xu, Zhuo Zhang, Cheng Yi, Xiaorui Wang

cs.CL updates on arXiv.org

While transformers and their conformer variants show promising performance in speech recognition, their heavily parameterized design incurs a large memory cost during training and inference. Some works use cross-layer weight sharing to reduce the number of model parameters, but the resulting loss of capacity harms model performance. To address this issue, this paper proposes a parameter-efficient conformer that shares sparsely-gated experts. Specifically, we use a sparsely-gated mixture-of-experts (MoE) layer to extend the capacity of a conformer block without increasing computation. Then, the parameters …
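To make the idea concrete, below is a minimal sketch (assuming PyTorch) of a sparsely-gated MoE feed-forward layer of the kind the abstract describes: a router selects the top-k experts per token, so capacity grows with the number of experts while per-token computation stays roughly that of k ordinary feed-forward blocks. All names, sizes, and the routing details here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoEFeedForward(nn.Module):
    """Feed-forward module with sparsely-gated mixture-of-experts routing."""

    def __init__(self, d_model=256, d_ff=1024, num_experts=4, top_k=1):
        super().__init__()
        self.top_k = top_k
        # One small feed-forward network per expert.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        # Router producing a score per expert for each token.
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x):                       # x: (batch, time, d_model)
        scores = self.gate(x)                    # (batch, time, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[..., slot] == e   # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out


# Usage: a drop-in stand-in for a conformer block's feed-forward module.
# Reusing the same SparseMoEFeedForward instance across blocks would mimic
# cross-layer weight sharing while the MoE routing preserves capacity.
layer = SparseMoEFeedForward()
y = layer(torch.randn(2, 50, 256))
print(y.shape)  # torch.Size([2, 50, 256])
```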

arxiv experts speech speech recognition
