June 20, 2022, 1:12 a.m. | Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Sh

cs.CL updates on arXiv.org

Self-supervised learning (SSL) has achieved great success in speech recognition,
but has seen only limited exploration for other speech processing tasks. Because
the speech signal contains multi-faceted information, including speaker identity,
paralinguistics, and spoken content, learning universal representations for all
speech tasks is challenging. To tackle this problem, we propose a new
pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM
jointly learns masked speech prediction and denoising during pre-training. In
this way, WavLM not only keeps the speech content …
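To make the pre-training objective more concrete, here is a minimal, purely illustrative NumPy sketch of how a joint masked-prediction-and-denoising input might be constructed: an utterance is optionally overlaid with an interfering signal, and random spans of its frames are masked; the model would then be trained to predict pseudo-labels of the clean speech at the masked positions. All names and parameters below are hypothetical and are not taken from the WavLM codebase.

import numpy as np

rng = np.random.default_rng(0)

def simulate_noisy_masked_input(clean, interferer, noise_prob=0.2,
                                mask_prob=0.065, mask_span=10):
    # Hypothetical helper sketching the WavLM-style input construction,
    # not the actual WavLM code.
    x = clean.copy()
    # Denoising side: with some probability, overlay a scaled interfering
    # utterance or noise signal on top of the clean input.
    if rng.random() < noise_prob:
        x = x + 0.5 * interferer[: len(x)]
    # Masked-prediction side: pick random span starts and zero those frames
    # (a real model would replace them with a learned mask embedding).
    n_frames = len(x)
    mask = np.zeros(n_frames, dtype=bool)
    starts = rng.random(n_frames) < mask_prob
    for s in np.flatnonzero(starts):
        mask[s : s + mask_span] = True
    x[mask] = 0.0
    # The training target at masked positions would be pseudo-labels derived
    # from the clean signal, so the model must both denoise and infill.
    return x, mask

# Toy frame-level features standing in for speech frames.
clean = rng.standard_normal(200)
interferer = rng.standard_normal(200)
noisy_masked, mask = simulate_noisy_masked_input(clean, interferer)
print(f"masked {mask.sum()} of {len(mask)} frames")

The point of the sketch is only to show why the two objectives combine naturally: the corruption (overlap or noise) and the masking are applied to the same input, so a single prediction loss over masked frames covers both content modeling and denoising.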

