Jan. 14, 2022, 2:11 a.m. | Anirudh Gupta, Harveen Singh Chadha, Priyanshi Shah, Neeraj Chhimwal, Ankur Dhuriya, Rishabh Gaur, Vivek Raghavan

cs.LG updates on arXiv.org

We present CLSRIL-23, a self-supervised learning based audio pre-trained
model which learns cross-lingual speech representations from raw audio across
23 Indic languages. It is built on top of wav2vec 2.0, which is trained by
solving a contrastive task over masked latent speech representations and
jointly learns a quantization of latents shared across all languages. We
compare the language-wise loss during pretraining to study the effects of
monolingual versus multilingual pretraining. Performance on some downstream
fine-tuning tasks for …
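The contrastive objective mentioned above can be sketched as follows. This is a hypothetical, simplified NumPy illustration of a wav2vec 2.0-style contrastive loss, not the authors' implementation: for each masked timestep, the context network's output is scored against the true quantized latent and against `K` distractor latents sampled from other masked positions, and a cross-entropy over cosine similarities (scaled by a temperature) is minimized. The function and argument names here are assumptions for illustration.

```python
import numpy as np

def contrastive_loss(context, targets, distractor_idx, temperature=0.1):
    """Simplified wav2vec 2.0-style contrastive loss (illustrative only).

    context:        (T, D) context-network outputs at masked timesteps
    targets:        (T, D) quantized latent targets for the same timesteps
    distractor_idx: (T, K) indices of distractor timesteps, sampled from
                    other masked positions in the same utterance
    """
    def cos(a, b):
        # cosine similarity along the feature dimension
        return (a * b).sum(-1) / (
            np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-8
        )

    pos = cos(context, targets)                      # (T,) true-target scores
    neg = np.stack(
        [cos(context, targets[distractor_idx[:, k]])
         for k in range(distractor_idx.shape[1])],
        axis=1,
    )                                                # (T, K) distractor scores

    # softmax cross-entropy with the true target always at index 0
    logits = np.concatenate([pos[:, None], neg], axis=1) / temperature
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[:, 0].mean()
```

When the context vectors match their targets exactly, the loss is near zero; for random context vectors it is much higher, which is the gradient signal that pretraining exploits.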

