Sept. 23, 2022, 1:11 a.m. | Perry Lam, Huayun Zhang, Nancy F. Chen, Berrak Sisman

cs.LG updates on arXiv.org

Neural models are known to be over-parameterized, and recent work has shown
that sparse text-to-speech (TTS) models can outperform dense models. Although a
plethora of sparse methods has been proposed for other domains, such methods
have rarely been applied to TTS. In this work, we seek to answer the question:
how do selected sparse techniques affect performance and model complexity? We
compare a Tacotron2 baseline with the results of applying five sparse
techniques. We then evaluate the …
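As a rough illustration only, and not the authors' exact setup, the sketch below shows one widely used sparsification technique of the kind the paper compares: unstructured L1 magnitude pruning via PyTorch's torch.nn.utils.prune, applied to a toy linear layer standing in for a Tacotron2 component.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a single layer of a TTS model such as Tacotron2.
layer = nn.Linear(512, 512)

# L1 (magnitude) unstructured pruning: zero out the 50% of weights
# with the smallest absolute value. The 0.5 sparsity level is an
# arbitrary choice for this sketch, not a value from the paper.
prune.l1_unstructured(layer, name="weight", amount=0.5)

# Report the resulting sparsity of the layer's weight tensor.
sparsity = float(torch.sum(layer.weight == 0)) / layer.weight.nelement()
print(f"weight sparsity: {sparsity:.1%}")

# Make the pruning permanent by removing the mask re-parameterization.
prune.remove(layer, "weight")

In practice one would prune every weight matrix of interest (or use prune.global_unstructured across layers) and then measure both synthesis quality and model complexity, which is the kind of trade-off the abstract describes.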

arxiv investigations pruning speech text text-to-speech tts
