A Multi-Stage Multi-Codebook VQ-VAE Approach to High-Performance Neural TTS. (arXiv:2209.10887v1 [cs.SD])
Sept. 23, 2022, 1:15 a.m. | Haohan Guo, Fenglong Xie, Frank K. Soong, Xixin Wu, Helen Meng
cs.CL updates on arXiv.org
We propose a Multi-Stage, Multi-Codebook (MSMC) approach to high-performance
neural TTS synthesis. A vector-quantized variational autoencoder (VQ-VAE)
based feature analyzer encodes the Mel spectrograms of the speech training
data, down-sampling them progressively in multiple stages into MSMC
Representations (MSMCRs) at different time resolutions and quantizing each
stage with its own VQ codebook. Multi-stage predictors are trained to map the
input text sequence to the MSMCRs progressively by minimizing a combined loss
of the reconstruction Mean Square Error (MSE) and a "triplet loss". …
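The core idea of the abstract — features quantized at several time resolutions, each stage with its own codebook, trained with an MSE-plus-triplet objective — can be sketched minimally in numpy. Everything below (dimensions, codebook sizes, the average-pooling downsampler, the triplet formulation) is illustrative and not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, codebook):
    """Nearest-neighbor vector quantization: map each frame to its closest codeword."""
    # x: (T, D) feature frames; codebook: (K, D) codewords
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K) squared distances
    idx = d.argmin(axis=1)
    return codebook[idx], idx

def triplet_loss(anchor, pos, neg, margin=1.0):
    """Hinge-style triplet loss: pull anchor toward pos, push it from neg by a margin."""
    dp = ((anchor - pos) ** 2).sum()
    dn = ((anchor - neg) ** 2).sum()
    return max(0.0, dp - dn + margin)

# Toy "Mel spectrogram": 8 frames of 4 dims (real Mel features are much larger).
mel = rng.normal(size=(8, 4))

# Stage 1: full time resolution, its own 16-codeword codebook.
cb1 = rng.normal(size=(16, 4))
q1, _ = quantize(mel, cb1)

# Stage 2: down-sample by 2 (mean-pool frame pairs), quantize with a separate codebook.
mel_ds = mel.reshape(4, 2, 4).mean(axis=1)
cb2 = rng.normal(size=(8, 4))
q2, _ = quantize(mel_ds, cb2)

# Combined training objective in the spirit of the abstract: reconstruction MSE
# plus a triplet term (the triplet choice here is arbitrary, for illustration only).
loss = ((mel - q1) ** 2).mean() + triplet_loss(mel[0], q1[0], cb1[0])
print(q1.shape, q2.shape, loss)
```

In the actual system the codebooks are learned jointly with the encoder and the predictors are trained stage by stage; this sketch only shows how one feature sequence is represented at two resolutions with two independent codebooks.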