June 7, 2022, 1:11 a.m. | Alon Levkovitch, Eliya Nachmani, Lior Wolf

cs.LG updates on arXiv.org

We present a novel way of conditioning a pretrained denoising diffusion
speech model to produce speech in the voice of a novel person unseen during
training. The method requires a short (~3 seconds) sample from the target
person, and generation is steered at inference time, without any training
steps. At the heart of the method lies a sampling process that combines the
estimation of the denoising model with a low-pass version of the new speaker's
sample. The objective and subjective …
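The guided sampling idea described above can be sketched roughly as follows. This is a hedged illustration, not the authors' implementation: the pretrained denoiser `denoise_fn`, the forward-noising helper `add_noise`, the step count `num_steps`, and the low-pass cutoff are all assumptions introduced here for clarity. The sketch combines the model's clean-signal estimate with a low-pass version of the target speaker's sample at each reverse-diffusion step.

```python
# Minimal sketch of inference-time speaker conditioning (assumptions noted above).
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(x, sr, cutoff_hz):
    """Low-pass filter a waveform, keeping only its coarse spectral content."""
    b, a = butter(4, cutoff_hz, btype="low", fs=sr)
    return filtfilt(b, a, x)

def guided_sampling(denoise_fn, add_noise, ref_sample, sr, num_steps,
                    cutoff_hz=1000.0):
    """Steer sampling toward a reference speaker without any training steps.

    At every reverse step, the low-frequency band of the model's clean
    estimate is swapped for the low-frequency band of the reference
    speaker's sample, so the generated speech drifts toward the target voice.
    """
    x_t = np.random.randn(len(ref_sample))        # start from pure noise
    ref_low = lowpass(ref_sample, sr, cutoff_hz)  # low-pass target sample
    for t in reversed(range(num_steps)):
        x0_hat = denoise_fn(x_t, t)               # model's estimate of clean speech
        # keep the model's high frequencies, inject the reference's low frequencies
        x0_hat = x0_hat - lowpass(x0_hat, sr, cutoff_hz) + ref_low
        # re-noise the combined estimate back to the previous diffusion step
        x_t = add_noise(x0_hat, t - 1) if t > 0 else x0_hat
    return x_t
```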

arxiv denoising diffusion tts voice
