May 19, 2022, 2:28 p.m. | /u/disentangle

Machine Learning www.reddit.com

Is there any "theoretically sound" way to reduce variance during sampling in diffusion models? Even when I use the lower bound suggested in the DDPM paper (which goes toward zero toward the end of sampling), my final samples are excessively noisy. Simply reducing the diffusion variance schedule (without changing the number of steps) does not, in my experiments, seem to achieve sufficient diffusion by the end of the chain.

I'm predicting speech mel-spectrograms, and the harmonic amplitudes are excessively noisy unless I manually …
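For reference, here is a minimal sketch of the two variance choices the DDPM paper allows for the reverse-step noise (the "upper bound" sigma_t^2 = beta_t versus the posterior "lower bound" beta-tilde_t), assuming a standard linear beta schedule; names such as `reverse_step` and `eps_hat` are illustrative, not from the post.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # linear beta schedule from the DDPM paper
alphas = 1.0 - betas
alphas_bar = np.cumprod(alphas)

def reverse_step(x_t, eps_hat, t, rng, use_lower_bound=True):
    """One ancestral sampling step x_t -> x_{t-1}; eps_hat is the model's noise prediction."""
    beta_t = betas[t]
    alpha_t = alphas[t]
    alpha_bar_t = alphas_bar[t]
    alpha_bar_prev = alphas_bar[t - 1] if t > 0 else 1.0

    # Posterior mean computed from the predicted noise (DDPM, Eq. 11).
    mean = (x_t - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_t)

    if t == 0:
        return mean                        # no noise is added at the final step

    if use_lower_bound:
        # Lower bound ("beta tilde"): the true posterior variance, which shrinks to zero as t -> 1.
        sigma2 = (1.0 - alpha_bar_prev) / (1.0 - alpha_bar_t) * beta_t
    else:
        # Upper bound: sigma_t^2 = beta_t.
        sigma2 = beta_t

    return mean + np.sqrt(sigma2) * rng.standard_normal(x_t.shape)
```

In practice, a common (if not strictly "theoretically sound") knob is to scale `np.sqrt(sigma2)` by a temperature below 1 during sampling only, which trades sample diversity for less residual noise without touching the forward diffusion schedule.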
