Deep MMD Gradient Flow without adversarial training
May 14, 2024, 4:41 a.m. | Alexandre Galashov, Valentin de Bortoli, Arthur Gretton
cs.LG updates on arXiv.org
Abstract: We propose a gradient flow procedure for generative modeling by transporting particles from an initial source distribution to a target distribution, where the gradient field on the particles is given by a noise-adaptive Wasserstein Gradient of the Maximum Mean Discrepancy (MMD). The noise-adaptive MMD is trained on data distributions corrupted by increasing levels of noise, obtained via a forward diffusion process, as commonly used in denoising diffusion probabilistic models. The result is a generalization of …
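The core idea — transporting particles by descending the (Wasserstein) gradient of the MMD toward a target sample — can be illustrated with a toy sketch. Note this is a minimal, plain MMD gradient flow with a fixed RBF kernel and an analytic gradient; the paper's contribution is a *noise-adaptive* MMD trained on diffusion-corrupted data, which this hypothetical example deliberately omits. All names (`rbf`, `mmd2`, `mmd_grad`), the kernel bandwidth, and the step size are illustrative choices, not the authors' code.

```python
import numpy as np

def rbf(x, y, sigma):
    # Gaussian (RBF) kernel Gram matrix between point sets x (n,d) and y (m,d).
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma):
    # Biased estimator of squared Maximum Mean Discrepancy.
    return (rbf(x, x, sigma).mean()
            + rbf(y, y, sigma).mean()
            - 2.0 * rbf(x, y, sigma).mean())

def mmd_grad(x, y, sigma):
    # Analytic gradient of mmd2 with respect to the particle positions x.
    n, m = len(x), len(y)
    kxx = rbf(x, x, sigma)                      # (n, n)
    kxy = rbf(x, y, sigma)                      # (n, m)
    diff_xx = x[:, None, :] - x[None, :, :]     # x_i - x_b
    diff_xy = x[:, None, :] - y[None, :, :]     # x_i - y_b
    # d/dx_i of the x-x term: -(2/(n^2 sigma^2)) * sum_b k(x_i,x_b)(x_i - x_b)
    g_self = -(kxx[:, :, None] * diff_xx).sum(1) * 2.0 / (n * n * sigma**2)
    # d/dx_i of the cross term: +(2/(n m sigma^2)) * sum_b k(x_i,y_b)(x_i - y_b)
    g_cross = (kxy[:, :, None] * diff_xy).sum(1) * 2.0 / (n * m * sigma**2)
    return g_self + g_cross

rng = np.random.default_rng(0)
target = rng.normal(size=(200, 2)) + np.array([2.0, 0.0])  # target distribution
particles = rng.normal(size=(200, 2))                      # initial source particles

sigma, step = 1.0, 2.0
initial = mmd2(particles, target, sigma)
for _ in range(500):
    # Gradient-flow step: move each particle down the MMD gradient field.
    particles -= step * mmd_grad(particles, target, sigma)
final = mmd2(particles, target, sigma)
print(initial, final)
```

With a fixed kernel, such a flow is known to stall when source and target are far apart (the kernel gradient vanishes); corrupting the target with increasing noise levels, as the abstract describes, is one way to keep the gradient signal informative along the trajectory.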