Feb. 28, 2024, 5:46 a.m. | Ling Yang, Haotian Qian, Zhilong Zhang, Jingwei Liu, Bin Cui

cs.CV updates on arXiv.org arxiv.org

arXiv:2402.17563v1 Announce Type: new
Abstract: Diffusion models have demonstrated exceptional efficacy in various generative applications. While existing models focus on minimizing a weighted sum of denoising score matching losses for data distribution modeling, their training primarily emphasizes instance-level optimization, overlooking valuable structural information within each mini-batch that reflects pair-wise relationships among samples. To address this limitation, we introduce Structure-guided Adversarial training of Diffusion Models (SADM). In this pioneering approach, we compel the model to learn manifold structures between samples in …
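For context, the instance-level objective the abstract refers to is the standard weighted denoising score matching loss, written here in the common DDPM notation as a reference sketch; the specific weighting $w(t)$ and parameterization used by SADM are not given in this excerpt and are assumed to follow the usual convention:

$$
\mathcal{L}_{\mathrm{DSM}}(\theta) = \mathbb{E}_{t,\, x_0,\, \epsilon \sim \mathcal{N}(0, I)}\!\left[ w(t)\, \left\| \epsilon_\theta\!\left(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\; t\right) - \epsilon \right\|^2 \right]
$$

Each term of this expectation depends on a single sample $x_0$, which is what makes the objective instance-level and blind to the pair-wise relationships within a mini-batch that SADM aims to exploit.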
