Feb. 1, 2024, 12:42 p.m. | Fei Kong, Jinhao Duan, Lichao Sun, Hao Cheng, Renjing Xu, Hengtao Shen, Xiaofeng Zhu, Xiaoshuang Shi

cs.CV updates on arXiv.org

Though diffusion models excel at image generation, their step-by-step denoising makes sampling slow. Consistency training addresses this with single-step sampling, but it often produces lower-quality generations and incurs high training costs. In this paper, we show that optimizing the consistency training loss minimizes the Wasserstein distance between the target and generated distributions. As the timestep increases, the upper bound accumulates previous consistency training losses, so larger batch sizes are needed to reduce both the current and the accumulated losses. We propose Adversarial Consistency …
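For context, the consistency training loss the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of the standard objective (pulling the student's output at a noisier timestep toward an EMA teacher's output at an adjacent less-noisy timestep), not the paper's adversarial variant; the function names `f_theta`, `f_ema` and the sigma schedule are illustrative assumptions.

```python
import numpy as np

def consistency_training_loss(f_theta, f_ema, x0, sigmas, rng):
    """One-sample consistency training loss (illustrative sketch).

    f_theta: student consistency model, called as f(x, sigma)
    f_ema:   EMA copy of the student, used as the teacher
    x0:      a clean data sample
    sigmas:  increasing noise-level schedule (discretized timesteps)
    """
    # Pick a pair of adjacent timesteps t_n < t_{n+1} on the schedule.
    n = rng.integers(0, len(sigmas) - 1)
    # Perturb the same clean sample with the same noise direction
    # at the two adjacent noise levels.
    z = rng.standard_normal(x0.shape)
    x_cur = x0 + sigmas[n] * z        # sample at t_n
    x_next = x0 + sigmas[n + 1] * z   # noisier sample at t_{n+1}
    # Penalize disagreement between the student at t_{n+1} and the
    # frozen EMA teacher at t_n (squared-error metric for simplicity).
    diff = f_theta(x_next, sigmas[n + 1]) - f_ema(x_cur, sigmas[n])
    return float(np.mean(diff ** 2))
```

The abstract's point is that the minimizer of an accumulated sum of such per-timestep losses bounds the Wasserstein distance, and that the accumulation across timesteps is what drives the need for large batch sizes.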
