April 12, 2024, 4:42 a.m. | Tianshuo Xu, Peng Mi, Ruilin Wang, Yingcong Chen

cs.LG updates on arXiv.org

arXiv:2404.07946v1 Announce Type: new
Abstract: Diffusion models (DMs) are a powerful generative framework that has attracted significant attention in recent years. However, the high computational cost of training DMs limits their practical applications. In this paper, we start from a consistency phenomenon of DMs: we observe that DMs with different initializations, or even different architectures, can produce very similar outputs given the same noise inputs, which is rare in other generative models. We attribute this phenomenon to two factors: (1) …
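The consistency phenomenon the abstract describes can be probed directly: sample two independently trained DMs from the identical initial noise and compare their outputs. Below is a minimal sketch, assuming PyTorch and the diffusers library; it is not the paper's own experiment, and it loads the same public DDPM checkpoint twice only so the script runs as-is (the real check would swap in a checkpoint from a different training run or architecture).

```python
import numpy as np
import torch
from diffusers import DDPMPipeline

# Two checkpoints to compare. For the actual experiment these should be
# independently trained models (different init or architecture); the
# second ID here repeats the same public checkpoint only so the sketch
# runs without a second trained model.
pipe_a = DDPMPipeline.from_pretrained("google/ddpm-cifar10-32")
pipe_b = DDPMPipeline.from_pretrained("google/ddpm-cifar10-32")

def sample(pipe: DDPMPipeline, seed: int) -> np.ndarray:
    # A seeded generator fixes the initial Gaussian noise, so both
    # models start the reverse diffusion from the same noise tensor.
    gen = torch.Generator().manual_seed(seed)
    return pipe(batch_size=1, generator=gen, output_type="np").images[0]

img_a = sample(pipe_a, seed=0)
img_b = sample(pipe_b, seed=0)

# Low MSE across many seeds would reflect the consistency phenomenon:
# very similar outputs despite independent training.
print("MSE:", float(np.mean((img_a - img_b) ** 2)))
```

Averaging this MSE over many seeds, and comparing it against mismatched-noise pairs as a baseline, would quantify how unusual the observed similarity is.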
