March 25, 2024, 4:45 a.m. | Thuan Hoang Nguyen, Anh Tran

cs.CV updates on arXiv.org

arXiv:2312.05239v2 Announce Type: replace
Abstract: Despite their ability to generate high-resolution, diverse images from text prompts, text-to-image diffusion models often suffer from slow iterative sampling. Model distillation is one of the most effective directions for accelerating these models. However, previous distillation methods fail to retain generation quality while requiring a significant number of images for training, either from real data or synthesized by the teacher model. In response to this limitation, we present a novel image-free …
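The abstract contrasts a many-step teacher with a distilled few-step student. As a rough illustration of conventional output-matching distillation (not the paper's image-free method, which the truncated abstract does not detail), the toy sketch below has a "teacher" that refines a noise sample over many small steps while a one-parameter "student" tries to match the teacher's final output in a single step; all functions and names here are hypothetical stand-ins, not the authors' code.

```python
import numpy as np

def teacher_sample(z, steps=25):
    # Toy iterative refinement: repeatedly move the sample a small
    # fraction of the way toward a fixed "learned mode" (stand-in for
    # the teacher's multi-step denoising trajectory).
    target = np.tanh(z)
    x = z.copy()
    for _ in range(steps):
        x = x + (target - x) / steps
    return x

def student_sample(z, w):
    # One-step student: a single scalar map plays the role of the
    # distilled fast generator.
    return w * z

def distillation_loss(z_batch, w):
    # Output-matching objective: MSE between the student's single step
    # and the teacher's full iterative result on the same noise.
    t = teacher_sample(z_batch)
    s = student_sample(z_batch, w)
    return float(np.mean((t - s) ** 2))

rng = np.random.default_rng(0)
z = rng.standard_normal((64, 8))
# Crude scan over the student's single parameter in place of training.
losses = {w: distillation_loss(z, w) for w in (0.0, 0.5, 1.0)}
```

Note that this classical setup needs samples (here, teacher outputs on fresh noise) to supervise the student; the image-free direction the abstract announces is precisely about avoiding that dependence on real or teacher-generated image data.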

