March 4, 2024, 5:45 a.m. | Yuhao Liu, Fang Liu, Zhanghan Ke, Nanxuan Zhao, Rynson W. H. Lau

cs.CV updates on arXiv.org

arXiv:2403.00644v1 Announce Type: new
Abstract: Diffusion models trained on large-scale datasets have achieved remarkable progress in image synthesis. However, due to the randomness in the diffusion process, they often struggle with diverse low-level tasks that require detail preservation. To overcome this limitation, we present a new Diff-Plugin framework that enables a single pre-trained diffusion model to generate high-fidelity results across a variety of low-level tasks. Specifically, we first propose a lightweight Task-Plugin module with a dual-branch design to …
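To make the abstract's idea concrete, below is a minimal PyTorch sketch of what a dual-branch, plugin-style module could look like: one branch distills a global task-guidance embedding from the degraded input, while the other extracts spatial features aimed at detail preservation, both intended to condition a frozen pre-trained diffusion backbone. All names (TaskPlugin, guidance_branch, detail_branch) and shapes are hypothetical assumptions for illustration, not the paper's actual architecture.

```python
# Hypothetical sketch of a dual-branch "Task-Plugin"-style module.
# This is an assumption-based illustration, not the Diff-Plugin implementation.
import torch
import torch.nn as nn


class TaskPlugin(nn.Module):
    """Lightweight dual-branch plugin (illustrative only).

    Branch 1 produces a global task-guidance embedding from the input image;
    Branch 2 extracts a spatial detail feature map to help preserve fine content.
    Both outputs would be injected into a frozen, pre-trained diffusion backbone.
    """

    def __init__(self, in_channels: int = 3, guidance_dim: int = 256, detail_channels: int = 64):
        super().__init__()
        # Branch 1: image -> compact guidance embedding.
        self.guidance_branch = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, guidance_dim),
        )
        # Branch 2: image -> full-resolution detail feature map.
        self.detail_branch = nn.Sequential(
            nn.Conv2d(in_channels, detail_channels, 3, padding=1), nn.SiLU(),
            nn.Conv2d(detail_channels, detail_channels, 3, padding=1),
        )

    def forward(self, image: torch.Tensor):
        guidance = self.guidance_branch(image)   # (B, guidance_dim)
        details = self.detail_branch(image)      # (B, detail_channels, H, W)
        return guidance, details


if __name__ == "__main__":
    plugin = TaskPlugin()
    x = torch.randn(2, 3, 256, 256)              # a batch of degraded inputs
    guidance, details = plugin(x)
    print(guidance.shape, details.shape)         # (2, 256) and (2, 64, 256, 256)
```

The key design point the abstract hints at is that only this small plugin is task-specific; the large diffusion model stays shared and frozen across low-level tasks.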

