LAPTOP-Diff: Layer Pruning and Normalized Distillation for Compressing Diffusion Models
April 18, 2024, 4:44 a.m. | Dingkun Zhang, Sijia Li, Chen Chen, Qingsong Xie, Haonan Lu
cs.CV updates on arXiv.org (arxiv.org)
Abstract: In the era of AIGC, demand has emerged for low-budget and even on-device applications of diffusion models. Several approaches have been proposed for compressing Stable Diffusion models (SDMs), and most of them leverage handcrafted layer-removal methods to obtain smaller U-Nets, along with knowledge distillation to recover network performance. However, such handcrafted layer removal is inefficient and lacks scalability and generalization, and the feature distillation employed in …
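The abstract's "normalized distillation" idea can be illustrated with a generic sketch: each layer's feature-matching loss is scaled by the magnitude of the teacher's activations, so layers with large feature values do not dominate the objective. This is only an illustration under that assumption; the exact loss used in LAPTOP-Diff is defined in the paper, not in this truncated abstract.

```python
import numpy as np

def normalized_distillation_loss(student_feats, teacher_feats, eps=1e-6):
    """Per-layer MSE between student and teacher features, each term
    normalized by the mean squared magnitude of the teacher's features.
    Illustrative sketch only, not the paper's exact formulation."""
    total = 0.0
    for s, t in zip(student_feats, teacher_feats):
        mse = np.mean((s - t) ** 2)          # raw feature-matching error
        scale = np.mean(t ** 2) + eps        # teacher magnitude as normalizer
        total += mse / scale
    return total / len(student_feats)
```

A usage example: passing identical student and teacher features yields a loss of zero, while any mismatch yields a positive, magnitude-normalized penalty.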