LAPTOP-Diff: Layer Pruning and Normalized Distillation for Compressing Diffusion Models
April 18, 2024, 4:44 a.m. | Dingkun Zhang, Sijia Li, Chen Chen, Qingsong Xie, Haonan Lu
cs.CV updates on arXiv.org
Abstract: In the era of AIGC, demand has emerged for low-budget and even on-device applications of diffusion models. For compressing Stable Diffusion models (SDMs), several approaches have been proposed, most of which rely on handcrafted layer-removal methods to obtain smaller U-Nets, combined with knowledge distillation to recover network performance. However, such handcrafted layer removal is inefficient and lacks scalability and generalization, and the feature distillation employed in …
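The abstract's core recipe (remove layers from the student U-Net, then use feature-level knowledge distillation against the teacher to recover quality) can be illustrated with a generic sketch. The function below is an assumed, simplified formulation: an MSE between matched student and teacher feature maps, normalized by each teacher feature's scale so that layers with large activations do not dominate the loss. It is illustrative only, not the paper's exact "normalized distillation" objective.

```python
import numpy as np

def normalized_feature_distillation_loss(student_feats, teacher_feats, eps=1e-6):
    """Generic feature-distillation loss (illustrative sketch).

    For each matched pair of feature maps, compute an MSE normalized by
    the teacher feature's RMS magnitude, then average over all pairs.
    `student_feats` / `teacher_feats`: lists of same-shaped numpy arrays.
    """
    total = 0.0
    for s, t in zip(student_feats, teacher_feats):
        scale = np.sqrt(np.mean(t ** 2)) + eps  # teacher RMS as normalizer
        total += np.mean(((s - t) / scale) ** 2)
    return total / len(student_feats)
```

With identical student and teacher features the loss is zero; any mismatch contributes in proportion to its size relative to the teacher activation's scale, which is the intuition behind normalizing per-layer distillation terms.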