LD-Pruner: Efficient Pruning of Latent Diffusion Models using Task-Agnostic Insights
April 19, 2024, 4:41 a.m. | Thibault Castells, Hyoung-Kyu Song, Bo-Kyeong Kim, Shinkook Choi
Source: cs.LG updates on arXiv.org (arxiv.org)
Abstract: Latent Diffusion Models (LDMs) have emerged as powerful generative models, known for delivering remarkable results under constrained computational resources. However, deploying LDMs on resource-limited devices remains difficult, posing challenges in memory consumption and inference speed. To address this, we introduce LD-Pruner, a novel performance-preserving structured pruning method for compressing LDMs. Traditional pruning methods for deep neural networks are not tailored to the unique characteristics of LDMs, such as the high computational …
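The core idea behind a task-agnostic, performance-preserving pruning criterion can be illustrated with a toy sketch: score each structural unit by how much the model's latent output shifts when that unit is ablated, then remove the lowest-scoring units. Note this is a simplified illustration under assumed names (`forward`, `score_layers`, `prune`); LD-Pruner's actual scoring function and the operators it prunes are defined in the paper, not here.

```python
# Toy sketch of latent-space ablation scoring for structured pruning.
# All names here are hypothetical; this is NOT LD-Pruner's implementation.

def matvec(W, x):
    """Multiply a small weight matrix (list of rows) by a vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def forward(layers, x, skip=None):
    """Run x through the layers, optionally replacing layer `skip` with identity."""
    for i, W in enumerate(layers):
        if i == skip:
            continue  # ablate this layer
        x = matvec(W, x)
    return x

def l2(a, b):
    """Euclidean distance between two latent vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def score_layers(layers, x):
    """Importance of each layer = how far the latent output moves when it is ablated."""
    base = forward(layers, x)
    return [l2(base, forward(layers, x, skip=i)) for i in range(len(layers))]

def prune(layers, x, n_remove):
    """Drop the n_remove layers whose ablation changes the latent output the least."""
    scores = score_layers(layers, x)
    drop = set(sorted(range(len(layers)), key=scores.__getitem__)[:n_remove])
    return [W for i, W in enumerate(layers) if i not in drop]
```

For example, an identity layer shifts the latent output by zero when ablated, so it is pruned first; the scoring requires no labels or task-specific loss, which is the task-agnostic aspect the title refers to.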