March 25, 2024, 4:44 a.m. | Zhichao Wei, Qingkun Su, Long Qin, Weizhi Wang

cs.CV updates on arXiv.org

arXiv:2403.15059v1 Announce Type: new
Abstract: Recent advances in tuning-free personalized image generation based on diffusion models are impressive. However, to improve subject fidelity, existing methods either retrain the diffusion model or infuse it with dense visual embeddings, both of which suffer from poor generalization and efficiency. Also, these methods falter in multi-subject image generation due to the unconstrained cross-attention mechanism. In this paper, we propose MM-Diff, a unified and tuning-free image personalization framework capable of generating high-fidelity images of both …
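The abstract attributes multi-subject failures to an unconstrained cross-attention mechanism, where image tokens can attend to the embeddings of every subject at once, blending their attributes. A minimal sketch of the kind of constrained cross-attention this implies (the function name, shapes, and mask construction here are illustrative assumptions, not MM-Diff's actual design) is:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_cross_attention(q, k, v, mask):
    """Cross-attention with a per-token attendance constraint.

    q:    (N_img, d)  image-token queries
    k, v: (N_sub, d)  subject-embedding keys and values
    mask: (N_img, N_sub) boolean; True where an image token is
          allowed to attend to a subject token (e.g., derived
          from a layout region assigned to that subject).
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Blocked pairs get a large negative score, so their
    # post-softmax attention weight is effectively zero,
    # preventing attribute leakage between subjects.
    scores = np.where(mask, scores, -1e9)
    return softmax(scores, axis=-1) @ v
```

With an unconstrained mask (all True) this reduces to ordinary cross-attention; restricting each image region to one subject's tokens is what keeps identities from bleeding into each other.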

