March 21, 2024, 4:46 a.m. | Jiwoo Chung, Sangeek Hyun, Jae-Pil Heo

cs.CV updates on arXiv.org

arXiv:2312.09008v2 Announce Type: replace
Abstract: Despite the impressive generative capabilities of diffusion models, existing diffusion model-based style transfer methods either require inference-stage optimization (e.g., fine-tuning or textual inversion of style), which is time-consuming, or fail to leverage the generative ability of large-scale diffusion models. To address these issues, we introduce a novel artistic style transfer method based on a pre-trained large-scale diffusion model, without any optimization. Specifically, we manipulate the features of the self-attention layers in the same way the cross-attention mechanism works; …
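The abstract is truncated just as the mechanism is introduced, but the stated idea, computing self-attention the way cross-attention works, can be sketched: queries come from the content image's features while keys and values are supplied by the style image's features at the same layer. The snippet below is a minimal, hypothetical illustration of that attention rewiring, not the authors' implementation; the names (style_injected_attention, to_q, to_k, to_v) and the shapes are assumptions for the sketch.

```python
import torch

def style_injected_attention(q_content, k_style, v_style, scale=None):
    """Self-attention computed like cross-attention: queries are projected
    from the content features, while keys and values come from the style
    features, so the content layout attends to style statistics.
    Shapes (hypothetical): [batch, tokens, dim]."""
    d = q_content.shape[-1]
    scale = scale if scale is not None else d ** -0.5
    attn = torch.softmax(q_content @ k_style.transpose(-2, -1) * scale, dim=-1)
    return attn @ v_style

# Toy usage: random tensors standing in for U-Net self-attention
# activations of a content and a style image at one layer/timestep.
content_feats = torch.randn(1, 64, 320)
style_feats = torch.randn(1, 64, 320)

# Hypothetical projections; in a pre-trained diffusion U-Net these would be
# the layer's existing query/key/value projection weights, reused as-is.
to_q = torch.nn.Linear(320, 320, bias=False)
to_k = torch.nn.Linear(320, 320, bias=False)
to_v = torch.nn.Linear(320, 320, bias=False)

out = style_injected_attention(to_q(content_feats),
                               to_k(style_feats),
                               to_v(style_feats))
print(out.shape)  # torch.Size([1, 64, 320])
```

Because the pre-trained projection weights are reused unchanged, this kind of feature manipulation requires no fine-tuning or textual inversion at inference time, which is consistent with the optimization-free claim in the abstract.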

