Feb. 22, 2024, 5:45 a.m. | Denis Lukovnikov, Asja Fischer

cs.CV updates on arXiv.org

arXiv:2402.13404v1 Announce Type: new
Abstract: While text-to-image diffusion models can generate high-quality images from textual descriptions, they generally lack fine-grained control over the visual composition of the generated images. Some recent works tackle this problem by training the model to condition the generation process on an additional input describing the desired image layout. Arguably the most popular among such methods, ControlNet, enables a high degree of control over the generated image using various types of conditioning inputs (e.g. segmentation maps). However, …
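The abstract describes conditioning the generation process on an additional layout input such as a segmentation map. Below is a minimal sketch of what that looks like in practice, assuming the Hugging Face diffusers implementation of ControlNet; the model IDs, prompt, and input file path are illustrative, not taken from the paper.

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load a ControlNet trained to condition generation on segmentation maps.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)

# Attach it to a frozen text-to-image diffusion backbone.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The segmentation map fixes the visual composition; the text prompt
# controls content and style within that layout.
seg_map = load_image("segmentation_map.png")  # placeholder input path
image = pipe("a modern living room, photorealistic", image=seg_map).images[0]
image.save("output.png")

The key design point is that the conditioning branch is trained while the base diffusion model stays frozen, so the same backbone can be reused with different conditioning types (edges, depth, segmentation, and so on).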
