March 22, 2024, 4:45 a.m. | Pablo Marcos-Manchón, Roberto Alcover-Couso, Juan C. SanMiguel, Jose M. Martínez

cs.CV updates on arXiv.org

arXiv:2403.14291v1 Announce Type: new
Abstract: Diffusion models represent a new paradigm in text-to-image generation. Beyond generating high-quality images from text prompts, models such as Stable Diffusion have been successfully extended to the joint generation of semantic segmentation pseudo-masks. However, current extensions primarily rely on extracting attentions linked to prompt words used for image synthesis. This approach limits the generation of segmentation masks derived from word tokens not contained in the text prompt. In this work, we introduce Open-Vocabulary Attention Maps …
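
To make the limitation described in the abstract concrete, the following is a minimal PyTorch sketch of how cross-attention maps tied to prompt tokens are typically turned into segmentation pseudo-masks. The tensor shapes, function name, and thresholding step are illustrative assumptions, not the OVAM implementation from the paper.

```python
import torch
import torch.nn.functional as F

def token_attention_mask(image_feats, text_feats, token_idx,
                         out_size=(512, 512), thresh=0.5):
    """image_feats: (hw, d) spatial features from a cross-attention layer (assumed shape).
    text_feats:  (n_tokens, d) text-encoder embeddings for the prompt (assumed shape).
    Returns a binary pseudo-mask for the prompt token at token_idx."""
    d = image_feats.shape[-1]
    # Scaled dot-product attention: each spatial location attends over prompt tokens.
    attn = torch.softmax(image_feats @ text_feats.T / d**0.5, dim=-1)  # (hw, n_tokens)
    # Keep the column for the word of interest and reshape to a square grid.
    hw = attn.shape[0]
    side = int(hw**0.5)
    token_map = attn[:, token_idx].reshape(1, 1, side, side)
    # Upsample to image resolution, normalise to [0, 1], and threshold.
    token_map = F.interpolate(token_map, size=out_size, mode="bilinear",
                              align_corners=False)
    token_map = (token_map - token_map.min()) / (token_map.max() - token_map.min() + 1e-8)
    return (token_map > thresh).squeeze()

# Toy usage: 64x64 latent grid, 77-token prompt, pseudo-mask for token 5.
mask = token_attention_mask(torch.randn(64 * 64, 768), torch.randn(77, 768), token_idx=5)
```

Because the attention matrix is indexed by prompt tokens, a concept that never appears in the prompt has no column to read off, which is exactly the restriction the Open-Vocabulary Attention Maps proposed in this work aim to remove.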

