March 26, 2024, 4:47 a.m. | Zizhao Hu, Shaochong Jia, Mohammad Rostami

cs.CV updates on arXiv.org

arXiv:2403.16530v1 Announce Type: new
Abstract: Diffusion models have been widely used for conditional cross-modal data generation tasks such as text-to-image and text-to-video. However, state-of-the-art models still fail to align generated visual concepts with high-level semantics in language, such as object count and spatial relationships. We approach this problem from a multimodal data fusion perspective and investigate how different fusion strategies affect vision-language alignment. We discover that, compared to the widely used early fusion of conditioning text in …
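The abstract contrasts fusion strategies for injecting text conditioning into a generative vision backbone. As an illustration only, the sketch below shows the general distinction between early fusion (text tokens concatenated with image tokens before the first attention layer) and an intermediate, cross-attention-style fusion applied at a deeper layer. The module names, shapes, and layer choices here are assumptions for the sketch, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class EarlyFusionBlock(nn.Module):
    """Early fusion: text tokens are concatenated with image tokens,
    so a single self-attention mixes both modalities from the start."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_tokens, text_tokens):
        x = torch.cat([image_tokens, text_tokens], dim=1)  # (B, N_img + N_txt, D)
        h = self.norm(x)
        x = x + self.attn(h, h, h)[0]
        return x[:, : image_tokens.size(1)]  # keep only the image tokens


class IntermediateFusionBlock(nn.Module):
    """Intermediate fusion: image tokens self-attend, then attend to the
    text tokens through a separate cross-attention at this depth."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, image_tokens, text_tokens):
        x = image_tokens
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h)[0]
        x = x + self.cross_attn(self.norm2(x), text_tokens, text_tokens)[0]
        return x


if __name__ == "__main__":
    B, N_img, N_txt, D = 2, 64, 16, 256
    img = torch.randn(B, N_img, D)
    txt = torch.randn(B, N_txt, D)
    print(EarlyFusionBlock(D)(img, txt).shape)         # torch.Size([2, 64, 256])
    print(IntermediateFusionBlock(D)(img, txt).shape)  # torch.Size([2, 64, 256])
```

In the early-fusion sketch every layer would mix the two modalities inside one self-attention, whereas the intermediate-fusion sketch keeps image self-attention separate and injects text only through cross-attention at chosen depths; which of these better aligns generation with semantics such as object count and spatial relationships is the question the paper investigates.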

