Sept. 27, 2022, 1:12 a.m. | Fan Bao, Chongxuan Li, Yue Cao, Jun Zhu

cs.CV updates on arXiv.org

Vision transformers (ViT) have shown promise in various vision tasks,
including low-level ones, while the U-Net remains dominant in score-based
diffusion models. In this paper, we perform a systematic empirical study of
ViT-based architectures in diffusion models. Our results suggest that
adding extra long skip connections (as in the U-Net) to ViT is crucial for
diffusion models. The new ViT architecture, together with other improvements,
is referred to as U-ViT. On several popular visual datasets, U-ViT achieves
competitive generation results …
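
A minimal sketch, assuming PyTorch, of how U-Net-style long skip connections might be added to a ViT backbone as the abstract describes. The class name, layer choices, and dimensions below are illustrative assumptions, not the paper's exact U-ViT implementation.

```python
import torch
import torch.nn as nn

class UViTSketch(nn.Module):
    """Hypothetical sketch: ViT-style transformer blocks with U-Net-like
    long skip connections between shallow and deep blocks."""

    def __init__(self, dim=512, depth=12, heads=8):
        super().__init__()
        assert depth % 2 == 0
        half = depth // 2
        self.in_blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            for _ in range(half))
        self.mid_block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.out_blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            for _ in range(half))
        # One linear layer per long skip to fuse shallow and deep features.
        self.skip_fuse = nn.ModuleList(
            nn.Linear(2 * dim, dim) for _ in range(half))

    def forward(self, x):
        # x: (batch, tokens, dim) -- patch embeddings plus any time/condition tokens.
        skips = []
        for blk in self.in_blocks:
            x = blk(x)
            skips.append(x)  # store shallow features for the long skips
        x = self.mid_block(x)
        for blk, fuse in zip(self.out_blocks, self.skip_fuse):
            # Concatenate matching shallow features, project back to dim.
            x = fuse(torch.cat([x, skips.pop()], dim=-1))
            x = blk(x)
        return x

# Example usage with assumed shapes: 256 tokens of dimension 512.
tokens = torch.randn(2, 256, 512)
out = UViTSketch()(tokens)  # same shape as the input
```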

