June 10, 2022, 1:10 a.m. | Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, Wook-Shin Han

cs.LG updates on arXiv.org arxiv.org

Although autoregressive models have achieved promising results on image
generation, their unidirectional generation process prevents the resultant
images from fully reflecting global contexts. To address this issue, we propose
Draft-and-Revise, an effective image generation framework with a Contextual
RQ-Transformer that considers global contexts during the generation process. As a
generalized VQ-VAE, RQ-VAE first represents a high-resolution image as a
sequence of discrete code stacks. After code stacks in the sequence are
randomly masked, Contextual RQ-Transformer is trained to infill the …

arxiv cv draft generation image image generation transformer
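The abstract describes RQ-VAE encoding an image as a sequence of discrete code stacks and training the Contextual RQ-Transformer to infill randomly masked stacks from the unmasked context. The snippet below is a minimal sketch of that masked-infilling objective, not the authors' implementation: it assumes codes of shape (batch, positions, depth), uses a plain PyTorch TransformerEncoder as a stand-in for the Contextual RQ-Transformer, and all class names, shapes, and hyperparameters are illustrative.

```python
# Hypothetical sketch of masked code-stack infilling; architecture and
# hyperparameters are placeholders, not those of the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedStackInfiller(nn.Module):
    def __init__(self, vocab_size=1024, depth=4, dim=256, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        # one embedding table per residual-quantization depth, plus a learned mask token
        self.embed = nn.ModuleList(nn.Embedding(vocab_size, dim) for _ in range(depth))
        self.mask_token = nn.Parameter(torch.zeros(dim))
        # bidirectional encoder standing in for the Contextual RQ-Transformer
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=4)
        self.heads = nn.ModuleList(nn.Linear(dim, vocab_size) for _ in range(depth))

    def forward(self, codes):  # codes: (B, T, D) long tensor of code indices
        B, T, D = codes.shape
        # sum the D code embeddings at each position into a single stack embedding
        x = sum(self.embed[d](codes[..., d]) for d in range(D))      # (B, T, dim)
        # randomly mask whole code stacks (all D codes at a position)
        mask = torch.rand(B, T, device=codes.device) < self.mask_ratio
        x = torch.where(mask.unsqueeze(-1), self.mask_token, x)
        h = self.encoder(x)                                          # global context
        # predict the original codes only at the masked positions
        loss = sum(
            F.cross_entropy(self.heads[d](h)[mask], codes[..., d][mask])
            for d in range(D)) / D
        return loss

codes = torch.randint(0, 1024, (2, 64, 4))   # toy 8x8 grid of depth-4 code stacks
print(MaskedStackInfiller()(codes))
```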
