July 8, 2022, 1:12 a.m. | Chaerin Kong, Jeesoo Kim, Donghoon Han, Nojun Kwak

cs.CV updates on arXiv.org

Producing diverse and realistic images with generative models such as GANs
typically requires large-scale training with vast amounts of images. GANs
trained with limited data can easily memorize the few training samples and
display undesirable properties such as a "stairlike" latent space, where
interpolation in the latent space yields discontinuous transitions in the
output space. In this work, we consider the challenging task of
pretraining-free few-shot image synthesis, and seek to train existing
generative models with minimal overfitting and mode collapse. We …
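The "stairlike" latent space issue mentioned above can be probed with a simple latent interpolation check. The sketch below is illustrative only and assumes a hypothetical generator `G` and latent dimension of 512 (neither is specified in the abstract); it builds the linear interpolation path between two latent codes that one would feed through a generator to inspect transition smoothness.

```python
import numpy as np

def lerp(z0, z1, steps):
    """Linearly interpolate between two latent vectors.

    For a well-behaved generator G, the images G(z) along this path
    should change smoothly; a "stairlike" latent space instead produces
    abrupt jumps between (memorized) training samples.
    """
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1.0 - t) * z0 + t * z1 for t in ts])

# Hypothetical setup: 512-dim Gaussian latents, 8 interpolation steps.
rng = np.random.default_rng(0)
z0 = rng.standard_normal(512)
z1 = rng.standard_normal(512)
path = lerp(z0, z1, 8)
print(path.shape)  # (8, 512)
# Each row path[i] would then be passed to G to render the i-th frame.
```

In practice one renders every `path[i]` with the trained generator and inspects (or measures, e.g. via perceptual distance between consecutive frames) how smoothly the outputs vary.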

