March 4, 2024, 5:41 a.m. | Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Däubener, Sophie Fellenz, Asja Fischer, Thomas Gärtner, Matthias Kirchler, Ma

cs.LG updates on arXiv.org

arXiv:2403.00025v1 Announce Type: new
Abstract: The field of deep generative modeling has grown rapidly and consistently over the years. With the availability of massive amounts of training data coupled with advances in scalable unsupervised learning paradigms, recent large-scale generative models show tremendous promise in synthesizing high-resolution images and text, as well as structured data such as videos and molecules. However, we argue that current large-scale generative AI models do not sufficiently address several fundamental issues that hinder their widespread adoption …
