March 11, 2024, 4:41 a.m. | Thomas M. Sutter, Yang Meng, Norbert Fortin, Julia E. Vogt, Stephan Mandt

cs.LG updates on arXiv.org

arXiv:2403.05300v1 Announce Type: new
Abstract: Variational Autoencoders for multimodal data hold promise for many tasks in data analysis, such as representation learning, conditional generation, and imputation. Current architectures either share the encoder output, decoder input, or both across modalities to learn a shared representation. Such architectures impose hard constraints on the model. In this work, we show that a better latent representation can be obtained by replacing these hard constraints with a soft constraint. We propose a new mixture-of-experts prior, …
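For intuition, here is a minimal sketch of how a mixture-of-experts prior can act as a soft constraint across modality-specific latents, assuming Gaussian unimodal encoders and a Monte-Carlo KL estimate; function names, shapes, and the uniform mixture weights are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a uniform mixture-of-experts (MoE) prior built from the
# unimodal Gaussian posteriors, used as a soft coupling between modalities instead of
# a hard shared encoder output. Shapes and names are hypothetical.
import torch
from torch.distributions import Normal, Independent, Categorical, MixtureSameFamily

def moe_prior(mus, logvars):
    """Uniform mixture of the unimodal Gaussian posteriors.

    mus, logvars: (M, B, D) tensors -- M modalities, batch size B, latent dim D.
    """
    M, B, D = mus.shape
    comps = Independent(
        Normal(mus.permute(1, 0, 2), (0.5 * logvars).exp().permute(1, 0, 2)), 1
    )                                                  # batch (B, M), event (D)
    weights = Categorical(logits=torch.zeros(B, M))    # uniform over modalities
    return MixtureSameFamily(weights, comps)

def soft_alignment_kl(mus, logvars, n_samples=8):
    """Monte-Carlo estimate of sum_m KL( q_m(z|x_m) || MoE prior ).

    Pulls each unimodal posterior toward a shared mixture (soft constraint)
    rather than forcing all modalities onto one encoder output (hard constraint).
    """
    prior = moe_prior(mus, logvars)
    kl = 0.0
    for m in range(mus.shape[0]):
        q_m = Independent(Normal(mus[m], (0.5 * logvars[m]).exp()), 1)
        z = q_m.rsample((n_samples,))                  # (S, B, D)
        kl = kl + (q_m.log_prob(z) - prior.log_prob(z)).mean(0)
    return kl.mean()                                   # scalar regularizer

# Example usage with random posterior parameters (2 modalities, batch 4, latent dim 8):
mus, logvars = torch.randn(2, 4, 8), torch.randn(2, 4, 8)
print(soft_alignment_kl(mus, logvars))
```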

