April 22, 2024, 4:43 a.m. | Marcel Hirt, Domenico Campolo, Victoria Leong, Juan-Pablo Ortega

cs.LG updates on arXiv.org

arXiv:2309.00380v2 Announce Type: replace-cross
Abstract: Devising deep latent variable models for multi-modal data has been a long-standing theme in machine learning research. Multi-modal Variational Autoencoders (VAEs) have been a popular generative model class that learns latent representations that jointly explain multiple modalities. Various objective functions for such models have been suggested, often motivated as lower bounds on the multi-modal data log-likelihood or from information-theoretic considerations. To encode latent variables from different modality subsets, Product-of-Experts (PoE) or Mixture-of-Experts (MoE) aggregation schemes …
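The abstract is truncated before it describes the aggregation schemes in detail, so as a rough illustration only: the Product-of-Experts (PoE) aggregation it mentions is commonly realized in multi-modal VAEs (e.g., Wu & Goodman's MVAE) by multiplying per-modality Gaussian encoder densities, which has a closed form because precisions add. The sketch below assumes diagonal-Gaussian encoders and includes a standard-normal prior expert so the product is defined for any modality subset; the function name `product_of_experts` is hypothetical, not from the paper.

```python
import numpy as np

def product_of_experts(mus, logvars):
    """Combine per-modality Gaussian posteriors N(mu_m, sigma_m^2)
    into one Gaussian via a Product-of-Experts.

    Includes a standard-normal prior expert N(0, I), so the result
    is well defined even when only a subset of modalities is observed.
    Precisions add: T = 1 + sum_m 1/sigma_m^2;
    joint mean = (sum_m mu_m / sigma_m^2) / T.
    """
    precisions = [np.ones_like(mus[0])]     # prior expert: precision 1
    weighted_mus = [np.zeros_like(mus[0])]  # prior expert: mean 0
    for mu, logvar in zip(mus, logvars):
        prec = np.exp(-logvar)              # 1 / sigma_m^2
        precisions.append(prec)
        weighted_mus.append(mu * prec)
    total_prec = np.sum(precisions, axis=0)
    joint_var = 1.0 / total_prec
    joint_mu = np.sum(weighted_mus, axis=0) * joint_var
    return joint_mu, np.log(joint_var)

# Example: two modalities, latent dimension 4
mu1, lv1 = np.zeros(4), np.zeros(4)               # encoder 1: N(0, 1)
mu2, lv2 = np.ones(4), np.log(0.5) * np.ones(4)   # encoder 2: N(1, 0.5)
joint_mu, joint_lv = product_of_experts([mu1, mu2], [lv1, lv2])
print(joint_mu, np.exp(joint_lv))
```

A Mixture-of-Experts (MoE) aggregation would instead average the per-modality posteriors (or sample one expert at a time) rather than multiply them; the abstract does not specify which variant the paper adopts.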
