Nov. 15, 2022, 2:12 a.m. | Ignacio Peis, Chao Ma, José Miguel Hernández-Lobato

cs.LG updates on arXiv.org arxiv.org

Variational Autoencoders (VAEs) have recently been highly successful at imputing and acquiring heterogeneous missing data. However, within this specific application domain, existing VAE methods are restricted to a single layer of latent variables and strictly Gaussian posterior approximations. To address these limitations, we present HH-VAEM, a Hierarchical VAE model for mixed-type incomplete data that uses Hamiltonian Monte Carlo with automatic hyper-parameter tuning for improved approximate inference. Our experiments show that HH-VAEM outperforms existing baselines in the tasks of missing …
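The abstract's key idea, refining a VAE's Gaussian posterior approximation over the latents with Hamiltonian Monte Carlo, can be illustrated with a minimal sketch. This is not the authors' HH-VAEM implementation: the toy decoder, the fixed step size, and the number of leapfrog steps below are hypothetical placeholders, chosen only to show how an HMC transition targeting log p(x | z) + log p(z) refines an initial latent sample.

```python
# Minimal sketch (assumptions only, not the HH-VAEM code): refine a latent
# sample z with HMC so it targets the unnormalized posterior p(x | z) p(z).
import numpy as np

def log_target(z, x, decoder_log_lik):
    """Unnormalized posterior log-density: log p(x | z) + log p(z), p(z) = N(0, I)."""
    return decoder_log_lik(x, z) - 0.5 * np.sum(z ** 2)

def grad_log_target(z, x, decoder_log_lik, eps=1e-5):
    """Finite-difference gradient; a real implementation would use autodiff."""
    g = np.zeros_like(z)
    for i in range(z.size):
        dz = np.zeros_like(z); dz[i] = eps
        g[i] = (log_target(z + dz, x, decoder_log_lik)
                - log_target(z - dz, x, decoder_log_lik)) / (2 * eps)
    return g

def hmc_step(z, x, decoder_log_lik, step_size=0.05, n_leapfrog=10, rng=None):
    """One HMC transition that leaves the latent posterior invariant."""
    rng = rng or np.random.default_rng()
    p0 = rng.standard_normal(z.shape)                  # sample momentum
    z_new, p = z.copy(), p0.copy()
    # Leapfrog integration of the Hamiltonian dynamics.
    p += 0.5 * step_size * grad_log_target(z_new, x, decoder_log_lik)
    for _ in range(n_leapfrog - 1):
        z_new += step_size * p
        p += step_size * grad_log_target(z_new, x, decoder_log_lik)
    z_new += step_size * p
    p += 0.5 * step_size * grad_log_target(z_new, x, decoder_log_lik)
    # Metropolis accept/reject on the joint (z, p) energy.
    def energy(zz, pp):
        return -log_target(zz, x, decoder_log_lik) + 0.5 * np.sum(pp ** 2)
    accept = np.log(rng.uniform()) < energy(z, p0) - energy(z_new, p)
    return z_new if accept else z

if __name__ == "__main__":
    # Toy usage: a linear-Gaussian "decoder" stands in for the VAE decoder.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((5, 2))
    x_obs = rng.standard_normal(5)
    loglik = lambda x, z: -0.5 * np.sum((x - W @ z) ** 2)
    z = rng.standard_normal(2)                         # e.g. a draw from the encoder's Gaussian
    for _ in range(100):
        z = hmc_step(z, x_obs, loglik, rng=rng)
    print("refined latent sample:", z)
```

Per the abstract, HH-VAEM differs from this sketch in that the HMC hyper-parameters (such as step sizes) are tuned automatically rather than fixed by hand, and the latent space is hierarchical rather than a single flat vector.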

acquisition, arxiv, data, hamiltonian monte carlo, hierarchical, imputation
