Sept. 19, 2022, 1:12 a.m. | Ignacio Peis, Chao Ma, José Miguel Hernández-Lobato

cs.LG updates on arXiv.org

Variational Autoencoders (VAEs) have recently been highly successful at
imputing and acquiring heterogeneous missing data. However, within this
specific application domain, existing VAE methods are restricted to using only
one layer of latent variables and strictly Gaussian posterior approximations.
To address these limitations, we present HH-VAEM, a Hierarchical VAE model for
mixed-type incomplete data that uses Hamiltonian Monte Carlo with automatic
hyper-parameter tuning for improved approximate inference. Our experiments show
that HH-VAEM outperforms existing baselines in the tasks of missing …
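The core idea described above, refining approximate posterior samples with Hamiltonian Monte Carlo rather than relying on a single Gaussian encoder distribution, can be illustrated with a minimal sketch. The snippet below is not the HH-VAEM implementation: `log_posterior`, `step_size`, and `n_leapfrog` are illustrative stand-ins, and a real model would target log p(x, z) over hierarchical latents, initialise from the encoder's Gaussian proposal, and tune the HMC hyper-parameters automatically as the abstract describes.

```python
# Minimal sketch of Hamiltonian Monte Carlo for refining a VAE latent sample.
# NOT the HH-VAEM code; the target and hyper-parameters are placeholders.
import torch

def log_posterior(z):
    # Toy unnormalised log target; a real model would use log p(x, z).
    return -0.5 * (z ** 2).sum()

def grad_log_posterior(z):
    z = z.detach().requires_grad_(True)
    return torch.autograd.grad(log_posterior(z), z)[0]

def hmc_step(z, step_size=0.1, n_leapfrog=10):
    """One HMC transition: leapfrog integration plus Metropolis correction."""
    momentum = torch.randn_like(z)
    z_new, p_new = z.clone(), momentum.clone()

    # Leapfrog integration of the Hamiltonian dynamics.
    p_new = p_new + 0.5 * step_size * grad_log_posterior(z_new)
    for _ in range(n_leapfrog):
        z_new = z_new + step_size * p_new
        p_new = p_new + step_size * grad_log_posterior(z_new)
    p_new = p_new - 0.5 * step_size * grad_log_posterior(z_new)

    # Metropolis-Hastings acceptance based on the change in the Hamiltonian.
    current_h = -log_posterior(z) + 0.5 * (momentum ** 2).sum()
    proposed_h = -log_posterior(z_new) + 0.5 * (p_new ** 2).sum()
    if torch.rand(()) < torch.exp(current_h - proposed_h):
        return z_new.detach()
    return z.detach()

# Usage: start from a stand-in for an encoder sample q(z | x) and refine it.
z = torch.randn(8)
for _ in range(20):
    z = hmc_step(z)
```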

Tags: acquisition, arxiv, data, hamiltonian monte carlo, hierarchical, imputation
