Web: http://arxiv.org/abs/2002.06873

Sept. 15, 2022, 1:11 a.m. | Swapnil Mishra, Seth Flaxman, Tresnia Berah, Harrison Zhu, Mikko Pakkanen, Samir Bhatt

cs.LG updates on arXiv.org arxiv.org

Stochastic processes provide a mathematically elegant way to model complex data.
In theory, they provide flexible priors over function classes that can encode a
wide range of interesting assumptions. In practice, however, efficient
inference by optimisation or marginalisation is difficult, a problem further
exacerbated by big data and high-dimensional input spaces. We propose a novel
variational autoencoder (VAE) called the prior encoding variational autoencoder
($\pi$VAE). The $\pi$VAE is finitely exchangeable and Kolmogorov consistent,
and thus is a continuous stochastic process. …
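The excerpt names the VAE machinery but gives no implementation detail. As a generic illustration only (not the paper's πVAE), here is a minimal numpy sketch of the three pieces any VAE rests on: an encoder producing a Gaussian over latents, the reparameterization trick, and the closed-form KL term of the ELBO. All function names and the linear-encoder form are hypothetical.

```python
import numpy as np

def encode(x, W_mu, W_logvar):
    # Hypothetical linear encoder: maps a vector of function evaluations x
    # to the mean and log-variance of a diagonal Gaussian over latents z.
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps keeps the sample
    # differentiable with respect to the encoder parameters.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# Toy example: x is a function evaluated at 8 input locations, z is 2-d.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0.0, np.pi, 8))
W_mu = rng.standard_normal((2, 8)) * 0.1
W_logvar = rng.standard_normal((2, 8)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
kl = kl_to_standard_normal(mu, logvar)
print(z.shape, kl >= 0.0)
```

Training would add a decoder and a reconstruction term; the KL above is always non-negative, since each summand `exp(lv) - 1 - lv` and `mu**2` is.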

Tags: arxiv, bayesian, bayesian deep learning, deep learning, mcmc, prior, stochastic process
