May 23, 2022, 1:12 a.m. | Matthew Willetts, Brooks Paige

cs.CV updates on arXiv.org

In this paper, we investigate the algorithmic stability of unsupervised
representation learning with deep generative models, as a function of repeated
re-training on the same input data. Algorithms for learning low-dimensional
linear representations -- for example, principal components analysis (PCA) or
linear independent components analysis (ICA) -- come with guarantees that they
will always reveal the same latent representations (perhaps up to an arbitrary
rotation or permutation). Unfortunately, for non-linear representation
learning, such as in a variational auto-encoder (VAE) …
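Below is a minimal sketch, not from the paper, of the linear case the abstract refers to: re-fitting scikit-learn's PCA on the same data yields identical components (up to sign), the kind of repeat-training stability that the abstract contrasts with non-linear models such as VAEs. The toy data and component count are illustrative assumptions.

```python
# Sketch: PCA is stable under repeated re-fitting on the same data.
# Each re-fit recovers the same components, up to a sign flip per component.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20))  # toy correlated data

# Fit the same model twice on identical input data.
runs = [PCA(n_components=5, svd_solver="full").fit(X).components_ for _ in range(2)]

# Rows of components_ are unit-norm, so the row-wise dot product is a cosine
# similarity; |cosine| == 1 means the two runs found the same directions.
cosines = np.sum(runs[0] * runs[1], axis=1)
print(np.round(np.abs(cosines), 6))  # -> all 1.0
```

The corresponding check for a VAE (re-training and comparing latent encodings) generally does not give matching representations, which is the instability the paper studies.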

arxiv, learning, representation, representation learning, unsupervised
