Aug. 12, 2022, 1:11 a.m. | Gabriel Loaiza-Ganem, Brendan Leigh Ross, Jesse C. Cresswell, Anthony L. Caterini

stat.ML updates on arXiv.org

Likelihood-based, or explicit, deep generative models use neural networks to
construct flexible high-dimensional densities. This formulation directly
contradicts the manifold hypothesis, which states that observed data lies on a
low-dimensional manifold embedded in high-dimensional ambient space. In this
paper we investigate the pathologies of maximum-likelihood training in the
presence of this dimensionality mismatch. We formally prove that degenerate
optima are achieved wherein the manifold itself is learned but not the
distribution on it, a phenomenon we call manifold overfitting. We …

Tags: arxiv, deep generative models, manifold, ml, overfitting
