Nov. 15, 2022, 2:12 a.m. | Simon Kneer, Taraneh Sayadi, Denis Sipp, Peter Schmid, Georgios Rigas

cs.LG updates on arXiv.org

Nonlinear principal component analysis (NLPCA) via autoencoders has attracted
attention in the dynamical systems community due to its higher compression rate
compared to linear principal component analysis (PCA). However, these
model-reduction methods suffer an increase in latent-space dimensionality when
applied to datasets whose samples are equivalent under symmetry
transformations. In this study, we introduce a novel machine learning
embedding for autoencoders, which uses Siamese networks and spatial transformer
networks to account for …
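To make the general idea concrete, here is a minimal sketch in PyTorch, not the authors' implementation: a small head estimates each sample's translation, the sample is shifted to a canonical phase before encoding, and the transformation is re-applied at reconstruction, so symmetry-equivalent samples map to (nearly) the same latent code. All names (`SymmetryAwareAE`, `circular_shift`) and the 1-D periodic setup are illustrative assumptions.

```python
import torch
import torch.nn as nn

def circular_shift(x, shift, n):
    """Shift a batch of periodic 1-D signals by a (possibly fractional)
    amount via the Fourier shift theorem. x: (B, n), shift: (B,)."""
    X = torch.fft.rfft(x)                                   # (B, n//2+1)
    k = torch.fft.rfftfreq(n, d=1.0 / n).to(x.device)       # wavenumbers 0..n/2
    phase = torch.exp(-2j * torch.pi * k * shift.unsqueeze(-1) / n)
    return torch.fft.irfft(X * phase, n=n)

class SymmetryAwareAE(nn.Module):
    """Toy translation-aware autoencoder (illustrative, not the paper's
    architecture): a shift head aligns each sample to a canonical phase
    before encoding; the shift is re-applied after decoding."""
    def __init__(self, n=64, latent=2):
        super().__init__()
        self.n = n
        self.shift_head = nn.Sequential(
            nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, 1))
        self.encoder = nn.Sequential(
            nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, latent))
        self.decoder = nn.Sequential(
            nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, n))

    def forward(self, x):
        shift = self.shift_head(x).squeeze(-1)          # per-sample shift
        aligned = circular_shift(x, -shift, self.n)     # undo the translation
        z = self.encoder(aligned)                       # symmetry-reduced code
        recon = circular_shift(self.decoder(z), shift, self.n)
        return recon, z, shift

# Usage: shifted copies of one waveform pass through the same aligned encoder.
if __name__ == "__main__":
    n = 64
    base = torch.sin(2 * torch.pi * torch.arange(n) / n)
    batch = torch.stack([torch.roll(base, s) for s in (0, 5, 20)])
    model = SymmetryAwareAE(n=n)
    recon, z, shift = model(batch)
    print(recon.shape, z.shape, shift.shape)
```

After training with a reconstruction loss, the latent code no longer needs to spend dimensions encoding the translation itself, which is the dimensionality saving the abstract alludes to; the paper's actual embedding additionally handles discrete symmetries via Siamese networks.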

arxiv physics symmetry
