Reproducible, incremental representation learning with Rosetta VAE. (arXiv:2201.05206v1 [cs.LG])
Jan. 17, 2022, 2:10 a.m. | Miles Martinez, John Pearson
cs.LG updates on arXiv.org arxiv.org
Variational autoencoders are among the most popular methods for distilling
low-dimensional structure from high-dimensional data, making them increasingly
valuable as tools for data exploration and scientific discovery. However,
unlike typical machine learning problems in which a single model is trained
once on a single large dataset, scientific workflows privilege learned features
that are reproducible, portable across labs, and amenable to incremental
extension as new data arrive. Ideally, methods used by different research groups
should produce comparable results, even without sharing fully trained …
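The abstract's premise is that a VAE compresses high-dimensional data into a low-dimensional latent space via a stochastic encoder. A minimal numpy-only sketch of that forward pass is below; the linear encoder/decoder weights and dimensions are hypothetical placeholders (a real VAE uses a deep-learning framework and trains these weights by maximizing the evidence lower bound, and this sketch says nothing about the Rosetta VAE method itself):

```python
# Hypothetical, numpy-only sketch of a VAE forward pass.
# All weights here are random placeholders, not trained parameters.
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    """Map high-dimensional x to the mean and log-variance of a latent Gaussian."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (the reparameterization trick),
    so gradients could flow through mu and logvar during training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, W_dec):
    """Reconstruct the high-dimensional input from the latent code."""
    return z @ W_dec

# Toy dimensions: 64-d observations compressed to a 2-d latent space.
x = rng.standard_normal((8, 64))
W_mu = rng.standard_normal((64, 2)) * 0.1
W_logvar = rng.standard_normal((64, 2)) * 0.1
W_dec = rng.standard_normal((2, 64)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
x_hat = decode(z, W_dec)
print(z.shape, x_hat.shape)  # prints (8, 2) (8, 64)
```

The low-dimensional codes `z` are the "learned features" the abstract refers to; the paper's concern is making such features stable across retraining runs and research groups.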