June 29, 2022, 1:12 a.m. | Tobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, Gjergji Kasneci

cs.CV updates on arXiv.org

Interest in understanding and factorizing learned embedding spaces is
growing. For instance, recent concept-based explanation techniques analyze a
machine learning model in terms of interpretable latent components. Such
components have to be discovered in the model's embedding space, e.g., through
independent component analysis (ICA) or modern disentanglement learning
techniques. While these unsupervised approaches offer a sound formal framework,
they either require access to a data generating function or impose rigid
assumptions on the data distribution, such as independence of components, …
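As a rough illustration of the kind of unsupervised component discovery the abstract refers to, the sketch below applies ICA to a matrix of embedding vectors. It is a minimal sketch only: the random embeddings, the choice of 10 components, and the use of scikit-learn's FastICA are assumptions for illustration, not the paper's method.

```python
# Minimal sketch, assuming scikit-learn is available and embeddings come
# from some pretrained model (here replaced by random data for brevity).
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical embedding matrix: n_samples x embedding_dim
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))

# ICA models the observed embeddings as linear mixtures of statistically
# independent latent components -- the rigid independence assumption the
# abstract points out.
ica = FastICA(n_components=10, random_state=0)
components = ica.fit_transform(embeddings)  # per-sample component activations
mixing = ica.mixing_                        # (embedding_dim, n_components)

print(components.shape)  # (1000, 10)
print(mixing.shape)      # (64, 10)
```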

Tags: arxiv, assumptions, embedding, ml
