Identifying Linearly-Mixed Causal Representations from Multi-Node Interventions
March 26, 2024, 4:45 a.m. | Simon Bing, Urmi Ninad, Jonas Wahl, Jakob Runge
cs.LG updates on arXiv.org
Abstract: The task of inferring high-level causal variables from low-level observations, commonly referred to as causal representation learning, is fundamentally underconstrained. As such, recent works addressing this problem focus on various assumptions that lead to identifiability of the underlying latent causal variables. A large body of these preceding approaches considers multi-environment data collected under different interventions on the causal model. What is common to virtually all of these works is the restrictive assumption that in …
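To make the setting concrete, here is a minimal toy sketch (not the paper's method) of the data-generating process the abstract describes: latent causal variables follow a linear structural causal model, observations are a linear mixture of the latents, and a "multi-node intervention" shifts the mechanisms of several latents at once. The graph, mixing matrix, and shift values below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_obs = 3, 5

# Hypothetical linear SCM over latents z: z1 -> z2 -> z3.
# B[i, j] != 0 means z_j is a parent of z_i.
B = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.5, 0.0]])

# Unknown linear mixing from latents to observations: x = A z.
A = rng.normal(size=(d_obs, d_latent))

def sample(n, shift=None):
    """Sample n observations. `shift` maps latent indices to mean
    shifts, modeling a (possibly multi-node) intervention."""
    eps = rng.normal(size=(n, d_latent))  # exogenous noise
    if shift:
        for idx, s in shift.items():
            eps[:, idx] += s
    # Solve (I - B) z = eps for the latents, then mix linearly.
    z = np.linalg.solve(np.eye(d_latent) - B, eps.T).T
    return z @ A.T

x_obs = sample(1000)                            # observational environment
x_int = sample(1000, shift={0: 2.0, 2: -1.0})   # intervention on z1 and z3
```

Causal representation learning asks: given only multi-environment observations like `x_obs` and `x_int`, under what assumptions can the latents `z` (and the mixing `A`) be identified up to tolerable ambiguities?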