Generative multitask learning mitigates target-causing confounding. (arXiv:2202.04136v3 [cs.LG] UPDATED)
Oct. 25, 2022, 1:15 a.m. | Taro Makino, Krzysztof J. Geras, Kyunghyun Cho
stat.ML updates on arXiv.org
We propose generative multitask learning (GMTL), a simple and scalable
approach to causal representation learning for multitask learning. Our approach
makes a minor change to the conventional multitask inference objective and
improves robustness to target shift. Since GMTL only modifies the inference
objective, it can be used with existing multitask learning methods without
additional training. The improvement in robustness comes from mitigating
unobserved confounders that cause the targets but not the input. We refer to
these as "target-causing confounders". …
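The abstract does not spell out the "minor change" to the inference objective, but a natural reading is that inference is rescored by the target prior: dividing the discriminative posterior p(y|x) by the empirical marginal p(y) yields a score proportional to the generative term p(x|y) (Bayes' rule), so the prediction no longer leans on a training-time p(y) that target shift may invalidate. A minimal sketch under that assumption (the `alpha` interpolation knob and the toy numbers are illustrative, not from the paper):

```python
import numpy as np

def generative_scores(log_posterior, log_prior, alpha=1.0):
    """Adjust discriminative scores log p(y|x) by down-weighting the
    target prior: score(y) = log p(y|x) - alpha * log p(y).
    With alpha=1 the score is proportional to log p(x|y) (Bayes' rule),
    so the prediction no longer depends on the training-time p(y).
    alpha=0 recovers ordinary discriminative inference."""
    return np.asarray(log_posterior) - alpha * np.asarray(log_prior)

# Toy example: two classes with a heavily skewed training prior.
log_post = np.log(np.array([0.7, 0.3]))   # p(y|x) from a trained classifier
log_prior = np.log(np.array([0.9, 0.1]))  # empirical p(y) on training data

disc_pred = np.argmax(log_post)                                # -> class 0
gen_pred = np.argmax(generative_scores(log_post, log_prior))   # -> class 1
```

Note that only inference changes: the classifier producing log p(y|x) is trained as usual, which matches the claim that GMTL requires no additional training.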
More from arxiv.org / stat.ML updates on arXiv.org
Mixture of partially linear experts
16 hours ago | arxiv.org
Adaptive deep learning for nonlinear time series models
1 day, 16 hours ago | arxiv.org
A Full Adagrad algorithm with O(Nd) operations
1 day, 16 hours ago | arxiv.org
Minimax Regret Learning for Data with Heterogeneous Subgroups
1 day, 16 hours ago | arxiv.org
Jobs in AI, ML, Big Data
Lead Developer (AI)
@ Cere Network | San Francisco, US
Research Engineer
@ Allora Labs | Remote
Ecosystem Manager
@ Allora Labs | Remote
Founding AI Engineer, Agents
@ Occam AI | New York
AI Engineer Intern, Agents
@ Occam AI | US
AI Research Scientist
@ Vara | Berlin, Germany and Remote