Hacking Task Confounder in Meta-Learning
April 18, 2024, 4:43 a.m. | Jingyao Wang, Yi Ren, Zeen Song, Jianqi Zhang, Changwen Zheng, Wenwen Qiang
stat.ML updates on arXiv.org (arxiv.org)
Abstract: Meta-learning enables rapid generalization to new tasks by learning knowledge from various tasks. It is intuitively assumed that, as training progresses, a model will acquire richer knowledge and thus generalize better. However, our experiments reveal an unexpected result: there is negative knowledge transfer between tasks, which harms generalization performance. To explain this phenomenon, we construct Structural Causal Models (SCMs) for causal analysis. Our investigation uncovers the presence of spurious correlations between task-specific causal factors …
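The abstract attributes the negative transfer to spurious correlations between task-specific causal factors. As a rough, self-contained illustration of that general mechanism (a toy sketch under assumed data, not the paper's meta-learning setup or method), the snippet below fits a linear model on a feature that is confounded with the label's true cause during training and decorrelated from it at test time; all variable names and noise scales here are illustrative assumptions.

```python
# Minimal illustrative sketch (an assumption, not the paper's method): a
# feature that is spuriously correlated with the label's true cause during
# training loses that correlation at test time, so a model that leans on it
# generalizes worse. Variable names and noise scales are made up here.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Training data: c is the task's causal factor, x_spur is a confounded
# feature that happens to track c closely in the training tasks.
c_tr = rng.normal(size=n)
x_causal_tr = c_tr + 1.0 * rng.normal(size=n)   # noisy view of the cause
x_spur_tr = c_tr + 0.1 * rng.normal(size=n)     # clean but spurious proxy
y_tr = c_tr + 0.1 * rng.normal(size=n)

# Test data: the spurious correlation is gone.
c_te = rng.normal(size=n)
x_causal_te = c_te + 1.0 * rng.normal(size=n)
x_spur_te = rng.normal(size=n)                  # now unrelated to the cause
y_te = c_te + 0.1 * rng.normal(size=n)

def fit_and_eval(X_tr, X_te):
    """Least-squares fit on training features, mean squared error on test."""
    w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return np.mean((X_te @ w - y_te) ** 2)

# Using both features, least squares puts most of its weight on the spurious
# proxy (it is less noisy in training), which backfires at test time.
mse_both = fit_and_eval(np.column_stack([x_causal_tr, x_spur_tr]),
                        np.column_stack([x_causal_te, x_spur_te]))
# Using only the causal feature is noisier in training but transfers better.
mse_causal = fit_and_eval(x_causal_tr[:, None], x_causal_te[:, None])

print(f"test MSE with spurious feature: {mse_both:.2f}")      # roughly 2
print(f"test MSE with causal feature only: {mse_causal:.2f}")  # roughly 0.5
```

In this toy setup the model that exploits the spuriously correlated feature looks better on training data but roughly quadruples the test error once the correlation breaks, which is the flavor of negative transfer the abstract describes between tasks.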