March 7, 2024, 5:41 a.m. | GuanWen Qiu, Da Kuang, Surbhi Goel

cs.LG updates on arXiv.org

arXiv:2403.03375v1 Announce Type: new
Abstract: Existing research often posits spurious features as "easier" to learn than core features in neural network optimization, but the impact of their relative simplicity remains under-explored. Moreover, prior work mainly focuses on end performance instead of the dynamics of feature learning. In this paper, we propose a theoretical framework and an associated synthetic dataset, grounded in boolean function analysis, which allows for fine-grained control over the relative complexity (compared to core features) and correlation strength …
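As a rough illustration of the kind of setup the abstract describes, here is a minimal sketch of a boolean synthetic dataset in which the core feature is a parity of `k_core` bits (its degree controlling relative complexity) and a single bit serves as a lower-complexity spurious feature correlated with the label at strength `rho`. The construction, function name, and parameters are our assumptions for illustration, not the authors' exact dataset.

```python
import numpy as np

def make_boolean_dataset(n=10000, d=20, k_core=3, rho=0.9, seed=0):
    """Toy boolean dataset with a core and a spurious feature.

    Assumed construction (not necessarily the paper's):
    - Inputs are uniform +/-1 boolean vectors of dimension d.
    - The label is the parity (product) of the first k_core bits,
      so the core feature has boolean degree k_core.
    - The last coordinate is overwritten to be a degree-1 spurious
      feature that agrees with the label with probability rho.
    """
    rng = np.random.default_rng(seed)
    X = rng.choice([-1.0, 1.0], size=(n, d))
    y = np.prod(X[:, :k_core], axis=1)   # core feature: parity of k_core bits
    agree = rng.random(n) < rho          # correlation strength of spurious bit
    X[:, -1] = np.where(agree, y, -y)
    return X, y

X, y = make_boolean_dataset()
# The spurious bit alone predicts y with accuracy ~rho,
# while the core parity predicts it exactly.
print("spurious-feature accuracy:", np.mean(X[:, -1] == y))
```

Varying `k_core` changes the relative complexity of the core feature, and varying `rho` changes the correlation strength, which is the fine-grained control the abstract refers to.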

