Demystifying Lazy Training of Neural Networks from a Macroscopic Viewpoint
April 9, 2024, 4:41 a.m. | Yuqing Li, Tao Luo, Qixuan Zhou
cs.LG updates on arXiv.org
Abstract: In this paper, we advance the understanding of neural network training dynamics by examining the intricate interplay of factors introduced by the weight parameters at initialization. Motivated by the foundational work of Luo et al. (J. Mach. Learn. Res., Vol. 22, Iss. 1, No. 71, pp. 3327-3373), we explore the gradient descent dynamics of neural networks through the lens of macroscopic limits, analyzing the dynamics as the width $m$ tends to infinity. …
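For readers unfamiliar with the term, "lazy training" refers to a regime in which a strongly scaled network barely moves from its initialization during gradient descent. Below is a minimal sketch in a standard two-layer notation; the scaling factor $\alpha$, the $1/m$ normalization, and all symbols are common conventions from the lazy-training literature, not this paper's own definitions:

\[
f_{\alpha}(x;\theta) \;=\; \frac{\alpha}{m} \sum_{k=1}^{m} a_k \,\sigma(w_k^{\top} x),
\qquad \theta = (a_1, w_1, \dots, a_m, w_m).
\]

In the lazy regime, the gradient descent trajectory stays close to the model's linearization at initialization,

\[
f_{\alpha}(x;\theta_t) \;\approx\; f_{\alpha}(x;\theta_0)
\;+\; \nabla_{\theta} f_{\alpha}(x;\theta_0)^{\top} (\theta_t - \theta_0),
\]

so training effectively reduces to kernel regression with the tangent kernel frozen at initialization. Which joint scalings of $\alpha$ and $m$ produce this behavior is exactly the kind of question the macroscopic limit $m \to \infty$ is used to answer.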