March 18, 2024, 4:42 a.m. | Markus Gross, Arne P. Raulf, Christoph Räth

cs.LG updates on arXiv.org arxiv.org

arXiv:2311.14120v2 Announce Type: replace
Abstract: We investigate the stationary (late-time) training regime of underparameterized single- and two-layer linear neural networks within the continuum limit of stochastic gradient descent (SGD) for synthetic Gaussian data. For a single-layer network in the weakly underparameterized regime, the spectrum of the noise covariance matrix deviates notably from that of the Hessian, which can be attributed to the broken detailed balance of the SGD dynamics. The weight fluctuations in this case are generally anisotropic, but are …

Subjects: cs.LG; cond-mat.dis-nn; cond-mat.stat-mech
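As an illustrative aside (not the authors' code): the setting described in the abstract can be sketched in a few lines of NumPy. The snippet below trains an underparameterized single-layer linear network on synthetic Gaussian data with minibatch SGD, then compares the eigenvalues of the empirical gradient-noise covariance collected in the late-time regime with those of the loss Hessian. All constants (dimension d, learning rate eta, batch size B, noise scale) are assumptions chosen for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic Gaussian data: anisotropic inputs, linear teacher, label noise.
    # Label noise makes the linear model effectively underparameterized,
    # so the gradient noise does not vanish at the minimum.
    n, d = 2000, 5                       # samples, input dimension (assumed)
    scales = np.linspace(1.0, 3.0, d)    # anisotropic input covariance (assumed)
    X = rng.normal(size=(n, d)) * scales
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.5 * rng.normal(size=n)

    def grad(w, idx):
        """Minibatch gradient of 0.5 * mean squared error."""
        Xb, yb = X[idx], y[idx]
        return Xb.T @ (Xb @ w - yb) / len(idx)

    # Minibatch SGD; eta and B are illustrative choices.
    eta, B, steps = 0.01, 16, 50_000
    w = np.zeros(d)
    grads = []
    for t in range(steps):
        idx = rng.choice(n, size=B, replace=False)
        g = grad(w, idx)
        if t > steps // 2:               # sample only the late-time regime
            grads.append(g)
        w -= eta * g

    G = np.asarray(grads)
    noise_cov = np.cov(G.T)              # empirical SGD noise covariance
    hessian = X.T @ X / n                # Hessian of the full-batch loss

    print("noise covariance eigenvalues:", np.sort(np.linalg.eigvalsh(noise_cov))[::-1])
    print("Hessian eigenvalues:        ", np.sort(np.linalg.eigvalsh(hessian))[::-1])

With anisotropic inputs, the two spectra need not coincide; this is the kind of deviation between noise covariance and Hessian the abstract refers to. Note the iterate-averaged covariance used here is only a crude empirical proxy for the stationary noise covariance of the continuum-limit SGD dynamics.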
