April 10, 2024, 4:43 a.m. | Philip M. Long, Peter L. Bartlett

cs.LG updates on arXiv.org arxiv.org

arXiv:2309.12488v5 Announce Type: replace
Abstract: Recent experiments have shown that, often, when training a neural network with gradient descent (GD) with a step size $\eta$, the operator norm of the Hessian of the loss grows until it approximately reaches $2/\eta$, after which it fluctuates around this value. The quantity $2/\eta$ has been called the "edge of stability" based on consideration of a local quadratic approximation of the loss. We perform a similar calculation to arrive at an "edge of stability" …
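The value $2/\eta$ can be motivated by the standard one-dimensional quadratic argument; a minimal sketch of that calculation is below (the paper's own variant, referenced in the truncated abstract, is not reproduced here). For a local quadratic approximation $L(x) \approx \tfrac{\lambda}{2} x^2$, where $\lambda$ is the curvature (the top Hessian eigenvalue), the GD update is
$$x_{t+1} = x_t - \eta \lambda x_t = (1 - \eta \lambda)\, x_t,$$
which contracts if and only if $|1 - \eta \lambda| < 1$, i.e. if and only if $\lambda < 2/\eta$. When the sharpness $\lambda$ exceeds $2/\eta$, the iterates of this quadratic model diverge, which is why $2/\eta$ is taken as the "edge of stability."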

Subjects: cs.LG, cs.NE, stat.ML
