March 21, 2024, 4:42 a.m. | Abhinab Bhattacharjee, Andrey A. Popov, Arash Sarshar, Adrian Sandu

cs.LG updates on arXiv.org

arXiv:2403.13704v1 Announce Type: cross
Abstract: The Adam optimizer, often used in Machine Learning for neural network training, corresponds to an underlying ordinary differential equation (ODE) in the limit of very small learning rates. This work shows that the classical Adam algorithm is a first order implicit-explicit (IMEX) Euler discretization of the underlying ODE. Employing the time discretization point of view, we propose new extensions of the Adam scheme obtained by using higher order IMEX methods to solve the ODE. Based …
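For reference, below is a minimal NumPy sketch of the classical Adam update that the abstract refers to, i.e. the first-order scheme the paper interprets as an IMEX Euler step of the underlying ODE. The variable names and default hyperparameters are the standard ones from the original Adam algorithm, not values taken from this paper, and the exact implicit/explicit splitting used by the authors is not reproduced here.

```python
import numpy as np

def adam_step(theta, m, v, grad, t,
              lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One step of the classical Adam update (standard formulation).

    In the small-learning-rate limit, the paper views this iteration as a
    first-order IMEX Euler discretization of an underlying ODE.
    """
    m = beta1 * m + (1.0 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1.0 - beta2) * grad**2     # second-moment (uncentered variance) estimate
    m_hat = m / (1.0 - beta1**t)                # bias-corrected first moment
    v_hat = v / (1.0 - beta2**t)                # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Usage sketch: minimize f(theta) = ||theta||^2 with gradient 2*theta.
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 1001):
    grad = 2.0 * theta
    theta, m, v = adam_step(theta, m, v, grad, t)
```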

