March 21, 2024, 4:42 a.m. | Abhinab Bhattacharjee, Andrey A. Popov, Arash Sarshar, Adrian Sandu

cs.LG updates on arXiv.org

arXiv:2403.13704v1 Announce Type: cross
Abstract: The Adam optimizer, often used in Machine Learning for neural network training, corresponds to an underlying ordinary differential equation (ODE) in the limit of very small learning rates. This work shows that the classical Adam algorithm is a first-order implicit-explicit (IMEX) Euler discretization of the underlying ODE. Employing this time-discretization point of view, we propose new extensions of the Adam scheme obtained by using higher-order IMEX methods to solve the ODE. Based …
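For readers who want to connect the abstract's claim to the familiar update rule, below is a minimal sketch of the classical Adam step that the paper reinterprets as a first-order IMEX Euler discretization of an underlying ODE. The function name `adam_step`, the toy objective, and the comments about which terms form the implicit (moment relaxation) versus explicit (gradient) parts are illustrative assumptions; the precise splitting is defined in the paper itself.

```python
import numpy as np

def adam_step(theta, m, v, grad, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One classical Adam update (hypothetical helper, not from the paper).

    In the small-learning-rate limit these updates discretize an underlying
    ODE; the abstract's reading is that the scheme is a first-order
    implicit-explicit (IMEX) Euler method, with the moment relaxation terms
    treated as the stiff part and the gradient evaluation as the explicit
    part (assumed splitting for illustration only).
    """
    m = beta1 * m + (1.0 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1.0 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1.0 - beta1 ** t)                # bias correction
    v_hat = v / (1.0 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = ||x||^2.
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 501):
    grad = 2.0 * theta                            # gradient of ||x||^2
    theta, m, v = adam_step(theta, m, v, grad, t, lr=0.05)
print(theta)  # approaches [0, 0]
```

The higher-order IMEX extensions proposed in the paper would replace this single Euler-type step with a higher-order IMEX time integrator applied to the same ODE; no attempt is made to reproduce those schemes here.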
