Feb. 13, 2024, 5:44 a.m. | Anastasia Koloskova, Nikita Doikov, Sebastian U. Stich, Martin Jaggi

cs.LG updates on arXiv.org

In machine learning and neural network optimization, algorithms such as incremental gradient and shuffle SGD are popular because they minimize the number of cache misses and show good practical convergence behavior. However, their theoretical optimization properties, especially for non-convex smooth functions, remain incompletely explored.
This paper examines the convergence properties of SGD algorithms with arbitrary data orderings within a broad framework for non-convex smooth functions. Our findings show enhanced convergence guarantees for incremental gradient and single-shuffle SGD. Particularly if …
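As a rough illustration (not code from the paper), the sketch below contrasts the two data orderings discussed in the abstract: incremental gradient, which sweeps the samples in a fixed order every epoch, and single-shuffle SGD, which draws one random permutation once and reuses it in every epoch. The function `permutation_sgd`, its parameters, and the toy quadratic objective are hypothetical placeholders, not part of the paper.

```python
import numpy as np

def permutation_sgd(grad_fn, x0, data, lr=0.1, epochs=10, ordering="incremental", seed=0):
    """Epoch-based SGD where the per-epoch data ordering is fixed up front.

    ordering="incremental":     process samples in their given order every epoch.
    ordering="single_shuffle":  draw one random permutation and reuse it every epoch.
    (grad_fn, data, and the hyperparameters are illustrative placeholders.)
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    perm = np.arange(n) if ordering == "incremental" else rng.permutation(n)
    x = np.array(x0, dtype=float)
    for _ in range(epochs):
        for i in perm:                      # same ordering reused in every epoch
            x -= lr * grad_fn(x, data[i])   # one step per component function
    return x

# Toy usage: minimize the average of (x - a_i)^2 over a small dataset.
data = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda x, a: 2.0 * (x - a)
print(permutation_sgd(grad, x0=0.0, data=data, lr=0.05, epochs=50, ordering="single_shuffle"))
```

Random-reshuffling SGD, by contrast, would draw a fresh permutation at the start of each epoch; the only change in the sketch would be moving the permutation draw inside the epoch loop.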

