Bolstering Stochastic Gradient Descent with Model Building
March 14, 2024, 4:43 a.m. | S. Ilker Birbil, Ozgur Martin, Gonenc Onay, Figen Oztoprak
cs.LG updates on arXiv.org
Abstract: The stochastic gradient descent method and its variants constitute the core optimization algorithms that achieve good convergence rates for solving machine learning problems. These rates are obtained especially when the algorithms are fine-tuned for the application at hand. Although this tuning process can incur large computational costs, recent work has shown that these costs can be reduced by line search methods that iteratively adjust the step length. We propose an alternative approach to stochastic line search …
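
The abstract refers to line search methods that iteratively adjust the step length; a common instance is a stochastic backtracking (Armijo) search layered on top of mini-batch SGD. The sketch below illustrates that baseline idea only, not the model-building method the authors propose, and every function name, constant, and the demo data are illustrative assumptions.

# Minimal sketch: mini-batch SGD with a stochastic backtracking
# (Armijo) line search. All names and constants are illustrative
# assumptions, not taken from the paper.
import numpy as np

def sgd_armijo(grad_fn, loss_fn, w0, batches, n_epochs=10,
               eta0=1.0, beta=0.5, c=1e-4, eta_min=1e-10):
    """SGD where each step length is chosen by backtracking on the batch."""
    w = w0.copy()
    for _ in range(n_epochs):
        for batch in batches:
            g = grad_fn(w, batch)
            f = loss_fn(w, batch)
            eta = eta0
            # Shrink eta until the Armijo sufficient-decrease condition
            # holds on this mini-batch:
            #   loss(w - eta * g) <= f - c * eta * ||g||^2
            while loss_fn(w - eta * g, batch) > f - c * eta * g.dot(g):
                eta *= beta
                if eta < eta_min:  # safeguard: accept a tiny step
                    break
            w -= eta * g
    return w

# Usage on a synthetic least-squares problem (hypothetical demo data).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=256)
batches = [(X[i:i + 32], y[i:i + 32]) for i in range(0, 256, 32)]
loss = lambda w, b: 0.5 * np.mean((b[0] @ w - b[1]) ** 2)
grad = lambda w, b: b[0].T @ (b[0] @ w - b[1]) / len(b[1])
w_hat = sgd_armijo(grad, loss, np.zeros(5), batches)

Because the sufficient-decrease test is evaluated on the same mini-batch as the gradient, the search adapts the step length without a full pass over the data, which is the cost saving the abstract alludes to.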