Variational Stochastic Gradient Descent for Deep Neural Networks
April 11, 2024, 4:41 a.m. | Haotian Chen, Anna Kuzina, Babak Esmaeili, Jakub M. Tomczak
cs.LG updates on arXiv.org
Abstract: Optimizing deep neural networks is one of the central tasks in successful deep learning. Current state-of-the-art optimizers are adaptive gradient-based methods such as Adam. Recently, there has been increasing interest in formulating gradient-based optimizers in a probabilistic framework, to better estimate gradients and to model their uncertainty. Here, we propose to combine both approaches, resulting in the Variational Stochastic Gradient Descent (VSGD) optimizer. We model gradient updates as a probabilistic model and utilize stochastic …
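The abstract is cut off in the feed, but its core idea, inferring the true gradient probabilistically from noisy minibatch observations, lends itself to a small illustration. The sketch below is a minimal, hypothetical Kalman-filter-style variant, not the authors' VSGD algorithm: it keeps a Gaussian posterior over a latent gradient per parameter, updates it with each noisy gradient via a conjugate Gaussian update, and descends along the posterior mean. The quadratic objective, the observation-noise variance `obs_var`, and the variance-inflation constant are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact algorithm): treat the true
# gradient as a latent Gaussian variable and each minibatch gradient as a
# noisy observation of it; descend along the posterior mean.

rng = np.random.default_rng(0)

def noisy_grad(w, noise_std=0.5):
    # Gradient of f(w) = 0.5 * ||w||^2, plus Gaussian minibatch noise.
    return w + noise_std * rng.normal(size=w.shape)

w = rng.normal(size=5)    # parameters
mu = np.zeros_like(w)     # posterior mean of the latent gradient
var = np.ones_like(w)     # posterior variance of the latent gradient
obs_var = 0.25            # assumed observation-noise variance
lr = 0.1

for step in range(200):
    g = noisy_grad(w)
    # Conjugate Gaussian update: precision-weighted average of the prior
    # mean and the newly observed noisy gradient.
    prec = 1.0 / var + 1.0 / obs_var
    mu = (mu / var + g / obs_var) / prec
    # Inflate the variance slightly so the filter keeps adapting as the
    # true gradient drifts during optimization.
    var = 1.0 / prec + 1e-2
    w -= lr * mu          # step along the posterior-mean gradient

print("final ||w|| =", np.linalg.norm(w))  # should be close to 0
```

Relative to raw SGD, the posterior mean averages over recent noisy observations, which damps gradient noise; that denoising-via-inference is the spirit of the probabilistic framing the abstract describes.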