March 20, 2024, 4:42 a.m. | Raphaël Barboni (ENS-PSL), Gabriel Peyré (CNRS, ENS-PSL), François-Xavier Vialard (LIGM)

cs.LG updates on arXiv.org

arXiv:2403.12887v1 Announce Type: new
Abstract: We study the convergence of gradient flow for the training of deep neural networks. If Residual Neural Networks are a popular example of very deep architectures, their training constitutes a challenging optimization problem due notably to the non-convexity and the non-coercivity of the objective. Yet, in applications, those tasks are successfully solved by simple optimization algorithms such as gradient descent. To better understand this phenomenon, we focus here on a ``mean-field'' model of infinitely deep …
