March 14, 2022, 1:11 a.m. | El Mehdi Achour (IMT), François Malgouyres (IMT), Sébastien Gerchinovitz (IMT)

cs.LG updates on arXiv.org

We study the optimization landscape of deep linear neural networks with the
square loss. It is known that, under weak assumptions, there are no spurious
local minima and no local maxima. However, the existence and diversity of
non-strict saddle points, which can play a role in first-order algorithms'
dynamics, have only been lightly studied. We go a step further with a full
analysis of the optimization landscape at order 2. We characterize, among all
critical points, which are global minimizers, …
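To see concretely what a non-strict saddle point looks like in this setting, here is a toy sketch (an illustration under simple assumptions, not code from the paper): scalar deep linear networks f(x) = w_L ··· w_1 x with square loss on the single sample (x, y) = (1, 1). At depth 2 the origin is a strict saddle — the Hessian has a negative eigenvalue that first-order methods can exploit — while at depth 3 the origin is a critical point whose Hessian vanishes entirely, i.e. a non-strict saddle where second-order information gives no escape direction.

```python
import numpy as np

def square_loss(w):
    # Loss of a scalar deep linear network w_L * ... * w_1 on (x, y) = (1, 1).
    return (np.prod(w) - 1.0) ** 2

def numerical_hessian(f, x, h=1e-4):
    """Central-difference approximation of the Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.zeros(n), np.zeros(n)
            e_i[i], e_j[j] = h, h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

# Depth 2: the origin is a strict saddle (one negative eigenvalue).
H2 = numerical_hessian(square_loss, np.zeros(2))
print(np.linalg.eigvalsh(H2))  # eigenvalues close to [-2, 2]

# Depth 3: the origin is a critical point with zero Hessian,
# i.e. a non-strict saddle -- no negative curvature at order 2.
H3 = numerical_hessian(square_loss, np.zeros(3))
print(np.linalg.norm(H3))  # close to 0
```

The depth-3 case shows why such points matter for first-order dynamics: gradient and Hessian are both zero at the origin, so nothing at order 2 distinguishes it from a local minimizer, even though it is not one.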

