Oct. 6, 2022, 1:13 a.m. | Arthur Jacot

stat.ML updates on arXiv.org

We show that the representation cost of fully connected neural networks with
homogeneous nonlinearities (which describes the implicit bias in function
space of networks with $L_2$-regularization or with losses such as the
cross-entropy) converges, as the depth of the network goes to infinity, to a
notion of rank over nonlinear functions. We then ask under which conditions
the global minima of the loss recover the 'true' rank of the data: we show that
for too large depths the …
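
For context, the representation cost is usually defined as the smallest squared $L_2$ parameter norm needed to realize a given function with a depth-$L$ network. The sketch below is an illustrative formulation under that standard definition; the normalization by depth and the symbol $\operatorname{Rank}(f)$ are assumptions made here for concreteness, not the paper's exact notation:

$$R_L(f) \;=\; \min_{\theta \,:\, f_\theta = f} \|\theta\|_2^2, \qquad \frac{R_L(f)}{L} \;\xrightarrow[L \to \infty]{}\; \operatorname{Rank}(f),$$

where $f_\theta$ denotes the depth-$L$ network with parameters $\theta$ and $\operatorname{Rank}(f)$ stands for the abstract's notion of rank over nonlinear functions.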

Tags: arxiv, bias, networks
