Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions. (arXiv:2209.15055v2 [stat.ML] UPDATED)
Oct. 6, 2022, 1:13 a.m. | Arthur Jacot
stat.ML updates on arXiv.org arxiv.org
We show that the representation cost of fully connected neural networks with homogeneous nonlinearities (which describes the implicit bias in function space of networks trained with $L_2$-regularization or with losses such as the cross-entropy) converges, as the depth of the network goes to infinity, to a notion of rank over nonlinear functions. We then inquire under which conditions the global minima of the loss recover the `true' rank of the data: we show that for too large depths the …
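The quantity the abstract refers to can be sketched as follows (a hedged reconstruction of the standard setup; the exact normalization, domain restriction, and parameterization are as in the paper and not reproduced here):

```latex
% Hedged sketch: representation cost of a function f realizable by a
% depth-L fully connected network f_\theta with homogeneous nonlinearity,
% measured by the minimal squared parameter norm needed to represent f:
R_L(f) \;=\; \min_{\theta \,:\, f_\theta = f} \;\|\theta\|_2^2 .
% The abstract's claim is that a depth-normalized limit of this cost,
\lim_{L \to \infty} \frac{R_L(f)}{L},
% exists and defines a rank-like quantity for nonlinear functions f.
```

This is the usual sense in which minimal-norm interpolation with $L_2$-regularization induces an implicit bias in function space: the penalty on parameters translates into the cost $R_L$ on the represented function.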