Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty. (arXiv:2209.09658v1 [cs.LG])
Sept. 21, 2022, 1:11 a.m. | Thomas George, Guillaume Lajoie, Aristide Baratin
stat.ML updates on arXiv.org arxiv.org
Among attempts at giving a theoretical account of the success of deep neural
networks, a recent line of work has identified a so-called 'lazy' regime in
which the network can be well approximated by its linearization around
initialization. Here we investigate the comparative effect of the lazy (linear)
and feature learning (non-linear) regimes on subgroups of examples based on
their difficulty. Specifically, we show that easier examples are given more
weight in feature learning mode, resulting in faster training compared …
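The 'lazy' regime described above can be illustrated with a minimal numerical sketch: a linearized model is the first-order Taylor expansion of the network output in its parameters around initialization, f_lin(x; p) = f(x; p0) + ⟨∇_p f(x; p0), p − p0⟩. The toy one-hidden-layer network and the parameter displacement below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network: f(x; W, a) = a . tanh(W x)
def forward(params, x):
    W, a = params
    return a @ np.tanh(W @ x)

def grads(params, x):
    # Gradient of the scalar output w.r.t. each parameter array
    W, a = params
    h = np.tanh(W @ x)
    dW = np.outer(a * (1 - h**2), x)  # same shape as W
    da = h                            # same shape as a
    return dW, da

d, m = 3, 50
W0 = rng.normal(size=(m, d)) / np.sqrt(d)
a0 = rng.normal(size=m) / np.sqrt(m)
x = rng.normal(size=d)

# Linearized ("lazy") model around initialization (W0, a0):
# f_lin(x; p) = f(x; p0) + <grad f(x; p0), p - p0>
def forward_lin(params, x):
    dW0, da0 = grads((W0, a0), x)
    W, a = params
    return (forward((W0, a0), x)
            + np.sum(dW0 * (W - W0))
            + da0 @ (a - a0))

# For a small parameter displacement, the linearization tracks
# the full (non-linear) network closely -- the lazy regime.
eps = 1e-3
Wp = W0 + eps * rng.normal(size=W0.shape)
ap = a0 + eps * rng.normal(size=a0.shape)
full = forward((Wp, ap), x)
lazy = forward_lin((Wp, ap), x)
print(abs(full - lazy))  # gap is second order in the displacement
```

In the lazy regime training never moves the parameters far from initialization, so the gap above stays negligible throughout optimization; in the feature-learning regime the displacement grows large and the linear approximation breaks down.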