Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty. (arXiv:2209.09658v2 [cs.LG] UPDATED)
Nov. 23, 2022, 2:13 a.m. | Thomas George, Guillaume Lajoie, Aristide Baratin
stat.ML updates on arXiv.org
Among attempts at giving a theoretical account of the success of deep neural
networks, a recent line of work has identified a so-called lazy training regime
in which the network can be well approximated by its linearization around
initialization. Here we investigate the comparative effect of the lazy (linear)
and feature learning (non-linear) regimes on subgroups of examples based on
their difficulty. Specifically, we show that easier examples are given more
weight in feature learning mode, resulting in faster training …
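The lazy regime mentioned in the abstract corresponds to a first-order Taylor expansion of the network around its initial parameters: f_lin(θ, x) = f(θ0, x) + ∇θ f(θ0, x) · (θ − θ0). As a rough illustration only, not code from the paper, here is a minimal JAX sketch of such a linearized model; the `init_params`, `mlp`, and `linearize` helpers are hypothetical names introduced for this example:

```python
import jax
import jax.numpy as jnp

# Hypothetical tiny MLP used only to illustrate linearization.
def init_params(key, sizes=(2, 32, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return (x @ W + b).squeeze(-1)

# Lazy (linearized) model: first-order Taylor expansion of f around
# the initialization params0:
#   f_lin(params, x) = f(params0, x) + J_f(params0, x) . (params - params0)
def linearize(f, params0):
    def f_lin(params, x):
        delta = jax.tree_util.tree_map(lambda p, p0: p - p0, params, params0)
        y0, jvp_out = jax.jvp(lambda p: f(p, x), (params0,), (delta,))
        return y0 + jvp_out
    return f_lin

key = jax.random.PRNGKey(0)
params0 = init_params(key)
f_lin = linearize(mlp, params0)

x = jax.random.normal(key, (4, 2))
# At initialization the linearized and full models agree exactly.
print(jnp.allclose(mlp(params0, x), f_lin(params0, x)))
```

Training `f_lin` keeps the model in the lazy (linear) regime by construction, while training `mlp` directly allows its features to move away from initialization, which is the non-linear, feature learning regime the paper contrasts across easy and hard examples.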