Identifying Spurious Biases Early in Training through the Lens of Simplicity Bias
March 8, 2024, 5:42 a.m. | Yu Yang, Eric Gan, Gintare Karolina Dziugaite, Baharan Mirzasoleiman
cs.LG updates on arXiv.org
Abstract: Neural networks trained with (stochastic) gradient descent have an inductive bias towards learning simpler solutions. This makes them highly prone to learning spurious correlations in the training data that may not hold at test time. In this work, we provide the first theoretical analysis of the effect of simplicity bias on learning spurious correlations. Notably, we show that examples with spurious features are provably separable based on the model's output early in training. We further …
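The separability claim can be illustrated with a minimal synthetic sketch (this is an illustration of the phenomenon, not the authors' method or proof setup): a "spurious" feature that is strongly aligned with the label on most examples dominates the early gradient-descent updates, so examples carrying the spurious shortcut attain visibly larger output margins after only a few steps.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = rng.choice([-1.0, 1.0], size=n)

# Core feature: weakly predictive, present in all examples.
core = 0.5 * y + rng.normal(0.0, 1.0, n)
# Spurious feature: large-magnitude and aligned with the label on 90% of examples.
has_spurious = rng.random(n) < 0.9
spur = np.where(has_spurious, y, -y) * 2.0 + rng.normal(0.0, 0.1, n)
X = np.column_stack([core, spur])

# A few gradient-descent steps on the logistic loss ("early in training").
w = np.zeros(2)
for _ in range(20):
    margins = y * (X @ w)
    # d/dw of mean(log(1 + exp(-y * x.w))) = -mean(y * x * sigmoid(-margin))
    grad = -(y[:, None] * X * (1.0 / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= 0.1 * grad

# Examples with the spurious shortcut are separable by their (signed) output margin.
margin = y * (X @ w)
print(margin[has_spurious].mean() > margin[~has_spurious].mean())  # True: shortcut examples score higher
```

Because the spurious feature's correlation with the label outweighs the core feature's early on, the first updates mostly grow the spurious weight, pushing the two groups of examples apart in output space well before the core feature is learned.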