Identifying Spurious Biases Early in Training through the Lens of Simplicity Bias
March 8, 2024, 5:42 a.m. | Yu Yang, Eric Gan, Gintare Karolina Dziugaite, Baharan Mirzasoleiman
Source: cs.LG updates on arXiv.org (arxiv.org)
Abstract: Neural networks trained with (stochastic) gradient descent have an inductive bias towards learning simpler solutions. This makes them highly prone to learning spurious correlations in the training data that may not hold at test time. In this work, we provide the first theoretical analysis of the effect of simplicity bias on learning spurious correlations. Notably, we show that examples with spurious features are provably separable based on the model's output early in training. We further …
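The key claim, that examples whose spurious feature conflicts with their label become separable from the model's output early in training, can be illustrated with a small synthetic sketch. This is not the paper's method, just a hedged toy example: a "simple" high-magnitude spurious feature that agrees with the label 95% of the time, a weak noisy core feature, and a few steps of gradient descent on logistic loss. Early in training the model leans on the spurious feature, so the conflicting minority ends up with clearly lower output margins.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.choice([-1.0, 1.0], size=n)

# Spurious feature: agrees with the label 95% of the time,
# large magnitude, so it is the "simple" signal learned first.
agree = rng.random(n) < 0.95
spurious = np.where(agree, y, -y) * 3.0

# Core feature: always label-aligned but weak and noisy (harder to learn).
core = y * 0.5 + rng.normal(0.0, 1.0, n)
X = np.stack([core, spurious], axis=1)

# A few full-batch gradient steps on logistic loss = "early in training".
w = np.zeros(2)
lr = 0.1
for _ in range(5):
    p = 1.0 / (1.0 + np.exp(-(X @ w) * y))  # P(model is correct) per example
    grad = -((1.0 - p) * y) @ X / n         # gradient of mean logistic loss
    w -= lr * grad

# Signed output margin: positive = model agrees with the label.
margin = (X @ w) * y

# Examples whose spurious feature conflicts with the label (~5%) already
# sit on the other side of the margin after only a few steps.
print("aligned mean margin:   ", margin[agree].mean())
print("conflicting mean margin:", margin[~agree].mean())
```

In this toy setup the two groups are linearly separable by a simple threshold on the early-training margin, which is the intuition behind using early model outputs to flag examples carrying spurious features.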