March 8, 2024, 5:42 a.m. | Yu Yang, Eric Gan, Gintare Karolina Dziugaite, Baharan Mirzasoleiman

cs.LG updates on arXiv.org arxiv.org

arXiv:2305.18761v2 Announce Type: replace
Abstract: Neural networks trained with (stochastic) gradient descent have an inductive bias towards learning simpler solutions. This makes them highly prone to learning spurious correlations in the training data, that may not hold at test time. In this work, we provide the first theoretical analysis of the effect of simplicity bias on learning spurious correlations. Notably, we show that examples with spurious features are provably separable based on the model's output early in training. We further …
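The claim that spuriously correlated examples become separable from the model's output early in training can be illustrated with a small, self-contained sketch. This is not the paper's construction; the dataset, feature names, and parameters below are all illustrative assumptions: a simple large-margin "spurious" feature that agrees with the label on most examples, a noisier "core" feature, and a few gradient steps of logistic regression.

```python
import numpy as np

# Hypothetical toy data (illustrative, not from the paper).
rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n) * 2 - 1            # labels in {-1, +1}
aligned = rng.random(n) < 0.95               # spurious feature agrees with y here
x_spu = np.where(aligned, y, -y) * 2.0       # simple, large-margin spurious feature
x_core = y * 0.5 + rng.normal(0.0, 1.0, n)   # harder, noisy core feature
X = np.stack([x_spu, x_core], axis=1)

# A few early steps of full-batch gradient descent on the logistic loss.
w = np.zeros(2)
lr = 0.1
for _ in range(5):
    margins = y * (X @ w)
    # grad of mean log(1 + exp(-y w.x)) w.r.t. w
    grad = -(y[:, None] * X / (1 + np.exp(margins))[:, None]).mean(axis=0)
    w -= lr * grad

# Early in training the simple spurious feature dominates, so examples whose
# spurious feature agrees with the label already have visibly larger margins.
margins = y * (X @ w)
print(margins[aligned].mean() > margins[~aligned].mean())
```

Under these assumptions, thresholding the early-training margin already separates the spurious-aligned group from the minority group whose spurious feature conflicts with the label, which is the intuition behind the separability result described in the abstract.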
