Jan. 3, 2022, 2:10 a.m. | Jinghui Chen, Yuan Cao, Quanquan Gu

cs.LG updates on arXiv.org

"Benign overfitting", where classifiers memorize noisy training data yet
still achieve a good generalization performance, has drawn great attention in
the machine learning community. To explain this surprising phenomenon, a series
of works have provided theoretical justification in over-parameterized linear
regression, classification, and kernel methods. However, it is not clear if
benign overfitting still occurs in the presence of adversarial examples, i.e.,
examples with tiny and intentional perturbations to fool the classifiers. In
this paper, we show that benign overfitting …
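Below is a minimal sketch (not code from the paper) of the two ingredients the abstract combines: benign overfitting, where an over-parameterized linear model interpolates noisy labels yet still generalizes, and adversarial examples, tiny perturbations crafted to flip the prediction. The data model, sizes, noise rate, and the perturbation radius `eps` are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, flip_rate = 50, 2000, 0.1

# One strong signal feature plus many weak "junk" features, a spiked-covariance
# style setup often used to illustrate benign overfitting (assumed here).
signal = rng.standard_normal((n, 1))
X = np.hstack([signal, 0.1 * rng.standard_normal((n, d - 1))])

y = np.sign(signal[:, 0])
flips = rng.random(n) < flip_rate      # label noise that the model will memorize
y[flips] *= -1

# Minimum-norm interpolator (ridgeless least squares): w_hat = X^T (X X^T)^{-1} y
w_hat = X.T @ np.linalg.solve(X @ X.T, y)
print("train error :", np.mean(np.sign(X @ w_hat) != y))      # 0.0 -- fits the noisy labels

# Clean test error stays small despite interpolating noisy training labels.
m = 2000
signal_te = rng.standard_normal((m, 1))
X_te = np.hstack([signal_te, 0.1 * rng.standard_normal((m, d - 1))])
y_te = np.sign(signal_te[:, 0])
print("test error  :", np.mean(np.sign(X_te @ w_hat) != y_te))

# For a linear classifier, the worst-case l_infinity perturbation of radius eps
# is -eps * y * sign(w_hat); even a tiny eps can erase the margin.
eps = 0.01
X_adv = X_te - eps * y_te[:, None] * np.sign(w_hat)[None, :]
print("robust error:", np.mean(np.sign(X_adv @ w_hat) != y_te))
```

In this toy setup the interpolator spreads large weights over the many weak features, so its clean test error is small while a tiny l-infinity perturbation degrades it sharply, which is exactly the tension between benign overfitting and adversarial robustness that the paper studies.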

arxiv classification overfitting
