Web: http://arxiv.org/abs/2201.05149

Jan. 14, 2022, 2:10 a.m. | Hamed Hassani, Adel Javanmard

cs.LG updates on arXiv.org arxiv.org

Successful deep learning models often involve training neural network
architectures that contain more parameters than the number of training samples.
Such overparametrized models have been extensively studied in recent years, and
the virtues of overparametrization have been established both from the
statistical perspective, via the double-descent phenomenon, and from the
computational perspective, via the structural properties of the optimization
landscape.
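
As a rough illustration of the double-descent phenomenon mentioned above, the following sketch (not taken from the paper; the random-ReLU-feature model, dimensions, and noise level are illustrative assumptions) fits minimum-norm least squares on random features and tracks test error as the number of features p crosses the sample size n, where the error typically spikes near p ≈ n and falls again in the overparametrized regime:

# Minimal double-descent sketch: min-norm least squares on random ReLU features.
import numpy as np

rng = np.random.default_rng(0)
n, d, n_test, noise = 100, 20, 2000, 0.5

# Ground-truth linear signal in the raw d-dimensional inputs.
beta = rng.normal(size=d)
X_tr, X_te = rng.normal(size=(n, d)), rng.normal(size=(n_test, d))
y_tr = X_tr @ beta + noise * rng.normal(size=n)
y_te = X_te @ beta + noise * rng.normal(size=n_test)

for p in [10, 50, 90, 100, 110, 200, 500, 2000]:
    W = rng.normal(size=(d, p)) / np.sqrt(d)       # random feature weights
    F_tr, F_te = np.maximum(X_tr @ W, 0), np.maximum(X_te @ W, 0)
    theta = np.linalg.pinv(F_tr) @ y_tr            # least squares (p < n) / min-norm interpolator (p > n)
    print(f"p={p:5d}  test MSE={np.mean((F_te @ theta - y_te) ** 2):.3f}")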


Despite the remarkable success of deep learning architectures in the
overparametrized regime, it is also well known that these models are highly
vulnerable to small adversarial perturbations in their inputs. Even …
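
To make "small adversarial perturbations" concrete, here is a minimal sketch, separate from the paper's own analysis: a one-step, FGSM-style L-infinity perturbation of the input to a logistic-regression classifier (the weights, epsilon, and dimension below are illustrative assumptions, not values from the paper):

# A small per-coordinate change in the input can sharply shift the model's prediction.
import numpy as np

rng = np.random.default_rng(1)
d, eps = 50, 0.1

w, b = rng.normal(size=d), 0.0          # weights of a (hypothetical) trained linear classifier
x = rng.normal(size=d)                  # a clean input
y = 1.0                                 # its true label in {0, 1}

def prob(v):
    return 1.0 / (1.0 + np.exp(-(w @ v + b)))    # P(label = 1 | input v)

# Gradient of the logistic loss w.r.t. the input, then an L_inf step of size eps.
grad_x = (prob(x) - y) * w
x_adv = x + eps * np.sign(grad_x)

print(f"clean     P(y=1|x)  = {prob(x):.3f}")
print(f"perturbed P(y=1|x') = {prob(x_adv):.3f}  (||x' - x||_inf = {eps})")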
