Web: http://arxiv.org/abs/2206.08918

June 20, 2022, 1:11 a.m. | Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

cs.LG updates on arXiv.org

We study the fundamental problem of learning a single neuron, i.e., a
function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for
monotone activations $\sigma:\mathbb{R}\to\mathbb{R}$, with respect to the
$L_2^2$-loss in the presence of adversarial label noise. Specifically, we are
given labeled examples from a distribution $D$ on $(\mathbf{x},
y)\in\mathbb{R}^d \times \mathbb{R}$ such that there exists
$\mathbf{w}^\ast\in\mathbb{R}^d$ achieving $F(\mathbf{w}^\ast)=\epsilon$, where
$F(\mathbf{w})=\mathbf{E}_{(\mathbf{x},y)\sim D}[(\sigma(\mathbf{w}\cdot
\mathbf{x})-y)^2]$. The goal of the learner is to output a hypothesis vector
$\mathbf{w}$ such that $F(\mathbf{w})=C\, \epsilon$ with high probability,
where $C>1$ …
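
To make the setting concrete, here is a minimal sketch, not the paper's algorithm: plain gradient descent on the empirical squared loss $F(\mathbf{w})=\mathbf{E}[(\sigma(\mathbf{w}\cdot\mathbf{x})-y)^2]$ for a single neuron, on synthetic data where an $\epsilon$-fraction of labels has been corrupted arbitrarily (the adversarial label noise model). The activation, dimensions, step size, and noise construction are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 2000
sigma = np.tanh  # a monotone activation (illustrative choice)

# Ground-truth neuron and clean labels y = sigma(w* . x).
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = sigma(X @ w_star)

# Adversarial label noise: corrupt an eps-fraction of labels arbitrarily.
eps = 0.05
bad = rng.choice(n, size=int(eps * n), replace=False)
y[bad] = rng.uniform(-1.0, 1.0, size=bad.size)

def loss(w):
    """Empirical version of F(w) = E[(sigma(w.x) - y)^2]."""
    return np.mean((sigma(X @ w) - y) ** 2)

# Plain gradient descent on the (non-convex) squared loss.
w = np.zeros(d)
lr = 0.5
for _ in range(200):
    p = sigma(X @ w)
    # d/dw of the empirical loss; tanh'(z) = 1 - tanh(z)^2.
    grad = 2.0 * X.T @ ((p - y) * (1.0 - p ** 2)) / n
    w -= lr * grad

print(f"loss at 0: {loss(np.zeros(d)):.4f}, loss after GD: {loss(w):.4f}")
```

Even this naive approach drives the loss well below its starting value on benign instances; the paper's contribution concerns what approximation ratio $C$ relative to the optimal loss $\epsilon$ is achievable, and at what computational cost, in the worst case.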

