Oct. 11, 2022, 1:15 a.m. | Daniel Kunin, Atsushi Yamamura, Chao Ma, Surya Ganguli

stat.ML updates on arXiv.org

In this work, we explore the maximum-margin bias of quasi-homogeneous neural
networks trained with gradient flow on an exponential loss and past a point of
separability. We introduce the class of quasi-homogeneous models, which is
expressive enough to describe nearly all neural networks with homogeneous
activations, even those with biases, residual connections, and normalization
layers, while structured enough to enable geometric analysis of its gradient
dynamics. Using this analysis, we generalize existing results on the
maximum-margin bias of homogeneous networks …
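
For context on the setup (a sketch, not quoted from the abstract): gradient flow on an exponential loss over a dataset \{(x_i, y_i)\} with labels y_i \in \{\pm 1\} takes the standard form

    \dot{\theta} = -\nabla_\theta \mathcal{L}(\theta),
    \qquad
    \mathcal{L}(\theta) = \sum_{i=1}^{m} \exp\bigl(-y_i f(\theta; x_i)\bigr),

and "past a point of separability" means the trajectory reaches a time at which every margin y_i f(\theta; x_i) is positive; from then on the loss can only be driven toward zero by growing the parameters, so the limiting direction \theta / \lVert \theta \rVert is what carries the implicit bias.

As for the model class (again a sketch; the exponents q_i and degree d below are illustrative placeholders, and the paper's exact normalization may differ): a homogeneous network satisfies f(\alpha \theta; x) = \alpha^L f(\theta; x) for all \alpha > 0, which breaks once biases appear at different depths, whereas a quasi-homogeneous model assigns each parameter group its own scaling exponent,

    f(\alpha^{q_1} \theta_1, \dots, \alpha^{q_n} \theta_n; x)
    = \alpha^{d} f(\theta_1, \dots, \theta_n; x)
    \quad \text{for all } \alpha > 0.

For instance, in a three-layer ReLU network f(x) = u \cdot \max(v \cdot \max(w x + b_1, 0) + b_2, 0), scaling u, v, w, and b_1 by \alpha forces the second bias b_2 to scale as \alpha^2 for the output to scale as \alpha^3, so biases at different depths genuinely require unequal exponents.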

