Web: http://arxiv.org/abs/2206.08704

June 20, 2022, 1:10 a.m. | Tejaswi Kasarla, Gertjan J. Burghouts, Max van Spengler, Elise van der Pol, Rita Cucchiara, Pascal Mettes

cs.LG updates on arXiv.org

Maximizing the separation between classes constitutes a well-known inductive
bias in machine learning and a pillar of many traditional algorithms. By
default, deep networks are not equipped with this inductive bias and therefore
many alternative solutions have been proposed through differential
optimization. Current approaches tend to optimize classification and separation
jointly: aligning inputs with class vectors and separating class vectors
angularly. This paper proposes a simple alternative: encoding maximum
separation as an inductive bias in the network by adding one …
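The abstract describes adding a single fixed matrix to the network so that class vectors are maximally separated angularly, without optimizing that separation during training. Below is a minimal PyTorch sketch of one way such a fixed maximum-separation matrix could be constructed and used, assuming the class vectors are placed at the vertices of a regular simplex (pairwise cosine -1/(C-1), the largest angular separation achievable for C classes). The helper and module names (`simplex_prototypes`, `MaxSeparationClassifier`) are illustrative, not taken from the paper, and the paper's exact construction may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def simplex_prototypes(num_classes: int) -> torch.Tensor:
    """Closed-form class prototypes with maximal pairwise angular separation.

    Returns a fixed (num_classes - 1, num_classes) matrix whose columns are
    unit vectors forming a regular simplex: every pair of columns has cosine
    similarity -1 / (num_classes - 1).
    """
    C = num_classes
    # Vertices of the standard simplex, centered so they sum to zero (rank C-1).
    vertices = torch.eye(C) - torch.ones(C, C) / C
    # Orthonormal basis of the (C-1)-dimensional subspace the vertices span.
    U, _, _ = torch.linalg.svd(vertices)
    basis = U[:, : C - 1]                         # (C, C-1)
    prototypes = basis.T @ vertices               # (C-1, C) coordinates
    return F.normalize(prototypes, dim=0)         # unit-norm columns


class MaxSeparationClassifier(nn.Module):
    """Backbone features are projected to C-1 dimensions and multiplied by a
    fixed, non-trainable prototype matrix to produce class logits."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.project = nn.Linear(feat_dim, num_classes - 1)
        # Buffer, not a Parameter: the separation is fixed, never optimized.
        self.register_buffer("prototypes", simplex_prototypes(num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.project(self.backbone(x))        # (B, C-1) embedding
        logits = z @ self.prototypes              # (B, C) via the fixed matrix
        return logits                             # feed to cross-entropy as usual
```

Because the prototypes are registered as a buffer, they receive no gradients; only the backbone and the projection layer are trained, which mirrors the abstract's point that separation is supplied in closed form as an inductive bias rather than optimized jointly with classification.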

