March 25, 2024, 4:42 a.m. | Sékou-Oumar Kaba, Siamak Ravanbakhsh

cs.LG updates on arXiv.org

arXiv:2312.09016v2 Announce Type: replace
Abstract: Using symmetry as an inductive bias in deep learning has proven to be a principled approach to sample-efficient model design. However, the relationship between symmetry and the imperative for equivariance in neural networks is not always obvious. Here, we analyze a key limitation of equivariant functions: their inability to break symmetry at the level of individual data samples. In response, we introduce a novel notion of 'relaxed equivariance' that circumvents this limitation. …
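The limitation follows from a one-line argument: if a group element g fixes an input x (that is, g·x = x) and f is equivariant, then f(x) = f(g·x) = g·f(x), so g also fixes the output. An equivariant function can therefore never produce an output less symmetric than its input. Below is a minimal sketch of this effect, assuming a hypothetical toy permutation-equivariant layer in NumPy; it is an illustration of the general phenomenon, not the paper's construction of relaxed equivariance.

import numpy as np

# Toy permutation-equivariant layer (hypothetical): f(x)_i = x_i + mean(x).
# For any permutation matrix P, f(P x) = P f(x), since mean(P x) = mean(x).
def f(x):
    return x + x.mean()

# An input fixed by every permutation of its entries (g x = x for all g).
x = np.array([1.0, 1.0, 1.0])

# Equivariance then forces f(x) = f(g x) = g f(x), so every permutation
# also fixes the output: the symmetry of the sample cannot be broken.
print(f(x))  # [2. 2. 2.] -- the output is exactly as symmetric as the input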

