March 14, 2024, 4:43 a.m. | Francesca Bartolucci, Ernesto De Vito, Lorenzo Rosasco, Stefano Vigogna

cs.LG updates on arXiv.org

arXiv:2403.08750v1 Announce Type: cross
Abstract: Studying the function spaces defined by neural networks helps to understand the corresponding learning models and their inductive bias. While in some limits neural networks correspond to function spaces that are reproducing kernel Hilbert spaces, these regimes do not capture the properties of the networks used in practice. In contrast, in this paper we show that deep neural networks define suitable reproducing kernel Banach spaces. These spaces are equipped with norms that enforce a form …
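For context, here is a minimal sketch of the notions the abstract refers to; it is not taken from the paper itself, the definition is the standard one for reproducing kernel Banach spaces, and the shallow-network construction below is only an illustrative example of a sparsity-enforcing norm from the earlier literature the abstract alludes to.

\[
\mathcal{B} \text{ is an RKBS on } X \iff \mathcal{B} \text{ is a Banach space of functions } f:X\to\mathbb{R} \text{ such that, for every } x\in X,\ \exists\, C_x<\infty \text{ with } |f(x)| \le C_x\,\|f\|_{\mathcal{B}}.
\]

For a shallow (one-hidden-layer) network, a typical construction of this kind represents functions as superpositions of neurons,
\[
f(x) = \int_{\Theta} \sigma(w^{\top}x + b)\, d\mu(w,b), \qquad
\|f\|_{\mathcal{B}} = \inf\{\|\mu\|_{\mathrm{TV}} : \mu \text{ represents } f\},
\]
where the infimum of the total-variation norm over representing measures \(\mu\) is the norm that enforces sparsity; the abstract's claim is that an analogous structure can be defined for deep architectures.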

Tags: arXiv, cs.LG, math.FA, stat.ML, neural networks, function spaces, kernel, inductive bias
