Neural reproducing kernel Banach spaces and representer theorems for deep networks
March 14, 2024, 4:43 a.m. | Francesca Bartolucci, Ernesto De Vito, Lorenzo Rosasco, Stefano Vigogna
cs.LG updates on arXiv.org arxiv.org
Abstract: Studying the function spaces defined by neural networks helps to understand the corresponding learning models and their inductive bias. While in some limits neural networks correspond to function spaces that are reproducing kernel Hilbert spaces, these regimes do not capture the properties of the networks used in practice. In contrast, in this paper we show that deep neural networks define suitable reproducing kernel Banach spaces.
These spaces are equipped with norms that enforce a form …
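For context (this is the standard textbook definition, not the paper's specific construction, which is truncated above): a reproducing kernel Banach space is a Banach space of functions on which every point evaluation is a bounded linear functional. A minimal sketch:

```latex
% A Banach space \mathcal{B} of functions f : X \to \mathbb{R} is a
% reproducing kernel Banach space (RKBS) if point evaluations are bounded:
\forall x \in X \;\; \exists C_x > 0 \;\; \text{such that} \quad
  |f(x)| \le C_x \, \|f\|_{\mathcal{B}} \quad \forall f \in \mathcal{B}.
% When \mathcal{B} is a Hilbert space, the Riesz representation theorem
% gives f(x) = \langle f, K_x \rangle_{\mathcal{B}}, recovering the
% classical reproducing kernel Hilbert space (RKHS) setting.
```

Dropping the Hilbert (inner-product) structure is what lets the norm encode inductive biases, such as sparsity, that RKHS norms cannot, which is why the Banach setting is the natural one for the deep networks discussed in the abstract.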