March 5, 2024, 2:44 p.m. | Teun D. H. van Nuland

cs.LG updates on arXiv.org

arXiv:2308.03812v2 Announce Type: replace
Abstract: The universal approximation theorem is generalised to uniform convergence on the (noncompact) input space $\mathbb{R}^n$. All continuous functions that vanish at infinity can be uniformly approximated by neural networks with one hidden layer, for all activation functions $\varphi$ that are continuous, nonpolynomial, and asymptotically polynomial at $\pm\infty$. When $\varphi$ is moreover bounded, we exactly determine which functions can be uniformly approximated by neural networks, with the following unexpected results. Let $\overline{\mathcal{N}_\varphi^l(\mathbb{R}^n)}$ denote the vector space …
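In symbols, the first claim can be restated as follows; the names $N$, $c_j$, $w_j$, and $b_j$ are introduced here for illustration and are not taken from the abstract. For every $f \in C_0(\mathbb{R}^n)$ (the continuous functions vanishing at infinity) and every $\varepsilon > 0$, there exist $N \in \mathbb{N}$, coefficients $c_j \in \mathbb{R}$, weights $w_j \in \mathbb{R}^n$, and biases $b_j \in \mathbb{R}$ such that
$$
\sup_{x \in \mathbb{R}^n} \Big| f(x) - \sum_{j=1}^{N} c_j\, \varphi(w_j \cdot x + b_j) \Big| < \varepsilon,
$$
whenever the activation $\varphi$ is continuous, nonpolynomial, and asymptotically polynomial at $\pm\infty$. Note that the supremum runs over all of $\mathbb{R}^n$, which is what distinguishes this uniform, noncompact version from the classical theorem on compact sets.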

