Aug. 11, 2023, 6:44 a.m. | Juncai He

cs.LG updates on arXiv.org

This paper is devoted to studying the optimal expressive power of ReLU deep
neural networks (DNNs) and its application in approximation via the Kolmogorov
Superposition Theorem. We first constructively prove that any continuous
piecewise linear function on $[0,1]$ comprising $O(N^2L)$ segments can be
represented by a ReLU DNN with $L$ hidden layers and $N$ neurons per layer.
Subsequently, we demonstrate that this construction is optimal with respect to
the parameter count of the DNNs, a result established by investigating the
shattering capacity of ReLU DNNs. …
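As a concrete point of reference for this kind of representability, the classical shallow case is easy to write down: a continuous piecewise linear function on $[0,1]$ with $k$ kinks can be reproduced exactly by a one-hidden-layer ReLU network with $k+1$ neurons, one per slope change. The sketch below illustrates only this shallow special case, not the paper's depth-$L$, $O(N^2L)$-segment construction; all function and parameter names are illustrative.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def cpwl_as_relu_net(f0, breakpoints, slopes):
    """Build a one-hidden-layer ReLU net that exactly reproduces the CPWL
    function on [0, 1] with value f0 at x = 0, kinks at `breakpoints`
    (0 < b_1 < ... < b_k < 1), and slope slopes[i] on the i-th segment:
        f(x) = f0 + s_0 * relu(x) + sum_i (s_i - s_{i-1}) * relu(x - b_i).
    """
    breakpoints = np.asarray(breakpoints, dtype=float)   # shape (k,)
    slopes = np.asarray(slopes, dtype=float)             # shape (k + 1,)
    # One hidden unit per kink, plus one unit carrying the initial slope.
    biases = np.concatenate(([0.0], -breakpoints))       # relu(x - b_i)
    coeffs = np.concatenate(([slopes[0]], np.diff(slopes)))

    def net(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        hidden = relu(x[:, None] + biases[None, :])      # (n, k + 1)
        return f0 + hidden @ coeffs                      # output layer

    return net

# Usage: a CPWL on [0, 1] with kinks at 0.3 and 0.7.
f = cpwl_as_relu_net(f0=0.0, breakpoints=[0.3, 0.7], slopes=[2.0, -1.0, 0.5])
print(f(np.linspace(0.0, 1.0, 5)))  # exact values, e.g. f(0.3) = 0.6
```

The paper's contribution is precisely that depth buys more than this: stacking $L$ such layers of width $N$ captures $O(N^2L)$ segments, far beyond the $O(N)$ achievable at depth one, and this count is shown to be optimal in the parameter budget.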
